Ubuntu's Mark Shuttleworth Pulls No Punches on Red Hat and VMware in OpenStack Cloud

At OpenStack Summit in Vancouver, Canada this week, Canonical CEO and Ubuntu Linux founder Mark Shuttleworth came out firing at two of his major enterprise OpenStack competitors: Red Hat and VMware. He claimed that Canonical OpenStack is a better deal than either Red Hat's or VMware's OpenStack offerings. From a report: Shuttleworth opened quietly enough, saying, "Mission is to remove all the friction from deploying OpenStack. We can deliver OpenStack deployments with two people in less than two weeks anywhere in the world." So far, so typical for a keynote speech. But then Shuttleworth started to heat things up: "Amazon increased efficiency, so now everyone is driving down cost of infrastructure. Everyone engages with Ubuntu, not Red Hat or VMware. Google, IBM, Microsoft are investing and innovating to drive down the cost of infrastructure. Every single one of those companies works with Canonical to deliver public services."

Then, Shuttleworth got down to brass tacks: "Not one of them engages with VMware to offer those public services. They can't afford to. Clearly, they have the cash, but they have to compete on efficiencies, and so does your private cloud." So, Canonical is rolling out a migration service to help users shift from VMware to a "fully managed" version of Canonical's Ubuntu OpenStack distribution. Customers want this, Shuttleworth said, because, "When we take out VMware we are regularly told that our fully managed OpenStack solution costs half of the equivalent VMware service."

Open Source

Computer History Museum Makes Eudora Email Client Source Code Available To the Public

Computer History Museum (CHM), an institution which explores the history of computing and its impact on the human experience, announced on Tuesday the public release and long-term preservation of the Eudora source code, one of the early successful email clients, as part of its Center for Software History's Historical Source Code series. The release comes after a five-year negotiation with Qualcomm. From the press release: The first version of Eudora was created in the 1980s by Steve Dorner, who was working at the University of Illinois at Urbana-Champaign. It took Dorner over a year to create the first version of Eudora, which had 50,000 lines of C code and ran only on the Apple Macintosh. In 1991, Qualcomm licensed Eudora from the University of Illinois and distributed it free of charge. Qualcomm later released Eudora as a consumer product in 1993, and it quickly gained popularity. Available both for the IBM PC and the Apple Macintosh, in its heyday Eudora had tens of millions of users. After 15 years, in 2006, Qualcomm decided that Eudora was no longer consistent with its other major product lines, and it stopped development. Len Shustek, the chairman of the board of trustees of the Computer History Museum, writes: Eventually many email clients were written for personal computers, but few became as successful as Eudora. Available both for the IBM PC and the Apple Macintosh, in its heyday Eudora had tens of millions of happy users. Eudora was elegant, fast, feature-rich, and could cope with mail repositories containing hundreds of thousands of messages. In my opinion it was the finest email client ever written, and it has yet to be surpassed. I still use it today, but, alas, the last version of Eudora was released in 2006. It may not be long for this world.
With thanks to Qualcomm, we are pleased to release the Eudora source code for its historical interest, and with the faint hope that it might be resuscitated. I will muse more about that later.

IBM Warns Quantum Computing Will Break Encryption

Long-time Slashdot reader CrtxReavr shares a report from ZDNet: Quantum computers will be able to instantly break the encryption of sensitive data protected by today's strongest security, warns the head of IBM Research. This could happen in a little more than five years because of advances in quantum computer technologies. "Anyone that wants to make sure that their data is protected for longer than 10 years should move to alternate forms of encryption now," said Arvind Krishna, director of IBM Research... Quantum computers can solve some types of problems near-instantaneously compared with billions of years of processing using conventional computers... Advances in novel materials and in low-temperature physics have led to many breakthroughs in the quantum computing field in recent years, and large commercial quantum computer systems will be viable and available within five years...
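The threat to public-key encryption comes from Shor's algorithm: breaking RSA reduces to finding the period of modular exponentiation, a step a quantum computer performs exponentially faster than any known classical method. As a rough illustration (not from the report), here is the classical skeleton of that reduction on a deliberately tiny modulus; only the `find_period` step would be replaced by quantum hardware:

```python
from math import gcd

def find_period(a, n):
    """Find the period r of f(x) = a^x mod n by brute force.
    Shor's algorithm does this step with a quantum Fourier transform;
    classical brute force is only feasible for tiny n."""
    x, r = a % n, 1
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def factor(n, a=2):
    """Try to split n using the period of a^x mod n -- the classical
    post-processing step of Shor's algorithm."""
    if gcd(a, n) != 1:
        return gcd(a, n)      # lucky guess: a already shares a factor
    r = find_period(a, n)
    if r % 2 == 1:
        return None           # odd period: retry with another a
    y = pow(a, r // 2, n)
    if y == n - 1:
        return None           # trivial square root: retry with another a
    return gcd(y + 1, n)

print(factor(15))  # → 5 (so 15 = 5 * 3)
```

For a 2048-bit RSA modulus the period-finding loop is hopeless classically, which is exactly the gap a large quantum computer would close.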

In addition to solving tough computing problems, quantum computers could save huge amounts of energy, as server farms proliferate and applications such as bitcoin grow in their compute needs. Each computation takes just a few watts, yet it could take several server farms to accomplish if it were run on conventional systems.

The original submission raises another possibility. "What I wonder is, if encryption can be 'instantly broken,' does this also mean that remaining crypto-coins can be instantly discovered?"

Chinese Scientists Develop Photonic Quantum Analog Computing Chip

hackingbear writes from a report via Xinhua: Chinese scientists demonstrated the first two-dimensional quantum walks of single photons in real spatial space, which may provide a powerful platform to boost analog quantum computing. Scientists at Shanghai Jiaotong University reported, in a paper published in the journal Science Advances, a three-dimensional photonic chip with a scale of up to 49x49 nodes, fabricated using a technique called femtosecond direct writing. Universal quantum computers, under development by IBM, Google, Alibaba and other American and Chinese rivals, are far from feasible until error correction and full connectivity among growing numbers of qubits can be realized. In contrast, analog quantum computers, or quantum simulators, can be built in a straightforward way to solve practical problems directly without error correction, and could potentially beat the computational power of classical computers in the near future.
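The chip realizes two-dimensional quantum walks in photonic waveguides; as a toy illustration of what a quantum walk is (a 1D sketch, not the paper's 2D physics), here is a minimal simulation of the standard one-dimensional Hadamard walk, whose interference pattern is what distinguishes it from a classical random walk:

```python
import math

def hadamard_walk(steps):
    """Simulate a 1D discrete-time quantum walk with a Hadamard coin.
    State: amplitudes[(position, coin)], coin 0 = step left, 1 = step right."""
    # Symmetric initial coin state (|0> + i|1>)/sqrt(2) at position 0
    amps = {(0, 0): 1 / math.sqrt(2), (0, 1): 1j / math.sqrt(2)}
    for _ in range(steps):
        new = {}
        for (pos, coin), a in amps.items():
            # Hadamard coin: |0> -> (|0>+|1>)/√2,  |1> -> (|0>-|1>)/√2
            for new_coin, phase in ((0, 1), (1, 1 if coin == 0 else -1)):
                step = -1 if new_coin == 0 else 1
                key = (pos + step, new_coin)
                new[key] = new.get(key, 0) + phase * a / math.sqrt(2)
        amps = new
    # Collapse to a probability distribution over positions
    probs = {}
    for (pos, _), a in amps.items():
        probs[pos] = probs.get(pos, 0) + abs(a) ** 2
    return probs

p = hadamard_walk(20)
assert abs(sum(p.values()) - 1) < 1e-9  # unitary evolution conserves probability
```

Unlike the classical walk's bell curve, the resulting distribution piles probability near the edges, which is the interference effect analog quantum simulators exploit.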

IBM Bans Staff From Using Removable Storage Devices

An anonymous reader shares a report: In an advisory to employees, IBM global chief information security officer Shamla Naidoo said the company "is expanding the practise of prohibiting data transfer to all removable portable storage devices (eg: USB, SD card, flash drive)." The advisory stated some pockets of IBM have had this policy for a while, but "over the next few weeks we are implementing this policy worldwide." Big Blue's doing this because "the possible financial and reputational damage from misplaced, lost or misused removable portable storage devices must be minimised." IBMers are advised to use Big Blue's preferred sync 'n' share service to move data around.

Edge Computing: Explained

An anonymous reader shares a report from The Verge, written by Paul Miller: In the beginning, there was One Big Computer. Then, in the Unix era, we learned how to connect to that computer using dumb (not a pejorative) terminals. Next we had personal computers, which was the first time regular people really owned the hardware that did the work. Right now, in 2018, we're firmly in the cloud computing era. Many of us still own personal computers, but we mostly use them to access centralized services like Dropbox, Gmail, Office 365, and Slack. Additionally, devices like Amazon Echo, Google Chromecast, and the Apple TV are powered by content and intelligence that's in the cloud -- as opposed to the DVD box set of Little House on the Prairie or CD-ROM copy of Encarta you might've enjoyed in the personal computing era. As centralized as this all sounds, the truly amazing thing about cloud computing is that a seriously large percentage of all companies in the world now rely on the infrastructure, hosting, machine learning, and compute power of a very select few cloud providers: Amazon, Microsoft, Google, and IBM.

The advent of edge computing as a buzzword you should perhaps pay attention to is the realization by these companies that there isn't much growth left in the cloud space. Almost everything that can be centralized has been centralized. Most of the new opportunities for the "cloud" lie at the "edge." The word edge in this context means literal geographic distribution. Edge computing is computing that's done at or near the source of the data, instead of relying on the cloud at one of a dozen data centers to do all the work. It doesn't mean the cloud will disappear. It means the cloud is coming to you.
Miller goes on to "examine what people mean practically when they extoll edge computing," focusing on latency, privacy and security, and bandwidth.

The Longest Straight Path You Could Travel On Water Without Hitting Land

An anonymous reader quotes a report from Gizmodo: Back in 2012, a Reddit user posted a map claiming to show the longest straight line that could be traversed across the ocean without hitting land. Intrigued, a pair of computer scientists have developed an algorithm that corroborates the route, while also demonstrating the longest straight line that can be taken on land. The researchers, Rohan Chabukswar from United Technologies Research Center Ireland and Kushal Mukherjee from IBM Research India, created the algorithm in response to a map posted by Reddit user kepleronlyknows, who goes by Patrick Anderson in real life. His map showed a long, 20,000-mile route extending from Pakistan through the southern tips of Africa and South America and finally ending in an epic trans-Pacific journey to Siberia. On a traditional 2D map, the path looks nothing like a straight line; but remember, the Earth is a sphere.

Anderson didn't provide any evidence for the map, or an explanation for how the route was calculated. In light of this, Chabukswar and Mukherjee embarked upon a project to figure out if the straight line route was indeed the longest, and to see if it was possible for a computer algorithm to solve the problem, both for straight line passages on water without hitting land or an ice sheet, and for a continuous straight line passage on land without hitting a major body of water. Their ensuing analysis was posted to the pre-print arXiv server earlier this month, and has yet to go through peer review.
"There would be 233,280,000 great circles to consider to find the global optimum, and each great circle would have 21,600 individual points to process -- a staggering 5,038,848,000,000 points to verify," the researchers wrote in their study.
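Those figures correspond to scanning the globe at one-arc-minute resolution: one candidate great circle per grid point, and one sample per arc-minute around each circle. The arithmetic checks out in a few lines (the parameterization below is our reading of the quoted numbers, not code from the paper):

```python
ARCMIN_PER_DEG = 60

# One-arc-minute grid over the globe
longitudes = 360 * ARCMIN_PER_DEG   # 21,600 steps of longitude
latitudes = 180 * ARCMIN_PER_DEG    # 10,800 steps of latitude

# One candidate great circle per grid point, and one sample point
# per arc-minute of arc around each circle:
great_circles = longitudes * latitudes
points_per_circle = 360 * ARCMIN_PER_DEG
total_points = great_circles * points_per_circle

print(f"{great_circles:,}")   # 233,280,000
print(f"{total_points:,}")    # 5,038,848,000,000
```

The researchers avoided checking all five trillion points by using branch-and-bound pruning rather than exhaustive search.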
Red Hat Software

Red Hat Enterprise Linux Version 7.5 Released

On Tuesday Red Hat announced the general availability of Red Hat Enterprise Linux version 7.5. An anonymous reader writes: Serving as a consistent foundation for hybrid cloud environments, Red Hat Enterprise Linux 7.5 provides enhanced security and compliance controls, tools to reduce storage costs, and improved usability, as well as further integration with Microsoft Windows infrastructure both on-premise and in Microsoft Azure.

New features include integration of Ansible Automation with OpenSCAP, and LUKS-encrypted removable storage devices can now be automatically unlocked using Network-Bound Disk Encryption (NBDE). The GNOME shell has been rebased to version 3.26, the kernel version is 3.10.0-862, and the kernel-alt packages include kernel version 4.14 with support for 64-bit ARM, IBM POWER9 (little endian), and IBM z Systems, while KVM virtualization is now supported on IBM POWER8/POWER9 systems.

See the detailed release notes here.
Open Source

Red Hat CEO Talks About State Of Open Source

To mark Red Hat's 25th anniversary, TechCrunch spoke with the company's CEO Jim Whitehurst to talk about the past, present and future of the company, and open-source software in general. An excerpt: "Ten years ago, open source at the time was really focused on offering viable alternatives to traditional software," he told me. "We were selling layers of technology to replace existing technology. [...] At the time, it was open source showing that we can build open-source tech at lower cost. The value proposition was that it was cheaper." At the time, he argues, the market was about replacing Windows with Linux or IBM's WebSphere with JBoss. And that defined Red Hat's role in the ecosystem, too, which was less about technological information than about packaging. "For Red Hat, we started off taking these open-source projects and making them usable for traditional enterprises," said Whitehurst.

About five or six years ago, something changed, though. Large corporations, including Google and Facebook, started open sourcing their own projects because they didn't regard some of the infrastructure technologies they opened up as competitive advantages. Instead, having them out in the open allowed them to profit from the ecosystems that formed around them. "The biggest part is it's not just Google and Facebook finding religion," said Whitehurst. "The social tech around open source made it easy to make projects happen. Companies got credit for that." He also noted that developers now look at their open-source contributions as part of their resume. With an increasingly mobile workforce that regularly moves between jobs, companies that want to compete for talent are almost forced to open source at least some of the technologies that don't give them a competitive advantage.

In October, Whitehurst also answered questions from Slashdot readers.

Cutting 'Old Heads' at IBM

An anonymous reader shares a report: As the world's dominant technology firm, payrolls at International Business Machines swelled to nearly a quarter-million U.S. white-collar workers in the 1980s. Its profits helped underwrite a broad agenda of racial equality, equal pay for women and an unbeatable offer of great wages and something close to lifetime employment, all in return for unswerving loyalty. But when high tech suddenly started shifting and companies went global, IBM faced the changing landscape with a distinction most of its fiercest competitors didn't have: a large number of experienced and aging U.S. employees.

The company reacted with a strategy that, in the words of one confidential planning document, would "correct seniority mix." It slashed IBM's U.S. workforce by as much as three-quarters from its 1980s peak, replacing a substantial share with younger, less-experienced and lower-paid workers and sending many positions overseas. ProPublica estimates that in the past five years alone, IBM has eliminated more than 20,000 American employees ages 40 and over, about 60 percent of its estimated total U.S. job cuts during those years. In making these cuts, IBM has flouted or outflanked U.S. laws and regulations intended to protect later-career workers from age discrimination, according to a ProPublica review of internal company documents, legal filings and public records, as well as information provided via interviews and questionnaires filled out by more than 1,000 former IBM employees.


IBM Unveils the 'World's Smallest Computer'

On the first day of IBM Think 2018, the company's flagship conference, IBM unveiled what it claims is the world's smallest computer. It's smaller than a grain of salt and offers the computing power of a 1990-era x86 chip. Mashable first spotted this gem: The computer will cost less than ten cents to manufacture, and will also pack "several hundred thousand transistors," according to the company. These will allow it to "monitor, analyze, communicate, and even act on data." It works with blockchain. Specifically, this computer will be a data source for blockchain applications. It's intended to help track the shipment of goods and detect theft, fraud, and non-compliance. It can also do basic AI tasks, such as sorting the data it's given. According to IBM, this is only the beginning. "Within the next five years, cryptographic anchors -- such as ink dots or tiny computers smaller than a grain of salt -- will be embedded in everyday objects and devices," says IBM head of research Arvind Krishna. If he's correct, we'll see way more of these tiny systems in objects and devices in the years to come. It's not clear yet when this thing will be released -- IBM researchers are currently testing its first prototype.
Open Source

Microsoft Joins Group Working To 'Cure' Open-Source Licensing Issues

Microsoft is joining Red Hat, Facebook, Google and IBM in committing to extend the right to "cure" open source licensing noncompliance before taking legal measures. From a report: On March 19, officials from Microsoft -- along with CA Technologies, Cisco, HPE, SAP and SUSE -- said they'd work together with the already-committed vendors to provide more "predictability" for users of open source software. "The large ecosystems of projects using the GPLv2 and LGPLv2.x licenses will benefit from adoption of this more balanced approach to termination derived from GPLv3," explained Red Hat in a press release announcing the new license-compliance partners. The companies which have agreed to adopt the "Common Cure Rights Commitment" said before they file or continue to prosecute those accused of violating covered licenses, they will allow for users to cure and reinstate their licenses.
Open Source

Vim Beats Emacs in 'Linux Journal' Reader Survey

The newly-relaunched Linux Journal is conducting its annual "Reader's Choice Awards," and this month announced the winners for Best Text Editor, Best Laptop, and Best Domain Registrar. Vim was chosen as the best editor by 35% of respondents, handily beating GNU Emacs (19%), Sublime Text (10%), and Atom (8%). Readers' Choice winner Vim is an extremely powerful editor with a user interface based on Bill Joy's 40-plus-year-old vi, but with many improved-upon features including extensive customization with key mappings and plugins. Linux Journal reader David Harrison points out another great thing about Vim "is that it's basically everywhere. It's available on every major platform."
For best laptop their readers picked Lenovo (32%), followed by Dell (25%) and System76 (11%). The ThinkPad began life at IBM, but in 2005, it was purchased by Lenovo along with the rest of IBM's PC business. Lenovo evolved the line, and today the company is well known as a geek favorite. Lenovo's ThinkPads are quiet, fast and arguably have one of the best keyboards (fighting words!). Linux Journal readers say Lenovo's Linux support is excellent, leaving many to ponder why the company doesn't ship laptops with Linux installed.
In February readers also voted on the best web browser, choosing Firefox (57%) over Chrome (17%) and Chromium (7%). And they also voted on the best Linux distribution, ultimately selecting Debian (33%), openSUSE (12%), and Fedora (11%).

China's Alibaba is Investing Huge Sums in AI Research and Resources -- and It Is Building Tools To Challenge Google and Amazon

Alibaba is already using AI and machine learning to optimize its supply chain, personalize recommendations, and build products like Tmall Genie, a home device similar to the Amazon Echo. China's two other tech supergiants, Tencent and Baidu, are likewise pouring money into AI research. The government plans to build an AI industry worth around $150 billion by 2030 and has called on the country's researchers to dominate the field by then. But Alibaba's ambition is to be the leader in providing cloud-based AI. From a report: Like cloud storage (think Dropbox) or cloud computing (Amazon Web Services), cloud AI will make powerful resources cheaply and readily available to anyone with a computer and an internet connection, enabling new kinds of businesses to grow. The real race in AI between China and the US, then, will be one between the two countries' big cloud companies, which will vie to be the provider of choice for companies and cities that want to make use of AI. And if Alibaba is anything to go by, China's tech giants are ready to compete with Google, Amazon, IBM, and Microsoft to serve up AI on tap. Which company dominates this industry will have a huge say in how AI evolves and how it is used.

[...] There have been other glimpses of Alibaba's progress in AI lately. Last month a research team at the company released an AI program capable of reading a piece of text, and answering simple questions about that text, more accurately than anything ever built before. The text was in English, not Chinese, because the program was trained on the Stanford Question Answering Dataset (SQuAD), a benchmark used to test computerized question-and-answer systems. [...] One advantage China's tech companies have over their Western counterparts is the government's commitment to AI. Smart cities that use the kind of technology found in Shanghai's metro kiosks are likely to be in the country's future. One of Alibaba's cloud AI tools is a suite called City Brain, designed for tasks like managing traffic data and analyzing footage from city video cameras.


Google Unveils 72-Qubit Quantum Computer With Low Error Rates

An anonymous reader quotes a report from Tom's Hardware: Google announced a 72-qubit universal quantum computer that promises the same low error rates the company saw in its first 9-qubit quantum computer. Google believes that this quantum computer, called Bristlecone, will be able to bring us to an age of quantum supremacy. In a recent announcement, Google said: "If a quantum processor can be operated with low enough error, it would be able to outperform a classical supercomputer on a well-defined computer science problem, an achievement known as quantum supremacy. These random circuits must be large in both number of qubits as well as computational length (depth). Although no one has achieved this goal yet, we calculate quantum supremacy can be comfortably demonstrated with 49 qubits, a circuit depth exceeding 40, and a two-qubit error below 0.5%. We believe the experimental demonstration of a quantum processor outperforming a supercomputer would be a watershed moment for our field, and remains one of our key objectives."

According to Google, a minimum error rate for quantum computers needs to be in the range of less than 1%, coupled with close to 100 qubits. Google seems to have achieved this so far with 72-qubit Bristlecone and its 1% error rate for readout, 0.1% for single-qubit gates, and 0.6% for two-qubit gates. Quantum computers will begin to become highly useful in solving real-world problems when we can achieve error rates of 0.1-1% coupled with hundreds of thousands to millions of qubits. According to Google, an ideal quantum computer would have at least hundreds of millions of qubits and an error rate lower than 0.01%. That may take several decades to achieve, even if we assume a "Moore's Law" of some kind for quantum computers (which so far seems to exist, judging by the progress of both Google and IBM in the past few years, as well as D-Wave).
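To see why sub-percent error rates matter so much, note that a circuit only succeeds if every gate succeeds, so overall fidelity decays exponentially with gate count. A back-of-envelope model (assuming independent errors and an illustrative gate count, not Google's actual noise model):

```python
def circuit_success(gate_error, n_gates):
    """Probability an entire circuit runs with zero gate errors,
    assuming each gate fails independently with probability gate_error."""
    return (1 - gate_error) ** n_gates

# Rough scale of the quantum-supremacy target: ~49 qubits at depth > 40
# gives on the order of 2,000 two-qubit gate slots (illustrative figure).
for err in (0.006, 0.005, 0.001):
    print(f"gate error {err:.1%}: circuit success ~ {circuit_success(err, 2000):.2%}")
```

With a 0.6% two-qubit error the whole-circuit success probability is vanishingly small at that depth, while pushing the error toward 0.1% makes it workable, which is why the gap between 0.6% and 0.1% dominates the roadmap.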


Do Neural Nets Dream of Electric Sheep?

An anonymous reader shares a post: If you've been on the internet today, you've probably interacted with a neural network. They're a type of machine learning algorithm that's used for everything from language translation to finance modeling. One of their specialties is image recognition. Several companies -- including Google, Microsoft, IBM, and Facebook -- have their own algorithms for labeling photos. But image recognition algorithms can make really bizarre mistakes. Microsoft Azure's computer vision API captioned and tagged one photo of an empty landscape as containing sheep. But there are no sheep in the image. None. I zoomed all the way in and inspected every speck. It also tagged sheep in a second, similar image. I happen to know there were sheep nearby. But none actually present. In fact, the neural network hallucinated sheep every time it saw a landscape of this type. What's going on here?

Are neural networks just hyper-vigilant, finding sheep everywhere? No, as it turns out. They only see sheep where they expect to see them. They can find sheep easily in fields and mountainsides, but as soon as sheep start showing up in weird places, it becomes obvious how much the algorithms rely on guessing and probabilities. Bring sheep indoors, and they're labeled as cats. Pick up a sheep (or a goat) in your arms, and they're labeled as dogs.
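The "sheep where they expect them" behavior is essentially prior probability at work: the same visual evidence yields different labels under different scene priors. A toy Bayes'-rule calculation, with made-up numbers purely for illustration:

```python
def posterior_sheep(likelihood_ratio, prior_sheep):
    """P(sheep | evidence) via Bayes' rule.

    likelihood_ratio: how much more likely the visual evidence (a fuzzy
    white blob) is under 'sheep' than under 'not sheep'.
    prior_sheep: the model's prior that a sheep appears in this scene type.
    """
    num = likelihood_ratio * prior_sheep
    return num / (num + (1 - prior_sheep))

# Identical fuzzy white blob; only the scene prior changes:
field = posterior_sheep(4.0, 0.30)        # grassy hillside: sheep are common
living_room = posterior_sheep(4.0, 0.01)  # indoors: sheep are rare

print(f"field: {field:.2f}, living room: {living_room:.2f}")
```

The same evidence clears the decision threshold in the field but not indoors, where a competing high-prior label like "dog" or "cat" wins instead, which matches the misfires described above.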


AI Will Create New Jobs But Skills Must Shift, Say Tech Giants

That AI will create more jobs than it destroys was the not-so-subtle rebuttal from tech giants to growing concern over the impact of automation technologies on employment. Execs from Google, IBM and Salesforce were questioned about the wider societal implications of their technologies during a panel session here at Mobile World Congress. From a report: Behshad Behzadi, who leads the engineering teams working on Google's eponymously named AI voice assistant, claimed many jobs will be "complemented" by AI, with AI technologies making it "easier" for humans to carry out tasks. "For sure there is some shift in the jobs. There's lots of jobs which will [be created which don't exist today]. Think about flight attendant jobs before there was planes and commercial flights. No one could really predict that this job will appear. So there are jobs which will be appearing of that type that are related to the AI," he said. "I think the topic is a super important topic. How jobs and AI is related -- I don't think it's one company or one country which can solve it alone. It's all together we could think about this topic," he added. "But it's really an opportunity, it's not a threat." "From IBM's perspective we firmly believe that every profession will be impacted by AI. There's no question. We also believe that there will be more jobs created," chimed in Bob Lord, IBM's chief digital officer.

IBM's Watson Is Going To Space

Yesterday, IBM announced it would be providing the AI brain for a robot being built by Airbus to accompany astronauts aboard the International Space Station (ISS). "The robot, which looks like a flying volleyball with a low-resolution face, is being deployed with German astronaut Alexander Gerst in June for a six-month mission," reports The Next Web. "It's called CIMON, an acronym for Crew Interactive Mobile Companion, and it's headed to space to do science stuff." From the report: It'll help crew members conduct medical experiments, study crystals, and play with a Rubik's Cube. Best of all, just like "Wilson," the other volleyball with a face and Tom Hanks' costar in the movie Cast Away, CIMON can be the astronauts' friend. According to an IBM blog post: "CIMON's digital face, voice and use of artificial intelligence make it a 'colleague' to the crew members. This collegial 'working relationship' facilitates how astronauts work through their prescribed checklists of experiments, now entering into a genuine dialogue with their interactive assistant."

IBM Sues Microsoft's New Chief Diversity Officer To Protect Diversity Trade Secrets

theodp writes: GeekWire reports that IBM has filed suit against longtime exec Lindsay-Rae McIntyre, alleging that her new position as Microsoft's chief diversity officer violates a year-long non-compete agreement and could allow Microsoft to use IBM's internal secrets to boost its own diversity efforts. A hearing is set for Feb. 22, but in the meantime, a U.S. District Judge has temporarily barred McIntyre from working at Microsoft. "IBM has gone to great lengths to safeguard as secret the confidential information that McIntyre possesses," Big Blue explained in a court filing, citing its repeated success (in 2012, 2013, 2015, 2016, 2017) in getting the U.S. government to quash FOIA requests for IBM's EEO-1 Reports on the grounds that the mandatory race/ethnicity and gender filings represent "confidential proprietary trade secret information." IBM's argument may raise some eyebrows, considering that other tech giants -- including Google, Microsoft, Apple, and Facebook -- voluntarily disclosed their EEO-1s years ago after coming under pressure from Rev. Jesse Jackson and the Congressional Black Caucus. In 2010, IBM stopped disclosing U.S. headcount data in its annual report as it accelerated overseas hiring.

New Silicon Chip-Based Quantum Computer Passes Major Test

An anonymous reader quotes a report from Gizmodo: Researchers from two teams now working with Intel have reported advances in a new quantum computing architecture, called spin qubits, in a pair of papers out today. They're obviously not the general-purpose quantum computers of the future. But they've got a major selling point over other quantum computing designs. "We made these qubits in silicon chips, similar to what's used in classical computer processes," study author Thomas Watson from TU Delft in the Netherlands told me. "The hope is that by doing things this way, we can potentially scale up to larger numbers needed to perform useful quantum computing."

Today, a research group at TU Delft, called QuTech, announced that they'd successfully tested two "spin qubits." These qubits involve the interaction of two confined electrons in a silicon chip. Each electron has a property called spin, which sort of turns it into a tiny magnet, with two states: "up" and "down." The researchers control the electrons with actual cobalt magnets and microwave pulses. They measure the electron's spins by watching how nearby electric charges react to the trapped electrons' movements. Those researchers, now working in partnership with Intel, were able to perform some quantum algorithms, including the well-known Grover search algorithm (basically, they could search through a list of four things), according to their paper published today in Nature. Additionally, a team of physicists led by Jason Petta at Princeton reported in Nature that they were able to pair light particles, called photons, to corresponding electron spins. This just means that distant spin qubits might be able to talk to one another using photons, allowing for larger quantum computers.
There are some advantages to these systems. "Present-day semiconductor technology could create these spin qubits, and they would be smaller than the superconducting chips used by IBM," reports Gizmodo. "Additionally, they stay quantum longer than other systems." The drawbacks include the fact that it's very difficult to measure the spins of these qubits, and even more difficult to get them to interact with each other. UC Berkeley postdoc Sydney Schreppler also noted that the qubits need to be very close to each other.
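The four-item Grover search the Delft team ran is small enough to simulate exactly with a four-entry state vector; for N = 4, a single Grover iteration concentrates all probability on the marked item. A minimal sketch of that math (a classical simulation of the algorithm, not the spin-qubit hardware):

```python
import math

def grover_four(marked):
    """Simulate Grover search over 4 items (2 qubits) with a plain
    state vector. Returns the measurement probabilities after one
    oracle + diffusion step."""
    n = 4
    state = [1 / math.sqrt(n)] * n  # uniform superposition over all items
    # Oracle: flip the sign of the marked item's amplitude
    state[marked] *= -1
    # Diffusion: reflect every amplitude about the mean amplitude
    mean = sum(state) / n
    state = [2 * mean - a for a in state]
    return [abs(a) ** 2 for a in state]

print(grover_four(2))  # → [0.0, 0.0, 1.0, 0.0]
```

One query suffices where a classical search of four items needs up to three, which is why this tiny demonstration still counts as a genuine quantum algorithm on the hardware.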
