A New Sampling Algorithm Could Eliminate Sensor Saturation (135)

Baron_Yam shared an article from Science Daily: Researchers from MIT and the Technical University of Munich have developed a new technique that could lead to cameras that can handle light of any intensity, and audio that doesn't skip or pop. Virtually any modern information-capture device -- such as a camera, audio recorder, or telephone -- has an analog-to-digital converter in it, a circuit that converts the fluctuating voltages of analog signals into strings of ones and zeroes. Almost all commercial analog-to-digital converters (ADCs), however, have voltage limits. If an incoming signal exceeds that limit, the ADC either cuts it off or flatlines at the maximum voltage. This phenomenon is familiar as the pops and skips of a "clipped" audio signal or as "saturation" in digital images -- when, for instance, a sky that looks blue to the naked eye shows up on-camera as a sheet of white.

Last week, at the International Conference on Sampling Theory and Applications, researchers from MIT and the Technical University of Munich presented a technique that they call unlimited sampling, which can accurately digitize signals whose voltage peaks are far beyond an ADC's voltage limit. The consequence could be cameras that capture all the gradations of color visible to the human eye, audio that doesn't skip, and medical and environmental sensors that can handle both long periods of low activity and the sudden signal spikes that are often the events of interest.

One of the paper's authors explains: "The idea is very simple. If you have a number that is too big to store in your computer memory, you can take the modulo of the number."
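The modulo idea can be sketched in a few lines. The following is an illustrative toy, not the published algorithm (which reconstructs from higher-order differences of bandlimited samples): it assumes a hypothetical self-reset ADC that folds voltages into [0, LAMBDA) and samples densely enough that successive readings change by less than LAMBDA/2.

```python
import numpy as np

LAMBDA = 1.0  # hypothetical ADC range: voltages fold into [0, LAMBDA)

def fold(x, lam=LAMBDA):
    """Simulate a self-reset ADC: wrap the signal modulo lam."""
    return np.mod(x, lam)

def unfold(y, lam=LAMBDA):
    """Recover the signal, assuming successive samples move < lam/2."""
    d = np.diff(y)
    d = (d + lam / 2) % lam - lam / 2   # re-center wrapped differences
    return y[0] + np.concatenate(([0.0], np.cumsum(d)))

t = np.linspace(0.0, 1.0, 1000)
x = 3.5 * np.sin(2 * np.pi * 3 * t)    # peaks far beyond the ADC range
x_hat = unfold(fold(x))
print(np.max(np.abs(x_hat - x)))       # reconstruction error is ~0
```

Despite the folding, the original signal is recovered exactly (up to floating-point error), because the information lost to clipping in a conventional ADC is preserved in the wrapped residues.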

Crypto-Bashing Prime Minister Argues The Laws Of Mathematics Don't Apply In Australia (330)

An anonymous reader quotes the Independent: Australian Prime Minister Malcolm Turnbull has said the laws of mathematics come second to the law of the land in a row over privacy and encryption... When challenged by a technology journalist over whether it was possible to tackle the problem of criminals using encryption -- given that platform providers claim they are currently unable to break into the messages even if required to do so by law -- the Prime Minister raised eyebrows as he made his reply. "Well the laws of Australia prevail in Australia, I can assure you of that. The laws of mathematics are very commendable, but the only law that applies in Australia is the law of Australia," he said... "The important thing is to recognise the challenge and call on the companies for assistance. I am sure they know morally they should... They have to face up to their responsibility."
Facebook has already issued a statement saying that they "appreciate the important work law enforcement does, and we understand the need to carry out investigations. That's why we already have a protocol in place to respond to any requests we can.

"At the same time, weakening encrypted systems for them would mean weakening it for everyone."

Mathematical Biology Is Our Secret Weapon In the Fight Against Disease (57)

An anonymous reader shares excerpts from a Scientific American article: In recent years, increasingly detailed experimental procedures have led to a huge influx of biological data available to scientists. This data is being used to generate hypotheses about the complexity of previously abstruse biological systems. In order to test these hypotheses, they must be written down in the form of a model which can be interrogated to determine whether it correctly mimics the biological observations. Mathematics is the natural language in which to do this. In addition, the advent of, and subsequent increase in, computational ability over the last 60 years has enabled us to suggest and then interrogate complex mathematical models of biological systems. The realisation that biological systems can be treated mathematically, coupled with the computational ability to build and investigate detailed biological models, has led to the dramatic increase in the popularity of mathematical biology. Maths has become a vital weapon in the scientific armoury we have to tackle some of the most pressing questions in medical, biological and ecological science in the 21st century. By describing biological systems mathematically and then using the resulting models, we can gain insights that are impossible to access through experiments and verbal reasoning alone. Mathematical biology is incredibly important if we want to change biology from a descriptive into a predictive science -- giving us power, for example, to avert pandemics or to alter the effects of debilitating diseases.
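A concrete and entirely illustrative example of the kind of predictive model the article describes is the classic SIR (Susceptible-Infected-Recovered) epidemic model; the parameter values below are assumptions for demonstration, not taken from the article.

```python
# Classic SIR epidemic model, integrated with a forward Euler step.
# beta is the transmission rate, gamma the recovery rate; with
# beta/gamma = 3, an outbreak spreads and then burns out.
beta, gamma = 0.3, 0.1          # illustrative rates (assumed)
S, I, R = 0.99, 0.01, 0.0       # initial population fractions
dt = 0.1
for _ in range(int(200 / dt)):  # simulate 200 time units
    dS = -beta * S * I
    dI = beta * S * I - gamma * I
    dR = gamma * I
    S, I, R = S + dS * dt, I + dI * dt, R + dR * dt
print(round(S, 3), round(I, 3), round(R, 3))  # fractions still sum to 1
```

Running models like this forward is exactly what makes the field predictive rather than descriptive: the equations say not just what an epidemic did, but what it will do under different parameter choices.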

The Quirky Habits of Certified Science Geniuses (190)

dryriver shares a report from the BBC: Celebrated inventor and physicist Nikola Tesla swore by toe exercises -- every night, he'd repeatedly "squish" his toes, 100 times for each foot, according to the author Marc J. Seifer. While it's not entirely clear exactly what that exercise involved, Tesla claimed it helped to stimulate his brain cells. The most prolific mathematician of the 20th century, Paul Erdos, preferred a different kind of stimulant: amphetamine, which he used to fuel 20-hour number benders. When a friend bet him $500 that he couldn't stop for a month, Erdos won the bet but complained, "You've set mathematics back a month." Newton, meanwhile, bragged about the benefits of celibacy. When he died in 1727, he had transformed our understanding of the natural world forever and left behind 10 million words of notes; he was also, by all accounts, still a virgin (Tesla was also celibate, though he later claimed he fell in love with a pigeon). It's common knowledge that sleep is good for your brain -- and Einstein took this advice more seriously than most. He reportedly slept for at least 10 hours per day -- nearly one and a half times as much as the average American today (6.8 hours). But can you really slumber your way to a sharper mind? Many of the world's most brilliant scientific minds were also fantastically weird. From Pythagoras' outright ban on beans to Benjamin Franklin's naked "air baths," the path to greatness is paved with some truly peculiar habits.

Jean Sammet, Co-Designer of COBOL, Dies at 89 (73)

theodp writes: A NY Times obituary reports that early software engineer and co-designer of COBOL Jean Sammet died on May 20 in Maryland at age 89. "Sammet was a graduate student in math when she first encountered a computer in 1949 at the Univ. of Illinois at Urbana-Champaign," the Times reports. While Grace Hopper is often called the "mother of COBOL," Hopper "was not one of the six people, including Sammet, who designed the language -- a fact Sammet rarely failed to point out... 'I yield to no one in my admiration for Grace,' she said. 'But she was not the mother, creator or developer of COBOL.'"
By 1960 the Pentagon had announced it wouldn't buy computers unless they ran COBOL, inadvertently creating an industry standard. COBOL "really was very good at handling formatted data," Brian Kernighan tells the Times, which reports that today "More than 200 billion lines of COBOL code are now in use and an estimated 2 billion lines are added or changed each year, according to IBM Research."

Sammet was entirely self-taught, and in an interview two months ago shared a story about how her supervisor in 1955 had asked if she wanted to become a computer programmer. "What's a programmer?" she asked. He replied, "I don't know, but I know we need one." Within five years she'd become the section head of MOBIDIC Programming at Sylvania Electric Products, and had helped design COBOL -- before moving on to IBM, where she worked for the next 27 years and created the FORTRAN-based computer algebra system FORMAC.

When AI Botches Your Medical Diagnosis, Who's To Blame? (200)

Robert Hart poses an interesting question in his report on Quartz: When artificial intelligence botches your medical diagnosis, who's to blame? Do you blame the AI, the designer, or the organization? It's just one of many questions popping up and starting to be seriously pondered by experts as artificial intelligence and automation become more entwined in our daily lives. From the report: The prospect of being diagnosed by an AI might feel foreign and impersonal at first, but what if you were told that a robot physician was more likely to give you a correct diagnosis? Medical error is currently the third leading cause of death in the U.S., and as many as one in six patients in the British NHS receive incorrect diagnoses. With statistics like these, it's unsurprising that researchers at Johns Hopkins University believe diagnostic errors to be "the next frontier for patient safety." Of course, there are downsides. AI raises profound questions regarding medical responsibility. Usually when something goes wrong, it is a fairly straightforward matter to determine blame. A misdiagnosis, for instance, would likely be the responsibility of the presiding physician. A faulty machine or medical device that harms a patient would likely see the manufacturer or operator held to account. What would this mean for an AI?

President Trump's Budget Includes a $2 Trillion Math Error (356)

An anonymous reader quotes a report from TIME: President Trump's budget includes a simple accounting error that adds up to a $2 trillion oversight. Under the proposed budget released Tuesday, the Trump Administration's proposed tax cuts would boost economic growth enough to pay for $1.3 trillion in spending by 2027. But the tax cuts are also supposed to be revenue-neutral, meaning that same growth revenue is already earmarked to make up the money lost from the tax cuts -- the budget counts it twice. Former Treasury Secretary Lawrence Summers called the oversight an "elementary double count" and "a logical error of the kind that would justify failing a student in an introductory economics course" in an op-ed in the Washington Post.

The Reign of the $100 Graphing Calculator Required By Every US Math Class Is Finally Ending (281)

If you took a math class at some point in the US, there is likely a bulky $100 calculator gathering dust somewhere in your closet. Fast forward to today, and the Texas Instruments TI-84 -- or the TI-84 Plus, or the TI-89, or any of the other even more expensive hardware variants -- is quickly losing relevance. Engadget adds: Thanks to a new deal, students will soon get a free option. Starting this spring, pupils in 14 US states will be able to use the TI-like Desmos online calculator during standardized testing run by the Smarter Balanced consortium. "We think students shouldn't have to buy this old, underpowered device anymore," Desmos CEO Eli Luberoff said. The Desmos calculator will be embedded directly into the assessments, meaning students will have access during tests with no need for an external device. It'll also be available to students in grades 6 through 8 and high school throughout the year. The calculator is free to use, and the company makes money by charging organizations to use it, according to Bloomberg.

'U Can't Talk to Ur Professor Like This' (486)

Millennial college students have become far too casual when they talk with their professors, argues an opinion piece in The New York Times. Addressing professors by their first names and sending misspelled, informal emails with text abbreviations have become more common among students than educators would like, adds Molly Worthen, an assistant professor of history at the University of North Carolina, Chapel Hill (Editor's note: the link could be paywalled; here's a syndicated source). From the article: Over the past decade or two, college students have become far more casual in their interactions with faculty members. My colleagues around the country grumble about students' sloppy emails and blithe informality. "When students started calling me by my first name, I felt that was too far, and I've got to say something," Mark Tomforde, a math professor at the University of Houston, said. Sociologists who surveyed undergraduate syllabuses from 2004 and 2010 found that in 2004, 14 percent addressed issues related to classroom etiquette; six years later, that number had more than doubled, to 33 percent. This phenomenon crosses socio-economic lines. My colleagues at Stanford gripe as much as the ones who teach at state schools, and students from more privileged backgrounds are often the worst offenders. [...] Insisting on traditional etiquette is also simply good pedagogy. It's a teacher's job to correct sloppy prose, whether in an essay or an email. And I suspect that most of the time, students who call faculty members by their first names and send slangy messages are not seeking a more casual rapport. They just don't know they should do otherwise -- no one has bothered to explain it to them. Explaining the rules of professional interaction is not an act of condescension; it's the first step in treating students like adults.

Power of Modern Programming Languages is That They Are Expressive, Readable, Concise, Precise, and Executable (268)

An anonymous reader shares a Scientific American article: Programming has changed. In first generation languages like FORTRAN and C, the burden was on programmers to translate high-level concepts into code. With modern programming languages -- I'll use Python as an example -- we use functions, objects, modules, and libraries to extend the language, and that doesn't just make programs better, it changes what programming is. Programming used to be about translation: expressing ideas in natural language, working with them in math notation, then writing flowcharts and pseudocode, and finally writing a program. Translation was necessary because each language offers different capabilities. Natural language is expressive and readable, pseudocode is more precise, math notation is concise, and code is executable. But the price of translation is that we are limited to the subset of ideas we can express effectively in each language. Some ideas that are easy to express computationally are awkward to write in math notation, and the symbolic manipulations we do in math are impossible in most programming languages. The power of modern programming languages is that they are expressive, readable, concise, precise, and executable. That means we can eliminate middleman languages and use one language to explore, learn, teach, and think.
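The article's claim is easy to demonstrate with a small illustration of our own (not from the article): in Python, a definition can read almost like math notation while remaining executable.

```python
# The geometric series 1 + r + r^2 + ... + r^(n-1) written almost
# exactly as it is defined mathematically, then run directly and
# checked against the closed form (1 - r^n) / (1 - r).
def geometric_sum(r, n):
    return sum(r ** k for k in range(n))

r, n = 0.5, 10
closed_form = (1 - r ** n) / (1 - r)
print(geometric_sum(r, n), closed_form)
```

The same artifact serves as the readable statement of the idea, the precise specification, and the runnable program -- which is exactly the collapse of "middleman languages" the article describes.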

Debian Developer Imprisoned In Russia Over Alleged Role In Riots (93)

An anonymous reader writes: "Dmitry Bogatov, Debian developer and Tor node admin, is still being held in a Moscow jail," tweeted the EFF Saturday. IT Wire reports that the 25-year-old math teacher was arrested earlier this month "on suspicion of organizing riots," and is expected to be held in custody until June 8. "The panel investigating the protests claims Bogatov posted several inciting messages on the forum; for example, one claim said he was asking people to bring 'bottles, fabric, gasoline, turpentine, foam plastic' to Red Square, according to a post at Hacker News. The messages were sent in the name of one Airat Bashirov and happened to be transmitted through the Tor node that Bogatov was running. The Hacker News post said Bogatov's lawyer had produced surveillance video footage to show that he was elsewhere at the time when the messages were posted."
"After Dmitry's arrest," reports the Free Bogatov site, "Airat Bashirov continue to post messages. News outlets 'Open Russia' and 'Mediazona' even got a chance to speak with him."

Earlier this month the Debian GNU/Linux project also posted a message of support, noting Dmitry maintains several packages for command line and system tools, and saying their group "honours his good work and strong dedication to Debian and Free Software... we hope he is back as soon as possible to his endeavours... In the meantime, the Debian Project has taken measures to secure its systems by removing Dmitry's keys in the case that they are compromised."

Oregon Fines Man For Writing a Complaint Email Stating 'I Am An Engineer' (734)

pogopop77 quotes a report from Motherboard: In September 2014, Mats Jarlstrom, an electronics engineer living in Beaverton, Oregon, sent an email to the state's engineering board. The email claimed that yellow traffic lights don't last long enough, which "puts the public at risk." "I would like to present these facts for your review and comments," he wrote. This email resulted not in a meeting, but in a threat from the Oregon State Board of Examiners for Engineering and Land Surveying [stating]: "ORS 672.020(1) prohibits the practice of engineering in Oregon without registration -- at a minimum, your use of the title 'electronics engineer' and the statement 'I'm an engineer' create violations." In January of this year, Jarlstrom was officially fined $500 by the state for the crime of "practicing engineering without being registered." Since the engineering board in Oregon said Jarlstrom should not be free to publish or present his ideas about the fast-turning yellow traffic lights, due to his "practice of engineering in Oregon without registration," he and the Institute for Justice sued them in federal court for violating his First Amendment rights. "I'm not practicing engineering, I'm just using basic mathematics and physics, Newtonian laws of motion, to make calculations and talk about what I found," he said. Sam Gedge, an attorney for the Institute for Justice, told Motherboard: "Mats has a clear First Amendment right to talk about anything from taxes to traffic lights. It's an instance of a licensing board trying to suppress speech."
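For context, the "basic mathematics and physics" at issue is the standard ITE kinematic formula for minimum yellow-light duration, which Jarlstrom argued is incomplete for turning traffic. A sketch of that calculation, using typical textbook parameter values rather than any figures from the article:

```python
# Minimum yellow duration (ITE kinematic formula, US customary units):
#   y = t_p + v / (2a + 2Gg)
# t_p: perception-reaction time (s), v: approach speed (ft/s),
# a: comfortable deceleration (ft/s^2), G: grade, g: gravity (ft/s^2).
def min_yellow_seconds(v_mph, t_p=1.0, a=10.0, grade=0.0, g=32.2):
    v = v_mph * 1.467          # convert mph to feet per second
    return t_p + v / (2 * a + 2 * grade * g)

print(round(min_yellow_seconds(45), 2))  # ~4.3 s for a 45 mph approach
```

The formula assumes a driver proceeding straight at full approach speed; Jarlstrom's point was that drivers decelerating into a turn need more time than this model allows.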

Google's AlphaGo Will Face Its Biggest Challenge Yet Next Month -- But Why Is It Still Playing? (115)

From a report on The Guardian: A year on from its victory over Go star Lee Sedol, Google DeepMind is preparing a "festival" of exhibition matches for its board game-playing AI, AlphaGo, to see how far it has evolved in the last 12 months. Headlining the event will be a one-on-one match against the current number one player of the ancient Asian game, 19-year-old Chinese professional Ke Jie. DeepMind has had its eye on this match since even before AlphaGo beat Lee. On the eve of his trip to Seoul in March 2016, the company's co-founder, Demis Hassabis, told the Guardian: "There's a young kid in China who's very, very strong, who might want to play us." As well as the one-on-one match with Jie, which will be played over the course of three games, AlphaGo will take part in two other games with slightly odder formats. But why is Google's AI still playing Go, you ask? An article on The Outline adds: Its [Google's] experiments with Go -- a game thought to be years away from being conquered by AI before last year -- are designed to bring us closer to designing a computer with human-like understanding that can solve problems like a human mind can. Historically, there have been tasks that humans do well -- communicating, improvising, emoting -- and tasks that computers do well, which tend to be those that require lots of computations -- like math of any kind, including statistical analysis and modeling of, say, journeying to the moon. Slowly, artificial intelligence scientists have been pushing that barrier. [...] Go is played on a board with a 19-by-19 grid. Each player takes turns placing stones (one player with white, the other with black) on empty intersections of the grid. The goal is to completely surround the stones of another player, removing them from the board.
The number of possible positions, which dwarfs that of chess thanks in part to the size of the board and the ability to take any unoccupied position, is part of what makes it so complex. As DeepMind co-founder Demis Hassabis put it last year, "There are 1,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000 possible positions."
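The scale of that figure can be sanity-checked in one line: each of the 361 intersections of the 19-by-19 board can be empty, black, or white, giving an upper bound of 3^361 raw configurations (the number of legal positions is somewhat smaller, roughly 2 x 10^170).

```python
# Upper bound on Go board configurations: every one of the 361
# points on the 19x19 grid is empty, black, or white.
configurations = 3 ** (19 * 19)
print(len(str(configurations)))  # 173 decimal digits, i.e. ~10**172
```

By comparison, the number of atoms in the observable universe is usually estimated at around 10^80, which is why Go resisted the brute-force search techniques that worked for chess.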

Ask Slashdot: Should I Move From Java To Scala? (245)

"Scala is one of the JVM languages that manages to maintain a hip and professional vibe at the same time," writes long-time Slashdot reader Qbertino -- building up to a big question: One reason for this probably being that Scala was built by people who knew what they were doing. It has been around for a few years now in a mature form and I got curious about it a few years back. My question to the Slashdot community: Is getting into Scala worthwhile from a practical/industry standpoint or is it better to just stick with Java? Have you done larger, continuous multi-year, multi-man and mission-critical applications in Scala and what are your experiences?
The original submission asks two related questions. First, "Do you have to be a CS/math genius to make sense of Scala and use it correctly?" But more importantly, "Is Scala there to stay wherever it is deployed and used in real-world scenarios, or are there pitfalls and cracks showing up that would deter you from using Scala once again?" So share your experiences and answers in the comments. Would you recommend moving from Java to Scala?

More Compulsory Math Lessons Do Not Encourage Women To Pursue STEM Careers, Study Finds (239)

An anonymous reader shares a report: The demand for employees in STEM careers (science, technology, engineering and math) is particularly high, as corporations compete to attract skilled professionals in the international market. What is known as "curriculum intensification" is often used around the world to attract more university entrants -- and particularly more women -- to these subjects; that is to say, students have on average more mandatory math courses at a higher level. Scientists from the LEAD Graduate School and Research Network at the University of Tübingen have now studied whether more advanced math lessons at high schools actually encourage women to pursue STEM careers. Their work shows that an increase in advanced math courses during the two years before the final school-leaving exams does not automatically create the desired effects. On the contrary: one upper secondary school reform in Germany, where all high school students have to take higher-level math courses, has only increased the gender differences in interest in activities related to the STEM fields. The young female students' belief in their own math abilities was lower after the reform than before. The results have now been published in the Journal of Educational Psychology.

No, We Probably Don't Live in a Computer Simulation, Says Physicist (418)

Science doesn't have all the answers. There are plenty of things it may never prove, like whether there's a God. Or whether we're living in a computer simulation, something proposed by Swedish philosopher Nick Bostrom. From an article on Gizmodo: This kind of thinking made at least one person angry: theoretical physicist and science writer Sabine Hossenfelder from the Frankfurt Institute for Advanced Studies in Germany. Last week, she took to her blog Backreaction to vent. It's not the statement "we're living in a simulation" that upsets Hossenfelder. It's the fact that philosophers are making assertions that, if true, should most certainly manifest themselves in our laws of physics. "I'm not saying it's impossible," Hossenfelder told Gizmodo. "But I want to see some backup for this claim." Backup to prove such a claim would require a lot of work and a lot of math, enough to solve some of the most complex problems in theoretical physics.

Math Teacher Solves Adobe Semaphore Puzzle (52)

linuxwrangler writes: For over four years, lights atop Adobe's office building in San Jose have flashed out a secret message. This week, the puzzle was solved by Tennessee math teacher Jimmy Waters. As part of the winnings, Adobe is donating software and 3D printers to Waters' school in his name. "The semaphore had been transmitting the audio broadcast of Neil Armstrong's historic moon landing in 1969," reports The Mercury News. "That's right, not the text but the actual audio." The report provides some backstory: "Waters discovered the project, San Jose Semaphore, last summer while he was looking up something about Thomas Pynchon's 1966 novel, 'The Crying of Lot 49.' The text of that work was the code originally programmed by New York-based artist Ben Rubin in 2006. Seeing there was a new message, Waters began trying to decipher it while watching and writing down the sequences online from Tennessee. He discovered a pattern that led him to believe it could represent a space -- or a silence -- in an audio file, and when he graphed the results it looked like an audio wave. He dismissed that as being too difficult but came back to it and eventually ran his results through a program that would convert his numbers to audio. The first results came back sounding like chipmunks squeaking. So he tweaked things and found himself listening to the historic broadcast, which ends with Armstrong's famous line, 'That's one small step for man, one giant leap for mankind.'" You can listen to the semaphore message here.
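The numbers-to-audio step Waters describes can be sketched with nothing but the standard library. His actual pipeline is not detailed in the report, so the scaling and sample rate below are assumptions; note that guessing the sample rate wrong is exactly what produces "chipmunk" playback.

```python
import math
import struct
import wave

def numbers_to_wav(numbers, path, rate=8000):
    """Write a sequence of numbers out as 16-bit mono PCM audio."""
    lo, hi = min(numbers), max(numbers)
    with wave.open(path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)      # 16-bit samples
        w.setframerate(rate)   # too high a rate sounds like chipmunks
        for x in numbers:
            # rescale each number into the signed 16-bit sample range
            s = int((x - lo) / ((hi - lo) or 1) * 65535) - 32768
            w.writeframes(struct.pack("<h", s))

# demo: a one-second 440 Hz tone generated from plain numbers
tone = [math.sin(2 * math.pi * 440 * n / 8000) for n in range(8000)]
numbers_to_wav(tone, "tone.wav")
```

Any decoded sequence, fed through a converter like this at different rates, can be auditioned until it stops squeaking and starts sounding like speech.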

Cooling To Absolute Zero Mathematically Outlawed After a Century (210)

After more than 100 years of debate -- which at one point even elicited interest from Albert Einstein and Max Planck -- physicists have finally offered up mathematical proof of the third law of thermodynamics, which states that a temperature of absolute zero cannot be physically achieved because it's impossible for the entropy (or disorder) of a system to hit zero. While scientists have long suspected that there's an intrinsic 'speed limit' on the act of cooling in our Universe that prevents us from ever achieving absolute zero (0 Kelvin, -273.15 C, or -459.67 F), this is the strongest evidence yet that our current laws of physics hold true when it comes to the lowest possible temperature. From a report on NewScientist: Now Jonathan Oppenheim and Lluis Masanes at University College London have mathematically derived the unattainability principle and placed limits on how fast a system can cool, creating a general proof of the third law. "In computer science, people ask this question all the time: how long does it take to perform a computation?" says Oppenheim. "Just as a computing machine performs a computation, a cooling machine cools a system." So, he and Masanes asked how long it takes to get cold. Cooling can be thought of as a series of steps: heat is removed from the system and dumped into the surrounding environment again and again, and each time the system gets colder. How cold depends on how much work can be done to remove the heat and the size of the reservoir for dumping it. By applying mathematical techniques from quantum information theory, they proved that no real system will ever reach 0 kelvin: it would take an infinite number of steps. Getting close to absolute zero is possible, though, and Masanes and Oppenheim quantified the steps of cooling, setting speed limits for how cold a given system can get in finite time.
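The flavor of the "series of steps" argument can be illustrated with a cartoon (this is not Masanes and Oppenheim's proof, and the numbers are arbitrary): if each pass of a cooling machine can only remove a fixed fraction of the remaining thermal energy, temperature decays geometrically, so any finite number of steps leaves it strictly above zero.

```python
# Geometric cooling toy model: each step removes a fixed fraction
# alpha of the remaining heat. The temperature gets arbitrarily
# close to 0 K but never reaches it in finitely many steps.
T = 300.0          # starting temperature in kelvin (assumed)
alpha = 0.5        # fraction of heat removed per step (assumed)
steps = 0
while T > 1e-12:   # "close enough to absolute zero" threshold
    T *= (1 - alpha)
    steps += 1
print(steps, T)    # finite step count, temperature still positive
```

Reaching exactly 0 K in this toy model would require infinitely many steps, mirroring the unattainability principle; the real proof bounds how fast any physical cooling process can shrink that remainder.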

This Is How the Number 3.14 Got the Name 'Pi' (133)

An anonymous reader shares a Time article: Ancient research on real numbers likely "didn't get improved upon until the age of Newton," says John Conway, mathematics professor emeritus at Princeton University, who once won the school's Pi Day pie-eating contest. Sir Isaac Newton recorded 16 digits of pi in 1665, later admitting that he was "ashamed" of how long he had worked on the computations, as it meant that he had "no other business at the time," per the MAA. It was not until the 18th century -- about two millennia after the significance of the number 3.14 was first calculated by Archimedes -- that the name "pi" was first used to denote the number. In other words, the Greek letter used to represent the idea was not actually picked by the Ancient Greeks who discovered it. British mathematician William Jones came up with the Greek letter and symbol for the figure in 1706, and it was popularized by Swiss mathematician Leonhard Euler, Catherine the Great's mathematician, a few decades later. "Euler was a much better mathematician than the people who used [pi] before, and he wrote very good textbooks," says Conway. "He used it because the Greek letter Pi corresponds with the letter 'P'... and pi is about the perimeter of the circle."
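Archimedes' original route to 3.14 -- bounding a circle's circumference with regular polygons of ever more sides -- is short enough to sketch. A minimal version using inscribed polygons only (Archimedes also used circumscribed polygons to get an upper bound):

```python
import math

def archimedes_pi(doublings):
    """Approximate pi via inscribed polygons, starting from a hexagon."""
    n = 6
    side = 1.0            # a hexagon inscribed in a unit circle has side 1
    for _ in range(doublings):
        # half-angle identity gives the side length after doubling n
        side = math.sqrt(2 - 2 * math.sqrt(1 - (side / 2) ** 2))
        n *= 2
    return n * side / 2   # semi-perimeter approximates pi

print(archimedes_pi(10))  # a 6144-gon already agrees with pi to ~7 digits
```

Archimedes himself stopped at a 96-gon (four doublings), which was enough to pin pi between 3 10/71 and 3 1/7 -- the source of the familiar 3.14.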
