theodp writes: Khan Academy, the online education website that's the toast of the TED set, has attracted millions of dollars from the likes of Google, KPCB, Bill Gates, and others with its promise to reinvent education. And last summer, tech billionaires pitched Khan Academy to policymakers as a cure for what ails public schools. Still, in a blog post on How Khan Academy is using Machine Learning to Assess Student Mastery, an intern acknowledged that the naive 'streak model' Khan Academy used to determine student proficiency — collect 10 correct answers in a row or start over (à la the toddler game Hi Ho! Cherry-O) — had 'serious flaws,' but said it would be replaced with a better proficiency model based on more sophisticated machine learning techniques.

Alas, some commenters were less than impressed with the accompanying discussion of the statistical techniques that Khan Academy came up with. 'It appears that you are in the process of rediscovering item response theory [IRT],' wrote one commenter, referring Khan Academy to a Wikipedia entry on the topic and two textbooks. IRT, which is used to develop high-stakes tests like the GMAT and GRE, was employed in a circa-1982 computerized assessment system at the Univ. of Illinois. 'The suggestion of item response theory (IRT) occurred to me, as well,' wrote another, who provided additional references, including an intro to computer-adaptive testing. 'You might find it useful to collaborate with members of the educational data mining research community,' suggested a third, pointing to additional readings. Others chimed in with their own math homework assignments for the Khan Academy team. However, it doesn't look like Khan Academy has any time right now for math tutoring.
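For context, the two approaches at issue can be sketched side by side: the naive streak heuristic the intern describes, and the one-parameter (Rasch) model at the core of the item response theory the commenters point to. Only the 10-in-a-row rule comes from the post itself; the function names and everything else here are illustrative.

```python
import math

def streak_proficient(answers, streak_needed=10):
    """Naive 'streak model': a student is proficient once they produce
    10 consecutive correct answers; any miss resets the count to zero."""
    streak = 0
    for correct in answers:
        streak = streak + 1 if correct else 0
        if streak >= streak_needed:
            return True
    return False

def rasch_p(theta, b):
    """1PL (Rasch) IRT model: probability that a student of ability
    theta answers an item of difficulty b correctly."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# The streak model's flaw: 18 correct out of 19, but one slip in the
# middle means the student never hits 10 in a row.
answers = [True] * 9 + [False] + [True] * 9
print(streak_proficient(answers))  # a single miss discards nine correct answers
print(rasch_p(0.0, 0.0))           # ability == difficulty gives p = 0.5
```

The contrast is the point the commenters were making: the streak heuristic throws away evidence on every wrong answer, while an IRT model weighs each response against the difficulty of the item that produced it.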
In an update to the original blog post, Khan Academy announced Saturday that internal testing of its logistic regression proficiency model 'gave us the confidence to roll out from 10% to 100% of users,' and the crude — but perhaps that's OK — model has been launched site-wide.
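The update doesn't describe which features Khan Academy's model actually uses, so everything below — the features, weights, bias, and threshold — is purely hypothetical: a minimal sketch of what a logistic regression proficiency classifier looks like in general.

```python
import math

# Hypothetical features and hand-picked weights, for illustration only;
# Khan Academy's actual model is not described in the post.
WEIGHTS = {"pct_correct": 3.0, "current_streak": 0.2, "used_hints": -1.5}
BIAS = -2.0

def predict_proficiency(features):
    """Logistic regression: push a weighted sum of per-student features
    through the sigmoid to get P(student has mastered the skill)."""
    z = BIAS + sum(WEIGHTS[k] * v for k, v in features.items())
    return 1.0 / (1.0 + math.exp(-z))

p = predict_proficiency({"pct_correct": 0.9, "current_streak": 5, "used_hints": 0})
# In practice the site would flag the student as proficient once p
# clears some tuned threshold, e.g. 0.85.
```

Unlike the streak model, a miss here only lowers the probability rather than zeroing out all prior evidence — which is presumably what made even a crude logistic model good enough to roll out site-wide.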