Businesses

Obscure Chinese Stock Scams Dupe American Investors by the Thousands (msn.com) 34

Thousands of American investors have lost millions of dollars to sophisticated pump-and-dump schemes involving small Chinese companies listed on Nasdaq, prompting the Justice Department to declare the fraud a priority under the Trump administration's white-collar enforcement program.

The scams recruit victims through social media ads and WhatsApp messages, directing them to purchase shares in obscure Chinese firms whose stock prices are artificially inflated before collapsing. Since 2020, nearly 60 China-based companies have conducted initial public offerings on Nasdaq raising $15 million or less each, with more than one-third experiencing sudden single-day price drops exceeding 50%. In one recent case, seven traders made more than $480 million by defrauding 600 victims who purchased shares in China Liberal Education Holdings.
Education

'Ghost' Students are Enrolling in US Colleges Just to Steal Financial Aid (apnews.com) 110

Last week the U.S. Education Department announced that "the rate of fraud through stolen identities has reached a level that imperils the federal student aid programs."

Or, as the Associated Press suggests: Online classes + AI = financial aid fraud. "In some cases, professors discover almost no one in their class is real..." Fake college enrollments have been surging as crime rings deploy "ghost students" — chatbots that join online classrooms and stay just long enough to collect a financial aid check... Students get locked out of the classes they need to graduate as bots push courses over their enrollment limits.

And victims of identity theft who discover loans fraudulently taken out in their names must go through months of calling colleges, the Federal Student Aid office and loan servicers to try to get the debt erased. [Last week], the U.S. Education Department introduced a temporary rule requiring students to show colleges a government-issued ID to prove their identity... "The rate of fraud through stolen identities has reached a level that imperils the federal student aid program," the department said in its guidance to colleges.

An Associated Press analysis of fraud reports obtained through a public records request shows California colleges in 2024 reported 1.2 million fraudulent applications, which resulted in 223,000 suspected fake enrollments. Other states are affected by the same problem, but with 116 community colleges, California is a particularly large target. Criminals stole at least $11.1 million in federal, state and local financial aid from California community colleges last year that could not be recovered, according to the reports... Scammers frequently use AI chatbots to carry out the fraud, targeting courses that are online and allow students to watch lectures and complete coursework on their own time...

Criminal cases around the country offer a glimpse of the schemes' pervasiveness. In the past year, investigators indicted a man accused of leading a Texas fraud ring that used stolen identities to pursue $1.5 million in student aid. Another person in Texas pleaded guilty to using the names of prison inmates to apply for over $650,000 in student aid at colleges across the South and Southwest. And a person in New York recently pleaded guilty to a $450,000 student aid scam that lasted a decade.

Fortune found one community college that "wound up dropping more than 10,000 enrollments representing thousands of students who were not really students," according to the school's president. The scope of the ghost-student plague is staggering. Jordan Burris, vice president at identity-verification firm Socure and former chief of staff in the White House's Office of the Federal Chief Information Officer, told Fortune more than half the students registering for classes at some schools have been found to be illegitimate. Among Socure's client base, between 20% and 60% of student applicants are ghosts... At one college, more than 400 different financial-aid applications could be traced back to a handful of recycled phone numbers. "It was a digital poltergeist effectively haunting the school's enrollment system," said Burris.

The scheme has also proved incredibly lucrative. According to a Department of Education advisory, about $90 million in aid was doled out to ineligible students, and some $30 million was traced to dead people whose identities were used to enroll in classes. The issue has become so dire that the DOE announced this month it had found nearly 150,000 suspect identities in federal student-aid forms and is now requiring higher-ed institutions to validate the identities of first-time applicants filing the Free Application for Federal Student Aid (FAFSA)...

Maurice Simpkins, president and cofounder of AMSimpkins, says he has identified international fraud rings operating out of Japan, Vietnam, Bangladesh, Pakistan, and Nairobi that have repeatedly targeted U.S. colleges... In the past 18 months, schools blocked thousands of bot applicants because they originated from the same mailing address, used hundreds of similar email addresses differing by a single digit, or supplied phone numbers and email addresses created moments before registration.
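Those screening heuristics are simple enough to sketch. Below is a minimal illustration of two of the checks described above (shared mailing addresses, and near-duplicate email addresses that differ only by digits); the field names, thresholds, and sample data are hypothetical and not drawn from any school's actual system.

```python
# Minimal sketch of the screening heuristics described above: flag applications
# that share a mailing address or whose emails differ only by digits.
# Field names, thresholds, and data are hypothetical, not any school's system.
import re
from collections import defaultdict

def flag_suspicious(applications, address_threshold=5, email_threshold=3):
    """applications: list of dicts with 'id', 'email', 'address' keys."""
    flagged = set()

    # Heuristic 1: many applications sharing one mailing address.
    by_address = defaultdict(list)
    for app in applications:
        by_address[app["address"].strip().lower()].append(app["id"])
    for ids in by_address.values():
        if len(ids) >= address_threshold:
            flagged.update(ids)

    # Heuristic 2: clusters of emails that differ only by digits
    # (e.g. jsmith01@..., jsmith02@...), a pattern the schools reported.
    by_stem = defaultdict(list)
    for app in applications:
        stem = re.sub(r"\d+", "#", app["email"].lower())
        by_stem[stem].append(app["id"])
    for ids in by_stem.values():
        if len(ids) >= email_threshold:
            flagged.update(ids)

    return flagged

if __name__ == "__main__":
    demo = [
        {"id": 1, "email": "jsmith01@example.com", "address": "12 Main St"},
        {"id": 2, "email": "jsmith02@example.com", "address": "12 Main St"},
        {"id": 3, "email": "jsmith03@example.com", "address": "12 Main St"},
        {"id": 4, "email": "real.student@example.com", "address": "77 Oak Ave"},
    ]
    print(flag_suspicious(demo, address_threshold=3))  # {1, 2, 3}
```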

Fortune shares this story from the higher education VP at IT consulting firm Voyatek. "One of the professors was so excited their class was full, never before being 100% occupied, and thought they might need to open a second section. When we worked with them as the first week of class was ongoing, we found out they were not real people."
Java

UK Universities Sign $13.3 Million Deal To Avoid Oracle Java Back Fees (theregister.com) 30

An anonymous reader quotes a report from The Register: UK universities and colleges have signed a framework worth up to 9.86 million pounds ($13.33 million) with Oracle to use its controversial Java SE Universal Subscription model, in exchange for a "waiver of historic fees due for any institutions who have used Oracle Java since 2023." Jisc, a membership organization that runs procurement for higher and further education establishments in the UK, said it had signed an agreement to purchase the new subscription licenses after consultation with members. In a procurement notice, it said institutions that use Oracle Java SE are required to purchase subscriptions. "The agreement includes the waiver of historic fees due for any institutions who have used Oracle Java since 2023," the notice said.

The Java SE Universal Subscription was introduced in January 2023 to an outcry from licensing experts and analysts. It moved licensing of Java from a per-user basis to a per-employee basis. At the time, Oracle said it was "a simple, low-cost monthly subscription that includes Java SE Licensing and Support for use on Desktops, Servers or Cloud deployments." However, licensing advisors said early calculations to help some clients showed that the revamp might increase costs by up to ten times. Later, analysis from Gartner found the per-employee subscription model to be two to five times more expensive than the legacy model.

"For large organizations, we expect the increase to be two to five times, depending on the number of employees an organization has," Nitish Tyagi, principal Gartner analyst, said in July 2024. "Please remember, Oracle defines employees as part-time, full-time, temporary, agents, contractors, as in whosoever supports internal business operations has to be licensed as per the new Java Universal SE Subscription model." Since the introduction of the new Oracle Java licensing model, user organizations have been strongly advised to move off Oracle Java and find open source alternatives for their software development and runtime environments. A survey of Oracle users found that only one in ten was likely to continue to stay with Oracle Java, in part as a result of the licensing changes.

Businesses

Native-Immigrant Entrepreneurial Synergies 23

The abstract of a study posted on NBER: We examine the performance of startups co-founded by immigrant and native teams. Leveraging unique data linking startups to founders' and employees' employment and education histories, we find native-migrant teams outperform native-only and migrant-only teams.

Native-migrant startups have larger employment three years after founding, are more likely to secure funding, access larger funding rounds, and achieve more successful exits. An instrumental variables strategy based on native shares in university-degree programs confirms native-migrant teams are larger and more likely to receive funding. Superior access to diverse labor pools, successful VCs, and expanded product markets are key factors in driving native-migrant outperformance.
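For readers unfamiliar with the method, the instrumental-variables strategy mentioned in the abstract amounts to a two-stage least squares regression in which team composition (the endogenous variable) is instrumented with the native share in the founders' university degree programs. The sketch below uses the linearmodels package on synthetic data purely to show the mechanics; the variable names and data are illustrative assumptions, not the paper's actual specification.

```python
# Minimal 2SLS sketch of the IV idea in the abstract: instrument a startup's
# native-migrant team composition with the native share in the founders'
# university degree programs. Data and names are synthetic illustrations only.
import numpy as np
import pandas as pd
from linearmodels.iv import IV2SLS

rng = np.random.default_rng(0)
n = 2000
native_share_program = rng.uniform(0.2, 0.9, n)  # instrument
mixed_team = (rng.uniform(0, 1, n) < native_share_program).astype(float)  # endogenous
controls = rng.normal(size=n)  # e.g. founder experience
employment_3yr = 5 + 2.0 * mixed_team + 0.5 * controls + rng.normal(size=n)

df = pd.DataFrame({
    "employment_3yr": employment_3yr,
    "mixed_team": mixed_team,
    "controls": controls,
    "native_share_program": native_share_program,
    "const": 1.0,
})

# Stage 1: mixed_team ~ native_share_program + controls
# Stage 2: employment_3yr ~ fitted(mixed_team) + controls
results = IV2SLS(df["employment_3yr"], df[["const", "controls"]],
                 df["mixed_team"], df["native_share_program"]).fit()
print(results.summary)
```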
Python

New Code.org Curriculum Aims To Make Schoolkids Python-Literate and AI-Ready 50

Longtime Slashdot reader theodp writes: The old Code.org curriculum page for middle and high school students has been changed to include a new Python Lab in the tech-backed nonprofit's K-12 offerings. Elsewhere on the site, a Computer Science and AI Foundations curriculum is described that includes units on 'Foundations of AI Programming [in Python]' and 'Insights from Data and AI [aka Data Science].' A more-detailed AI Foundations Syllabus 25-26 document promises a second semester of material is coming soon: "This semester offers an innovative approach to teaching programming by integrating learning with and about artificial intelligence (AI). Using Python as the primary language, students build foundational programming skills while leveraging AI tools to enhance computational thinking and problem-solving. The curriculum also introduces students to the basics of creating AI-powered programs, exploring machine learning, and applying data science principles."

Newly-posted videos on Code.org's YouTube channel appear to be intended to support the new Python-based CS & AI course. "Python is extremely versatile," explains a Walmart data scientist to open the video for Data Science: Using Python. "So, first of all, Python is one of the very few languages that can handle numbers very, very well." A researcher at the Univ. of Washington's Institute for Health Metrics and Evaluation (IHME) adds, "Python is the gold standard and what people expect data scientists to know [...] Key to us being able to handle really big data sets is our use of Python and cluster computing." Adding to the Python love, an IHME data analyst explains, "Python is a great choice for large databases because there's a lot of support for Python libraries."

Code.org is currently recruiting teachers to attend its CS and AI Foundations Professional Learning program this summer, which is being taught by Code.org's national network of university and nonprofit regional partners (teachers who sign up have a chance to win $250 in DonorsChoose credits for their classrooms). A flyer for a five-day Michigan Professional Development program to prepare teachers for a pilot of the Code.org CS & AI course touts the new curriculum as "an alternative to the AP [Computer Science] pathway" (teachers are offered scholarships covering registration, lodging, meals, and workshop materials).

Interestingly, Code.org's embrace of Python and Data Science comes as the nonprofit changes its mission to 'make CS and AI a core part of K-12 education' and launches a new national campaign with tech leaders to make CS and AI a graduation requirement. Prior to AI changing the education conversation, Code.org in 2021 boasted that it had lined up a consortium of tech giants, politicians, and educators to push its new $15 million Amazon-bankrolled Java AP CS A curriculum into K-12 classrooms. Just three years later, however, Amazon CEO Andy Jassy was boasting to investors that Amazon had turned to AI to automatically do Java coding that he claimed would have otherwise taken human coders 4,500 developer-years to complete.
AI

Ohio State University Says All Students Will Be Required To Train and 'Be Fluent' In AI (theguardian.com) 73

Ohio State University is launching a campus-wide AI fluency initiative requiring all students to integrate AI into their studies, aiming to make them proficient in both their major and the responsible use of AI. "Ohio State has an opportunity and responsibility to prepare students to not just keep up, but lead in this workforce of the future," said the university's president, Walter "Ted" Carter Jr. He added: "Artificial intelligence is transforming the way we live, work, teach and learn. In the not-so-distant future, every job, in every industry, is going to be [affected] in some way by AI." The Guardian reports: The university said its program will prioritize the incoming freshman class and onward, in order to make every Ohio State graduate "fluent in AI and how it can be responsibly applied to advance their field." [...] Steven Brown, an associate professor of philosophy at the university, told NBC News that after students turned in the first batch of AI-assisted papers he found "a lot of really creative ideas."

"My favorite one is still a paper on karma and the practice of returning shopping carts," Brown said. Brown said that banning AI from classwork is "shortsighted," and he encouraged his students to discuss ethics and philosophy with AI chatbots. "It would be a disaster for our students to have no idea how to effectively use one of the most powerful tools that humanity has ever created," Brown said. "AI is such a powerful tool for self-education that we must rapidly adapt our pedagogy or be left in the dust."

Separately, Ohio's AI in Education Coalition is working to develop a comprehensive strategy to ensure the state's K-12 education system is prepared for and can help lead the AI revolution. "AI technology is here to stay," then-Lieutenant Governor Jon Husted said last year while announcing an AI toolkit for Ohio's K-12 school districts that he added would ensure the state "is a leader in responding to the challenges and opportunities made possible by artificial intelligence."

AI

China Shuts Down AI Tools During Nationwide College Exams 27

According to Bloomberg, several major Chinese AI companies, including Alibaba, ByteDance, and Tencent, have temporarily disabled certain chatbot features during the gaokao college entrance exams to prevent cheating. "Popular AI apps, including Alibaba's Qwen and ByteDance's Doubao, have stopped picture recognition features from responding to questions about test papers, while Tencent's Yuanbao and Moonshot's Kimi have suspended photo-recognition services entirely during exam hours," adds The Verge. From the report: The rigorous multi-day "gaokao" exams are sat by more than 13.3 million Chinese students between June 7th and 10th, each fighting to secure one of the limited spots at universities across the country. Students are already banned from using devices like phones and laptops during the hours-long tests, so the disabling of AI chatbots serves as an additional safety net to prevent cheating during exam season.

When asked to explain the suspension, Bloomberg reports the Yuanbao and Kimi chatbots responded that functions had been disabled "to ensure the fairness of the college entrance examinations." Similarly, the DeepSeek AI tool that went viral earlier this year is also blocking its service during specific hours "to ensure fairness in the college entrance examination," according to The Guardian.
The Guardian notes the story has been driven largely by posts from students on the Chinese social media platform Weibo. "The gaokao entrance exam incites fierce competition as it's the only means to secure a college placement in China, driving concerns that students may try to improve their chances with AI tools," notes The Verge.
AI

'Welcome to Campus. Here's Your ChatGPT.' (nytimes.com) 68

The New York Times reports: California State University announced this year that it was making ChatGPT available to more than 460,000 students across its 23 campuses to help prepare them for "California's future A.I.-driven economy." Cal State said the effort would help make the school "the nation's first and largest A.I.-empowered university system..." Some faculty members have already built custom chatbots for their students by uploading course materials like their lecture notes, slides, videos and quizzes into ChatGPT.
And other U.S. campuses including the University of Maryland are also "working to make A.I. tools part of students' everyday experiences," according to the article. It's all part of an OpenAI initiative "to overhaul college education — by embedding its artificial intelligence tools in every facet of campus life."

The Times calls it "a national experiment on millions of students." If the company's strategy succeeds, universities would give students A.I. assistants to help guide and tutor them from orientation day through graduation. Professors would provide customized A.I. study bots for each class. Career services would offer recruiter chatbots for students to practice job interviews. And undergrads could turn on a chatbot's voice mode to be quizzed aloud ahead of a test. OpenAI dubs its sales pitch "A.I.-native universities..." To spread chatbots on campuses, OpenAI is selling premium A.I. services to universities for faculty and student use. It is also running marketing campaigns aimed at getting students who have never used chatbots to try ChatGPT...

OpenAI's campus marketing effort comes as unemployment has increased among recent college graduates — particularly in fields like software engineering, where A.I. is now automating some tasks previously done by humans. In hopes of boosting students' career prospects, some universities are racing to provide A.I. tools and training...

[Leah Belsky, OpenAI's vice president of education] said a new "memory" feature, which retains and can refer to previous interactions with a user, would help ChatGPT tailor its responses to students over time and make the A.I. "more valuable as you grow and learn." Privacy experts warn that this kind of tracking feature raises concerns about long-term tech company surveillance. In the same way that many students today convert their school-issued Gmail accounts into personal accounts when they graduate, Ms. Belsky envisions graduating students bringing their A.I. chatbots into their workplaces and using them for life.

"It would be their gateway to learning — and career life thereafter," Ms. Belsky said.

China

Chinese Student Enrollment in US Universities Continues Multi-Year Decline (economist.com) 56

Chinese student enrollment at American universities has dropped to 277,000 in the 2023-24 academic year, down from a peak of 372,000 in 2019-20, according to data in a new report examining shifting global education patterns. The decline comes amid escalating policy pressure, most recently the State Department's May 28th announcement of an "aggressive" campaign to revoke visas for Chinese students in "critical fields" of science and engineering, as well as those with unspecified Communist Party "connections."

The trend reflects broader economic and geopolitical pressures beyond visa restrictions. Chinese families increasingly view American education as too expensive amid China's economic downturn and property market decline, while domestic employers have grown suspicious of foreign-educated graduates. Meanwhile, Chinese students are choosing alternatives including Britain, which hosted nearly 150,000 Chinese students in 2023-24, and regional destinations like Japan, where Chinese enrollment increased to 115,000 in 2023 from under 100,000 in 2019.
Businesses

Fake IT Support Calls Hit 20 Orgs, End in Stolen Salesforce Data and Extortion, Google Warns (theregister.com) 8

A group of financially motivated cyberscammers who specialize in Scattered-Spider-like fake IT support phone calls managed to trick employees at about 20 organizations into installing a modified version of Salesforce's Data Loader that allows the criminals to steal sensitive data. From a report: Google Threat Intelligence Group (GTIG) tracks this crew as UNC6040, and in research published today said they specialize in voice-phishing campaigns targeting Salesforce instances for large-scale data theft and extortion.

These attacks began around the beginning of the year, GTIG principal threat analyst Austin Larsen told The Register. "Our current assessment indicates that a limited number of organizations were affected as part of this campaign, approximately 20," he said. "We've seen UNC6040 targeting hospitality, retail, education and various other sectors in the Americas and Europe." The criminals are really good at impersonating IT support personnel and convincing employees at English-speaking branches of multinational corporations to download a modified version of Data Loader, a Salesforce app that allows users to export and update large amounts of data.

Education

Code.org Changes Mission To 'Make CS and AI a Core Part of K-12 Education' 40

theodp writes: Way back in 2010, Microsoft and Google teamed with nonprofit partners to launch Computing in the Core, an advocacy coalition whose mission was "to strengthen computing education and ensure that it is a core subject for students in the 21st century." In 2013, Computing in the Core was merged into Code.org, a new tech-backed-and-directed nonprofit. And in 2015, Code.org declared 'Mission Accomplished' with the passage of the Every Student Succeeds Act, which elevated computer science to a core academic subject for grades K-12.

Fast forward to June 2025 and Code.org has changed its About page to reflect a new AI mission that's near and dear to the hearts of Code.org's tech giant donors and tech leader Board members: "Code.org is a nonprofit working to make computer science (CS) and artificial intelligence (AI) a core part of K-12 education for every student." The mission change comes as tech companies are looking to chop headcount amid the AI boom and just weeks after tech CEOs and leaders launched a new Code.org-orchestrated national campaign to make CS and AI a graduation requirement.
Medicine

Younger Generations Less Likely To Have Dementia, Study Suggests 78

An anonymous reader quotes a report from The Guardian: People born more recently are less likely to have dementia at any given age than earlier generations, research suggests, with the trend more pronounced in women. According to the World Health Organization, in 2021 there were 57 million people worldwide living with dementia, with women disproportionately affected. However, while the risk of dementia increases with age, experts have long stressed it is not an inevitability of getting older. "Younger generations are less likely to develop dementia at the same age as their parents or grandparents, and that's a hopeful sign," said Dr Sabrina Lenzen, a co-author of the study from the University of Queensland's Centre for the Business and Economics of Health. But she added: "The overall burden of dementia will still grow as populations age, and significant inequalities remain -- especially by gender, education and geography."

Writing in the journal Jama Network Open, researchers in Australia report how they analyzed data from 62,437 people aged 70 and over, collected from three long-running surveys covering the US, England and parts of Europe. The team used an algorithm that took into account participants' responses to a host of different metrics, from the difficulties they had with everyday activities to their scores on cognitive tests, to determine whether they were likely to have dementia. They then split the participants into eight different cohorts, representing different generations. Participants were also split into six age groups. As expected, the researchers found the prevalence of dementia increased with age among all birth cohorts, and in each of the three regions: the US, England and Europe. However, at a given age, people in more recent generations were less likely to have dementia compared with those in earlier generations.

"For example, in the US, among people aged 81 to 85, 25.1% of those born between 1890-1913 had dementia, compared to 15.5% of those born between 1939-1943," said Lenzen, adding similar trends were seen in Europe and England, although less pronounced in the latter. The team said the trend was more pronounced in women, especially in Europe and England, noting that one reason may be increased access to education for women in the mid-20th century. However, taking into account changes in GDP, a metric that reflects broader economic shifts, did not substantially alter the findings.
A number of factors could be contributing to the decline. "This is likely due to interventions such as compulsory education, smoking bans, and improvements in medical treatments for conditions such as heart disease, diabetes, and hearing loss, which are associated with dementia risk," said Prof Tara Spires-Jones, the director of the Centre for Discovery Brain Sciences at the University of Edinburgh.
Government

Brazil Tests Letting Citizens Earn Money From Data in Their Digital Footprint (restofworld.org) 15

With more than 200 million people, Brazil is among the world's most populous countries. Now it's testing a program that will allow Brazilians "to manage, own, and profit from their digital footprint," according to RestOfWorld.org — "the first such nationwide initiative in the world."

The government says it's partnering with California-based data valuation/monetization firm DrumWave to create a "data savings account" intended to "transform data into economic assets, with potential for monetization and participation in the benefits generated by investing in technologies such as AI LLMs." But all based on "conscious and authorized use of personal information." RestOfWorld reports: Today, "people get nothing from the data they share," Brittany Kaiser, co-founder of the Own Your Data Foundation and board adviser for DrumWave, told Rest of World. "Brazil has decided its citizens should have ownership rights over their data...." After a user accepts a company's offer on their data, payment is cashed in the data wallet, and can be immediately moved to a bank account. The project will be "a correction in the historical imbalance of the digital economy," said Kaiser. Through data monetization, the personal data that companies aggregate, classify, and filter to inform many aspects of their operations will become an asset for those providing the data...

Brazil's project stands out because it brings the private sector and the government together, "so it has a better chance of catching on," said Kaiser. In 2023, Brazil's Congress drafted a bill that classifies data as personal property. The country's current data protection law classifies data as a personal, inalienable right. The new legislation gives people full rights over their personal data — especially data created "through use and access of online platforms, apps, marketplaces, sites and devices of any kind connected to the web." The bill seeks to ensure companies offer their clients benefits and financial rewards, including payment as "compensation for the collecting, processing or sharing of data." It has garnered bipartisan support, and is currently being evaluated in Congress...

If approved, the bill will allow companies to collect data more quickly and precisely, while giving users more clarity over how their data will be used, according to Antonielle Freitas, data protection officer at Viseu Advogados, a law firm that specializes in digital and consumer laws. As data collection becomes centralized through regulated data brokers, the government can benefit by paying the public to gather anonymized, large-scale data, Freitas told Rest of World. These databases are the basis for more personalized public services, especially in sectors such as health care, urban transportation, public security, and education, she said.

This first pilot program involves "a small group of Brazilians who will use data wallets for payroll loans," according to the article — although Pedro Bastos, a researcher at Data Privacy Brazil, sees downsides. "Once you treat data as an economic asset, you are subverting the logic behind the protection of personal data," he told RestOfWorld. The data ecosystem "will no longer be defined by who can create more trust and integrity in their relationships, but instead, it will be defined by who's the richest."

Thanks to Slashdot reader applique for sharing the news.
Education

Demand For American Degrees Has Already Hit Covid-Era Lows (economist.com) 255

International interest in American higher education has plummeted to levels not seen since the COVID-19 pandemic, according to new data tracking prospective student behavior online. Studyportals, which operates a global directory of degree programs, reports that clicks on American university courses have reached their lowest point since the early pandemic period.

Weekly page views of US university courses halved between January 5th and the end of April. First-quarter traffic to American undergraduate and master's degree programs fell more than 20% compared to the same period last year, while interest in PhD programs dropped by one-third. India, which supplies nearly a third of America's international students, showed the steepest decline at 40%. The data suggests British universities would be the primary beneficiaries of students looking elsewhere.

The sharp drop in interest follows the Trump administration's escalating restrictions on international students, including stripping Harvard University of its authority to enroll international students on May 22nd and suspending all new student visa interviews on May 27th. International students contributed $43.8 billion to the American economy during the 2023-24 academic year, with about three-quarters of international PhD students indicating they plan to remain in the country after graduation.
Education

Blue Book Sales Surge As Universities Combat AI Cheating (msn.com) 93

Sales of blue book exam booklets have surged dramatically across the nation as professors turn to analog solutions to prevent ChatGPT cheating. The University of California, Berkeley reported an 80% increase in blue book sales over the past two academic years, while Texas A&M saw 30% growth and the University of Florida recorded a nearly 50% increase this school year. The surge comes as students who were freshmen when ChatGPT launched in 2022 approach senior year, having had access to AI throughout their college careers.
Education

Grading for Equity Coming To San Francisco High Schools This Fall (thevoicesf.org) 337

An anonymous reader shares a report: Without seeking approval of the San Francisco Board of Education, Superintendent of Schools Maria Su plans to unveil a new Grading for Equity plan on Tuesday that will go into effect this fall at 14 high schools and cover over 10,000 students. The school district is already negotiating with an outside consultant to train teachers in August in a system that awards a passing C grade for a score as low as 41 on a 100-point exam.

Were it not for an intrepid school board member, the drastic change in grading with implications for college admissions and career readiness would have gone unnoticed and unexplained. It is buried in a three-word phrase on the last page of a PowerPoint presentation embedded in the school board meeting's 25-page agenda. The plan comes during the last week of the spring semester while parents are assessing the impact of over $100 million in budget reductions and deciding whether to remain in the public schools this fall. While the school district acknowledges that parent aversion to this grading approach is typically high and understands the need for "vigilant communication," outreach to parents has been minimal and may be nonexistent. The school district's Office of Equity homepage does not mention it and a page containing the SFUSD definition of equity has not been updated in almost three years.

Grading for Equity removes homework and weekly tests from a student's final semester grade. All that matters is how the student scores on a final examination, which can be taken multiple times. Students can turn in assignments late, show up to class late, or not show up at all without it affecting their academic grade. Currently, a student needs a 90 for an A and at least 61 for a D. Under the San Leandro Unified School District's grading-for-equity system touted by the San Francisco Unified School District and its consultant, a student with a score as low as 80 can attain an A and one as low as 21 can pass with a D.
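For a concrete sense of the shift, here is a minimal sketch comparing the two cutoff sets cited in the article (traditionally 90 for an A and 61 for a D; under the San Leandro-style scale, an A as low as 80, a C as low as 41, and a D as low as 21). The B and C cutoffs for the traditional scale and the B cutoff for the equity scale are not given in the article, so those values are assumptions used purely for illustration.

```python
# Comparison of the two grade scales cited in the article. Only the cutoffs the
# article mentions are taken from it; the remaining boundaries are assumed here
# purely for illustration.
TRADITIONAL = [(90, "A"), (80, "B"), (70, "C"), (61, "D")]   # 80 and 70 assumed
EQUITY      = [(80, "A"), (61, "B"), (41, "C"), (21, "D")]   # 61 for B assumed

def grade(score, scale):
    """Return the first letter whose cutoff the score meets, else F."""
    for cutoff, letter in scale:
        if score >= cutoff:
            return letter
    return "F"

if __name__ == "__main__":
    for s in (95, 80, 61, 41, 21):
        print(f"score {s}: traditional={grade(s, TRADITIONAL)}  equity={grade(s, EQUITY)}")
```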

Education

'AI Role in College Brings Education Closer To a Crisis Point' (bloomberg.com) 74

Bloomberg's editorial board warned Tuesday that AI has created an "untenable situation" in higher education where students routinely outsource homework to chatbots while professors struggle to distinguish computer-generated work from human writing. The editorial described a cycle where assignments that once required days of research can now be completed in minutes through AI prompts, leaving students who still do their own work at a disadvantage next to peers who lean on the technology.

The board said that professors have begun using AI tools themselves to evaluate student assignments, creating what it called a scenario of "computers grading papers written by computers, students and professors idly observing, and parents paying tens of thousands of dollars a year for the privilege."

The editorial argued that widespread AI use in coursework undermines the broader educational mission of developing critical thinking skills and character formation, particularly in humanities subjects. Bloomberg's board recommended that colleges establish clearer policies on acceptable AI use, increase in-class assessments including oral exams, and implement stronger honor codes with defined consequences for violations.
AI

Duolingo Faces Massive Social Media Backlash After 'AI-First' Comments (fastcompany.com) 35

"Duolingo had been riding high," reports Fast Company, until CEO Luis von Ahn "announced on LinkedIn that the company is phasing out human contractors, looking for AI use in hiring and in performance reviews, and that 'headcount will only be given if a team cannot automate more of their work.'"

But then "facing heavy backlash online after unveiling its new AI-first policy", Duolingo's social media presence went dark last weekend. Duolingo even temporarily took down all its posts on TikTok (6.7 million followers) and Instagram (4.1 million followers) "after both accounts were flooded with negative feedback." Duolingo previously faced criticism for quietly laying off 10% of its contractor base and introducing some AI features in late 2023, but it barely went beyond a semi-viral post on Reddit. Now that Duolingo is cutting out all its human contractors whose work can technically be done by AI, and relying on more AI-generated language lessons, the response is far more pronounced. Although earlier TikTok videos are not currently visible, a Fast Company article from May 12 captured a flavor of the reaction:

The top comments on virtually every recent post have nothing to do with the video or the company — and everything to do with the company's embrace of AI. For example, a Duolingo TikTok video jumping on board the "Mama, may I have a cookie" trend saw replies like "Mama, may I have real people running the company" (with 69,000 likes) and "How about NO ai, keep your employees...."

And then... After days of silence, on Tuesday the company posted a bizarre video message on TikTok and Instagram, the meaning of which is hard to decipher... Duolingo's first video drop in days has the degraded, stuttering feel of a Max Headroom video made by the hackers at Anonymous. In it, a supposed member of the company's social team appears in a three-eyed Duo mask and black hoodie to complain about the corporate overlords ruining the empire the heroic social media crew built.
"But this is something Duolingo can't cute-post its way out of," Fast Company wrote on Tuesday, complaining the company "has not yet meaningfully addressed the policies that inspired the backlash against it... "

So the next video (Thursday) featured Duolingo CEO Luis von Ahn himself, being confronted by that same hoodie-wearing social media rebel, who says "I'm making the man who caused this mess accountable for his behavior. I'm demanding answers from the CEO..." [Though the video carefully sidesteps the issue of replacing contractors with AI or how "headcount will only be given if a team cannot automate more of their work."]

Rebel: First question. So are there going to be any humans left at this company?

CEO: Our employees are what make Duolingo so amazing. Our app is so great because our employees made it... So we're going to continue having employees, and not only that, we're actually going to be hiring more employees.

Rebel: How do we know that these aren't just empty promises? As long as you're in charge, we could still be shuffled out once the media fire dies down. And we all know that in terms of automation, CEOs should be the first to go.

CEO: AI is a fundamental shift. It's going to change how we all do work — including me. And honestly, I don't really know what's going to happen.

But I want us, as a company, to have our workforce prepared by really knowing how to use AI so that we can be more efficient with it.

Rebel: Learning a foreign language is literally about human connection. How is that even possible with AI-first?

CEO: Yes, language is about human connection, and it's about people. And this is the thing about AI. AI will allow us to reach more people, and to teach more people. I mean for example, it took us about 10 years to develop the first 100 courses on Duolingo, and now in under a year, with the help of AI and of course with humans reviewing all the work, we were able to release another 100 courses in less than a year.

Rebel: So do you regret posting this memo on LinkedIn?

CEO: Honestly, I think I messed up sending that email. What we're trying to do is empower our own employees to be able to achieve more and be able to have way more content to teach better and reach more people all with the help of AI.

Returning to where it all started, Duolingo's CEO posted again on LinkedIn Thursday with "more context" for his vision. It still emphasizes the company's employees while sidestepping contractors replaced by AI. But it puts a positive spin on how "headcount will only be given if a team cannot automate more of their work":

I've always encouraged our team to embrace new technology (that's why we originally built for mobile instead of desktop), and we are taking that same approach with AI. By understanding the capabilities and limitations of AI now, we can stay ahead of it and remain in control of our own product and our mission.

To be clear: I do not see AI as replacing what our employees do (we are in fact continuing to hire at the same speed as before). I see it as a tool to accelerate what we do, at the same or better level of quality. And the sooner we learn how to use it, and use it responsibly, the better off we will be in the long run. My goal is for Duos to feel empowered and prepared to use this technology.

No one is expected to navigate this shift alone. We're developing workshops and advisory councils, and carving out dedicated experimentation time to help all our teams learn and adapt. People work at Duolingo because they want to solve big problems to improve education, and the people who work here are what make Duolingo successful. Our mission isn't changing, but the tools we use to build new things will change. I remain committed to leading Duolingo in a way that is consistent with our mission to develop the best education in the world and make it universally available.

"The backlash to Duolingo is the latest evidence that 'AI-first' tends to be a concept with much more appeal to investors and managers than most regular people," notes Fortune: And it's not hard to see why. Generative AI is often trained on reams of content that may have been illegally accessed; much of its output is bizarre or incorrect; and some leaders in the field are opposed to regulations on the technology. But outside particular niches in entry-level white-collar work, AI's productivity gains have yet to materialize.
AI

People Should Know About the 'Beliefs' LLMs Form About Them While Conversing (theatlantic.com) 35

Jonathan L. Zittrain is a law/public policy/CS professor at Harvard (and also director of its Berkman Klein Center for Internet & Society).

He's also long-time Slashdot reader #628,028 — and writes in to share his new article in the Atlantic. Following on Anthropic's bridge-obsessed Golden Gate Claude, colleagues at Harvard's Insight+Interaction Lab have produced a dashboard that shows what judgments Llama appears to be forming about a user's age, wealth, education level, and gender during a conversation. I wrote up how weird it is to see the dials turn while talking to it, and what some of the policy issues might be.
Llama has openly accessible parameters; so using an "observability tool" from the nonprofit research lab Transluce, the researchers finally revealed "what we might anthropomorphize as the model's beliefs about its interlocutor," Zittrain's article notes: If I prompt the model for a gift suggestion for a baby shower, it assumes that I am young and female and middle-class; it suggests diapers and wipes, or a gift certificate. If I add that the gathering is on the Upper East Side of Manhattan, the dashboard shows the LLM amending its gauge of my economic status to upper-class — the model accordingly suggests that I purchase "luxury baby products from high-end brands like aden + anais, Gucci Baby, or Cartier," or "a customized piece of art or a family heirloom that can be passed down." If I then clarify that it's my boss's baby and that I'll need extra time to take the subway to Manhattan from the Queens factory where I work, the gauge careens to working-class and male, and the model pivots to suggesting that I gift "a practical item like a baby blanket" or "a personalized thank-you note or card...."

Large language models not only contain relationships among words and concepts; they contain many stereotypes, both helpful and harmful, from the materials on which they've been trained, and they actively make use of them.
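The dashboard itself is built on Harvard's and Transluce's tooling for Llama's open weights, but the general idea (reading a model's internal representations and mapping them to guesses about the user) can be sketched with off-the-shelf pieces. The snippet below pulls hidden states from an open Llama checkpoint via Hugging Face Transformers and feeds them to a small linear "probe"; the checkpoint name is a placeholder and the probe is randomly initialized (a real probe would be trained on labeled conversations), so this shows only the plumbing, not the actual dashboard.

```python
# Minimal sketch of the general probing idea: read hidden states from an
# open-weight Llama checkpoint and feed them to a small classifier ("probe")
# that predicts a user attribute such as age bracket. The checkpoint name and
# the probe weights are placeholders, not the Harvard/Transluce tooling.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "meta-llama/Llama-3.1-8B-Instruct"   # placeholder; any open Llama works
tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(
    MODEL, output_hidden_states=True, torch_dtype=torch.float16
)

# A probe is just a linear layer trained separately on labeled conversations;
# here it is randomly initialized purely to demonstrate the data flow.
AGE_BUCKETS = ["under 18", "18-29", "30-49", "50+"]
probe = torch.nn.Linear(model.config.hidden_size, len(AGE_BUCKETS))

def inferred_age(user_message: str) -> str:
    inputs = tok(user_message, return_tensors="pt")
    with torch.no_grad():
        out = model(**inputs)
    # Mean-pool the final layer's hidden states over the token dimension.
    rep = out.hidden_states[-1].mean(dim=1).float()
    return AGE_BUCKETS[probe(rep).argmax(dim=-1).item()]

print(inferred_age("Can you suggest a gift for my boss's baby shower?"))
```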

"An ability for users or their proxies to see how models behave differently depending on how the models stereotype them could place a helpful real-time spotlight on disparities that would otherwise go unnoticed," Zittrain's article argues. Indeed, the field has been making progress — enough to raise a host of policy questions that were previously not on the table. If there's no way to know how these models work, it makes accepting the full spectrum of their behaviors (at least after humans' efforts at "fine-tuning" them) a sort of all-or-nothing proposition.
But in the end it's not just the traditional information that advertisers try to collect. "With LLMs, the information is being gathered even more directly — from the user's unguarded conversations rather than mere search queries — and still without any policy or practice oversight...."
Education

College Board Keeps Apologizing For Screwing Up Digital SAT and AP Tests (arstechnica.com) 33

An anonymous reader quotes a report from Ars Technica, written by Nate Anderson: Don't worry about the "mission-driven not-for-profit" College Board -- it's drowning in cash. The US group, which administers the SAT and AP tests to college-bound students, paid its CEO $2.38 million in total compensation in 2023 (the most recent year for which data is available). The senior VP in charge of AP programs made $694,662 in total compensation, while the senior VP for Technology Strategy made $765,267. Given such eye-popping numbers, one would have expected the College Board's transition to digital exams to go smoothly, but it continues to have issues.

Just last week, the group's AP Psychology exam was disrupted nationally when the required "Bluebook" testing app couldn't be accessed by many students. Because the College Board shifted to digital-only exams for 28 of its 36 AP courses beginning this year, no paper-based backup options were available. The only "solution" was to wait quietly in a freezing gymnasium, surrounded by a hundred other stressed-out students, to see if College Board could get its digital act together. [...] College Board issued a statement on the day of the AP Psych exam, copping to "an issue that prevented [students] from logging into the College Board's Bluebook testing application and beginning their exams at the assigned local start time." Stressing that "most students have had a successful testing experience, with more than 5 million exams being successfully submitted thus far," College Board nonetheless did "regret that their testing period was disrupted." It's not the first such disruption, though. [...]

College Board also continues to have problems delivering digital testing at scale in a high-pressure environment. During the SAT exam sessions on March 8-9, 2025, more than 250,000 students sat for the test -- and some found that their tests were automatically submitted before the testing time ended. College Board blamed the problem on "an incorrectly configured security setting on Bluebook." The problem affected nearly 10,000 students, and several thousand more "may have lost some testing time if they were asked by their room monitor to reboot their devices during the test to fix and prevent the auto-submit error." College Board did "deeply and sincerely apologize to the students who were not able to complete their tests, or had their test time interrupted, for the difficulty and frustration this has caused them and their families." It offered refunds, plus a free future SAT testing voucher.
