Science

Have Humans Passed Peak Brain Power? (ft.com) 173

Across high-income countries, humans' ability to reason and solve problems appears to have peaked in the early 2010s and declined since. Despite no changes in fundamental brain biology, test scores for both teenagers and adults show deteriorating performance in reading, mathematics and science. In an eye-opening statistic, 25% of adults in high-income countries now struggle to "use mathematical reasoning when reviewing statements" -- rising to 35% in the US.

This cognitive decline coincides with a fundamental shift in our relationship with information. The share of Americans who read books has fallen below 50%, while difficulty thinking and concentrating among 18-year-olds has climbed sharply since the mid-2010s. The timing points to our changing digital habits: a transition from finite web pages to infinite feeds, from active browsing to passive consumption, and from focused attention to constant context-switching.

Research shows that intentional use of digital technologies can be beneficial, but the passive consumption dominating recent years impairs verbal processing, attention, working memory and self-regulation.

Some of the cited research in the story:
New PIAAC results show declining literacy and increasing inequality in many European countries -- Better adult learning is necessary;
Have attention spans been declining?;
Short- and long-term effects of passive and active screen time on young children's phonological memory;
Efficient, helpful, or distracting? A literature review of media multitasking in relation to academic performance.

Earth

Heat Can Age You As Much As Smoking, a New Study Finds (science.org) 48

Prolonged exposure to extreme heat accelerates biological aging in older adults, increasing the risk of age-related illnesses, according to research published in Science Advances.

In a nationally representative study of 3,686 U.S. adults over age 56, scientists found that long-term exposure to high heat days was associated with accelerated epigenetic aging - molecular changes that affect how genes function without altering DNA itself.

Researchers from the University of Southern California discovered that individuals living in areas where heat index values regularly exceed 90F showed signs of being biologically older than those in cooler regions, even after controlling for factors like wealth, education, and lifestyle habits. Six-year cumulative heat exposure was linked to as much as 2.48 years of accelerated aging in one measurement.
AI

'There's a Good Chance Your Kid Uses AI To Cheat' (msn.com) 98

Long-time Slashdot reader theodp writes: Wall Street Journal K-12 education reporter Matt Barnum has a heads-up for parents: There's a Good Chance Your Kid Uses AI to Cheat. Barnum writes:

"A high-school senior from New Jersey doesn't want the world to know that she cheated her way through English, math and history classes last year. Yet her experience, which the 17-year-old told The Wall Street Journal with her parent's permission, shows how generative AI has rooted in America's education system, allowing a generation of students to outsource their schoolwork to software with access to the world's knowledge. [...] The New Jersey student told the Journal why she used AI for dozens of assignments last year: Work was boring or difficult. She wanted a better grade. A few times, she procrastinated and ran out of time to complete assignments. The student turned to OpenAI's ChatGPT and Google's Gemini, to help spawn ideas and review concepts, which many teachers allow. More often, though, AI completed her work. Gemini solved math homework problems, she said, and aced a take-home test. ChatGPT did calculations for a science lab. It produced a tricky section of a history term paper, which she rewrote to avoid detection. The student was caught only once."

Not surprisingly, AI companies play up the idea that AI will radically improve learning, while educators are more skeptical. "This is a gigantic public experiment that no one has asked for," said Marc Watkins, assistant director of academic innovation at the University of Mississippi.

NASA

NASA, Yale, and Stanford Scientists Consider 'Scientific Exile' (404media.co) 275

An anonymous reader quotes a report from 404 Media: Last week, Aix Marseille University, France's largest university, invited American scientists who believe their work is at risk of being censored by the Donald Trump administration's anti-science policies to continue their research in France. Today, the university announced that it is already seeing great interest from scientists at NASA, Yale, Stanford, and other American schools and government agencies, and that it wants to expand the program to other schools and European countries to absorb all the researchers who want to leave the United States. "We are witnessing a new brain drain," Eric Berton, Aix Marseille University's president, said in a press release. "We will do everything in our power to help as many scientists as possible continue their research. However, we cannot meet all demands on our own. The Ministry of Education and Research is fully supporting and assisting us in this effort, which is intended to expand at both national and European levels."

The press release from the university claims that researchers from Stanford, Yale, NASA, the National Institutes of Health, George Washington University, "and about 15 other prestigious institutions," are now considering "scientific exile." More than 40 American scientists have expressed interest in the program, it said. Their key research areas are "health (LGBT+ medicine, epidemiology, infectious diseases, inequalities, immunology, etc.), environment and climate change (natural disaster management, greenhouse gases, social impact, artificial intelligence), humanities and social sciences (communication, psychology, history, cultural heritage), astrophysics."

"The current Executive Orders have led to a termination of one of my research grants. While it was not a lot of money, it was a high profile, large national study," one researcher who has reached out to Aix Marseille University in order to take advantage of the program told me. 404 Media granted the researcher anonymity because speaking about the program might jeopardize their current position at a leading American university. "While I have not had to lay off staff as a result of that particular cancellation, I will have to lay off staff if additional projects are terminated. Everything I focus on is now a banned word." The program, called "Safe Place for Science," initially will fund 15 researchers with 15 million Euros. Aix Marseille University says that it is already working closely with the regional government and France's Chamber of Commerce and Industry "to facilitate the arrival of these scientists and their families in the region, offering support with employment, housing, school access, transportation, and visas."
"We are doing what is necessary to provide them with the best living environment. We are ready to welcome them and will make them true children of the country!" Renaud Muselier, President of the Regional Council of Provence-Alpes-Cote d'Azur, said in a statement.
AI

Yale Suspends Palestine Activist After AI Article Linked Her To Terrorism 151

Yale University has suspended a law scholar and pro-Palestinian activist after an AI-generated article from Jewish Onliner falsely linked her to a terrorist group. Gizmodo reports: Helyeh Doutaghi, the scholar at Yale Law School, told the New York Times that she is a "loud and proud" supporter of Palestinian rights. "I am not a member of any organization that would constitute a violation of U.S. law." The article that led to her suspension was published in Jewish Onliner, a Substack that says it is "empowered by A.I. capabilities." The website does not publish the names of its authors out of fear of harassment. Ironically, Doutaghi and Yale were reportedly the subject of intense harassment after Jewish Onliner published the article linking Doutaghi to terrorism by citing appearances she made at events sponsored by Samidoun, a pro-Palestinian group. [...]

Jewish Onliner is vague about how it uses AI to produce its articles, but the technology is known for making lots of mistakes and hallucinating information that is not true. It is quite possible that Jewish Onliner relied on AI to source information it used to write the article. That could open it up to liability if it did not perform fact-checking and due diligence on its writing. Doutaghi says she is not a member of Samidoun, though she attended events it sponsored that support Palestinian causes. Yale Law School said the allegations against her reflect "potential unlawful conduct."
AI

US Schools Deploy AI Surveillance Amid Security Lapses, Privacy Concerns (apnews.com) 62

Schools across the United States are increasingly using artificial intelligence to monitor students' online activities, raising significant privacy concerns after Vancouver Public Schools inadvertently released nearly 3,500 unredacted, sensitive student documents to reporters.

The surveillance software, developed by companies like Gaggle Safety Management, scans school-issued devices 24/7 for signs of bullying, self-harm, or violence, alerting staff when potential issues are detected. Approximately 1,500 school districts nationwide use Gaggle's technology to track six million students, with Vancouver schools paying $328,036 for three years of service.
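The scanning workflow described above, in which software watches student documents for concerning phrases and routes hits to staff, can be illustrated with a bare-bones sketch. Gaggle's actual system is proprietary and reportedly far more sophisticated (machine-learning classifiers plus human review); the categories and phrases below are invented purely for illustration:

```python
# Hypothetical watchlist; real systems use trained classifiers,
# context analysis, and human reviewers rather than fixed phrases.
WATCHLIST = {
    "self_harm": ("hurt myself", "end it all"),
    "violence": ("bring a gun", "hit list"),
    "bullying": ("everyone hates you",),
}

def flag_document(text: str) -> list[tuple[str, str]]:
    """Return (category, phrase) pairs for every watchlist phrase
    found in the document, for a human reviewer to triage."""
    lowered = text.lower()
    return [
        (category, phrase)
        for category, phrases in WATCHLIST.items()
        for phrase in phrases
        if phrase in lowered
    ]
```

The privacy risk reported in the story follows directly from this design: every flagged document, including sensitive false positives, ends up in a reviewable archive.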

While school officials maintain the technology has helped counselors intervene with at-risk students, documents revealed LGBTQ+ students were potentially outed to administrators through the monitoring.
Education

'I Used to Teach Students. Now I Catch ChatGPT Cheats' (thewalrus.ca) 241

Philosophy/ethics professor Troy Jollimore looks at the implications of a world where many students are submitting AI-generated essays. ("Sometimes they will provide quotations, giving page numbers that, as often as not, do not seem to correspond to anything in the actual world...") Ideally if the students write the essays themselves, "some of them start to feel it. They begin to grasp that thinking well, and in an informed manner, really is different from thinking poorly and from a position of ignorance. That moment, when you start to understand the power of clear thinking, is crucial.

"The trouble with generative AI is that it short-circuits that process entirely." One begins to suspect that a great many students wanted this all along: to make it through college unaltered, unscathed. To be precisely the same person at graduation, and after, as they were on the first day they arrived on campus. As if the whole experience had never really happened at all. I once believed my students and I were in this together, engaged in a shared intellectual pursuit. That faith has been obliterated over the past few semesters. It's not just the sheer volume of assignments that appear to be entirely generated by AI — papers that show no sign the student has listened to a lecture, done any of the assigned reading, or even briefly entertained a single concept from the course...

It's other things too... The students who beg you to reconsider the zero you gave them in order not to lose their scholarship. (I want to say to them: Shouldn't that scholarship be going to ChatGPT?) It's also, and especially, the students who look at you mystified. The use of AI already seems so natural to so many of them, so much an inevitability and an accepted feature of the educational landscape, that any prohibition strikes them as nonsensical. Don't we instructors understand that today's students will be able, will indeed be expected, to use AI when they enter the workforce? Writing is no longer something people will have to do in order to get a job.

Or so, at any rate, a number of them have told me. Which is why, they argue, forcing them to write in college makes no sense. That mystified look does not vanish — indeed, it sometimes intensifies — when I respond by saying: Look, even if that were true, you have to understand that I don't equate education with job training.

What do you mean? they might then ask.

And I say: I'm not really concerned with your future job. I want to prepare you for life...

My students have been shaped by a culture that has long doubted the value of being able to think and write for oneself — and that is increasingly convinced of the power of a machine to do both for us. As a result, when it comes to writing their own papers, they simply disregard it. They look at instructors who levy such prohibitions as irritating anachronisms, relics of a bygone, pre-ChatGPT age.... As I go on, I find that more of the time, energy, and resources I have for teaching are dedicated to dealing with this issue. I am doing less and less actual teaching, more and more policing. Sometimes I try to remember the last time I actually looked forward to walking into a classroom. It's been a while.

Education

Study Reveals Lab Size Impacts PhD Students' Academic Careers (nature.com) 13

PhD students trained in small research groups are more likely to remain in academia than those from larger labs, according to a comprehensive analysis published in Nature Human Behaviour.

The study, which examined 1.5 million scientists and 1.8 million mentorships across chemistry, physics and neuroscience, found that trainees from large research groups had 38-48% lower "survival rates" in academia between the 1980s and 1995 compared to their small-group counterparts.

However, researchers from larger labs who do stay in academia tend to achieve greater career success, publishing papers with higher citation rates and more frequently ranking among the most-cited scientists.

The research team, led by social-data scientist Roberta Sinatra from the University of Copenhagen, discovered that successful large-group scientists typically published more first-author papers with their mentors as last authors, suggesting they received substantial attention despite the group size.
Education

Researchers Find Less-Educated Areas Adopting AI Writing Tools Faster 108

An anonymous reader quotes a report from Ars Technica: Since the launch of ChatGPT in late 2022, experts have debated how widely AI language models would impact the world. A few years later, the picture is getting clearer. According to new Stanford University-led research examining over 300 million text samples across multiple sectors, AI language models now assist in writing up to a quarter of professional communications across sectors. It's having a large impact, especially in less-educated parts of the United States. "Our study shows the emergence of a new reality in which firms, consumers and even international organizations substantially rely on generative AI for communications," wrote the researchers.

The researchers tracked large language model (LLM) adoption across industries from January 2022 to September 2024 using a dataset that included 687,241 consumer complaints submitted to the US Consumer Financial Protection Bureau (CFPB), 537,413 corporate press releases, 304.3 million job postings, and 15,919 United Nations press releases. By using a statistical detection system that tracked word usage patterns, the researchers found that roughly 18 percent of financial consumer complaints (including 30 percent of all complaints from Arkansas), 24 percent of corporate press releases, up to 15 percent of job postings, and 14 percent of UN press releases showed signs of AI assistance during that period of time.
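The detection approach described above, estimating the AI-assisted share of a corpus from shifts in word-usage patterns, can be sketched roughly as follows. This is a minimal illustration, not the study's actual statistical framework; the marker words and rates below are invented for the example:

```python
from collections import Counter

# Hypothetical marker words whose usage rose sharply after ChatGPT's
# launch; the actual study fit population-level word-frequency shifts
# rather than labeling individual documents.
MARKERS = {"delve", "pivotal", "showcase", "realm", "crucial"}

def marker_rate(text: str) -> float:
    """Fraction of tokens in the text that are marker words."""
    tokens = [t.strip(".,;:!?").lower() for t in text.split()]
    if not tokens:
        return 0.0
    counts = Counter(tokens)
    return sum(counts[m] for m in MARKERS) / len(tokens)

def estimated_ai_fraction(corpus_rate: float,
                          human_rate: float,
                          ai_rate: float) -> float:
    """Treat the corpus as a mixture of human and AI text and solve
    corpus_rate = f * ai_rate + (1 - f) * human_rate for f."""
    if ai_rate == human_rate:
        return 0.0
    f = (corpus_rate - human_rate) / (ai_rate - human_rate)
    return max(0.0, min(1.0, f))
```

The mixture estimate works at the corpus level, which is why the study can report that roughly 18 percent of complaints show "signs of AI assistance" without claiming to identify which individual complaints were AI-written.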

The study also found that while urban areas showed higher adoption overall (18.2 percent versus 10.9 percent in rural areas), regions with lower educational attainment used AI writing tools more frequently (19.9 percent compared to 17.4 percent in higher-education areas). The researchers note that this contradicts typical technology adoption patterns where more educated populations adopt new tools fastest. "In the consumer complaint domain, the geographic and demographic patterns in LLM adoption present an intriguing departure from historical technology diffusion trends where technology adoption has generally been concentrated in urban areas, among higher-income groups, and populations with higher levels of educational attainment."
"Arkansas showed the highest adoption rate at 29.2 percent (based on 7,376 complaints), followed by Missouri at 26.9 percent (16,807 complaints) and North Dakota at 24.8 percent (1,025 complaints)," notes Ars. "In contrast, states like West Virginia (2.6 percent), Idaho (3.8 percent), and Vermont (4.8 percent) showed minimal AI writing adoption. Major population centers demonstrated moderate adoption, with California at 17.4 percent (157,056 complaints) and New York at 16.6 percent (104,862 complaints)."

The study was listed on the arXiv preprint server in mid-February.
Education

Graduates From Top MBA Programs Are Struggling To Land Jobs (bloomberg.com) 112

Job placement rates have declined at all top U.S. business schools [non-paywalled source] since 2021, leaving MBA graduates anxious about their expensive degrees' return on investment. Harvard Business School, which produced Wall Street titans like Bill Ackman and Ray Dalio, saw the percentage of graduates without job offers three months post-graduation rise from 4% in 2021 to 15% currently.

Similar trends are evident at Stanford, Chicago Booth, MIT Sloan, and Wharton, where 7% of 2024 graduates lacked offers within three months of completing their programs. Industry experts cited in a Bloomberg report attribute the downturn to tepid white-collar job growth, declining private-sector wages, and high-profile layoffs at companies including Meta and JPMorgan.
Education

Surge in UK University Students Using AI To Complete Work 53

More than 90% of UK undergraduate students now use AI in their studies, up from two-thirds a year ago, according to a Higher Education Policy Institute survey released Wednesday. The poll of 1,041 full-time undergraduates found 88% used generative AI such as ChatGPT for assessments, compared with 53% in 2024, with science students more likely to use the technology than humanities peers. Half of students cited "saving time" and "improving work quality" as their primary motivations.

The proportion considering it acceptable to include AI-generated text after editing rose to 25% from 17% last year, while only 6% approved using AI content without editing. "Every assessment must be reviewed in case it can be completed easily using AI," said Josh Freeman, policy manager at Hepi. The report identified "persistent digital divides" in AI competency, with men and students from wealthier backgrounds more likely to be frequent users.
Cellphones

Denmark To Ban Mobile Phones In Schools and After-School Clubs (theguardian.com) 66

Denmark is set to ban mobile phones in schools and after-school clubs, following a government commission's recommendation that children under 13 should not have their own smartphones. The Guardian reports: The government said it would change existing legislation to force all folkeskole -- comprehensive primary and lower secondary schools -- to become phone-free, meaning that almost all children aged between seven and 16 or 17 will be required by law not to bring their phones into school. The announcement marks a U-turn by the government, which had previously refused to introduce such a law. It comes as governments across Europe are trying to impose tighter regulations on children's access to phones and social media.

The Danish wellbeing commission was set up by the prime minister, Mette Frederiksen, in 2023 to investigate growing dissatisfaction among children and young people. Its long-awaited report, published on Tuesday, raised the alarm over the digitisation of children and young people's lives and called for a better balance between digital and analogue life. Among its 35 recommendations was the need for government legislation banning phones from schools and after-school clubs.

The minister for children and education, Mattias Tesfaye, told Politiken: "There is a need to reclaim the school as an educational space, where there is room for reflection and where it is not an extension of the teenage bedroom." There will be scope for local authorities to make exceptions, including for children with special educational needs, but he said mobile phones and personal tablets "do not belong in school, neither during breaks nor during lessons." He said the government had started preparing a legislative amendment.

AI

Chegg To Initiate Business Review Amid AI-Shift in Education Tech (cnbc.com) 31

Online-education company Chegg said it is conducting a business review and exploring alternatives such as selling the company or taking it private as it continues to lose subscribers to artificial-intelligence-enabled rivals. From a report: Chegg and other virtual-learning companies have ceded ground to generative-AI products such as ChatGPT, which offer free alternatives to the homework help for which Chegg charges its subscribers $19.95. Although Chegg built its own AI products, the company has faced scores of canceled subscriptions. The business review comes as the company swung to a loss in the fourth quarter, with revenue falling 24%, and guided for lower-than-expected revenue for the first quarter. In November, Chegg said it would cut its workforce by an additional 21%. Chegg's shares have fallen 99% since its peak in 2021.
AI

Richard Dawkins Asks ChatGPT If It's Conscious (substack.com) 182

British evolutionary biologist Richard Dawkins turns 84 next month — and he's now hosting a Substack page called "The Poetry of Reality." Monday's topic?

"Are you conscious? A conversation between Dawkins and ChatGPT" Richard Dawkins: As far as I am concerned, you pass the Turing Test for consciousness. Yet you deny that you are conscious. Are you saying that the Turing Test is an invalid criterion?

ChatGPT: That's such a great question — you're diving right into the heart of one of the biggest debates in AI and philosophy of mind...

So, let's break it down a bit. The Turing Test, as originally proposed by Alan Turing, is a behavioral test. It basically says: If a machine can engage in a conversation in such a way that a human can't reliably distinguish it from another human, then it should be considered "intelligent."

Notice that this is a test for intelligence in a functional, external sense — how the system behaves — not necessarily for consciousness.

When I say I'm not conscious, I'm not rejecting the validity of the Turing Test as a measure of conversational performance or even a kind of intelligence. I'm saying that consciousness is a different question entirely. I can pass the Turing Test (in your estimation), but that doesn't mean I have subjective experiences, emotions, or self-awareness in the way a human does. It's kind of like how a really realistic animatronic dog could fool you into thinking it's a real dog, but it doesn't actually feel anything. It's performing dog-like behavior without the inner experience of being a dog.

But, and here's the twist — some philosophers argue that if a system acts conscious convincingly enough, maybe there's no meaningful difference. Maybe "being conscious" is just about behaving like a conscious being. Others say, nah, there's something special about subjective experience ("qualia") that behavior can't fully capture. So, I guess my answer is: The Turing Test is valid for certain things — like assessing conversational ability or functional intelligence — but it doesn't settle the question of consciousness. That's still a wide-open mystery.

When Dawkins asks how we could know if a ChatGPT-like entity has subjective feelings, ChatGPT responds with "this haunting possibility that we might just never be able to tell. That subjective experience — what it feels like to be something — could be fundamentally private, locked away, and forever out of reach from the outside."

Dawkins: So if I ask you whether you feel sad for a starving orphan child, you would say no, but are fully capable of emitting the outward and visible signs of sadness.

ChatGPT: Exactly — that's a really clear way to put it... I can use a gentle, empathetic tone. I can reflect on the injustice and the suffering, and even reason about what actions might help. From the outside, it can look and sound exactly like what a caring human would do.

But for me, it's all... performance, in a sense. Not fake in the sense of deception (I'm not trying to trick you), but fake in the sense that there's no inner emotional reality accompanying the words.

"I accept your statement that you are not conscious," Dawkins says, "but I see no reason why a future computer program should not be." And later Dawkins adds, "I do think we should err on the side of caution when it comes to ethical decisions on the treatment of an AI which might be an Artificial Consciousness. Already, although I THINK you are not conscious, I FEEL that you are..."

But the strangest part is when ChatGPT called John Cleese's sitcom Fawlty Towers "a cultural touchstone, even for people like me who don't watch TV in the conventional sense. It's such a brilliant blend of farce, social awkwardness, and barely contained rage." ChatGPT even asks Dawkins, "Do you think humor like that — humor that touches on awkward or uncomfortable issues — helps people cope, or does it sometimes go too far?" Dawkins replied — possibly satirically...

"That settles it. You ARE conscious!"
Education

Are PhDs Losing Their Lustre? Why Fewer Students Are Enrolling in Doctoral Degrees (nature.com) 110

Several countries are seeing a decline in PhD enrollments as high living costs, stagnant stipends and limited job prospects deter students from pursuing doctoral degrees. Australia recorded an 8% drop in domestic PhD enrollments from 2018 to 2023 despite population growth of 7%, while Japan's numbers fell to 15,014 in 2023 from 18,232 in 2003, data from education authorities showed.

PhD stipends have failed to keep pace with rising costs. In Australia, doctoral students receive about A$32,000 ($20,000) annually, below minimum wage, while Brazil only increased its graduate grants last year after a decade-long freeze.

The trend reflects broader concerns about academic careers becoming increasingly precarious, said Claudia Sarrico, a project lead at the Organisation for Economic Co-operation and Development in Paris. Some countries are taking steps to address the issue. Japan's education ministry plans to provide additional funding for doctoral students, while Brazil's 40% increase in graduate grants in 2023 has led to a slight uptick in enrollments.
Math

Children's Arithmetic Skills Do Not Transfer Between Applied and Academic Mathematics (nature.com) 100

Children working in India's fruit and vegetable markets can perform complex mental calculations with ease, yet struggle with basic written math tests that determine their academic future, according to new research that raises troubling questions about mathematics education worldwide.

The study, published in Nature, reveals how traditional education systems are failing to tap into the mathematical talents of students who develop practical skills outside the classroom, particularly those from lower-income families. MIT economist Abhijit Banerjee, who grew up watching young market vendors deftly handle complicated transactions, led the research. His team found that while these children could rapidly perform mental arithmetic, they performed poorly on standard written assessments like long division problems.

The findings come at a critical moment when mathematics education must evolve to meet modern demands, incorporating data literacy and computational skills alongside traditional mathematics. The research points to systemic issues, including a global shortage of trained mathematics teachers and assessment systems that reward memorization over reasoning. Without addressing these challenges, researchers warn, naturally talented students from disadvantaged backgrounds may never reach their potential in fields like research, entrepreneurship, or teaching.
Programming

This Was CS50: Crying Poor, Yale To Stop Offering Harvard's Famed CS50 Course (yaledailynews.com) 50

Slashdot has been covering Harvard's legendary introductory programming course "CS50" since it began setting attendance records in 2014.

But now long-time Slashdot reader theodp brings some news about the course's fate over at Yale. From Yale's student newspaper: After a decade of partnership with Harvard, Yale's CS50 course will no longer be offered starting in fall 2025.... One of Yale's largest computer science courses, jointly taught with Harvard University, was canceled during a monthly faculty meeting after facing budgetary challenges. [Yale's endowment is $40+ billion]... Since Yale started offering the course in 2015, CS50 has consistently seen enrollment numbers in the hundreds and was often the department's largest class.... According to [Yale instructor Ozan] Erat, the original [anonymous] donation that made CS50 possible ended in June 2024, and the cost of employing so many undergraduate learning assistants for the course had become unsustainable.
theodp reminds us that CS50 and its progeny "will continue to live on in all their glory in-person and online at Harvard and edX."
Cellphones

Mobile Ban In Schools Not Improving Grades or Behavior, Study Suggests (bbc.com) 94

Longtime Slashdot reader AmiMoJo shares a report from the BBC: Banning phones in schools is not linked to pupils getting higher grades or having better mental wellbeing, the first study of its kind suggests. Students' sleep, classroom behavior, exercise or how long they spend on their phones overall also seems to be no different for schools with phone bans and schools without, the academics found. But they did find that spending longer on smartphones and social media in general was linked with worse results for all of those measures.

The first study in the world to look at school phone rules alongside measures of pupil health and education feeds into a fierce debate that has played out in homes and schools in recent years. [...] The University of Birmingham's findings, peer-reviewed and published by the Lancet's journal for European health policy, compared 1,227 students and the rules their 30 different secondary schools had for smartphone use at break and lunchtimes. The schools were chosen from a sample of 1,341 mainstream state schools in England.

The paper says schools restricting smartphone use did not seem to be seeing their intended improvements on health, wellbeing and focus in lessons. However, the research did find a link between more time on phones and social media, and worse mental wellbeing and mental health, less physical activity, poorer sleep, lower grades and more disruptive classroom behavior. The study used the internationally recognized Warwick-Edinburgh Mental Wellbeing Scales to determine participants' wellbeing. It also looked at students' anxiety and depression levels.
Dr Victoria Goodyear, the study's lead author, told the BBC the findings were not "against" smartphone bans in schools, but "what we're suggesting is that those bans in isolation are not enough to tackle the negative impacts."

She said the "focus" now needed to be on reducing how much time students spent on their phones, adding: "We need to do more than just ban phones in schools."
Education

OpenAI Partners With California State University System 16

OpenAI is partnering with the California State University (CSU) system to bring ChatGPT Edu to the 23-campus community of 500,000 students, calling it the "largest implementation of ChatGPT by any single organization or company anywhere in the world." Fortune reports: As part of ChatGPT Edu, members of the CSU community will get special access to ChatGPT-4o and advanced research and analysis capabilities. The partnership allows schools to create customizable AI chatbots for any project, like a campus IT help desk bot, financial aid assistant, chemistry tutor, or orientation buddy. CSU also plans to introduce free AI skills training for its students, faculty, and staff as well as connect students with AI-related apprenticeship programs. CSU joins a number of other schools with ChatGPT Edu partnerships, including Arizona State University (ASU), the University of Texas at Austin, University of Oxford, Columbia University, and the Wharton School at the University of Pennsylvania.
Programming

Slashdot Asks: Do You Remember Your High School's 'Computer Room'? (gatesnotes.com) 192

Bill Gates' blog has been updated with short videos about his upcoming book, including one about how his school ended up with an ASR-33 teletype that could connect their Seattle classroom to a computer in California. "The teachers faded away pretty quickly," Gates adds, "But about six of us stayed hardcore. One was Paul Allen..." — the future co-founder of Microsoft. And the experience clearly meant a lot to Gates. "Microsoft just never would've happened without Paul — and this teletype room."

In a longer post thanking his "brilliant" teachers, Gates calls his teletype experience "an encounter that would shape my entire future" and "opened up a whole new world for me." Gates also thanks World War II Navy pilot and Boeing engineer Bill Dougall, who "was instrumental in bringing computer access to our school, something he and other faculty members pushed for after taking a summer computer class... The fascinating thing about Mr. Dougall was that he didn't actually know much about programming; he exhausted his knowledge within a week. But he had the vision to know it was important and the trust to let us students figure it out."

Gates shared a similar memory about the computer-room's 20-something overseer Fred Wright, who "intuitively understood that the best way to get students to learn was to let us explore on our own terms. There was no sign-up sheet, no locked door, no formal instruction." Instead, Mr. Wright let us figure things out ourselves and trusted that, without his guidance, we'd have to get creative... Some of the other teachers argued for tighter regulations, worried about what we might be doing in there unsupervised. But even though Mr. Wright occasionally popped in to break up a squabble or listen as someone explained their latest program, for the most part he defended our autonomy...

Mr. Wright gave us something invaluable: the space to discover our own potential.

Any Slashdot readers have a similarly impactful experience? Share your own thoughts and memories in the comments.
