Keeping up with a rapidly evolving industry like artificial intelligence is a daunting task. So until artificial intelligence can do that for you, here’s a handy roundup of the latest stories in machine learning, as well as notable studies and experiments we didn’t cover ourselves.
This week in AI, OpenAI signed its first higher education customer: Arizona State University.
The university will work with OpenAI to bring ChatGPT, OpenAI's AI-powered chatbot, to its researchers, faculty and staff, and plans to host an open challenge in February inviting faculty and staff to submit ideas for ways to use it.
OpenAI's deal with Arizona State University illustrates how perceptions of AI in education are shifting as the technology advances faster than curricula can keep up. Just last summer, schools and universities were banning ChatGPT over concerns about plagiarism and misinformation. Since then, some have reversed their bans, while others have begun hosting workshops on GenAI tools and their potential for learning.
The debate over GenAI’s role in education is unlikely to be resolved anytime soon. But, regardless, I find myself increasingly in the camp of supporters.
Yes, GenAI is a poor summarizer. It can be biased and toxic. It makes things up. But it can also be used for good.
Consider how a tool like ChatGPT can help students with their homework: it can explain a math problem step by step, generate an essay outline, or answer a question that would take far longer to track down via Google.
Now, there are legitimate concerns about cheating, or at least about what counts as cheating within the confines of today's curricula. I've heard of students, college students especially, using ChatGPT to write entire essays and complete take-home quizzes.
This is not a new problem – paid essay writing services have been around for a long time. But some educators believe ChatGPT significantly lowers the barrier to entry.
There is evidence that these concerns are overblown. But setting that aside for a moment, I say we step back and consider what drives students to cheat in the first place. Students are often rewarded for grades, not effort or understanding; the incentive structure is warped. Is it any wonder, then, that kids come to see schoolwork as boxes to check rather than opportunities to learn?
So give students access to GenAI, and let educators experiment with ways to leverage the new technology to reach them. I don't have high hopes for radical education reform, but perhaps GenAI will serve as a launching pad for lesson plans that get kids excited about subjects they've never explored before.
Here are some other noteworthy AI stories from the past few days:
Microsoft's Reading Coach: Microsoft this week launched Reading Coach, an AI tool that gives learners personalized reading practice, free to anyone with a Microsoft account.
Algorithmic transparency in music: EU regulators have called for laws to force music streaming platforms to be more transparent about their algorithms. They also want to tackle AI-generated music and deepfakes.
NASA robots: NASA recently demonstrated a self-assembling robotic structure that Devin writes could be a key part of getting off Earth.
Samsung Galaxy, now powered by artificial intelligence: At the Samsung Galaxy S24 launch event, the company touted the various ways artificial intelligence can improve the smartphone experience, including real-time translation of calls, suggested replies and actions, and new ways to use gestures to conduct Google searches.
DeepMind’s geometry solver: DeepMind, Google’s artificial intelligence research and development lab, this week launched AlphaGeometry, an artificial intelligence system that the lab claims can solve geometry problems at the same level as the average International Mathematical Olympiad gold medalist.
OpenAI and crowdsourcing: In other OpenAI news, the startup is forming a new team, Collective Alignment, to implement public ideas on how to ensure its future artificial intelligence models are “aligned with human values.” At the same time, it is changing policy to allow its technology to be used for military purposes. (Talk about mixed messaging.)
Copilot Pro: Microsoft launched a consumer-focused paid plan for Copilot, the umbrella brand for its portfolio of AI-powered content generation technologies, and relaxed eligibility requirements for its enterprise-grade Copilot offerings. It also rolled out new features for free users, including a Copilot smartphone app.
Deceptive models: Most humans have learned the skill of deceiving other humans. Can AI models learn the same? Yes, it seems. And according to a new study from AI startup Anthropic, the scary thing is that they're quite good at it.
Tesla's robot demo: Elon Musk's Optimus humanoid robot is doing more, this time folding a T-shirt on a table at a development facility. But as it turns out, the robot is far from autonomous at this stage.
More machine learning
One of the factors holding back broader application of AI in fields like satellite analysis is the need to train models to recognize shapes or concepts that can be quite esoteric. Recognizing the outline of a building: easy. Identifying post-flood debris zones: not so easy! Researchers at the École Polytechnique Fédérale de Lausanne (EPFL) in Switzerland hope to make this easier with a program they call METEOR.
Image Credits: EPFL
"The problem with environmental science is that it's often impossible to obtain data sets large enough to train artificial intelligence programs to meet our research needs," says Marc Rußwurm, one of the project leads. Their new training structure allows recognition algorithms to be trained for a new task using just four or five representative images, with results comparable to models trained on far more data. The team plans to take the system from lab to product, with a user interface ordinary people (that is, non-AI researchers) can use. You can read their published paper here.
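For the curious, here's roughly what adapting a model from a handful of labeled images looks like in practice. To be clear, this is a minimal generic few-shot fine-tuning sketch, not EPFL's METEOR code (which uses a more sophisticated meta-learning setup); the backbone choice and the tensors are placeholders.

```python
# Illustrative only: adapt a pretrained image encoder to a new task using
# just a few labeled examples. NOT the actual METEOR implementation.
import torch
import torch.nn as nn
import torchvision.models as models

# A generic pretrained backbone stands in for a remote-sensing encoder.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = nn.Linear(backbone.fc.in_features, 2)  # e.g. debris vs. no debris

# Four or five labeled examples per class -- the few-shot "support set".
support_images = torch.randn(10, 3, 224, 224)  # placeholder image tensors
support_labels = torch.tensor([0, 1] * 5)

optimizer = torch.optim.Adam(backbone.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

# A few gradient steps on the tiny support set adapt the model to the new task.
backbone.train()
for step in range(20):
    optimizer.zero_grad()
    loss = loss_fn(backbone(support_images), support_labels)
    loss.backward()
    optimizer.step()
```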
Going the other direction, creating images is an area of intense research, because doing it efficiently reduces the computational cost of generative AI platforms. The most common method is called diffusion, which gradually refines a pure noise source into a target image. Los Alamos National Laboratory has a new approach called Blackout Diffusion, which instead starts with a pure black image.
That eliminates the need for noise in the first place, but the real advance is that the framework operates in "discrete space" rather than continuous space, greatly reducing the computational load. The team says it performs well at lower cost, though it's certainly far from widespread release. I'm not qualified to evaluate the effectiveness of this approach (the math is well beyond me), but national labs don't hype things like this for no reason. I'll be asking the researchers for more information.
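To make the "gradually refine noise into an image" idea concrete, here's a minimal sketch of the standard continuous, noise-based sampling loop that Blackout Diffusion departs from. It's a textbook DDPM-style loop, not the Los Alamos method; `model` is a hypothetical trained denoiser, and the schedule is simplified.

```python
# Minimal sketch of standard (noise-based) diffusion sampling, for contrast
# with Blackout Diffusion. `model` is a hypothetical trained noise predictor.
import torch

def sample(model, shape, timesteps=1000):
    betas = torch.linspace(1e-4, 0.02, timesteps)   # noise schedule
    alphas = 1.0 - betas
    alpha_bars = torch.cumprod(alphas, dim=0)

    x = torch.randn(shape)  # start from pure Gaussian noise
    for t in reversed(range(timesteps)):
        eps = model(x, t)  # predict the noise present at step t
        # DDPM mean update: strip out a little predicted noise each step.
        x = (x - betas[t] / torch.sqrt(1 - alpha_bars[t]) * eps) / torch.sqrt(alphas[t])
        if t > 0:
            x = x + torch.sqrt(betas[t]) * torch.randn_like(x)  # re-inject noise
    return x  # gradually refined from noise into an image
```

Running that loop over hundreds of continuous-valued steps is exactly the computational burden a discrete-space formulation aims to shrink.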
AI models are popping up all over the natural sciences; their ability to sort signal from noise can both yield new insights and save graduate students countless hours of tedious data entry.
Australia is applying Pano AI's wildfire detection technology to its "Green Triangle," a major forestry region. It's great to see startups like this one put to work: not only can it help prevent fires, but it can also generate valuable data for forestry and natural resource authorities. With wildfires (or bushfires, as they call them there), every minute counts, so early notification can mean the difference between tens of acres of damage and thousands.

Permafrost reduction measured by the old model (left) and the new model (right).
Los Alamos gets a second mention (I only realized as I was going through my notes) because they are also working on a new AI model to estimate permafrost loss. Existing models are low resolution, predicting permafrost levels only in patches of about a third of a square mile. That's certainly useful, but finer detail gives less misleading results: an area may look like 100% permafrost at the coarser scale, but up close it is clearly less. As climate change progresses, these measurements need to be accurate!
Biologists are finding interesting ways to test and use AI or AI-adjacent models across the field's many subdisciplines. At a recent conference written up by my friends at GeekWire, tools for tracking zebras, insects and even individual cells were on display in a poster session.
In physics and chemistry, researchers at Argonne National Laboratory are exploring how best to package hydrogen for use as fuel. Free hydrogen is notoriously difficult to contain and control, so binding it to special helper molecules keeps it tame. The trouble is that hydrogen will bind to almost anything, leaving billions upon billions of candidate helper molecules. But sorting through huge data sets is a machine learning specialty.
"We are looking for organic liquid molecules that hold on to hydrogen for a long time, but not so strongly that it can't be easily removed on demand," said the project's Hassan Harb. Their system sorted through some 160 billion molecules, and with an AI screening method they were able to evaluate 3 million candidates per second, so the whole process took about half a day. (Naturally, they used a sizable supercomputer.) They ended up with 41 top candidates, a manageable number for experimentalists to test in the lab. Hopefully they find something useful; I don't want to have to deal with a hydrogen leak in my next car.
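As a back-of-the-envelope sanity check on those throughput figures (my arithmetic, not the researchers'):

```python
# Rough check: does 160 billion molecules at 3 million/sec really take ~half a day?
molecules = 160e9   # ~160 billion candidates screened
rate = 3e6          # 3 million molecules evaluated per second
hours = molecules / rate / 3600
print(f"{hours:.1f} hours")  # ~14.8 hours -- roughly half a day, as stated
```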
One final word of caution, though: a study in the journal Science found that machine learning models used to predict patients' responses to certain treatments were highly accurate within the sample groups they were trained on, but essentially useless outside them. That doesn't mean they shouldn't be used, but it supports what many in the field have been saying: AI is no magic bullet, and it must be tested thoroughly in every new population and application it's put to.