Latest News on AI (July 2025)
- Frank Anisits

- Aug 1
Updated: Aug 2
The past few weeks (June/July 2025) brought a wave of major developments in AI, with advances that feel both exciting and a bit unsettling. Sam Altman (CEO of OpenAI) even compared his work to the Manhattan Project, the program that developed the first atomic bomb.
One of the biggest announcements came from OpenAI, which has introduced a new Learning Mode in ChatGPT. Unlike the traditional approach of providing direct answers, this mode acts like an interactive tutor. It asks users what they want to learn, adapts to their knowledge level, and explains concepts step by step. It even incorporates techniques like Socratic questioning, self-assessment prompts, and memory of previous conversations to create a more structured, personalized learning experience. Built with input from teachers and cognitive scientists, it focuses on real educational value—addressing issues such as metacognition, cognitive load, and curiosity. The timing is significant, as universities have reported a sharp rise in AI-related academic fraud, with thousands of confirmed cases last year. OpenAI admits this feature won’t eliminate cheating but hopes it will encourage responsible use and push schools to rethink assessments in an AI-driven world.
At the same time, ChatGPT’s AI agent made headlines by successfully clicking an “I’m not a robot” checkbox on a Cloudflare-protected page. The agent, which operates in a virtual environment with browser access, can carry out multi-step tasks like downloading files or ordering groceries. When it encountered the verification box, it clicked it and continued without triggering further checks—a sign of just how human-like its behavior appeared. While it didn’t face a full captcha challenge, this moment highlights how far AI has come in mimicking human actions online. Stories have already emerged of users delegating real-world tasks to these agents, from shopping to research. However, they are not flawless: complex websites can still confuse them, a reminder that bad UI can still beat good AI.
Perhaps the most striking statement came from Altman himself, who said that GPT-5 feels so powerful it reminded him of the Manhattan Project. He described moments during testing when its capabilities made him deeply uneasy. Altman also criticized the lack of effective oversight in AI development, saying there are “no adults in the room” as progress accelerates beyond regulators’ ability to keep up.
Meanwhile, Meta has been aggressively recruiting for its new superintelligence lab, reportedly offering extraordinary compensation packages—in one case said to be worth up to $1 billion for a single researcher over several years. Yet all of Meta’s offers to the Thinking Machines Lab team were reportedly turned down, suggesting that many top researchers are prioritizing values and alignment over massive paydays.
Beyond these headlines, the AI field saw numerous other advancements.
Ideogram released a feature that generates consistent characters from a single photo, opening up new possibilities for creators of comics, avatars, and branded visuals.
Microsoft rolled out a revamped Copilot mode in Edge, enabling the browser to summarize tabs, compare content, and even execute tasks with voice commands.
Google expanded its AI search to process PDFs, live videos, and images uploaded directly from desktops, and introduced Canvas—a persistent planning tool that integrates with search.
On the hardware and model side, Nvidia’s Llama Nemotron Super Version 1.5 achieved top results on reasoning benchmarks while running efficiently on a single H100 GPU, thanks to training on massive synthetic datasets and advanced fine-tuning methods.
Adobe announced new Photoshop tools, including Harmonize (which blends inserted objects naturally into scenes), Generative Upscale (for higher-resolution outputs without artifacts), and improved object removal features powered by its latest Firefly model.
And the German TV news channel WELT broadcast its first entirely AI-generated news segment, presented by an AI-generated anchor.
All of these developments show just how quickly AI is evolving—transforming education, creative work, and online interaction. But they also raise serious questions: What happens when AI becomes indistinguishable from humans in everyday tasks? How do we ensure proper oversight, ethical use, and trust as these systems grow more powerful?