OpenAI has quietly replaced ChatGPT's default model with GPT-5.5 Instant, delivering a significant drop in hallucinations, especially on high-stakes medical, legal, and financial queries. Internal tests show 52.5% fewer false claims compared to the previous GPT-5.3 Instant, and a 37.3% reduction in factual errors flagged by real users. The upgrade, which rolls out free to all users, also marks the first time an “Instant”-tier model is classified by OpenAI as High Capability in both the cybersecurity and biological domains, triggering extra safeguards previously reserved for more powerful variants.
On the same day, a multi-university study published on arXiv poured cold water on fears that generative AI is supercharging cybercrime. Researchers from Cambridge, Edinburgh, and Strathclyde analyzed 97,895 underground forum threads posted since ChatGPT’s launch and found that 97.3% of threads contained no actual AI-assisted crime. So-called “Dark AI” tools like WormGPT generated buzz but produced almost no working malware, while jailbreaks for mainstream models often broke within days. Instead, the most measurable AI-driven crime involves low-level scams: SEO spam, romance fraud, AI-generated nudes sold for a dollar each, and get-rich-quick e‑books.
The contrast is sharp: as ChatGPT’s guardrails become more effective (GPT-5.5 Instant’s system card explicitly notes it won’t help with hacking), the underground is stuck with disposable jailbreaks and vibe-coded tools that even criminals don’t trust. One forum developer admitted their service was “nothing more than an unrestricted ChatGPT.” The researchers suggest the real disruption may come not from supercharged hackers, but from economic pressure pushing laid-off legitimate developers into low-level cybercrime.