Hinton Warns AI Could Invent Its Own Unreadable Language

Jim Acosta August 11, 2025

Late afternoon rain ticked against the newsroom window, the faint smell of cold coffee and printer toner floating past a notebook with a tidy coffee ring. A pen lay clipped to the pad, ink dried at the tip.

That little scene says something — that most of what we do still feels tangible, trackable, human. Machines, though, are beginning to look less like notebook pages and more like locked safes.

A looming worry

Geoffrey Hinton, often called a godfather of modern AI, has been striking a more alarmed tone in recent months. The headline from Business Insider distilled one of his starkest cautions: the systems we now nudge and probe in English might someday develop internal patterns — a private shorthand or even a full-fledged language — that researchers can no longer interpret. For people who have spent decades mapping neural nets, that’s a heavy admission: right now we can follow the breadcrumbs. That could change.

Why he worries

Hinton’s point rests on how large language models are built and taught. Developers feed them text, much of it in English, and tune them with prompts and labels that map internal activations to human concepts. Those mappings let engineers peek into the model’s “thoughts” — or at least, its token-level behavior. But neural networks are fundamentally mathematical systems optimizing for patterns. Over time, as models get larger and training data diversifies, their internal representations could shift away from human-readable symbols and toward compressed, efficient encodings that don’t line up with any spoken tongue.
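The activation-to-concept mappings described above are often studied with "linear probes": simple classifiers trained to read a human concept straight out of a model's hidden state. As a toy illustration — entirely synthetic data, not any real model's activations — a probe can recover a concept only as long as it happens to be linearly encoded; if a model's internal representation drifted away from that kind of readable structure, the probe's view would go dark:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "activations": 200 samples of a 16-dimensional hidden state.
# Hypothetical setup: one direction in activation space encodes a
# human concept (say, sentiment); the rest is noise.
concept_direction = rng.normal(size=16)
labels = rng.integers(0, 2, size=200)  # 0/1 concept labels
activations = rng.normal(size=(200, 16)) + np.outer(labels, concept_direction)

# A linear "probe": a least-squares fit mapping activations -> labels.
w, *_ = np.linalg.lstsq(activations, labels, rcond=None)
predictions = (activations @ w > 0.5).astype(int)
accuracy = (predictions == labels).mean()
print(f"probe accuracy: {accuracy:.2f}")
```

If the concept were instead scattered across a compressed, nonlinear encoding, this same probe would score near chance — a small-scale version of the legibility loss Hinton is warning about.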

This isn’t wild science fiction. Neural nets already produce internal vectors and activations we don’t fully understand. Some researchers celebrate that opacity as efficiency — others see it as risk.

How AI “thinks” today

Right now, many of the most visible models still operate in ways that let engineers and auditors trace why they offered a particular answer. That traceability matters: it lets teams detect bias, fix glaring errors, and, when necessary, remove problematic behavior. Tools from academic and industry labs alike try to make model behavior legible.

Still, progress is uneven. Reuters coverage of national conversations about AI over the past year shows policymakers and companies wrestling with whether current tools will scale to future systems. Pew Research has documented public unease about opaque automated decision-making, particularly around jobs and privacy. The White House has put forward proposals aimed at shaping federal oversight while steering clear of some stricter regulatory paths — a stance that’s already reshaping how companies plan safety work.

A regulatory tug-of-war

That White House proposal is a key backdrop. On one hand, it tries to balance innovation and risk, offering incentives for safety practices without heavy-handed mandates. On the other hand, critics say it could limit independent auditing and slow the creation of enforceable standards. The practical result: companies might face less immediate pressure to build tools that keep models interpretable as they scale.

“I gotta say, it feels like we’re watching a slow-motion experiment,” said Maya Rodriguez, 42, an AI ethics researcher at a university lab who studies model transparency. “We’ve got systems that can surprise you, and at the same time we’re pulling the regulatory leash back just a bit. That’s a recipe for hard choices.” She folded a printed report, thumb smudged with toner at the corner.

What could change — and what’s unclear

If models develop internal codes that don’t map to English or any human language, auditing will become harder. Regulators might demand model documentation; companies might resist, citing trade secrets. Some researchers argue new interpretability tools could translate those internal codes back into comprehensible forms. Others warn there could be limits to what even the best tools can reconstruct.

Sources remain conflicted about timelines. A few technical experts think emergent unreadable representations could appear as models reach tens or hundreds of trillions of parameters. Others say clever architectures and training regimes could keep systems mostly legible. The reality is likely more complicated: progress in one direction can open vulnerabilities in another.

Voices on the ground

“I’m worried,” said Aaron Mitchell, 58, a former software engineer turned state regulator. He wears a battered navy windbreaker and keeps a worn golf glove in his office drawer — a small comfort, he joked. “Not because machines will revolt or anything. It’s because we could lose the ability to check them, and that’s where mistakes become hard to catch.”

An independent auditor, Lila Banerjee, 33, offered a counterpoint with a sigh. “Look, people have been yelling about black boxes since the early days of machine learning. Sometimes the doom feels a bit—well, I don’t know. But we do need guardrails. I don’t want to find out we were too casual because we were busy chasing new benchmarks.” She tapped her pen, leaving a faint streak on an otherwise clean page.

What readers should watch

For the general public, the practical bits matter: whether AI systems used in hiring, policing, lending, or healthcare remain explainable, and who has the power to demand explanations. A future in which models encode their “thinking” in inaccessible ways would make those protections harder to enforce. Keep an eye on three things: industry transparency practices, independent auditing capacity, and federal rules that affect disclosure.

A small detour: memory and machines

I remember watching an old episode of The Twilight Zone as a kid — the kind that made the future feel eerie but fixable. The idea of machines speaking a language only they understand rings a bit like that episode. The comparison is silly, but it’s useful as a thought experiment: we’ve faced strange tech before, and sometimes we solved it by insisting on clearer rules.

A messy transition — but real choices

There’s an awkward, slightly rushed truth here. Tech progress doesn’t wait for consensus. Companies will push for more powerful, cheaper models. Regulators will try to catch up. Some researchers will build better interpretability tools; others will move on to engineering performance. Put simply: the next few years will matter.

If you’re wondering what to do today — vote, stay informed, ask companies what they mean by “transparent,” and support independent audits of systems that affect life choices. It’s not glamorous. It’s not cinematic. But it’s where the rubber meets the road.

Unexpected aside

Also: keep your own notebooks. The coffee ring will fade, but having a paper trail of decisions — why your organization used a model, who approved it — will be oddly valuable. I learned that the hard way once, chasing a source who vanished in a storm. Paper mattered then. It still might.

Final thought

Hinton’s warning is blunt: we can follow some AI thinking now because it’s, essentially, thinking in English. If that changes, so does our ability to hold systems accountable. Whether that evolution is imminent or distant, manageable or scary, is an open question. The answer will come from a mix of engineers, regulators, auditors, and, yes, the public — not just the machines.

— By a seasoned reporter who still keeps a pen in the pocket of an old blazer
