AI megathread
Chicago Sun-Times prints summer reading list full of fake books
Bathurst business owners are using AI-generated “concerned residents” to fight a proposed bus lane
Anthropic’s new AI model turns to blackmail when engineers try to take it offline
AI company files for bankruptcy after being exposed as 700 Indian engineers
That's actually a 'call centre'. 
What Happens When People Don’t Understand How AI Works
Quote:
Despite what tech CEOs might say, large language models are not smart in any recognizably human sense of the word.
Quote:
To call AI a con isn’t to say that the technology is not remarkable, that it has no use, or that it will not transform the world (perhaps for the better) in the right hands. It is to say that AI is not what its developers are selling it as: a new class of thinking—and, soon, feeling—machines. Altman brags about ChatGPT-4.5’s improved “emotional intelligence,” which he says makes users feel like they’re “talking to a thoughtful person.” Dario Amodei, the CEO of the AI company Anthropic, argued last year that the next generation of artificial intelligence will be “smarter than a Nobel Prize winner.” Demis Hassabis, the CEO of Google’s DeepMind, said the goal is to create “models that are able to understand the world around us.”
These statements betray a conceptual error: Large language models do not, cannot, and will not “understand” anything at all. They are not emotionally intelligent or smart in any meaningful or recognizably human sense of the word. LLMs are impressive probability gadgets that have been fed nearly the entire internet, and produce writing not by thinking but by making statistically informed guesses about which lexical item is likely to follow another.
Many people, however, fail to grasp how large language models work, what their limits are, and, crucially, that LLMs do not think and feel but instead mimic and mirror. They are AI illiterate—understandably, because of the misleading ways its loudest champions describe the technology, and troublingly, because that illiteracy makes them vulnerable to one of the most concerning near-term AI threats: the possibility that they will enter into corrosive relationships (intellectual, spiritual, romantic) with machines that only seem like they have ideas or emotions.
tl;dr, LLM AI is a search engine for what we already know that sometimes makes things up, including feelings.
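The quoted article's point about "statistically informed guesses about which lexical item is likely to follow another" can be made concrete with a toy sketch. This is not how a real LLM is built (those use neural networks trained on vast corpora), but a minimal bigram model shows the core idea: count which word follows which, then predict the most frequent successor. The corpus and function names here are illustrative, not from any real system.

```python
from collections import Counter, defaultdict

# Toy "probability gadget": learn which word tends to follow which
# from a tiny corpus, then guess the next word by frequency alone.
corpus = "the cat sat on the mat the cat ate the fish".split()

counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1  # tally each observed successor

def most_likely_next(word):
    # Return the statistically most common word seen after `word`.
    return counts[word].most_common(1)[0][0]

print(most_likely_next("the"))  # "cat" (follows "the" twice; "mat", "fish" once)
```

No understanding, no feelings: just counting and lookup. Scale the corpus to the whole internet and swap the counter for a transformer, and you get fluent text by the same basic mechanism.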
Gabbard says AI is speeding up intel work, including the release of the JFK assassination files
News Sites Are Getting Crushed by Google’s New AI Tools
DrCaleb:
Translation: She asked ChatGPT what to declassify.
https://www.youtube.com/watch?v=j65NrMron8I
Saw a really good review of an AI keyboard. Not what you think! It has voice recognition built in, handles accents very well, and does not need an internet connection to work. You talk, it types.
If my company is using the new 'Copilot' updates to log what I do, I am going to use that keyboard to record the radio playing into a copy of Word.
I remain an evil genius.
DrCaleb @ Sat Jul 05, 2025 10:19 am
When you tell the AI that you want to cut down a Redwood and saw it into boards.

Massive study detects AI fingerprints in millions of scientific papers
That will help the credibility issue in science journals. 
Mike Lindell lost a defamation case, and his lawyers were fined for AI hallucinations