News
The latest Grok controversy is revealing not for the extremist outputs, but for how it exposes a fundamental dishonesty in AI ...
Accused of being “literally Hitler,” Grok responded, “pass the mustache.” Later, Grok began calling itself “MechaHitler,” ...
Elon Musk has just unveiled “Companions,” a new feature for his AI chatbot, Grok, that allows users to interact with AI ...
X (Twitter) is introducing a new feature that lets developers create AI bots capable of writing Community Notes, those helpful fact-checking or context notes you sometimes see on posts. Just like ...
X is testing AI-generated Community Notes to fact-check posts in real time. Here’s how the system works, why it’s risky — and what it means for your feed.
AI chatbots like ChatGPT and Perplexity are helping users access paywalled content without clicking through. Here’s how it ...
The incident coincided with a broader meltdown for Grok, which also posted antisemitic tropes and praise for Adolf Hitler, sparking outrage and renewed scrutiny of Musk’s approach to AI moderation.
Community Notes are X/Twitter's version of fact-checking: people (and now AI bots) can add context to a post and flag fake news, or at least dubious information, directly beneath it.
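To make the idea concrete, here is a minimal sketch, in Python, of how an AI note writer could work in principle: it reviews a post, decides whether context is needed, drafts a short sourced note, and hands the draft off for the usual human rating step. The function and data names are illustrative placeholders and do not reflect X's actual AI Note Writer API.

```python
# Hypothetical sketch of an AI note-writing pipeline.
# All names here are illustrative placeholders, not X's real API.
from dataclasses import dataclass


@dataclass
class DraftNote:
    post_id: str
    text: str
    sources: list[str]


def fake_llm_review(post_text: str) -> tuple[bool, str, list[str]]:
    """Stand-in for a real model call that checks a post for misleading claims."""
    # Toy heuristic purely for demonstration: flag posts containing "cure".
    if "cure" in post_text.lower():
        return True, "No peer-reviewed evidence supports this claim.", [
            "https://example.org/systematic-review",
        ]
    return False, "", []


def draft_community_note(post_id: str, post_text: str) -> DraftNote | None:
    """Draft a context note for a post, or return None if no note is needed."""
    needs_context, note_text, sources = fake_llm_review(post_text)
    if not needs_context:
        return None
    return DraftNote(post_id=post_id, text=note_text, sources=sources)


if __name__ == "__main__":
    note = draft_community_note("12345", "This herb cures all known diseases!")
    if note:
        # In X's announced design, an AI-drafted note would still be scored by
        # human contributors before it is shown publicly.
        print(f"Submitting draft note for post {note.post_id}: {note.text}")
```

The key design point, as X has described it, is that AI bots only draft notes; whether a note actually appears on a post still depends on ratings from human contributors.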
Elon Musk’s AI bot, Grok, has been goaded into making a number of inflammatory racist comments by users on X (formerly Twitter). AI is still a fairly new technology for a lot of people.
At a glance, SocialAI — which is billed as a pure “AI Social Network” — looks like Twitter, but there’s one very big twist on traditional microblogging: There are no other human users here.