OpenAI quietly removes its official detection tool, admitting AI text cannot be reliably identified
Source: "Qubit" (ID: QbitAI), author: Mengchen
Without any announcement, OpenAI quietly took its AI text detection tool offline; the page now returns a 404.
Too many teachers had believed the tool actually worked, and a large number of students were wrongly accused of cheating with AI; the whole thing had turned into a witch hunt.
Accuracy barely better than guessing
Just how inaccurate was this official detection tool?
By OpenAI's own figures, it correctly identified only 26% of AI-generated text, while wrongly flagging 9% of human-written text as AI-generated.
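To make those figures concrete, here is a rough back-of-the-envelope calculation; the class size and the share of AI-written essays below are purely illustrative assumptions, not data from OpenAI:

```python
# Illustration of OpenAI's published numbers: 26% true positive rate,
# 9% false positive rate. Class size and share of AI-written essays
# are assumed for the sake of the example.
TRUE_POSITIVE_RATE = 0.26    # AI-written essays correctly flagged
FALSE_POSITIVE_RATE = 0.09   # human-written essays wrongly flagged

class_size = 100             # assumption
ai_written = 20              # assumption: 20 of 100 essays used AI
human_written = class_size - ai_written

caught = ai_written * TRUE_POSITIVE_RATE                 # ~5 actual AI users flagged
wrongly_accused = human_written * FALSE_POSITIVE_RATE    # ~7 innocent students flagged

print(f"flagged AI users: {caught:.1f}, wrongly accused: {wrongly_accused:.1f}")
```

Under these assumptions, the tool flags more innocent students than actual AI users, which is exactly the "witch hunt" dynamic described above.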
Beyond that, people have run experiments showing that various detection tools on the market will flag historical texts such as the Bible and the U.S. Constitution as possibly AI-written. So their authors must have been time travelers, right?
Yet plenty of teachers still run students' work through one detection method or another.
In one of the best-known cases, a professor at Texas A&M University nearly failed half of his class over suspected AI use.
Current detection methods can be circumvented
Netizens have pointed out that it is contradictory for OpenAI to build generation tools and detection tools at the same time.
If one of them does its job well, the other necessarily does not, and there may even be a conflict of interest.
The earliest tool billed as a "ChatGPT nemesis" was GPTZero, built by Princeton undergraduate Edward Tian. It scores a text's perplexity and its burstiness (the variation in sentence length) to judge whether an article was generated by AI.
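For intuition, here is a minimal Python sketch of those two signals: perplexity under a language model, and the variance of sentence lengths (burstiness). This is not GPTZero's actual implementation; the GPT-2 model choice and the crude sentence splitting are assumptions for illustration only.

```python
# Sketch of perplexity + burstiness scoring, the kind of signals GPTZero
# reportedly uses. NOT GPTZero's real code; GPT-2 and the regex-based
# sentence splitting are illustrative assumptions.
import re
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Perplexity under GPT-2: lower means the text is more 'predictable',
    which detectors treat as a weak hint of machine generation."""
    enc = tokenizer(text, return_tensors="pt", truncation=True, max_length=1024)
    with torch.no_grad():
        loss = model(enc.input_ids, labels=enc.input_ids).loss
    return float(torch.exp(loss))

def burstiness(text: str) -> float:
    """Sample variance of sentence lengths (in words). Human writing tends
    to mix short and long sentences more than model output does."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    mean = sum(lengths) / len(lengths)
    return sum((n - mean) ** 2 for n in lengths) / (len(lengths) - 1)

sample = "Detectors look at how predictable the text is. They also look at rhythm. Short sentences help."
print(f"perplexity={perplexity(sample):.1f}, burstiness={burstiness(sample):.1f}")
```

Low perplexity and low burstiness push a score toward "AI-written", which is also why such signals are easy to game: lightly rewording the output or varying sentence structure shifts both numbers.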
At the time, the GPTZero project was created specifically for educators: teachers could throw an entire class's homework into it for checking.
But in July its author admitted he had given up on detecting student cheating. The next version of GPTZero is planned to no longer judge whether text is AI-generated, but instead to highlight the most human-like passages.
Just as no one has cared for a long time whether a set of numbers was crunched by a human or by a computer.
Does anyone still care whether a speech was written by the speaker or by a secretary?
Human behavior research, with AI as the subjects
The inability to distinguish between AI and human content does not seem to be all bad.
There are already psychological experiments that use AI instead of human subjects to accelerate research.
An article in a Cell Press journal noted that, in well-designed experimental scenarios, ChatGPT's responses correlated with the responses of about 95% of human participants.
And machine subjects don't tire, allowing scientists to collect data and test theories about human behavior at unprecedented speed.
"AI could be a game-changer for social science research, where careful bias management and data fidelity are key."