OpenAI shutters AI detector due to low accuracy
Artificial intelligence powerhouse OpenAI has quietly pulled the plug on its AI-detection software, citing a low rate of accuracy.
The OpenAI-developed AI classifier was first launched on Jan. 31 and aimed to help users, such as teachers and professors, distinguish human-written text from AI-generated text.
However, per the original blog post announcing the launch of the tool, the AI classifier has been shut down as of July 20:
“As of July 20, 2023, the AI classifier is no longer available due to its low rate of accuracy.”
The link to the tool is no longer functional, and the note offered only brief reasoning as to why the tool was shut down. However, the company explained that it was researching new, more effective methods of identifying AI-generated content.
“We are working to incorporate feedback and are currently researching more effective provenance techniques for text, and have made a commitment to develop and deploy mechanisms that enable users to understand if audio or visual content is AI-generated,” the note read.

From the get-go, OpenAI made it clear the detection tool was prone to errors and could not be considered “fully reliable.”
The company said limitations of its AI detection tool included being “very inaccurate” at verifying text of fewer than 1,000 characters, and that it could “confidently” label text written by humans as AI-generated.
Related: Apple has its own GPT AI system but no stated plans for public release: Report
The classifier is the latest of OpenAI’s products to come under scrutiny.
On July 18, researchers from Stanford and UC Berkeley published a study which revealed that OpenAI’s flagship product ChatGPT was getting significantly worse with age.
We evaluated #ChatGPT‘s behavior over time and found substantial diffs in its responses to the *same questions* between the June version of GPT4 and GPT3.5 and the March versions. The newer versions got worse on some tasks. w/ Lingjiao Chen @matei_zaharia https://t.co/TGeN4T18Fd https://t.co/36mjnejERy pic.twitter.com/FEiqrUVbg6
— James Zou (@james_y_zou) July 19, 2023
Researchers found that over the course of the past few months, ChatGPT-4’s ability to accurately identify prime numbers had plummeted from 97.6% to just 2.4%. Additionally, both ChatGPT-3.5 and ChatGPT-4 saw a significant decline in their ability to generate new lines of code.
AI Eye: AIs trained on AI content go MAD, is Threads a loss leader for AI data?