While people were busy debating the good and the harm ChatGPT might bring to our lives, OpenAI released a new tool, the AI Classifier. ChatGPT, apart from being praised for the remarkable things it can do, was also gaining notoriety for the extent of misuse it could encourage.
The AI Classifier is trained to differentiate between AI-written and human-written text. It is a language model that, in OpenAI's evaluations, correctly labels AI-written text as "likely AI-written" 26% of the time (true positives), while incorrectly flagging human-written text as AI-written 9% of the time (false positives).
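To put those figures in perspective, here is a small hypothetical sketch (not OpenAI's code; the sample sizes are invented for illustration) of what a 26% true-positive rate and a 9% false-positive rate would mean in practice:

```python
def expected_flags(n_ai, n_human, tpr=0.26, fpr=0.09):
    """Expected counts of texts labeled 'likely AI-written'.

    tpr and fpr default to the rates OpenAI reported;
    the inputs are hypothetical sample sizes.
    """
    flagged_ai = round(n_ai * tpr)        # AI texts correctly flagged
    flagged_human = round(n_human * fpr)  # human texts wrongly flagged
    return flagged_ai, flagged_human

# Out of 100 AI-written and 100 human-written texts, we would expect
# about 26 AI texts caught and 9 human texts falsely accused.
print(expected_flags(100, 100))
```

The false-positive figure is the more consequential one here: roughly one in eleven genuinely human-written texts would be mislabeled, which is why OpenAI warns against using the classifier as sole evidence.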
OpenAI claims that this classifier will “inform mitigations for false claims that a human wrote AI-generated text: for example, running automated misinformation campaigns, using AI tools for academic dishonesty, and positioning an AI chatbot as a human.”
The tool is still in development, and OpenAI has been upfront about its limitations. The company says the classifier should not be used as a primary decision-making tool, as it has notable drawbacks: it is unreliable on short texts, performs worse on text in languages other than English, and struggles with highly predictable text.
While the AI Classifier is still a work in progress, OpenAI has released it for public use to gauge whether people find such a tool useful and how well it is received.