OpenAI Unveils CriticGPT
OpenAI launches CriticGPT to spot errors and bugs in AI-generated code.

OpenAI has introduced CriticGPT, a new AI model that helps identify mistakes in code generated by ChatGPT. The tool is meant to improve the alignment of AI systems through the process AI developers call Reinforcement Learning from Human Feedback (RLHF), ultimately making the outputs of large language models more accurate.
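At the core of RLHF is preference data: human trainers compare candidate model answers and label the better one, and those comparisons train a reward model. The sketch below is purely illustrative of that labeling step; the record format and the `collect_preference` helper are assumptions for this example, not OpenAI's actual pipeline.

```python
# Minimal, illustrative sketch of the human-preference step in RLHF.
# The function name and record format are assumptions, not OpenAI's API.

def collect_preference(prompt, answer_a, answer_b, prefers_a):
    """Record which of two model answers a human trainer preferred."""
    chosen, rejected = (answer_a, answer_b) if prefers_a else (answer_b, answer_a)
    return {"prompt": prompt, "chosen": chosen, "rejected": rejected}

# A trainer compares two candidate answers and labels the better one.
record = collect_preference(
    prompt="Write a function that reverses a string.",
    answer_a="def rev(s): return s[::-1]",
    answer_b="def rev(s): return s.reverse()",  # buggy: str has no .reverse()
    prefers_a=True,
)
# Records like this form the comparison dataset used to train a reward model,
# which in turn steers the language model during RLHF fine-tuning.
```

A critic model like CriticGPT assists exactly this step: by surfacing bugs the trainer might miss, it makes the "chosen vs. rejected" labels more reliable.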

According to OpenAI’s blog post, CriticGPT catches errors in ChatGPT’s code output: trainers who used it to review code outperformed those without AI help more than 60% of the time. Users can also adjust how thoroughly the tool searches for bugs, giving them more control over its tendency to hallucinate, i.e., to highlight “errors” that don’t exist.
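That thoroughness control is, in effect, a trade-off between recall and false positives. The sketch below shows one simple way such a dial could work, filtering candidate critiques by a confidence score; this is an assumed illustration, not OpenAI's implementation.

```python
# Illustrative sketch (not OpenAI's implementation): a confidence threshold
# trades thoroughness for fewer spurious "errors".

def filter_critiques(candidates, threshold):
    """Keep only critiques whose confidence meets the threshold."""
    return [c for c in candidates if c["confidence"] >= threshold]

candidates = [
    {"issue": "off-by-one in loop bound", "confidence": 0.9},
    {"issue": "possible None dereference", "confidence": 0.6},
    {"issue": "variable name too short", "confidence": 0.2},  # likely noise
]

thorough = filter_critiques(candidates, threshold=0.5)   # more findings, more noise
strict = filter_critiques(candidates, threshold=0.85)    # fewer, higher-confidence findings
```

Lowering the threshold makes the critic more thorough but more likely to flag non-issues; raising it suppresses hallucinated "errors" at the cost of missing some real bugs.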

Besides hallucinating on occasion, the tool has other limitations: because it was trained on comparatively brief responses from ChatGPT, it may struggle to evaluate longer and more complex tasks.


Although CriticGPT’s suggestions are not always correct, they can help trainers catch many more problems with model-written answers than they would without AI help.

OpenAI needs better tools to align increasingly complex AI systems. In its research on CriticGPT, the company found that training a GPT-4-based critic with RLHF helps human trainers produce better RLHF data for GPT-4. OpenAI plans to scale this work further and put it into practice.