After Whisper, OpenAI Quietly Launches Evals

Along with its much-awaited GPT-4, OpenAI has open-sourced a software framework called Evals to evaluate the performance of its AI models. The launch follows the release of its multilingual speech recognition system, Whisper, in September last year.

OpenAI says its "staff actively review these evals when considering improvements to upcoming models."

Additionally, the Microsoft-backed AI firm stated that the tool will enable users to identify shortcomings in its models and provide feedback to guide improvements.

Following the release of the ChatGPT and Whisper APIs last month, OpenAI stated that it would not utilise customer data to train its models. Instead, it has now opted for crowd-sourced methods to improve the robustness of its AI models.

The approach follows the example of Meta's Dynabench, which crowdsources examples that challenge models on tasks such as hate speech detection, sentiment analysis and question answering, and the 'Break It, Build It' platform developed by the University of Maryland's CLIP Laboratory, which allows researchers to submit their models to users who are tasked with generating examples to defeat them.

OpenAI is hoping that "Evals becomes a vehicle to share and crowdsource benchmarks, representing a maximally wide set of failure modes and difficult tasks."

For example, OpenAI created a logic puzzles evaluation that contains 10 prompts where GPT-4 fails.

Evals also includes several notebooks implementing academic benchmarks, as well as a few variations showing how to integrate small subsets of CoQA (the Conversational Question Answering Challenge) as an example.
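
To give a sense of what contributing such a benchmark involves, the sketch below, based on the sample format documented in the open-source Evals repository, builds a small JSON Lines file of prompt and ideal-answer pairs for a basic "match"-style eval. The file name, eval name and question content are illustrative placeholders, not part of OpenAI's announcement.

import json
from pathlib import Path

# Illustrative samples for a basic "match"-style eval: each record pairs a chat
# prompt with the ideal answer the model's completion is checked against.
# The questions and answers here are placeholders, not an official OpenAI eval.
samples = [
    {
        "input": [
            {"role": "system", "content": "Answer with a single word."},
            {"role": "user", "content": "What is the capital of France?"},
        ],
        "ideal": "Paris",
    },
    {
        "input": [
            {"role": "system", "content": "Answer with a single number."},
            {"role": "user", "content": "How many days are in a leap year?"},
        ],
        "ideal": "366",
    },
]

# Write the samples as JSON Lines, the format the Evals framework reads.
out_path = Path("my_eval_samples.jsonl")  # hypothetical file name
with out_path.open("w", encoding="utf-8") as f:
    for sample in samples:
        f.write(json.dumps(sample) + "\n")

# Per the repository's documentation, a file like this is then registered in a
# YAML entry under the registry directory and run from the command line, e.g.:
#   oaieval gpt-3.5-turbo my-eval   # eval name here is a placeholder
print(f"Wrote {len(samples)} samples to {out_path}")

According to the repository's contribution guidelines, such a samples file is paired with a short YAML registry entry describing the eval before being submitted as a pull request.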

To incentivize this, OpenAI plans to grant GPT-4 access to those who contribute “high-quality” benchmarks.
