OpenGenerativeAI / llm-colosseum
Benchmark LLMs by fighting in Street Fighter 3! The new way to evaluate the quality of an LLM
Make LLMs fight each other in real time in Street Fighter III.
Which LLM will be the best fighter?
They need to be fast, smart, and adaptable. Street Fighter III assesses the ability of LLMs to understand their environment and take actions based on a specific context. Unlike RL models, which blindly take actions based on the reward function, LLMs are fully aware of the context and act accordingly.
Our experiments (342 fights so far) led to the following leaderboard. Each LLM has an Elo rating based on its results (a sketch of the standard Elo update follows the table).
| Model | Rating |
|---|---|
| 🥇 openai:gpt-3.5-turbo-0125 | 1776.11 |
| 🥈 mistral:mistral-small-latest | 1586.16 |
| 🥉 openai:gpt-4-1106-preview | 1584.78 |
| openai:gpt-4 | 1517.2 |
| openai:gpt-4-turbo-preview | 1509.28 |
| openai:gpt-4-0125-preview | 1438.92 |
| mistral:mistral-medium-latest | 1356.19 |
| mistral:mistral-large-latest | 1231.36 |
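For reference, a standard Elo update after each fight looks like the sketch below. The exact K-factor and implementation behind the leaderboard are assumptions here, not taken from the repo.

```python
def expected_score(rating_a: float, rating_b: float) -> float:
    # Probability that A beats B under the Elo model
    return 1 / (1 + 10 ** ((rating_b - rating_a) / 400))

def update_elo(rating_a: float, rating_b: float, a_won: bool, k: float = 32) -> tuple[float, float]:
    # Move both ratings toward the observed result; k (assumed value) controls the step size
    score_a = 1.0 if a_won else 0.0
    delta = k * (score_a - expected_score(rating_a, rating_b))
    return rating_a + delta, rating_b - delta
```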
Each player is controlled by an LLM. We send the LLM a text description of the screen, and the LLM decides on the next moves its character will make. The next moves depend on its previous moves, its opponent's moves, and its power and health bars.
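To make that concrete, here is a toy sketch of the observe → prompt → act cycle. Every name in it is made up for illustration; the real logic lives in the `eval` and `agent` modules.

```python
# Toy sketch of one decision step; all names here are hypothetical.
def describe(state: dict) -> str:
    # Stand-in for the text description of the screen sent to the LLM
    return f"Opponent at distance {state['distance']}, your health: {state['health']}."

def fake_llm(observation: str) -> str:
    # Stand-in for the LLM call; returns a bullet list of moves
    return "- Move closer\n- Medium Punch"

def parse_moves(plan: str) -> list[str]:
    return [line.lstrip("- ").strip() for line in plan.splitlines() if line.strip()]

state = {"distance": 42, "health": 100}
for move in parse_moves(fake_llm(describe(state))):
    print("queued move:", move)
```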
To install and run the arena:

- Download the Street Fighter III ROM and put it in `~/.diambra/roms`
- Install the dependencies with `make install` or `pip install -r requirements.txt`
- Create a `.env` file and fill it with the content like in the `.env.example` file
- Run `make run` to start a fight
To disable the LLM calls, set `DISABLE_LLM` to `True` in the `.env` file. The agents will then choose their actions randomly.
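As a sketch, the `.env` might look like the snippet below. `DISABLE_LLM` comes from this README, while the API key names are assumptions based on the providers in the leaderboard; copy the real keys from `.env.example`.

```
# Assumed key names; check .env.example for the real ones
OPENAI_API_KEY=sk-...
MISTRAL_API_KEY=...
DISABLE_LLM=False
```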
Change the logging level in the `script.py` file.
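Assuming `script.py` uses Python's standard `logging` module (an assumption, not confirmed by this README), the change would look something like:

```python
import logging

# Assumed: switch the level from INFO to DEBUG for more verbose output
logging.basicConfig(level=logging.DEBUG)
```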
You can run the arena with local models using Ollama.
Make sure you have Ollama installed and running, with a model downloaded (for example, run `ollama serve` in one terminal, then `ollama pull mistral` to fetch the model).
Run `make local` to start the fight.
By default, it runs Mistral against Mistral. To use other models, change the `model` parameter in `ollama.py`:
```python
from eval.game import Game, Player1, Player2

def main():
    game = Game(
        render=True,
        save_game=True,
        player_1=Player1(
            nickname="Baby",
            model="ollama:mistral",  # change this
        ),
        player_2=Player2(
            nickname="Daddy",
            model="ollama:mistral",  # change this
        ),
    )
    game.run()
    return 0
```
The convention we use is `model_provider:model_name`. If you want to use a local model other than Mistral, you can use `ollama:some_other_model`.
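The helper `get_provider_and_model` used in `call_llm()` below presumably just splits this slug on the first colon. A minimal sketch, assuming that behavior (the repo's actual helper may differ):

```python
def get_provider_and_model(slug: str) -> tuple[str, str]:
    # "ollama:mistral" -> ("ollama", "mistral"); assumed behavior of the repo's helper
    provider, _, model = slug.partition(":")
    return provider, model
```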
The LLM is called in the `Robot.call_llm()` method of the `agent/robot.py` file:
```python
def call_llm(
    self,
    temperature: float = 0.7,
    max_tokens: int = 50,
    top_p: float = 1.0,
) -> str:
    """
    Make an API call to the language model.

    Edit this method to change the behavior of the robot!
    """
    # self.model is a slug like mistral:mistral-small-latest or ollama:mistral
    provider_name, model_name = get_provider_and_model(self.model)
    client = get_sync_client(provider_name)  # OpenAI client

    # Generate the prompts
    move_list = "- " + "\n - ".join([move for move in META_INSTRUCTIONS])
    system_prompt = f"""You are the best and most aggressive Street Fighter III 3rd strike player in the world.
Your character is {self.character}. Your goal is to beat the other opponent. You respond with a bullet point list of moves.
{self.context_prompt()}
The moves you can use are:
{move_list}
----
Reply with a bullet point list of moves. The format should be: `- <name of the move>` separated by a new line.
Example if the opponent is close:
- Move closer
- Medium Punch

Example if the opponent is far:
- Fireball
- Move closer"""

    # Call the LLM
    completion = client.chat.completions.create(
        model=model_name,
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": "Your next moves are:"},
        ],
        temperature=temperature,
        max_tokens=max_tokens,
        top_p=top_p,
    )

    # Return the string to be parsed with regex
    llm_response = completion.choices[0].message.content.strip()
    return llm_response
```
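The comment about regex parsing refers to matching the bullet list against the known move names. A minimal sketch of what that could look like; the move names and parsing details here are assumptions, not the repo's actual code:

```python
import re

KNOWN_MOVES = {"move closer", "fireball", "medium punch"}  # assumed subset of META_INSTRUCTIONS

def parse_llm_response(llm_response: str) -> list[str]:
    # Keep only bullet lines whose move name is recognized
    moves = []
    for line in llm_response.splitlines():
        match = re.match(r"^\s*-\s*(.+?)\s*$", line)
        if match and match.group(1).lower() in KNOWN_MOVES:
            moves.append(match.group(1))
    return moves
```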
To use another model or other prompts, make a call to another client in this function, change the system prompt, or experiment with anything else you like.
Create a new class that inherits from `Robot`, make the changes you want, and open a PR.
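For example, a custom robot could look like the sketch below; the class name and the sampling tweak are illustrative, not an existing bot.

```python
from agent.robot import Robot

class HotheadRobot(Robot):
    """Hypothetical example: same prompt pipeline, hotter sampling."""

    def call_llm(
        self,
        temperature: float = 0.9,  # riskier, less deterministic play
        max_tokens: int = 50,
        top_p: float = 1.0,
    ) -> str:
        return super().call_llm(temperature=temperature, max_tokens=max_tokens, top_p=top_p)
```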
We'll do our best to add it to the ranking!
Made with ❤️ by the OpenGenerativeAI team from phospho (@oulianov @Pierre-LouisBJT @Platinn) and Quivr (@StanGirard) during Mistral Hackathon 2024 in San Francisco