enable evaluation script to also evaluate remote models #294

Open: wants to merge 1 commit into base: main
42 changes: 34 additions & 8 deletions scripts/evaluate_best_checkpoint.py
@@ -2,9 +2,18 @@

"""
Example usage:
# to evaluate directory of checkpoints
python scripts/evaluate_best_checkpoint.py \
-    /path/to/checkpoint_dir \
+    best-checkpoint /path/to/checkpoint_dir \
--output-file /path/to/output_file

+    # to evaluate a single checkpoint
+    python scripts/evaluate_best_checkpoint.py evaluate \
+        --hf-model='meta-llama/Llama-3.1-8B-Instruct'
+
+    # OR for a local model
+    python scripts/evaluate_best_checkpoint.py evaluate \
+        --input-dir='/path/to/checkpoint'
"""

# Standard
@@ -131,7 +140,14 @@ def best_checkpoint(

@app.command()
def evaluate(
-    input_dir: Path = typer.Argument(..., help="Input directory to process"),
+    input_dir: Annotated[
+        Optional[Path],
+        typer.Option(help="Input directory to process"),
+    ] = None,
+    hf_model: Annotated[
+        Optional[str],
+        typer.Option(help="The HF model repo to evaluate, e.g. 'meta-llama/Llama-3.1-8B-Instruct'"),
+    ] = None,
tasks: Annotated[
Optional[list[str]],
typer.Option(
@@ -147,22 +163,32 @@ def evaluate(
"""
Evaluate a single checkpoint directory and save results to JSON file.
"""
-    if not input_dir.exists():
-        typer.echo(f"Error: Input directory '{input_dir}' does not exist")
+    if not input_dir and not hf_model:
+        typer.echo("Error: one of '--input-dir' or '--hf-model' must be provided")
         raise typer.Exit(1)

-    if not input_dir.is_dir():
-        typer.echo(f"Error: '{input_dir}' is not a directory")
+    if input_dir and hf_model:
+        typer.echo("Error: '--input-dir' and '--hf-model' were both provided, but command only accepts one")
         raise typer.Exit(1)

+    if input_dir:
+        if not input_dir.exists():
+            typer.echo(f"Error: Input directory '{input_dir}' does not exist")
+            raise typer.Exit(1)
+
+        if not input_dir.is_dir():
+            typer.echo(f"Error: '{input_dir}' is not a directory")
+            raise typer.Exit(1)
+
+    model_path = hf_model if hf_model else str(input_dir)
Review comment:
suggestion (code-quality): Replace if-expression with `or` (or-if-exp-identity)

Suggested change:
-    model_path = hf_model if hf_model else str(input_dir)
+    model_path = hf_model or str(input_dir)


Explanation: Here we set a value if it evaluates to true, and otherwise fall back to a default.

The `or` form is a bit easier to read and avoids repeating `hf_model`.

It works because the left-hand side is evaluated first. If `hf_model` is truthy, `model_path` is set to it and the right-hand side is never evaluated. If it is falsy, the right-hand side is evaluated and `model_path` is set to `str(input_dir)`.
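As a standalone illustration (not part of the PR), the two forms agree whenever the left operand is `None`, but note that `or` falls through on any falsy value, not just `None`:

```python
input_dir = "/path/to/checkpoint"

# Equivalent when hf_model is None:
hf_model = None
assert (hf_model if hf_model else str(input_dir)) == (hf_model or str(input_dir))

# Caveat: `or` falls back on ANY falsy left operand, e.g. an empty string.
hf_model = ""
print(hf_model or str(input_dir))  # prints "/path/to/checkpoint"
```

In this command both options default to `None`, so the two spellings behave identically here.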

typer.echo("importing LeaderboardV2Evaluator, this may take a while...")
# First Party
from instructlab.eval.leaderboard import LeaderboardV2Evaluator

typer.echo("done")

evaluator = LeaderboardV2Evaluator(
-        model_path=str(input_dir), num_gpus=num_gpus, eval_config={"batch_size": "auto"}
+        model_path=model_path, num_gpus=num_gpus, eval_config={"batch_size": "auto"}
)
if tasks:
evaluator.tasks = tasks
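The argument handling this PR adds can be sketched as a small standalone function (`resolve_model_path` is a hypothetical name for illustration, and `ValueError` stands in for `typer.Exit`):

```python
from pathlib import Path
from typing import Optional

def resolve_model_path(input_dir: Optional[Path], hf_model: Optional[str]) -> str:
    """Pick the model source the same way the evaluate command now does."""
    if not input_dir and not hf_model:
        raise ValueError("one of '--input-dir' or '--hf-model' must be provided")
    if input_dir and hf_model:
        raise ValueError("'--input-dir' and '--hf-model' are mutually exclusive")
    # A remote HF repo id wins when given; otherwise use the local directory.
    return hf_model if hf_model else str(input_dir)

print(resolve_model_path(None, "meta-llama/Llama-3.1-8B-Instruct"))
# prints "meta-llama/Llama-3.1-8B-Instruct"
```

This keeps `evaluate` backward compatible: existing local-checkpoint invocations still work, while exactly one of the two sources must be supplied.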