refuel-ai / autolabel
Label, clean and enrich text datasets with LLMs
Clean, labeled data at the speed of thought.
pip install refuel-autolabel
Access to large, clean, and diverse labeled datasets is critical for any machine learning effort to succeed. State-of-the-art LLMs like GPT-4 are able to automatically label data with high accuracy, at a fraction of the cost and time compared to manual labeling.
Autolabel is a Python library to label, clean and enrich text datasets with any Large Language Models (LLM) of your choice.
Autolabel provides a simple 3-step process for labeling data:
1. Specify the labeling task in a config: guidelines, labels, and a few seed examples.
2. Preview an example prompt and estimate the labeling cost with agent.plan.
3. Run labeling on the full dataset (or a subset) with agent.run.
Let's imagine we are building an ML model to analyze the sentiment of movie reviews. We have a dataset of movie reviews that we'd like to get labeled first. For this case, here's what the example dataset and config will look like:
{
    "task_name": "MovieSentimentReview",
    "task_type": "classification",
    "model": {
        "provider": "openai",
        "name": "gpt-3.5-turbo"
    },
    "dataset": {
        "label_column": "label",
        "delimiter": ","
    },
    "prompt": {
        "task_guidelines": "You are an expert at analyzing the sentiment of movie reviews. Your job is to classify the provided movie review into one of the following labels: {labels}",
        "labels": [
            "positive",
            "negative",
            "neutral"
        ],
        "few_shot_examples": [
            {
                "example": "I got a fairly uninspired stupid film about how human industry is bad for nature.",
                "label": "negative"
            },
            {
                "example": "I loved this movie. I found it very heart warming to see Adam West, Burt Ward, Frank Gorshin, and Julie Newmar together again.",
                "label": "positive"
            },
            {
                "example": "This movie will be played next week at the Chinese theater.",
                "label": "neutral"
            }
        ],
        "example_template": "Input: {example}\nOutput: {label}"
    }
}
Initialize the labeling agent and pass it the config:
from autolabel import LabelingAgent
agent = LabelingAgent(config='config.json')
Preview an example prompt that will be sent to the LLM:
agent.plan('dataset.csv')
This prints:
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 100/100 0:00:00 0:00:00
┌──────────────────────────┬─────────┐
│ Total Estimated Cost     │ $0.538  │
│ Number of Examples       │ 200     │
│ Average cost per example │ 0.00269 │
└──────────────────────────┴─────────┘
─────────────────────────────────────────
Prompt Example:
You are an expert at analyzing the sentiment of movie reviews. Your job is to classify the provided movie review into one of the following labels: [positive, negative, neutral]
Some examples with their output answers are provided below:
Example: I got a fairly uninspired stupid film about how human industry is bad for nature.
Output:
negative
Example: I loved this movie. I found it very heart warming to see Adam West, Burt Ward, Frank Gorshin, and Julie Newmar together again.
Output:
positive
Example: This movie will be played next week at the Chinese theater.
Output:
neutral
Now I want you to label the following example:
Input: A rare exception to the rule that great literature makes disappointing films.
Output:
─────────────────────────────────────────────────────────────────────────────────────────
Finally, we can run the labeling on a subset or entirety of the dataset:
labels, output_df, metrics = agent.run('dataset.csv')
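To label only a subset first, one simple option (a sketch using plain pandas rather than any Autolabel-specific parameter) is to write the first N rows to a smaller CSV and pass that file instead:

import pandas as pd

# A minimal sketch using plain pandas (not an Autolabel API): write the first
# 100 rows of the dataset to a smaller CSV and label just that subset.
pd.read_csv('dataset.csv').head(100).to_csv('dataset_subset.csv', index=False)

labels, output_df, metrics = agent.run('dataset_subset.csv')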
The output dataframe contains the label column:
output_df.head()
text ... MovieSentimentReview_llm_label
0 I was very excited about seeing this film, ant... ... negative
1 Serum is about a crazy doctor that finds a ser... ... negative
4 I loved this movie. I knew it would be chocked... ... positive
...
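Since the config declares an existing ground-truth label column, a quick sanity check is to measure agreement between those labels and the LLM's labels. A minimal sketch, assuming the label column is carried through to the output dataframe:

# A minimal sketch, assuming the ground-truth "label" column (declared as
# label_column in the config) is carried through to output_df.
agreement = (output_df['label'] == output_df['MovieSentimentReview_llm_label']).mean()
print(f"Agreement with existing labels: {agreement:.1%}")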
Refuel provides access to hosted open-source LLMs for labeling and for estimating confidence. This is helpful because you can calibrate a confidence threshold for your labeling task and route less-confident labels to humans, while still getting the benefits of auto-labeling for the confident examples.
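For illustration, a hypothetical routing sketch (the confidence column name and threshold here are assumptions, not Autolabel's documented output):

# A hypothetical sketch: the confidence column name and threshold are
# assumptions for illustration, not Autolabel's documented output.
CONFIDENCE_THRESHOLD = 0.85  # calibrate on a held-out, human-labeled subset

confidence = output_df['MovieSentimentReview_llm_confidence']
auto_labeled = output_df[confidence >= CONFIDENCE_THRESHOLD]
human_review = output_df[confidence < CONFIDENCE_THRESHOLD]

auto_labeled.to_csv('auto_labeled.csv', index=False)
human_review.to_csv('human_review_queue.csv', index=False)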
In order to use Refuel hosted LLMs, you can request access here.
Check out our technical report to learn more about how various LLMs and human annotators compare on label quality, turnaround time, and cost.
Our goal is to allow users to label, create, or enrich any dataset with any LLM, easily and quickly.
There are four focus areas for Autolabel in 2023, and we will be releasing a more detailed roadmap soon; in the meantime, we love suggestions and contributions from the community. Chat with the Refuel team and the Autolabel community on Discord, or open GitHub issues to report bugs and request features.
Autolabel is a rapidly developing project. We welcome contributions in all forms: bug reports, pull requests, and ideas for improving the library.