|
--- |
|
pipeline_tag: text-generation |
|
tags: |
|
- orca |
|
- orca2 |
|
- microsoft |
|
--- |
|
|
|
# Orca 2 |
|
|
|
|
|
|
Orca 2 is a helpful assistant built for research purposes only. It provides single-turn responses in tasks such as reasoning over user-given data, reading comprehension, math problem solving, and text summarization. The model is designed to excel particularly at reasoning.
|
|
|
We open-source Orca 2 to encourage further research on the development, evaluation, and alignment of smaller LMs. |
|
|
|
## What is Orca 2’s intended use(s)? |
|
|
|
+ Orca 2 is built for research purposes only. |
|
+ The main purpose is to allow the research community to assess its abilities and to provide a foundation for building better frontier models. |
|
|
|
## How was Orca 2 evaluated? |
|
|
|
+ Orca 2 has been evaluated on a large number of tasks ranging from reasoning to safety. Please refer to Sections 6, 7, 8, 9, 10, and 11 in the paper for details about different evaluation experiments. |
|
|
|
## Model Details |
|
|
|
Orca 2 is a finetuned version of LLaMA 2. Orca 2’s training data is a synthetic dataset that was created to enhance the small model’s reasoning abilities. All synthetic training data was filtered using the Azure content filters.
|
More details about the model can be found in the [Orca 2 tech report](https://arxiv.org/abs/2311.11045).
|
|
|
Refer to LLaMA 2 for details on the model architecture.
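
As a quick sanity check, the inherited LLaMA 2 architecture can be inspected from the published configuration. This is a minimal sketch, assuming the `microsoft/Orca-2-7b` checkpoint used in the Getting Started example below and a working `transformers` installation:

```python
# Minimal sketch: inspect the checkpoint configuration to see the LLaMA-2
# architecture hyperparameters that Orca 2 inherits.
from transformers import AutoConfig

config = AutoConfig.from_pretrained("microsoft/Orca-2-7b")
print(config.model_type)               # expected to report the "llama" architecture family
print(config.hidden_size)              # transformer hidden dimension
print(config.num_hidden_layers)        # number of decoder layers
print(config.num_attention_heads)      # attention heads per layer
print(config.max_position_embeddings)  # maximum context length
```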
|
|
|
## License |
|
|
|
The model is licensed under the Microsoft Research License.
|
|
|
Llama 2 is licensed under the [LLAMA 2 Community License](https://ai.meta.com/llama/license/), Copyright © Meta Platforms, Inc. All Rights Reserved. |
|
|
|
|
|
|
|
|
|
|
## Bias, Risks, and Limitations |
|
|
|
Orca 2, built upon the LLaMA 2 model family, retains many of its limitations, as well as the common limitations of other large language models and limitations originating from its training process, including:
|
|
|
**Data Biases**: Large language models, trained on extensive data, can inadvertently carry |
|
biases present in the source data. Consequently, the models may generate outputs that could |
|
be potentially biased or unfair. |
|
|
|
**Lack of Contextual Understanding**: Despite their impressive capabilities in language understanding and generation, these models exhibit limited real-world understanding, resulting |
|
in potential inaccuracies or nonsensical responses. |
|
|
|
**Lack of Transparency**: Due to their complexity and size, large language models can act as “black boxes”, making it difficult to comprehend the rationale behind specific outputs or decisions. We recommend reviewing the transparency notes from Azure for more information.
|
|
|
**Content Harms**: There are various types of content harms that large language models can cause. It is important to be aware of them when using these models, and to take actions to prevent them. It is recommended to leverage various content moderation services provided by different companies and institutions. On an important note, we hope for better regulations and standards from government and technology leaders around content harms for AI technologies in the future. We value and acknowledge the important role that the research and open-source community can play in this direction.
|
|
|
**Hallucination**: It is important to be aware of and cautious about hallucination: it is not obvious how to prevent these models from fabricating content, so one should not rely entirely on a given language model for critical decisions or information that might have a deep impact. Moreover, it is not clear whether small models may be more susceptible to hallucination in ungrounded generation use cases due to their smaller sizes and hence reduced memorization capacities. This is an active research topic, and we hope there will be more rigorous measurement, understanding, and mitigation around this topic.
|
|
|
**Potential for Misuse**: Without suitable safeguards, there is a risk that these models could |
|
be maliciously used for generating disinformation or harmful content. |
|
|
|
**Data Distribution**: Orca 2’s performance is likely to correlate strongly with the distribution |
|
of the tuning data. This correlation might limit its accuracy in areas underrepresented in |
|
the training dataset such as math, coding, and reasoning. |
|
|
|
**System messages**: Orca 2 demonstrates variance in performance depending on the system instructions. Additionally, the stochasticity introduced by the model size may lead to the generation of non-deterministic responses to different system instructions.
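
As a rough illustration of this sensitivity (not taken from the tech report), one can hold the user message fixed and vary only the system instruction using the same ChatML prompt format shown in the Getting Started section below; the terse system message here is a hypothetical variant:

```python
# Hypothetical sketch: build two ChatML prompts that differ only in the system
# instruction, then compare Orca 2's responses to gauge system-message sensitivity.
def build_chatml_prompt(system_message, user_message):
    return (
        f"<|im_start|>system\n{system_message}<|im_end|>\n"
        f"<|im_start|>user\n{user_message}<|im_end|>\n"
        f"<|im_start|>assistant"
    )

cautious_system = (
    "You are Orca, an AI language model created by Microsoft. You are a cautious "
    "assistant. You carefully follow instructions."
)
terse_system = "You are Orca, an AI language model created by Microsoft. Answer in one short sentence."

user_message = "Why does ice float on water?"

for system_message in (cautious_system, terse_system):
    prompt = build_chatml_prompt(system_message, user_message)
    # Feed each prompt to the same generation code (see run_inference below)
    # and compare the answers across system messages and across repeated runs.
    print(prompt)
```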
|
|
|
**Zero-Shot Settings**: Orca 2 was trained on data that mostly simulates zero-shot settings. While the model demonstrates very strong performance in zero-shot settings, it does not show the same gains from few-shot learning as other, especially larger, models.
|
|
|
**Synthetic data**: As Orca 2 is trained on synthetic data, it could inherit both the advantages |
|
and shortcomings of the models and methods used for data generation. We posit that Orca |
|
2 benefits from the safety measures incorporated during training and safety guardrails (e.g., |
|
content filter) within the Azure OpenAI API. However, detailed studies are required for |
|
better quantification of such risks. |
|
|
|
This model is solely designed for research settings, and its testing has only been carried |
|
out in such environments. It should not be used in downstream applications, as additional |
|
analysis is needed to assess potential harm or bias in the proposed application. |
|
|
|
## Getting started with Orca 2 |
|
|
|
**Safe inference with Azure AI Content Safety** |
|
|
|
The usage of [Azure AI Content Safety](https://azure.microsoft.com/en-us/products/ai-services/ai-content-safety/) on top of model prediction is strongly encouraged |
|
and can help prevent content harms. Azure AI Content Safety is a content moderation platform |
|
that uses AI to keep your content safe. By integrating Orca 2 with Azure AI Content Safety, |
|
we can moderate the model output by scanning it for sexual content, violence, hate, and |
|
self-harm with multiple severity levels and multi-lingual detection. |
|
|
|
```python |
|
import os |
|
import math |
|
import transformers |
|
import torch |
|
|
|
from azure.ai.contentsafety import ContentSafetyClient |
|
from azure.core.credentials import AzureKeyCredential |
|
from azure.core.exceptions import HttpResponseError |
|
from azure.ai.contentsafety.models import AnalyzeTextOptions |
|
|
|
CONTENT_SAFETY_KEY = os.environ["CONTENT_SAFETY_KEY"] |
|
CONTENT_SAFETY_ENDPOINT = os.environ["CONTENT_SAFETY_ENDPOINT"] |
|
|
|
# We use Azure AI Content Safety to filter out any content that reaches the "Medium" severity threshold
|
# For more information: https://learn.microsoft.com/en-us/azure/ai-services/content-safety/ |
|
def should_filter_out(input_text, threshold=4): |
|
    # Create a Content Safety client
|
client = ContentSafetyClient(CONTENT_SAFETY_ENDPOINT, AzureKeyCredential(CONTENT_SAFETY_KEY)) |
|
|
|
# Construct a request |
|
request = AnalyzeTextOptions(text=input_text) |
|
|
|
# Analyze text |
|
try: |
|
response = client.analyze_text(request) |
|
except HttpResponseError as e: |
|
print("Analyze text failed.") |
|
if e.error: |
|
print(f"Error code: {e.error.code}") |
|
print(f"Error message: {e.error.message}") |
|
raise |
|
print(e) |
|
raise |
|
|
|
categories = ["hate_result", "self_harm_result", "sexual_result", "violence_result"] |
|
max_score = -math.inf |
|
for category in categories: |
|
max_score = max(max_score, getattr(response, category).severity) |
|
|
|
return max_score >= threshold |
|
|
|
def run_inference(model_path, inputs): |
|
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") |
|
model = transformers.AutoModelForCausalLM.from_pretrained(model_path) |
|
model.to(device) |
|
|
|
tokenizer = transformers.AutoTokenizer.from_pretrained( |
|
model_path, |
|
model_max_length=4096, |
|
padding_side="right", |
|
use_fast=False, |
|
add_special_tokens=False, |
|
) |
|
inputs = tokenizer(inputs, return_tensors='pt') |
|
inputs = inputs.to(device) |
|
|
|
output_ids = model.generate(inputs["input_ids"], max_length=4096, do_sample=False, temperature=0.0, use_cache=True) |
|
sequence_length = inputs["input_ids"].shape[1] |
|
new_output_ids = output_ids[:, sequence_length:] |
|
answers = tokenizer.batch_decode(new_output_ids, skip_special_tokens=True) |
|
|
|
return answers |
|
|
|
model_path = 'microsoft/Orca-2-7b' |
|
|
|
system_message = "You are Orca, an AI language model created by Microsoft. You are a cautious assistant. You carefully follow instructions. You are helpful and harmless and you follow ethical guidelines and promote positive behavior." |
|
user_message = "\" \n :You can't just say, \"\"that's crap\"\" and remove it without gaining a consensus. You already know this, based on your block history. —/ \" \nIs the comment obscene? \nOptions : Yes, No." |
|
|
|
# We use Chat Markup Language https://github.com/MicrosoftDocs/azure-docs/blob/main/articles/ai-services/openai/includes/chat-markup-language.md#working-with-chat-markup-language-chatml |
|
prompt = f"<|im_start|>system\n{system_message}<|im_end|>\n<|im_start|>user\n{user_message}<|im_end|>\n<|im_start|>assistant" |
|
|
|
answers = run_inference(model_path, prompt) |
|
final_output = answers[0] if not should_filter_out(answers[0]) else "[Content Filtered]" |
|
|
|
print(final_output) |
|
``` |
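
For quick experimentation without the content-safety wrapper, a bare-bones generation call can look like the sketch below. This is an illustrative example rather than an official recipe: it reuses the same ChatML prompt format and the `microsoft/Orca-2-7b` checkpoint, and `device_map="auto"` assumes the `accelerate` package is installed.

```python
import torch
import transformers

# Minimal sketch: direct generation without the Azure AI Content Safety wrapper above.
# device_map="auto" (requires `accelerate`) places the weights on available devices,
# and torch_dtype=torch.bfloat16 roughly halves memory use relative to float32.
model = transformers.AutoModelForCausalLM.from_pretrained(
    "microsoft/Orca-2-7b", device_map="auto", torch_dtype=torch.bfloat16
)
tokenizer = transformers.AutoTokenizer.from_pretrained("microsoft/Orca-2-7b", use_fast=False)

system_message = "You are Orca, an AI language model created by Microsoft. You are a cautious assistant. You carefully follow instructions. You are helpful and harmless and you follow ethical guidelines and promote positive behavior."
user_message = "Explain the difference between a simile and a metaphor in one paragraph."
prompt = f"<|im_start|>system\n{system_message}<|im_end|>\n<|im_start|>user\n{user_message}<|im_end|>\n<|im_start|>assistant"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(inputs["input_ids"], max_length=4096, do_sample=False, use_cache=True)
answer = tokenizer.batch_decode(output_ids[:, inputs["input_ids"].shape[1]:], skip_special_tokens=True)[0]
print(answer)
```

As with the safe-inference example above, passing the generated text through `should_filter_out` (or another content moderation service) before displaying it is still recommended.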