---
language:
  - en
  - fr
  - de
  - es
  - it
  - pt
  - zh
  - ja
  - ru
  - ko
license: other
license_name: mrl
inference: false
license_link: https://mistral.ai/licenses/MRL-0.1.md
extra_gated_prompt: >-
  # Mistral AI Research License

  If You want to use a Mistral Model, a Derivative or an Output for any purpose
  that is not expressly authorized under this Agreement, You must request a
  license from Mistral AI, which Mistral AI may grant to You in Mistral AI's
  sole discretion. To discuss such a license, please contact Mistral AI via the
  website contact form: https://mistral.ai/contact/

  ## 1. Scope and acceptance

  **1.1. Scope of the Agreement.** This Agreement applies to any use,
  modification, or Distribution of any Mistral Model by You, regardless of the
  source You obtained a copy of such Mistral Model.

  **1.2. Acceptance.** By accessing, using, modifying, Distributing a Mistral
  Model, or by creating, using or distributing a Derivative of the Mistral
  Model, You agree to be bound by this Agreement.

  **1.3. Acceptance on behalf of a third-party.** If You accept this Agreement
  on behalf of Your employer or another person or entity, You warrant and
  represent that You have the authority to act and accept this Agreement on
  their behalf. In such a case, the word "You" in this Agreement will refer to
  Your employer or such other person or entity.

  ## 2. License

  **2.1. Grant of rights.** Subject to Section 3 below, Mistral AI hereby
  grants You a non-exclusive, royalty-free, worldwide, non-sublicensable,
  non-transferable, limited license to use, copy, modify, and Distribute under
  the conditions provided in Section 2.2 below, the Mistral Model and any
  Derivatives made by or for Mistral AI and to create Derivatives of the Mistral
  Model.

  **2.2. Distribution of Mistral Model and Derivatives made by or for Mistral
  AI.** Subject to Section 3 below, You may Distribute copies of the Mistral
  Model and/or Derivatives made by or for Mistral AI, under the following
  conditions: You must make available a copy of this Agreement to third-party
  recipients of the Mistral Models and/or Derivatives made by or for Mistral AI
  you Distribute, it being specified that any rights to use the Mistral Models
  and/or Derivatives made by or for Mistral AI shall be directly granted by
  Mistral AI to said third-party recipients pursuant to the Mistral AI Research
  License agreement executed between these parties; You must retain in all
  copies of the Mistral Models the following attribution notice within a
  "Notice" text file distributed as part of such copies: "Licensed by Mistral AI
  under the Mistral AI Research License".

  **2.3. Distribution of Derivatives made by or for You.** Subject to Section 3
  below, You may Distribute any Derivatives made by or for You under additional
  or different terms and conditions, provided that: In any event, the use and
  modification of Mistral Model and/or Derivatives made by or for Mistral AI
  shall remain governed by the terms and conditions of this Agreement; You
  include in any such Derivatives made by or for You prominent notices stating
  that You modified the concerned Mistral Model; and Any terms and conditions
  You impose on any third-party recipients relating to Derivatives made by or
  for You shall neither limit such third-party recipients' use of the Mistral
  Model or any Derivatives made by or for Mistral AI in accordance with the
  Mistral AI Research License nor conflict with any of its terms and conditions.

  ## 3. Limitations

  **3.1. Misrepresentation.** You must not misrepresent or imply, through any
  means, that the Derivatives made by or for You and/or any modified version of
  the Mistral Model You Distribute under your name and responsibility is an
  official product of Mistral AI or has been endorsed, approved or validated by
  Mistral AI, unless You are authorized by Us to do so in writing.

  **3.2. Usage Limitation.** You shall only use the Mistral Models, Derivatives
  (whether or not created by Mistral AI) and Outputs for Research Purposes.

  ## 4. Intellectual Property

  **4.1. Trademarks.** No trademark licenses are granted under this Agreement,
  and in connection with the Mistral Models, You may not use any name or mark
  owned by or associated with Mistral AI or any of its affiliates, except (i) as
  required for reasonable and customary use in describing and Distributing the
  Mistral Models and Derivatives made by or for Mistral AI and (ii) for
  attribution purposes as required by this Agreement.

  **4.2. Outputs.** We claim no ownership rights in and to the Outputs. You are
  solely responsible for the Outputs You generate and their subsequent uses in
  accordance with this Agreement. Any Outputs shall be subject to the
  restrictions set out in Section 3 of this Agreement.

  **4.3. Derivatives.** By entering into this Agreement, You accept that any
  Derivatives that You may create or that may be created for You shall be
  subject to the restrictions set out in Section 3 of this Agreement.

  ## 5. Liability

  **5.1. Limitation of liability.** In no event, unless required by applicable
  law (such as deliberate and grossly negligent acts) or agreed to in writing,
  shall Mistral AI be liable to You for damages, including any direct, indirect,
  special, incidental, or consequential damages of any character arising as a
  result of this Agreement or out of the use or inability to use the Mistral
  Models and Derivatives (including but not limited to damages for loss of data,
  loss of goodwill, loss of expected profit or savings, work stoppage, computer
  failure or malfunction, or any damage caused by malware or security breaches),
  even if Mistral AI has been advised of the possibility of such damages.

  **5.2. Indemnification.** You agree to indemnify and hold harmless Mistral AI
  from and against any claims, damages, or losses arising out of or related to
  Your use or Distribution of the Mistral Models and Derivatives.

  ## 6. Warranty

  **6.1. Disclaimer.** Unless required by applicable law or prior agreed to by
  Mistral AI in writing, Mistral AI provides the Mistral Models and Derivatives
  on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
  express or implied, including, without limitation, any warranties or
  conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
  PARTICULAR PURPOSE. Mistral AI does not represent nor warrant that the Mistral
  Models and Derivatives will be error-free, meet Your or any third party's
  requirements, be secure or will allow You or any third party to achieve any
  kind of result or generate any kind of content. You are solely responsible for
  determining the appropriateness of using or Distributing the Mistral Models
  and Derivatives and assume any risks associated with Your exercise of rights
  under this Agreement.

  ## 7. Termination

  **7.1. Term.** This Agreement is effective as of the date of your acceptance
  of this Agreement or access to the concerned Mistral Models or Derivatives and
  will continue until terminated in accordance with the following terms.

  **7.2. Termination.** Mistral AI may terminate this Agreement at any time if
  You are in breach of this Agreement. Upon termination of this Agreement, You
  must cease to use all Mistral Models and Derivatives and shall permanently
  delete any copy thereof. The following provisions, in their relevant parts,
  will survive any termination or expiration of this Agreement, each for the
  duration necessary to achieve its own intended purpose (e.g. the liability
  provision will survive until the end of the applicable limitation
  period): Sections 5 (Liability), 6 (Warranty), 7 (Termination) and 8 (General
  Provisions).

  **7.3. Litigation.** If You initiate any legal action or proceedings against
  Us or any other entity (including a cross-claim or counterclaim in a lawsuit),
  alleging that the Model or a Derivative, or any part thereof, infringe upon
  intellectual property or other rights owned or licensable by You, then any
  licenses granted to You under this Agreement will immediately terminate as of
  the date such legal action or claim is filed or initiated.

  ## 8. General provisions

  **8.1. Governing laws.** This Agreement will be governed by the laws of
  France, without regard to choice of law principles, and the UN Convention on
  Contracts for the International Sale of Goods does not apply to this
  Agreement.

  **8.2. Competent jurisdiction.** The courts of Paris shall have exclusive
  jurisdiction of any dispute arising out of this Agreement.

  **8.3. Severability.** If any provision of this Agreement is held to be
  invalid, illegal or unenforceable, the remaining provisions shall be
  unaffected thereby and remain valid as if such provision had not been set
  forth herein.

  ## 9. Definitions

  "Agreement": means this Mistral AI Research License agreement governing the
  access, use, and Distribution of the Mistral Models, Derivatives and Outputs.

  "Derivative": means any (i) modified version of the Mistral Model (including
  but not limited to any customized or fine-tuned version thereof), (ii) work
  based on the Mistral Model, or (iii) any other derivative work thereof.

  "Distribution", "Distributing", "Distribute" or "Distributed": means
  supplying, providing or making available, by any means, a copy of the Mistral
  Models and/or the Derivatives as the case may be, subject to Section 3 of this
  Agreement.

  "Mistral AI", "We" or "Us": means Mistral AI, a French société par actions
  simplifiée registered in the Paris commercial registry under the number 952
  418 325, and having its registered seat at 15, rue des Halles, 75001 Paris.

  "Mistral Model": means the foundational large language model(s), and its
  elements which include algorithms, software, instructed checkpoints,
  parameters, source code (inference code, evaluation code and, if applicable,
  fine-tuning code) and any other elements associated thereto made available by
  Mistral AI under this Agreement, including, if any, the technical
  documentation, manuals and instructions for the use and operation thereof.

  "Research Purposes": means any use of a Mistral Model,  Derivative, or Output
  that is solely for (a) personal, scientific or academic research, and (b) for
  non-profit and non-commercial purposes, and not directly or indirectly
  connected to any commercial activities or business operations. For
  illustration purposes, Research Purposes does not include (1) any usage of the
  Mistral Model, Derivative or Output by individuals or contractors employed in
  or engaged by companies in the context of (a) their daily tasks, or (b) any
  activity (including but not limited to any testing or proof-of-concept) that
  is intended to generate revenue, nor (2) any Distribution by a commercial
  entity of the Mistral Model, Derivative or Output whether in return for
  payment or free of charge, in any medium or form, including but not limited to
  through a hosted or managed service (e.g. SaaS, cloud instances, etc.), or
  behind a software layer.

  "Outputs": means any content generated by the operation of the Mistral Models
  or the Derivatives from a prompt (i.e., text instructions) provided by users.
  For the avoidance of doubt, Outputs do not include any components of a Mistral
  Models, such as any fine-tuned versions of the Mistral Models, the weights, or
  parameters.

  "You": means the individual or entity entering into this Agreement with
  Mistral AI.


  *Mistral AI processes your personal data below to provide the model and
  enforce its license. If you are affiliated with a commercial entity, we may
  also send you communications about our models. For more information on your
  rights and data handling, please see our <a
  href="https://mistral.ai/terms/">privacy policy</a>.*
extra_gated_fields:
  First Name: text
  Last Name: text
  Country: country
  Affiliation: text
  Job title: text
  I understand that I can only use the model, any derivative versions and their outputs for non-commercial research purposes: checkbox
  I understand that if I am a commercial entity, I am not permitted to use or distribute the model internally or externally, or expose it in my own offerings without a commercial license: checkbox
  I understand that if I upload the model, or any derivative version, on any platform, I must include the Mistral Research License: checkbox
  I understand that for commercial use of the model, I can contact Mistral or use the Mistral AI API on la Plateforme or any of our cloud provider partners: checkbox
  By clicking Submit below I accept the terms of the license and acknowledge that the information I provide will be collected stored processed and shared in accordance with the Mistral Privacy Policy: checkbox
  geo: ip_location
extra_gated_description: >-
  Mistral AI processes your personal data below to provide the model and enforce
  its license. If you are affiliated with a commercial entity, we may also send
  you communications about our models. For more information on your rights and
  data handling, please see our <a href="https://mistral.ai/terms/">privacy
  policy</a>.
extra_gated_button_content: Submit
library_name: vllm
---

QuantFactory Banner

# QuantFactory/Ministral-8B-Instruct-2410-HF-GGUF

This is a quantized (GGUF) version of TouchNight/Ministral-8B-Instruct-2410, created using llama.cpp.
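
Since this repository ships GGUF quantizations, you can run them directly with llama.cpp or one of its bindings. Below is a minimal sketch using llama-cpp-python; the `.gguf` file name is a placeholder, so substitute an actual file from this repository:

```python
# A minimal sketch for running a GGUF quantization of this model locally with
# llama-cpp-python. The model_path value is a hypothetical placeholder.
from llama_cpp import Llama

llm = Llama(
    model_path="Ministral-8B-Instruct-2410-HF.Q4_K_M.gguf",  # placeholder file name
    n_ctx=32768,  # context window to allocate; raise or lower to fit your RAM
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Do we need to think for 10 seconds to find the answer of 1 + 1?"}],
)
print(out["choices"][0]["message"]["content"])
```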

## Original Model Card

# Model Card for Ministral-8B-Instruct-2410

We introduce two new state-of-the-art models for local intelligence, on-device computing, and at-the-edge use cases. We call them les Ministraux: Ministral 3B and Ministral 8B.

The Ministral-8B-Instruct-2410 Language Model is an instruct fine-tuned model significantly outperforming existing models of similar size, released under the Mistral Research License.

If you are interested in using Ministral 3B or Ministral 8B commercially (both outperform Mistral 7B), reach out to us.

For more details about les Ministraux, please refer to our release blog post.

## Ministral 8B Key features

- Released under the Mistral Research License, reach out to us for a commercial license
- Trained with a 128k context window with interleaved sliding-window attention
- Trained on a large proportion of multilingual and code data
- Supports function calling
- Vocabulary size of 131k, using the V3-Tekken tokenizer

### Basic Instruct Template (V3-Tekken)

```
<s>[INST]user message[/INST]assistant response</s>[INST]new user message[/INST]
```

For more information about the tokenizer, please refer to mistral-common.
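
To make the template above concrete, here is a small sketch (plain string formatting, not the official tokenizer; use mistral-common in practice) that assembles a multi-turn prompt in the V3-Tekken layout:

```python
# A plain-string sketch of the V3-Tekken instruct layout shown above.
# For real use, rely on mistral-common, which also handles special tokens.
def build_prompt(turns: list[tuple[str, str]]) -> str:
    """turns: (user_message, assistant_response) pairs; leave the last
    response empty to ask the model to generate it."""
    prompt = "<s>"
    for user, assistant in turns:
        prompt += f"[INST]{user}[/INST]"
        if assistant:
            prompt += f"{assistant}</s>"
    return prompt

print(build_prompt([("Hello!", "Hi there."), ("What is 1 + 1?", "")]))
# <s>[INST]Hello![/INST]Hi there.</s>[INST]What is 1 + 1?[/INST]
```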

## Ministral 8B Architecture

| Feature           | Value                     |
|-------------------|---------------------------|
| Architecture      | Dense Transformer         |
| Parameters        | 8,019,808,256             |
| Layers            | 36                        |
| Heads             | 32                        |
| Dim               | 4096                      |
| KV Heads (GQA)    | 8                         |
| Hidden Dim        | 12288                     |
| Head Dim          | 128                       |
| Vocab Size        | 131,072                   |
| Context Length    | 128k                      |
| Attention Pattern | Ragged (128k,32k,32k,32k) |
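
The GQA figures above translate directly into KV-cache cost, which is what the ragged attention pattern trades against. A back-of-the-envelope sketch (assuming an fp16 cache and that the (128k,32k,32k,32k) pattern repeats every four layers; both are illustrative assumptions, not official sizing guidance):

```python
# Back-of-the-envelope KV-cache sizing from the architecture table above.
# Assumes an fp16 cache (2 bytes/element); real engines add paging overhead.
LAYERS, KV_HEADS, HEAD_DIM, BYTES = 36, 8, 128, 2
CTX = 128 * 1024  # full 128k context

per_token_per_layer = 2 * KV_HEADS * HEAD_DIM * BYTES  # K and V: 4096 bytes

# Dense cache: every layer caches the full context.
dense = LAYERS * CTX * per_token_per_layer
print(f"dense 128k cache:  {dense / 2**30:.1f} GiB")   # ~18.0 GiB

# Ragged (128k,32k,32k,32k): assume 1 layer in 4 sees the full window,
# the other 3 a 32k sliding window.
full_layers = LAYERS // 4
ragged = (full_layers * CTX + (LAYERS - full_layers) * 32 * 1024) * per_token_per_layer
print(f"ragged 128k cache: {ragged / 2**30:.1f} GiB")  # ~7.9 GiB
```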

## Benchmarks

### Base Models

#### Knowledge & Commonsense

| Model | MMLU | AGIEval | Winogrande | Arc-c | TriviaQA |
|---|---|---|---|---|---|
| Mistral 7B Base | 62.5 | 42.5 | 74.2 | 67.9 | 62.5 |
| Llama 3.1 8B Base | 64.7 | 44.4 | 74.6 | 46.0 | 60.2 |
| Ministral 8B Base | 65.0 | 48.3 | 75.3 | 71.9 | 65.5 |
| Gemma 2 2B Base | 52.4 | 33.8 | 68.7 | 42.6 | 47.8 |
| Llama 3.2 3B Base | 56.2 | 37.4 | 59.6 | 43.1 | 50.7 |
| Ministral 3B Base | 60.9 | 42.1 | 72.7 | 64.2 | 56.7 |

Code & Math

Model HumanEval pass@1 GSM8K maj@8
Mistral 7B Base 26.8 32.0
Llama 3.1 8B Base 37.8 42.2
Ministral 8B Base 34.8 64.5
Gemma 2 2B 20.1 35.5
Llama 3.2 3B 14.6 33.5
Ministral 3B 34.2 50.9

#### Multilingual

| Model | French MMLU | German MMLU | Spanish MMLU |
|---|---|---|---|
| Mistral 7B Base | 50.6 | 49.6 | 51.4 |
| Llama 3.1 8B Base | 50.8 | 52.8 | 54.6 |
| Ministral 8B Base | 57.5 | 57.4 | 59.6 |
| Gemma 2 2B Base | 41.0 | 40.1 | 41.7 |
| Llama 3.2 3B Base | 42.3 | 42.2 | 43.1 |
| Ministral 3B Base | 49.1 | 48.3 | 49.5 |

### Instruct Models

#### Chat/Arena (gpt-4o judge)

| Model | MTBench | Arena Hard | Wild bench |
|---|---|---|---|
| Mistral 7B Instruct v0.3 | 6.7 | 44.3 | 33.1 |
| Llama 3.1 8B Instruct | 7.5 | 62.4 | 37.0 |
| Gemma 2 9B Instruct | 7.6 | 68.7 | 43.8 |
| Ministral 8B Instruct | 8.3 | 70.9 | 41.3 |
| Gemma 2 2B Instruct | 7.5 | 51.7 | 32.5 |
| Llama 3.2 3B Instruct | 7.2 | 46.0 | 27.2 |
| Ministral 3B Instruct | 8.1 | 64.3 | 36.3 |

Code & Math

Model MBPP pass@1 HumanEval pass@1 Math maj@1
Mistral 7B Instruct v0.3 50.2 38.4 13.2
Gemma 2 9B Instruct 68.5 67.7 47.4
Llama 3.1 8B Instruct 69.7 67.1 49.3
Ministral 8B Instruct 70.0 76.8 54.5
Gemma 2 2B Instruct 54.5 42.7 22.8
Llama 3.2 3B Instruct 64.6 61.0 38.4
Ministral 3B Instruct 67.7 77.4 51.7

#### Function calling

| Model | Internal bench |
|---|---|
| Mistral 7B Instruct v0.3 | 6.9 |
| Llama 3.1 8B Instruct | N/A |
| Gemma 2 9B Instruct | N/A |
| Ministral 8B Instruct | 31.6 |
| Gemma 2 2B Instruct | N/A |
| Llama 3.2 3B Instruct | N/A |
| Ministral 3B Instruct | 28.4 |

## Usage Examples

### vLLM (recommended)

We recommend using this model with the vLLM library to implement production-ready inference pipelines.

vLLM is currently capped at a 32k context size because interleaved attention kernels for paged attention are not yet implemented there. These kernels are being worked on, and this model card will be updated as soon as vLLM fully supports them. To take advantage of the full 128k context size, we recommend Mistral Inference.

#### Installation

Make sure you install vLLM >= v0.6.2:

```sh
pip install --upgrade vllm
```

Also make sure you have mistral_common >= 1.4.4 installed:

```sh
pip install --upgrade mistral_common
```

You can also make use of a ready-to-go Docker image.

#### Offline

```python
from vllm import LLM
from vllm.sampling_params import SamplingParams

model_name = "mistralai/Ministral-8B-Instruct-2410"

sampling_params = SamplingParams(max_tokens=8192)

# Note: running Ministral 8B on a single GPU requires 24 GB of GPU RAM.
# To split the requirement across multiple devices, add e.g. `tensor_parallel_size=2`.
llm = LLM(model=model_name, tokenizer_mode="mistral", config_format="mistral", load_format="mistral")

prompt = "Do we need to think for 10 seconds to find the answer of 1 + 1?"

messages = [
    {
        "role": "user",
        "content": prompt
    },
]

outputs = llm.chat(messages, sampling_params=sampling_params)

print(outputs[0].outputs[0].text)
# You don't need to think for 10 seconds to find the answer to 1 + 1. The answer is 2,
# and you can easily add these two numbers in your mind very quickly without any delay.
```

#### Server

You can also use Ministral-8B in a server/client setting.

1. Spin up a server:

```sh
vllm serve mistralai/Ministral-8B-Instruct-2410 --tokenizer_mode mistral --config_format mistral --load_format mistral
```

Note: Running Ministral-8B on a single GPU requires 24 GB of GPU RAM.

If you want to split the GPU requirement over multiple devices, add e.g. `--tensor_parallel_size 2`.

2. Query the server from a client:

```sh
curl --location 'http://<your-node-url>:8000/v1/chat/completions' \
--header 'Content-Type: application/json' \
--header 'Authorization: Bearer token' \
--data '{
    "model": "mistralai/Ministral-8B-Instruct-2410",
    "messages": [
      {
        "role": "user",
        "content": "Do we need to think for 10 seconds to find the answer of 1 + 1?"
      }
    ]
}'
```
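
Since the vLLM server exposes an OpenAI-compatible API, you can also query it with the official `openai` Python client. A minimal sketch; the base URL and placeholder API key mirror the curl call above:

```python
# A minimal sketch using the openai client against the vLLM server above.
# Replace <your-node-url> with your host; vLLM accepts any placeholder key
# unless the server was started with --api-key.
from openai import OpenAI

client = OpenAI(base_url="http://<your-node-url>:8000/v1", api_key="token")

response = client.chat.completions.create(
    model="mistralai/Ministral-8B-Instruct-2410",
    messages=[
        {"role": "user", "content": "Do we need to think for 10 seconds to find the answer of 1 + 1?"}
    ],
)
print(response.choices[0].message.content)
```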

### Mistral-inference

We recommend using mistral-inference to quickly try out / "vibe-check" the model.

#### Install

Make sure to have mistral_inference >= 1.5.0 installed.

```sh
pip install mistral_inference --upgrade
```

#### Download

```python
from huggingface_hub import snapshot_download
from pathlib import Path

mistral_models_path = Path.home().joinpath('mistral_models', '8B-Instruct')
mistral_models_path.mkdir(parents=True, exist_ok=True)

snapshot_download(repo_id="mistralai/Ministral-8B-Instruct-2410", allow_patterns=["params.json", "consolidated.safetensors", "tekken.json"], local_dir=mistral_models_path)
```

#### Chat

After installing mistral_inference, a mistral-chat CLI command should be available in your environment. You can chat with the model using:

```sh
mistral-chat $HOME/mistral_models/8B-Instruct --instruct --max_tokens 256
```

#### Passkey detection

In this example the passkey message has over 100k tokens, and mistral-inference does not have a chunked pre-fill mechanism, so running it requires a lot of GPU memory (80 GB). For a more memory-efficient solution we recommend using vLLM.

```python
from mistral_inference.transformer import Transformer
from pathlib import Path
import json
from mistral_inference.generate import generate
from huggingface_hub import hf_hub_download

from mistral_common.tokens.tokenizers.mistral import MistralTokenizer
from mistral_common.protocol.instruct.messages import UserMessage
from mistral_common.protocol.instruct.request import ChatCompletionRequest

def load_passkey_request() -> ChatCompletionRequest:
    passkey_file = hf_hub_download(repo_id="mistralai/Ministral-8B-Instruct-2410", filename="passkey_example.json")

    with open(passkey_file, "r") as f:
        data = json.load(f)

    message_content = data["messages"][0]["content"]
    return ChatCompletionRequest(messages=[UserMessage(content=message_content)])

# `mistral_models_path` is defined in the Download step above.
tokenizer = MistralTokenizer.from_file(f"{mistral_models_path}/tekken.json")
model = Transformer.from_folder(mistral_models_path, softmax_fp32=False)

completion_request = load_passkey_request()

tokens = tokenizer.encode_chat_completion(completion_request).tokens

out_tokens, _ = generate([tokens], model, max_tokens=64, temperature=0.0, eos_id=tokenizer.instruct_tokenizer.tokenizer.eos_id)
result = tokenizer.instruct_tokenizer.tokenizer.decode(out_tokens[0])

print(result)  # The pass key is 13005.
```

#### Instruct following

```python
from mistral_inference.transformer import Transformer
from mistral_inference.generate import generate

from mistral_common.tokens.tokenizers.mistral import MistralTokenizer
from mistral_common.protocol.instruct.messages import UserMessage
from mistral_common.protocol.instruct.request import ChatCompletionRequest


# `mistral_models_path` is defined in the Download step above.
tokenizer = MistralTokenizer.from_file(f"{mistral_models_path}/tekken.json")
model = Transformer.from_folder(mistral_models_path)

completion_request = ChatCompletionRequest(messages=[UserMessage(content="How often does the letter r occur in Mistral?")])

tokens = tokenizer.encode_chat_completion(completion_request).tokens

out_tokens, _ = generate([tokens], model, max_tokens=64, temperature=0.0, eos_id=tokenizer.instruct_tokenizer.tokenizer.eos_id)
result = tokenizer.instruct_tokenizer.tokenizer.decode(out_tokens[0])

print(result)
```

#### Function calling

```python
from mistral_common.protocol.instruct.tool_calls import Function, Tool
from mistral_inference.transformer import Transformer
from mistral_inference.generate import generate

from mistral_common.tokens.tokenizers.mistral import MistralTokenizer
from mistral_common.protocol.instruct.messages import UserMessage
from mistral_common.protocol.instruct.request import ChatCompletionRequest
from mistral_common.tokens.tokenizers.tekken import SpecialTokenPolicy


# `mistral_models_path` is defined in the Download step above.
tokenizer = MistralTokenizer.from_file(f"{mistral_models_path}/tekken.json")
tekken = tokenizer.instruct_tokenizer.tokenizer
tekken.special_token_policy = SpecialTokenPolicy.IGNORE

model = Transformer.from_folder(mistral_models_path)

completion_request = ChatCompletionRequest(
    tools=[
        Tool(
            function=Function(
                name="get_current_weather",
                description="Get the current weather",
                parameters={
                    "type": "object",
                    "properties": {
                        "location": {
                            "type": "string",
                            "description": "The city and state, e.g. San Francisco, CA",
                        },
                        "format": {
                            "type": "string",
                            "enum": ["celsius", "fahrenheit"],
                            "description": "The temperature unit to use. Infer this from the users location.",
                        },
                    },
                    "required": ["location", "format"],
                },
            )
        )
    ],
    messages=[
        UserMessage(content="What's the weather like today in Paris?"),
    ],
)

tokens = tokenizer.encode_chat_completion(completion_request).tokens

out_tokens, _ = generate([tokens], model, max_tokens=64, temperature=0.0, eos_id=tokenizer.instruct_tokenizer.tokenizer.eos_id)
result = tokenizer.instruct_tokenizer.tokenizer.decode(out_tokens[0])

print(result)
```
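
The decoded `result` typically contains a JSON-formatted tool call rather than prose. Here is a minimal sketch of dispatching it; the sample `result` string and the stub implementation are illustrative assumptions, not the model's guaranteed output format:

```python
import json

# Continuing from the snippet above: `result` should hold the decoded tool
# call. The exact text can vary; here we assume a JSON list of calls.
result = '[{"name": "get_current_weather", "arguments": {"location": "Paris, FR", "format": "celsius"}}]'  # example shape

def get_current_weather(location: str, format: str) -> str:
    # Hypothetical stub; a real implementation would query a weather API.
    return f"22 degrees {format} in {location}"

TOOLS = {"get_current_weather": get_current_weather}

for call in json.loads(result):
    print(TOOLS[call["name"]](**call["arguments"]))
# 22 degrees celsius in Paris, FR
```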

## The Mistral AI Team

Albert Jiang, Alexandre Abou Chahine, Alexandre Sablayrolles, Alexis Tacnet, Alodie Boissonnet, Alok Kothari, Amélie Héliou, Andy Lo, Anna Peronnin, Antoine Meunier, Antoine Roux, Antonin Faure, Aritra Paul, Arthur Darcet, Arthur Mensch, Audrey Herblin-Stoop, Augustin Garreau, Austin Birky, Avinash Sooriyarachchi, Baptiste Rozière, Barry Conklin, Bastien Bouillon, Blanche Savary de Beauregard, Carole Rambaud, Caroline Feldman, Charles de Freminville, Charline Mauro, Chih-Kuan Yeh, Chris Bamford, Clement Auguy, Corentin Heintz, Cyriaque Dubois, Devendra Singh Chaplot, Diego Las Casas, Diogo Costa, Eléonore Arcelin, Emma Bou Hanna, Etienne Metzger, Fanny Olivier Autran, Francois Lesage, Garance Gourdel, Gaspard Blanchet, Gaspard Donada Vidal, Gianna Maria Lengyel, Guillaume Bour, Guillaume Lample, Gustave Denis, Harizo Rajaona, Himanshu Jaju, Ian Mack, Ian Mathew, Jean-Malo Delignon, Jeremy Facchetti, Jessica Chudnovsky, Joachim Studnia, Justus Murke, Kartik Khandelwal, Kenneth Chiu, Kevin Riera, Leonard Blier, Leonard Suslian, Leonardo Deschaseaux, Louis Martin, Louis Ternon, Lucile Saulnier, Lélio Renard Lavaud, Sophia Yang, Margaret Jennings, Marie Pellat, Marie Torelli, Marjorie Janiewicz, Mathis Felardos, Maxime Darrin, Michael Hoff, Mickaël Seznec, Misha Jessel Kenyon, Nayef Derwiche, Nicolas Carmont Zaragoza, Nicolas Faurie, Nicolas Moreau, Nicolas Schuhl, Nikhil Raghuraman, Niklas Muhs, Olivier de Garrigues, Patricia Rozé, Patricia Wang, Patrick von Platen, Paul Jacob, Pauline Buche, Pavankumar Reddy Muddireddy, Perry Savas, Pierre Stock, Pravesh Agrawal, Renaud de Peretti, Romain Sauvestre, Romain Sinthe, Roman Soletskyi, Sagar Vaze, Sandeep Subramanian, Saurabh Garg, Soham Ghosh, Sylvain Regnier, Szymon Antoniak, Teven Le Scao, Theophile Gervet, Thibault Schueller, Thibaut Lavril, Thomas Wang, Timothée Lacroix, Valeriia Nemychnikova, Wendy Shang, William El Sayed, William Marshall