---
base_model: microsoft/phi-4
language:
- en
library_name: transformers
license: mit
license_link: https://huggingface.co/microsoft/phi-4/resolve/main/LICENSE
pipeline_tag: text-generation
tags:
- phi
- nlp
- math
- code
- chat
- conversational
- mlx
inference:
  parameters:
    temperature: 0
widget:
- messages:
  - role: user
    content: How should I explain the Internet?
---

# mlx-community/phi-4-bf16

The model [mlx-community/phi-4-bf16](https://huggingface.co/mlx-community/phi-4-bf16) was converted to MLX format from [microsoft/phi-4](https://huggingface.co/microsoft/phi-4) using mlx-lm version **0.20.6**.

## Use with mlx

```bash
pip install mlx-lm
```

```python
from mlx_lm import load, generate

# Download (if needed) and load the model weights and tokenizer from the Hub.
model, tokenizer = load("mlx-community/phi-4-bf16")

prompt = "hello"

# Wrap the raw prompt in the model's chat template when one is available.
if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
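For quick checks without writing any Python, mlx-lm also installs a command-line generator. A minimal sketch, assuming the `mlx_lm.generate` entry point that ships with `pip install mlx-lm` (the flag names follow that CLI; the prompt and token limit here are just illustrative):

```bash
# Generate text directly from the terminal using the mlx_lm CLI.
# --max-tokens caps the length of the generated response.
mlx_lm.generate --model mlx-community/phi-4-bf16 \
  --prompt "How should I explain the Internet?" \
  --max-tokens 256
```

The CLI downloads the model from the Hub on first use, applies the chat template automatically when the tokenizer provides one, and streams the generated text to stdout.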