---
library_name: transformers
tags:
- mistral
- sparse
- pruned
- wanda
license: mit
datasets:
- HuggingFaceH4/ultrachat_200k
- HuggingFaceH4/ultrafeedback_binarized
language:
- en
---

# Model Card for kettleguts/zephyr-7b-beta_sparse05

This is a pruned version of HuggingFaceH4/zephyr-7b-beta, found [here](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta). Wanda pruning was used to introduce 50% sparsity into the linear layers (a minimal sketch of the scoring rule appears after the usage example below). Read the paper [here](https://arxiv.org/abs/2306.11695).

### Model Description

[Here](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta#model-description)

## Uses

This model is only useful for research purposes. The quality of its text generation is highly dependent on how it is prompted. Because it is heavily pruned, it sometimes behaves like a much smaller model.

### Direct Use

This model is not suitable for direct use outside of research.

### Out-of-Scope Use

This model should never be used for critical decisions involving health, life, employment, housing, law, etc. It should also never be used to harm anyone.

## Bias, Risks, and Limitations

[No safeguards have been added to this model.](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta#bias-risks-and-limitations)

## How to Get Started with the Model

Use the code below to get started with the model:
```Python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

# Load the pruned checkpoint and its tokenizer.
model_id = "kettleguts/zephyr-7b-beta_sparse05"
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(model_id)

pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)

messages = [
    {
        "role": "system",
        "content": "You are a friendly chatbot who always responds as briefly as possible with perfect grammar.",
    },
    {"role": "user", "content": "Briefly describe network pruning."},
]

# Format the chat with Zephyr's template, then generate.
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipe(
    prompt,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.7,
    top_k=50,
    top_p=0.95,
    pad_token_id=tokenizer.pad_token_id,
)

# Keep only the assistant's reply.
text = outputs[0]["generated_text"].split("<|assistant|>\n")
print(text[-1])
```
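For readers curious how the sparsity was introduced, here is a minimal sketch of the Wanda criterion from the paper linked above: each weight is scored by its magnitude times the L2 norm of the corresponding input activation, and the lowest-scoring weights in each output row are zeroed. This is illustrative only; the helper name is hypothetical, and the real procedure collects activation norms from calibration data, layer by layer, rather than from random tensors.

```Python
import torch

@torch.no_grad()
def wanda_prune_linear(weight: torch.Tensor, act_norm: torch.Tensor, sparsity: float = 0.5) -> torch.Tensor:
    """Zero the lowest-scoring weights in each output row.

    weight:   (out_features, in_features) matrix of a linear layer
    act_norm: (in_features,) L2 norm of each input feature over a calibration batch
    """
    score = weight.abs() * act_norm              # |W_ij| * ||X_j||_2
    k = int(weight.shape[1] * sparsity)          # weights to drop per row
    idx = torch.topk(score, k, dim=1, largest=False).indices
    mask = torch.ones_like(weight)
    mask.scatter_(1, idx, 0.0)                   # mask out the k smallest scores per row
    return weight * mask

# Toy usage: a 4x8 layer and made-up activation norms, pruned to 50% sparsity.
w = torch.randn(4, 8)
norms = torch.rand(8)  # stand-in for real calibration statistics
print((wanda_prune_linear(w, norms) == 0).float().mean())  # ~0.5
```

Comparing scores within each output row, rather than globally, is what distinguishes Wanda's comparison groups from plain global magnitude pruning.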
Output from the get-started snippet above:

> Network pruning, in the context of artificial intelligence and machine learning, refers to the process of removing unimportant or redundant connections, or "pruning," from a neural network's architecture. This is done to simplify and optimize the network's structure, reduce overfitting, and improve its efficiency, while preserving its overall performance. Pruning typically involves removing connections, neurons, or entire layers, based on metrics such as the weight or sparsity of the connection, or the amount of improvement gained by removing the connection. The goal is to prune the network in a way that balances the trade-off between model size and accuracy, while reducing the network's overall complexity and resource requirements. Pruning techniques can range from simple heuristics such as early stopping, to more sophisticated methods such as compressed and pruned models, and iterative and incremental pruning.

## Evaluation

Pending

## Model Examination

Pending

## Environmental Impact

The calculations necessary to prune this model required less than one hour on a T4 GPU in Colab.

## Technical Specifications

#### Software

The bulk of this work was done using [PyTorch](https://pytorch.org/). It ships an array of built-in [pruning tools](https://pytorch.org/docs/stable/nn.html#:~:text=Utility%20classes%20and%20functions%20for%20pruning%20Module%20parameters) in `torch.nn.utils.prune` (a short example appears after the citations below). Also check out the [tutorial](https://pytorch.org/tutorials/intermediate/pruning_tutorial.html) by [Michela Paganini](https://github.com/mickypaganini).

## Citation

**BibTeX:**

> @misc{tunstall2023zephyr,
>   title={Zephyr: Direct Distillation of LM Alignment},
>   author={Lewis Tunstall and Edward Beeching and Nathan Lambert and Nazneen Rajani and Kashif Rasul and Younes Belkada and Shengyi Huang and Leandro von Werra and Clémentine Fourrier and Nathan Habib and Nathan Sarrazin and Omar Sanseviero and Alexander M. Rush and Thomas Wolf},
>   year={2023},
>   eprint={2310.16944},
>   archivePrefix={arXiv},
>   primaryClass={cs.LG}
> }

> @misc{sun2023simple,
>   title={A Simple and Effective Pruning Approach for Large Language Models},
>   author={Mingjie Sun and Zhuang Liu and Anna Bair and J. Zico Kolter},
>   year={2023},
>   eprint={2306.11695},
>   archivePrefix={arXiv},
>   primaryClass={cs.CL}
> }
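#### Example: PyTorch Pruning Utilities

As referenced in the Software section above, PyTorch ships generic pruning utilities in `torch.nn.utils.prune`. The snippet below is a short orientation example of plain L1 magnitude pruning with those utilities; it is not the Wanda procedure used to produce this checkpoint.

```Python
import torch.nn as nn
import torch.nn.utils.prune as prune

layer = nn.Linear(16, 4)
# Zero the 50% of weights with the smallest absolute value.
prune.l1_unstructured(layer, name="weight", amount=0.5)
print(layer.weight_mask.mean())  # fraction of weights kept, ~0.5
# Bake the sparsity into the weight tensor and drop the mask.
prune.remove(layer, "weight")
```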