---
base_model:
- THUDM/glm-4-voice-9b
base_model_relation: quantized
---
# GLM-4-Voice-9B (INT4 Quantized)

[中文](README.md) | [English](README_en.md)

## Model Overview
GLM-4-Voice is an end-to-end speech model developed by Zhipu AI. It can directly understand and generate speech in both Chinese and English, enabling real-time voice conversations, and it can adjust voice attributes such as emotion, intonation, speech rate, and dialect according to user instructions. This repository provides the INT4-quantized version of GLM-4-Voice-9B, which reduces the memory footprint so that the model runs in about 12 GB of GPU memory; testing has shown that it runs well on an NVIDIA GeForce RTX 3060 with 12 GB of VRAM.
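
Since roughly 12 GB of GPU memory is needed, it is worth confirming how much VRAM your card has before setting anything up. A quick check with the standard NVIDIA tool (assuming an NVIDIA GPU with the driver installed) is:
```bash
# Print the GPU name and total VRAM; about 12 GB or more is recommended for this INT4 model
nvidia-smi --query-gpu=name,memory.total --format=csv
```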

## Usage Instructions

### Creating a Virtual Environment
First, make sure you are using Python 3.10 and create a virtual environment:
```bash
# Python 3.8/3.9/3.12 are known not to work due to library compatibility issues
conda create -n GLM-4-Voice python=3.10
```

### Activate the Virtual Environment and Clone the Model
After activating the virtual environment, clone the model and code:
```bash
conda activate GLM-4-Voice
git clone https://huggingface.co/cydxg/glm-4-voice-9b-int4
```
Users in mainland China can clone from the mirror instead:
```bash
git clone https://hf-mirror.com/cydxg/glm-4-voice-9b-int4
```
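
The weight files in this repository are stored with Git LFS on Hugging Face, so if the clone finishes almost instantly and the model files are only a few bytes, you most likely downloaded LFS pointer files rather than the actual weights. A possible fix, assuming git-lfs is installed on your system (e.g. via apt or conda):
```bash
# Enable Git LFS so large weight files are downloaded instead of being left as pointer files
git lfs install
# If you already cloned without LFS, run this inside the cloned directory to fetch the real weights
git lfs pull
```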

### Install Dependencies
Navigate to the model directory and install the required dependencies:
```bash
cd glm-4-voice-9b-int4
pip install -r requirements.txt
mkdir third_party
cd third_party
git clone https://github.com/shivammehta25/Matcha-TTS Matcha-TTS
# Install the PyTorch build that matches your CUDA version; the line below targets CUDA 12.4
conda install pytorch==2.4.1 torchvision==0.19.1 torchaudio==2.4.1 pytorch-cuda=12.4 -c pytorch -c nvidia
```
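
If you are unsure which CUDA version to target, the driver's supported CUDA version is shown in the header of `nvidia-smi`, and you can verify the PyTorch install afterwards (a quick sanity check, not part of the original instructions):
```bash
# The first lines of the output show the highest CUDA version the driver supports
nvidia-smi
# Confirm that PyTorch was built with CUDA support and can see the GPU
python -c "import torch; print(torch.__version__, torch.version.cuda, torch.cuda.is_available())"
```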

### Start the Model Service
First, start the model service:
```bash
python model_server.py
```
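
The model service needs to stay running while the web demo is in use, so keep it in its own terminal. Alternatively (an optional convenience, not something the project requires), you can run it in the background and capture its output in a log file:
```bash
# Optional: run the model service in the background and write its output to model_server.log
nohup python model_server.py > model_server.log 2>&1 &
```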

### Run the Web Demo
Next, run the web demo to access the model:
```bash
python web_demo.py
```
You can then access the model by opening `http://localhost:8888` in your browser.
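
If the page does not load, a quick generic check (not part of the project's own instructions) is to confirm that the demo is actually answering HTTP requests on port 8888:
```bash
# Should return an HTTP response header if the web demo is up and listening on port 8888
curl -I http://localhost:8888
```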