Update README.md
README.md CHANGED
@@ -39,7 +39,13 @@ This is the model card of a 🤗 transformers model that has been pushed on the
 
 ### Direct Use
 
-Please proceed the following example **that purely relies on tranformers and torch
+Please proceed with the following example, **which relies purely on transformers and torch**.
+
+This example can be found on Google Colab via the related [GitHub repo page](https://github.com/nicolay-r/THOR-ECAC).
+
+You can still use the code below for a custom start, independent of the THoR engine.
+
+Here are the **4 steps** for direct model use:
 
 1. Setup ask method for inferring FlanT5 as follows:
 ```python
@@ -93,7 +99,7 @@ print(f"Emotion state of the speaker of `{target}` is: {flant5_response}")
 ```
 
 The response is as follows:
-
+> Emotion state of the speaker of `Jake: yaeh, I could not be mad at him for too long!` is: **anger**
 
 
 ### Downstream Use [optional]
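The Python block referenced in step 1 is collapsed out of the diff context above. Purely as an illustrative sketch (not the README's actual code), a transformers/torch-only `ask` helper for a Flan-T5 checkpoint could look like the following; the `google/flan-t5-base` checkpoint name and the prompt wording are placeholders and should be replaced with this model card's own checkpoint and the THoR-ECAC prompting scheme.

```python
# Illustrative sketch only: a plain transformers/torch `ask` helper for a Flan-T5 model.
# The checkpoint name and prompt below are placeholders, not the model card's actual values.
import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

device = "cuda" if torch.cuda.is_available() else "cpu"
checkpoint = "google/flan-t5-base"  # placeholder; substitute the checkpoint from this model card
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint).to(device)

def ask(prompt: str, max_new_tokens: int = 64) -> str:
    """Encode the prompt, generate with the seq2seq model, and decode the answer."""
    inputs = tokenizer(prompt, return_tensors="pt").to(device)
    output_ids = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)

# Example usage mirroring the print statement visible in the second hunk.
target = "Jake: yaeh, I could not be mad at him for too long!"
flant5_response = ask(f"Which emotion state does the speaker of the following utterance express? Text: {target}")
print(f"Emotion state of the speaker of `{target}` is: {flant5_response}")
```

The final `print` line mirrors the one shown in the second hunk's header; everything above it is an assumption about how the collapsed block is structured, so prefer the full example in the linked Colab notebook for reproducible results.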