  data_files:
  - split: train
    path: data/train-*
license: mit
task_categories:
- feature-extraction
language:
- en
tags:
- semantic-search
- embeddings
- emoji
size_categories:
- 1K<n<10K
---

# local emoji semantic search

Emoji, their text descriptions, and text embeddings precomputed with [Alibaba-NLP/gte-large-en-v1.5](https://hf.co/Alibaba-NLP/gte-large-en-v1.5) for use in emoji semantic search. This work is largely inspired by the original [emoji-semantic-search repo](https://archive.md/ikcze) and aims to provide the data for a fully local alternative, as the [demo](https://www.emojisearch.app/) has been [not working](https://github.com/lilianweng/emoji-semantic-search/issues/6#issue-2724936875) as of a few days ago.

- this repo contains only the pre-computed embedding "database", equivalent to [server/emoji-embeddings.jsonl.gz](https://github.com/lilianweng/emoji-semantic-search/blob/6a6f351852b99e7b899437fa31309595a9008cd1/server/emoji-embeddings.jsonl.gz) in the original repo; use it as the corpus for semantic search, replacing the OpenAI API calls with local compute
- if working from the original repo, the [inference class](https://github.com/lilianweng/emoji-semantic-search/blob/6a6f351852b99e7b899437fa31309595a9008cd1/server/app.py#L18) also needs to be updated to use sentence-transformers instead of OpenAI calls
- the provided basic code is near-instant even on CPU 🔥

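If you are patching the original server's inference class, the OpenAI embedding call can be swapped for a local sentence-transformers call along these lines. This is a sketch under assumptions: `embed_texts` and its signature are illustrative, not code from the original repo.

```python
import numpy as np


def embed_texts(model, texts: list) -> np.ndarray:
    """Drop-in replacement for the original repo's OpenAI embedding call.

    `model` is expected to be a SentenceTransformer instance, e.g.
    SentenceTransformer("Alibaba-NLP/gte-large-en-v1.5", trust_remote_code=True).
    """
    # encode() returns one embedding row per input text
    return np.asarray(model.encode(texts, convert_to_numpy=True))
```

Anything in the original class that previously awaited an OpenAI response can then consume this array directly.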
## basic inference example

since the dataset is tiny, just load it with pandas:

```py
import pandas as pd

# reading from an hf:// path requires huggingface_hub to be installed
df = pd.read_parquet("hf://datasets/pszemraj/local-emoji-search-gte/data/train-00000-of-00001.parquet")
print(df.info())
```

load the sentence-transformers model:

```py
# Requires sentence_transformers>=2.7.0
from sentence_transformers import SentenceTransformer

model = SentenceTransformer('Alibaba-NLP/gte-large-en-v1.5', trust_remote_code=True)
```

define a minimal semantic search inference function:

<details>
<summary>Click me to expand the inference function code</summary>

```py
import numpy as np
import pandas as pd
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import semantic_search


def get_top_emojis(
    query: str,
    emoji_df: pd.DataFrame,
    model: SentenceTransformer,
    top_k: int = 5,
    num_digits: int = 4,
) -> list:
    """
    Performs semantic search to find the most relevant emojis for a given query.

    Args:
        query (str): The search query.
        emoji_df (pd.DataFrame): DataFrame containing emoji metadata and embeddings.
        model (SentenceTransformer): The sentence transformer model for encoding.
        top_k (int): Number of top results to return.
        num_digits (int): Number of digits to round scores to.

    Returns:
        list: A list of dicts, one per top match, each with keys 'emoji', 'message', and 'score'.
    """
    query_embed = model.encode(query)
    embeddings_array = np.vstack(emoji_df.embed.values, dtype=np.float32)

    hits = semantic_search(query_embed, embeddings_array, top_k=top_k)[0]

    # Extract the top hits + metadata
    results = [
        {
            "emoji": emoji_df.loc[hit["corpus_id"], "emoji"],
            "message": emoji_df.loc[hit["corpus_id"], "message"],
            "score": round(hit["score"], num_digits),
        }
        for hit in hits
    ]
    return results
```

</details>

run inference!

```py
import pprint as pp

query_text = "that is flames"
top_emojis = get_top_emojis(query_text, df, model, top_k=5)

pp.pprint(top_emojis, indent=2)

# [ {'emoji': '❤\u200d🔥', 'message': 'heart on fire', 'score': 0.7043},
#   {'emoji': '🥵', 'message': 'hot face', 'score': 0.694},
#   {'emoji': '😳', 'message': 'flushed face', 'score': 0.6794},
#   {'emoji': '🔥', 'message': 'fire', 'score': 0.6744},
#   {'emoji': '🧨', 'message': 'firecracker', 'score': 0.663}]
```
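
note that `get_top_emojis` re-stacks the embedding column on every call; for repeated queries you can hoist `np.vstack(df.embed.values)` out and reuse the matrix. Under the hood, `semantic_search` ranks by cosine similarity, which a plain-numpy sketch makes explicit (`top_k_cosine` is a hypothetical helper, not part of this dataset's code):

```python
import numpy as np


def top_k_cosine(query_vec: np.ndarray, corpus: np.ndarray, k: int = 5) -> list:
    """Rank corpus rows by cosine similarity to query_vec."""
    # normalize, then dot-product == cosine similarity
    q = query_vec / np.linalg.norm(query_vec)
    c = corpus / np.linalg.norm(corpus, axis=1, keepdims=True)
    scores = c @ q
    top = np.argsort(-scores)[:k]
    return [(int(i), float(scores[i])) for i in top]
```

precomputing the normalized corpus once turns each query into a single matrix-vector product, which is why the search stays near-instant on CPU.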