Project SHADOW: Symbolic Higher-order Associative Deductive reasoning On Wikidata using LM probing
Abstract
We introduce SHADOW, a fine-tuned language model trained on an intermediate task using associative deductive reasoning, and measure its performance on a knowledge base construction task using Wikidata triple completion. We evaluate SHADOW on the LM-KBC 2024 challenge and show that it outperforms the baseline solution by 20%, achieving an F1 score of 68.72%.
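For readers unfamiliar with the task framing, the sketch below illustrates what Wikidata triple completion looks like in an LM-KBC-style setup: given a subject entity and a relation, a language model is prompted to produce the set of object entities (possibly empty). The model name, prompt template, and parsing logic are illustrative assumptions for exposition only, not the SHADOW pipeline described in the paper.

```python
# Minimal sketch of LM-KBC-style triple completion: fill the object slot
# of (subject, relation, ?) with a language model. All choices here
# (model, prompt wording, comma-separated parsing) are assumptions.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")  # placeholder model


def complete_triple(subject: str, relation: str) -> list[str]:
    """Ask the LM for object entities of the triple (subject, relation, ?)."""
    prompt = f"Subject: {subject}\nRelation: {relation}\nObjects:"
    output = generator(prompt, max_new_tokens=32, num_return_sequences=1)[0]["generated_text"]
    # Keep only the newly generated text and split on commas;
    # an empty list encodes "no object" for this subject-relation pair.
    completion = output[len(prompt):].strip()
    return [obj.strip() for obj in completion.split(",") if obj.strip()]


# Hypothetical example: neighbouring countries of France.
print(complete_triple("France", "countryBordersCountry"))
```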
Community
Librarian Bot (automated message): the following papers, recommended by the Semantic Scholar API, are similar to this paper.
- DSTI at LLMs4OL 2024 Task A: Intrinsic versus extrinsic knowledge for type classification (2024)
- Dynamic Few-Shot Learning for Knowledge Graph Question Answering (2024)
- Large Language Model Enhanced Knowledge Representation Learning: A Survey (2024)
- Fact or Fiction? Improving Fact Verification with Knowledge Graphs through Simplified Subgraph Retrievals (2024)