---
pretty_name: research-papers
license: mit
tags:
  - arxiv
  - llm
  - cv
  - artificial intelligence
  - machine learning
  - deep learning
  - cellular automata
  - computer vision
size_categories:
  - n<1K
language:
  - en
  - code
annotations_creators: []
source_datasets: []
---

# rpapers [PDF dataset]

## Overview

This dataset is a curated collection of research papers in AI/ML, with a particular focus on large language models (LLMs), along with related areas such as cellular automata (mostly Lenia-related work). The papers are sourced primarily from arXiv and other reputable research publications. The dataset is intended for research and educational purposes, offering a rich resource for exploring trends and advancements in these rapidly evolving fields.

## Contents

The dataset includes papers organized into categories reflecting different research areas and prominent organizations (a short sketch for inspecting this layout on disk follows the topic list):

Topics:

- Annotated: Papers pre-selected for potential annotation and deeper analysis.
- Architecture: Papers focusing on model architectures such as Transformers and Mamba.
- Attention: Papers exploring different attention mechanisms, including variations on and alternatives to standard transformer attention.
- Evals: Papers related to evaluation benchmarks and methodologies for LLMs and AI systems.
- LoRA: Papers focusing on Low-Rank Adaptation (LoRA) for efficient fine-tuning of LLMs.
- Mamba: Papers discussing the Mamba neural network architecture.
- Meta AI: Papers published by Meta AI Research, covering a wide range of AI topics.
- Microsoft: Papers published by Microsoft Research, particularly their work on LLMs and AI.
- Misc: Papers on diverse topics that do not fit neatly into other categories, including subfolders for specialized areas such as Lenia (a type of cellular automaton).
- Mistral: Papers related to the Mistral AI large language models.
- Model Merging: Papers exploring techniques for merging and combining different models.
- MoE: Papers discussing Mixture-of-Experts (MoE) architectures.
- OpenAI: Papers published by OpenAI, including seminal work on LLMs and scaling laws.
- Quantization: Papers focusing on model quantization techniques for efficient deployment.
- Reports: Technical reports and overviews of specific models and systems, such as Gemini and GPT-4.
- Stability AI: Papers published by Stability AI, often related to generative models and diffusion techniques.
- Surveys: Comprehensive review papers providing overviews of specific research areas.
- Uncategorized: Papers yet to be categorized.
- Whitepapers: In-depth reports and analyses, often from industry organizations.
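
A minimal sketch for inspecting this layout, assuming each category above maps to a top-level folder of PDFs in a local copy of the repository (the exact folder names and the local path `rpapers` are assumptions, not documented values):

```python
from collections import Counter
from pathlib import Path

# Path to a local clone/snapshot of the dataset; adjust as needed (hypothetical path).
root = Path("rpapers")

# Count PDFs per top-level folder, assuming folders correspond to the categories above.
counts = Counter(pdf.relative_to(root).parts[0] for pdf in root.rglob("*.pdf"))
for category, n in sorted(counts.items()):
    print(f"{category}: {n} papers")
```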

Source Publications and Organizations (Partial List):

## Usage

This dataset is valuable for a wide range of tasks and applications (a minimal loading sketch follows this list):

- Text Mining: Extracting insights and patterns from the text of research papers.
- Natural Language Processing (NLP) Tasks: Developing and evaluating NLP models for tasks such as summarization, question answering, and topic modeling.
- Machine Learning Model Training: Using the papers as a source of knowledge for training advanced AI models.
- Academic Research: Staying up to date on the latest advancements in AI/ML and related fields.
- Curation and Analysis: Building further curated collections or analyzing research trends.
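
A minimal sketch of one way to get the PDFs into plain text for the tasks above, assuming the repository id `Tanvir1337/rpapers` and the third-party `pypdf` package for text extraction (neither is specified by this README):

```python
from pathlib import Path

from huggingface_hub import snapshot_download  # pip install huggingface_hub
from pypdf import PdfReader                    # pip install pypdf

# Download a local snapshot of the dataset repository.
# NOTE: the repo id below is an assumption based on this page, not a documented value.
local_dir = snapshot_download(repo_id="Tanvir1337/rpapers", repo_type="dataset")

# Extract plain text from every PDF, keyed by its path relative to the repo root,
# so it can feed downstream NLP tasks (summarization, topic modeling, ...).
texts = {}
for pdf_path in sorted(Path(local_dir).rglob("*.pdf")):
    reader = PdfReader(pdf_path)
    texts[str(pdf_path.relative_to(local_dir))] = "\n".join(
        page.extract_text() or "" for page in reader.pages
    )

print(f"Extracted text from {len(texts)} papers")
```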

## License

The repository itself is released under the MIT license (see the metadata above), while the papers it contains are distributed under their own licenses, typically Creative Commons. Please refer to the individual papers for their specific licensing terms.

## Citation

If you use this dataset in your research, please cite the original papers as per their citation guidelines. You can also mention this repository as the source of the curated collection.