Datasets used to train RxT-Beta models, the first generation of experimental Reactive Transformer (RxT) models trained on real-world data (English only).
AI & ML interests
AGI, ASI, Reactive Awareness Models, Real-Time Reactive Language Models, Memory Systems, Reactive Neural Networks & Event-Driven AI
Papers
- TensorBLEU: Vectorized GPU-based BLEU Score Implementation for Per-Sentence In-Training Evaluation
- Reactive Transformer (RxT) -- Stateful Real-Time Processing for Event-Driven Reactive Language Models
Experimental models with Sparse Query Attention (SQA) layers, reducing training time/cost by ~3-10% compared to GQA & MQA while maintaining the same level of performance (a minimal sketch of the idea follows the list).
- Sparse Query Attention (SQA): A Computationally Efficient Attention Mechanism with Query Heads Reduction
  Paper • 2510.01817 • Published • 13
- ReactiveAI/sSQAT-mm
  Text Generation • 8.62M • Updated
- ReactiveAI/SQAT-mm
  Text Generation • 8.57M • Updated
- ReactiveAI/xSQAT-mm
  Text Generation • 8.52M • Updated
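The collection description only names the mechanism, so here is a minimal, self-contained sketch of the query-heads-reduction idea behind SQA. It is an illustration under assumptions, not the released SQAT implementation: the head counts are arbitrary, and setting the key/value head count equal to the reduced query-head count is an assumption of this sketch.

```python
# Minimal sketch of the query-heads-reduction idea behind Sparse Query Attention (SQA).
# Assumptions: head counts are illustrative, and key/value heads are set equal to the
# reduced query-head count; the released SQAT models may configure this differently.
import torch
import torch.nn.functional as F
from torch import nn


class SparseQueryAttentionSketch(nn.Module):
    def __init__(self, d_model: int = 256, baseline_heads: int = 8, query_heads: int = 4):
        super().__init__()
        assert query_heads < baseline_heads, "SQA reduces the number of query heads"
        self.head_dim = d_model // baseline_heads  # same per-head width as the MHA baseline
        self.query_heads = query_heads
        # Fewer query heads -> fewer attention score matrices, hence lower training cost.
        self.q_proj = nn.Linear(d_model, query_heads * self.head_dim)
        self.k_proj = nn.Linear(d_model, query_heads * self.head_dim)
        self.v_proj = nn.Linear(d_model, query_heads * self.head_dim)
        self.o_proj = nn.Linear(query_heads * self.head_dim, d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, t, _ = x.shape
        shape = (b, t, self.query_heads, self.head_dim)
        q = self.q_proj(x).view(shape).transpose(1, 2)
        k = self.k_proj(x).view(shape).transpose(1, 2)
        v = self.v_proj(x).view(shape).transpose(1, 2)
        out = F.scaled_dot_product_attention(q, k, v, is_causal=True)  # causal self-attention
        out = out.transpose(1, 2).reshape(b, t, self.query_heads * self.head_dim)
        return self.o_proj(out)


# Usage: one forward pass over a dummy batch.
layer = SparseQueryAttentionSketch()
y = layer(torch.randn(2, 16, 256))
print(y.shape)  # torch.Size([2, 16, 256])
```

Relative to a baseline MHA layer with `baseline_heads` heads, the attention score and weighted-sum computations shrink roughly in proportion to `query_heads / baseline_heads`, which is where the reported training-cost saving comes from.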
Experimental stateful real-time Reactive Transformer (RxT) models after the supervised training stages (a toy sketch of the event-driven interaction loop follows the list).
- Reactive Transformer (RxT) -- Stateful Real-Time Processing for Event-Driven Reactive Language Models
  Paper • 2510.03561 • Published • 23
- ReactiveAI/RxT-Alpha-Supervised
  Text Generation • 0.2B • Updated • 2
- ReactiveAI/RxT-Alpha-Mini-Supervised
  Text Generation • 0.1B • Updated • 20
- ReactiveAI/RxT-Alpha-Micro-Supervised
  Text Generation • 28.8M • Updated • 9
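To make "stateful real-time processing" concrete, below is a toy sketch of the event-driven loop described in the RxT paper: each query is handled as a single event against a memory state, and the memory is updated after responding instead of re-feeding the full conversation history. The class and method names are hypothetical, not the actual RxT or rxlm API.

```python
# Toy sketch of an event-driven, stateful interaction loop in the spirit of RxT.
# Class and method names are hypothetical; the real models keep a fixed-size neural
# memory state and generate text, which this toy object only imitates with strings.
from dataclasses import dataclass, field


@dataclass
class ToyReactiveModel:
    memory: list[str] = field(default_factory=list)  # stand-in for the fixed-size memory state
    max_memory_items: int = 4

    def respond(self, query: str) -> str:
        # Generation conditions on the current query plus the memory state only,
        # so per-turn cost does not grow with conversation length.
        context = " | ".join(self.memory) or "empty"
        return f"<answer to {query!r} given memory: {context}>"

    def update_memory(self, query: str, answer: str) -> None:
        # The memory update happens after the response (asynchronously in the real design).
        self.memory.append(f"{query} -> {answer}")
        self.memory = self.memory[-self.max_memory_items:]


model = ToyReactiveModel()
for query in ["Hi!", "What is a Reactive Transformer?", "How is it trained?"]:
    answer = model.respond(query)       # real-time response to a single interaction (event)
    model.update_memory(query, answer)  # state carried forward to the next event
    print(answer)
```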
Datasets used for Interaction Supervised Fine-Tuning (SFT) of reactive models, designed for real-time processing of a single sequence (interaction); a hypothetical record sketch follows.
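As a hypothetical illustration of what "single sequence (interaction)" means here: each SFT example would cover one query/answer exchange rather than a concatenated multi-turn conversation. The field names and template below are assumptions, not the actual dataset schema.

```python
# Hypothetical shape of an Interaction SFT record; field names and the template
# are assumptions, not the actual dataset schema used for the RxT models.
example = {
    "query": "What is a Reactive Transformer?",
    "answer": "A stateful, event-driven language model that processes one interaction at a time.",
}

# One training sequence is built from the single interaction alone; prior turns are
# expected to live in the model's memory state, not in the input text.
text = f"[Q] {example['query']} [A] {example['answer']}"
print(text)
```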