RobbiePasquale committed on
Commit
5712fda
1 Parent(s): 7221607

Upload lightbulb_inf.py


uploaded lightbulb_inf.py because I know entropy/variance MCTS inference works for this file

Files changed (1)
  1. lightbulb_inf.py +1907 -0
lightbulb_inf.py ADDED
@@ -0,0 +1,1907 @@
+ import argparse
+ import math
+ import os
+ import torch
+ import torch.nn as nn
+ import torch.nn.functional as F
+ import torch.optim as optim
+ from torch.utils.data import DataLoader
+ import copy
+ from torch.optim.lr_scheduler import CosineAnnealingLR
+ from torch.cuda.amp import autocast, GradScaler
+ from datasets import load_dataset
+ from transformers import AutoTokenizer
+ from typing import List, Tuple
+
+ device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
+
+ def parse_args():
+     parser = argparse.ArgumentParser(description='Train or Inference with World Model and Tree of Thought.')
+     parser.add_argument('--model_name', type=str, default='gpt2', help='Pretrained model name or path')
+     parser.add_argument('--dataset_name', type=str, default='wikitext', help='Dataset name from HuggingFace Datasets')
+     parser.add_argument('--dataset_config', type=str, default='wikitext-2-raw-v1', help='Dataset configuration name')
+     parser.add_argument('--batch_size', type=int, default=4, help='Batch size')
+     parser.add_argument('--num_epochs', type=int, default=3, help='Number of epochs')
+     parser.add_argument('--max_length', type=int, default=128, help='Maximum sequence length')
+     parser.add_argument('--mcts_iterations', type=int, default=3, help='Number of MCTS iterations')
+     parser.add_argument('--mcts_exploration_constant', type=float, default=1.414, help='Exploration constant for MCTS')
+     parser.add_argument('--accumulation_steps', type=int, default=4, help='Gradient accumulation steps')
+     parser.add_argument('--learning_rate', type=float, default=1e-4, help='Learning rate')
+     parser.add_argument('--weight_decay', type=float, default=1e-2, help='Weight decay')
+     parser.add_argument('--alpha', type=float, default=0.1, help='Entropy regularization weight')
+     parser.add_argument('--beta', type=float, default=0.1, help='Variance regularization weight')
+     parser.add_argument('--max_grad_norm', type=float, default=1.0, help='Max gradient norm for clipping')
+     parser.add_argument('--save_dir', type=str, default='./models', help='Directory to save the models')
+     parser.add_argument('--temperature', type=float, default=1.0, help='Temperature parameter for entropy and variance')
+     parser.add_argument('--mode', type=str, choices=['train', 'inference'], default='inference', help='Mode: train or inference')
+     parser.add_argument('--inference_mode', type=str, choices=['world_model', 'without_world_model', 'world_model_tree_of_thought'], default='world_model_tree_of_thought', help='Inference mode')
+     parser.add_argument('--query', type=str, default='', help='Input query for inference')
+     parser.add_argument('--train_mode', type=str, choices=['world_model', 'language_model'], default='world_model', help='Train the world model or the language model only')
+     parser.add_argument('--beam_size', type=int, default=5, help='Beam size for beam search')
+     parser.add_argument('--n_tokens_predict', type=int, default=3, help='Number of tokens to predict at each step')
+     parser.add_argument('--load_model', type=str, default=None,
+                         help='Path to load a saved model. If not provided, a new model will be initialized.')
+
+     # Use parse_known_args to ignore unknown arguments
+     args, unknown = parser.parse_known_args()
+     return args
+
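+ # Example invocation (a sketch using only the flags defined above; the query
+ # and values are illustrative):
+ #   python lightbulb_inf.py --mode inference \
+ #       --inference_mode world_model_tree_of_thought \
+ #       --query "How should I structure a research project?" \
+ #       --beam_size 5 --n_tokens_predict 3 --mcts_iterations 3
+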
+ def load_data(args, tokenizer):
+     # Load the dataset
+     dataset = load_dataset(args.dataset_name, args.dataset_config)
+
+     # Ensure the tokenizer has a padding token
+     if tokenizer.pad_token is None:
+         tokenizer.pad_token = tokenizer.eos_token
+
+     def tokenize_function(examples):
+         return tokenizer(examples['text'], truncation=True, max_length=args.max_length)
+
+     tokenized_datasets = dataset.map(
+         tokenize_function,
+         batched=True,
+         num_proc=4,
+         remove_columns=dataset['train'].column_names,
+     )
+
+     # Build inputs and labels for language modeling
+     block_size = args.max_length
+
+     def group_texts(examples):
+         # Concatenate all texts
+         concatenated_examples = {k: sum(examples[k], []) for k in examples.keys()}
+         total_length = len(concatenated_examples['input_ids'])
+         # Drop the small remainder
+         total_length = (total_length // block_size) * block_size
+         # Split into chunks of block_size
+         result = {
+             k: [t[i: i + block_size] for i in range(0, total_length, block_size)]
+             for k, t in concatenated_examples.items()
+         }
+         result['labels'] = result['input_ids'].copy()
+         return result
+
+     lm_datasets = tokenized_datasets.map(
+         group_texts,
+         batched=True,
+         num_proc=4,
+     )
+
+     # Create DataLoaders
+     train_dataset = lm_datasets['train']
+     eval_dataset = lm_datasets['validation'] if 'validation' in lm_datasets else lm_datasets['test']
+
+     def data_collator(data):
+         return {
+             'input_ids': torch.tensor([f['input_ids'] for f in data], dtype=torch.long),
+             'labels': torch.tensor([f['labels'] for f in data], dtype=torch.long)
+         }
+
+     train_loader = DataLoader(
+         train_dataset,
+         shuffle=True,
+         batch_size=args.batch_size,
+         collate_fn=data_collator,
+         pin_memory=True,  # Speeds up transfer to GPU
+         num_workers=4
+     )
+     eval_loader = DataLoader(
+         eval_dataset,
+         shuffle=False,
+         batch_size=args.batch_size,
+         collate_fn=data_collator,
+         pin_memory=True,
+         num_workers=4
+     )
+
+     return train_loader, eval_loader
+
+ def save_all_models(transformer_model, representation_network, dynamics_network, prediction_network, action_encoder, save_dir, epoch):
+     """
+     Save all models to the specified directory.
+
+     Args:
+         transformer_model (nn.Module): Transformer model.
+         representation_network (nn.Module): Representation network.
+         dynamics_network (nn.Module): Dynamics network.
+         prediction_network (nn.Module): Prediction network.
+         action_encoder (nn.Module): Action encoder.
+         save_dir (str): Directory to save the models.
+         epoch (int): Current epoch number.
+     """
+     os.makedirs(save_dir, exist_ok=True)
+
+     torch.save(transformer_model.state_dict(), os.path.join(save_dir, f'transformer_model_epoch_{epoch}.pt'))
+     torch.save(representation_network.state_dict(), os.path.join(save_dir, f'representation_network_epoch_{epoch}.pt'))
+     torch.save(dynamics_network.state_dict(), os.path.join(save_dir, f'dynamics_network_epoch_{epoch}.pt'))
+     torch.save(prediction_network.state_dict(), os.path.join(save_dir, f'prediction_network_epoch_{epoch}.pt'))
+     torch.save(action_encoder.state_dict(), os.path.join(save_dir, f'action_encoder_epoch_{epoch}.pt'))
+
+     print(f"All models saved for epoch {epoch}.")
+
+ class RotaryPositionalEncoding(nn.Module):
+     def __init__(self, d_model):
+         super(RotaryPositionalEncoding, self).__init__()
+         inv_freq = 1.0 / (10000 ** (torch.arange(0, d_model, 2).float() / d_model))
+         self.register_buffer('inv_freq', inv_freq)
+
+     def forward(self, x):
+         seq_len, batch_size, _ = x.size()
+         t = torch.arange(seq_len, device=x.device).type_as(self.inv_freq)
+         sinusoid_inp = torch.einsum("i,j->ij", t, self.inv_freq)
+         sin = sinusoid_inp.sin().unsqueeze(1)  # (seq_len, 1, d_model/2)
+         cos = sinusoid_inp.cos().unsqueeze(1)  # (seq_len, 1, d_model/2)
+
+         x1 = x[..., 0::2]
+         x2 = x[..., 1::2]
+
+         # Apply rotation
+         x_rotated = torch.zeros_like(x)
+         x_rotated[..., 0::2] = x1 * cos - x2 * sin
+         x_rotated[..., 1::2] = x1 * sin + x2 * cos
+
+         return x_rotated
+
+ class MultiHeadAttention(nn.Module):
+     def __init__(self, d_model, num_heads):
+         super(MultiHeadAttention, self).__init__()
+         assert d_model % num_heads == 0, "d_model must be divisible by num_heads"
+         self.d_k = d_model // num_heads
+         self.num_heads = num_heads
+         self.linear_q = nn.Linear(d_model, d_model)
+         self.linear_k = nn.Linear(d_model, d_model)
+         self.linear_v = nn.Linear(d_model, d_model)
+         self.linear_out = nn.Linear(d_model, d_model)
+
+     def forward(self, query, key, value, mask=None):
+         batch_size = query.size(0)
+         query = self.linear_q(query).view(batch_size, -1, self.num_heads, self.d_k).transpose(1, 2)
+         key = self.linear_k(key).view(batch_size, -1, self.num_heads, self.d_k).transpose(1, 2)
+         value = self.linear_v(value).view(batch_size, -1, self.num_heads, self.d_k).transpose(1, 2)
+
+         scores = torch.matmul(query, key.transpose(-2, -1)) / math.sqrt(self.d_k)
+         if mask is not None:
+             scores = scores.masked_fill(mask == 0, -1e4)
+         attn = F.softmax(scores, dim=-1)
+         output = torch.matmul(attn, value)
+
+         output = output.transpose(1, 2).contiguous().view(batch_size, -1, self.num_heads * self.d_k)
+         return self.linear_out(output)
+
+ class MoE(nn.Module):
+     def __init__(self, d_model, num_experts, d_ff, top_k=2, dropout=0.1):
+         super(MoE, self).__init__()
+         self.num_experts = num_experts
+         self.top_k = top_k
+         self.experts = nn.ModuleList([
+             nn.Sequential(
+                 nn.Linear(d_model, d_ff),
+                 nn.GELU() if i % 2 == 0 else nn.SiLU(),
+                 nn.Linear(d_ff, d_model)
+             )
+             for i in range(num_experts)
+         ])
+         self.gate = nn.Linear(d_model, num_experts)
+         self.dropout = nn.Dropout(dropout)
+
+     def forward(self, x):
+         batch_size, seq_len, d_model = x.size()
+         # Compute gating scores
+         gate_scores = self.gate(x)  # (batch_size, seq_len, num_experts)
+         top_k_scores, top_k_indices = torch.topk(gate_scores, self.top_k, dim=-1)  # (batch_size, seq_len, top_k)
+         top_k_scores = F.softmax(top_k_scores, dim=-1)  # (batch_size, seq_len, top_k)
+
+         # Initialize output
+         output = torch.zeros_like(x)
+
+         # Flatten batch and sequence dimensions
+         x_flat = x.view(-1, d_model)  # (batch_size * seq_len, d_model)
+         output_flat = output.view(-1, d_model)
+         top_k_indices_flat = top_k_indices.view(-1, self.top_k)  # (batch_size * seq_len, top_k)
+         top_k_scores_flat = top_k_scores.view(-1, self.top_k)  # (batch_size * seq_len, top_k)
+
+         for k in range(self.top_k):
+             expert_idx_flat = top_k_indices_flat[:, k]  # (batch_size * seq_len)
+             expert_scores_flat = top_k_scores_flat[:, k]  # (batch_size * seq_len)
+             for e in range(self.num_experts):
+                 mask = (expert_idx_flat == e)  # Boolean mask
+                 if mask.any():
+                     x_masked = x_flat[mask]  # Select tokens routed to expert e
+                     expert_output = self.experts[e](x_masked)  # Apply expert e
+                     output_flat[mask] += expert_scores_flat[mask].unsqueeze(-1) * expert_output
+
+         output = output_flat.view(batch_size, seq_len, d_model)
+         return self.dropout(output)
+
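+ # A minimal routing trace for the MoE block above (illustrative numbers, not
+ # from any training run): with top_k=2, each token keeps its two highest gate
+ # scores, renormalizes them with a softmax, and blends those experts' outputs.
+ # For gate logits [2.0, 0.5, 1.0, -1.0], the token is routed to experts 0 and
+ # 2 with weights softmax([2.0, 1.0]) ~= [0.73, 0.27].
+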
+ class TransformerBlock(nn.Module):
+     def __init__(self, d_model, num_heads, d_ff, num_experts, dropout=0.1, top_k=2):
+         super(TransformerBlock, self).__init__()
+         self.self_attention = MultiHeadAttention(d_model, num_heads)
+         self.norm1 = nn.LayerNorm(d_model)
+         self.cross_attention = MultiHeadAttention(d_model, num_heads)
+         self.norm2 = nn.LayerNorm(d_model)
+         self.moe = MoE(d_model, num_experts, d_ff, top_k, dropout)
+         self.norm3 = nn.LayerNorm(d_model)
+
+     def forward(self, x, mask=None, enc_output=None, enc_mask=None):
+         # Self-attention
+         attn_output = self.self_attention(x, x, x, mask)
+         x = self.norm1(x + attn_output)
+         # Cross-attention (only in decoder)
+         if enc_output is not None:
+             cross_attn_output = self.cross_attention(x, enc_output, enc_output, enc_mask)
+             x = self.norm2(x + cross_attn_output)
+         # Feedforward/MoE
+         moe_output = self.moe(x)
+         return self.norm3(x + moe_output)
+
+ class Transformer(nn.Module):
+     def __init__(self, input_dim, d_model, num_heads, num_layers, d_ff, num_experts, output_dim, dropout=0.1, top_k=2):
+         super(Transformer, self).__init__()
+         self.embedding = nn.Embedding(input_dim, d_model, padding_idx=input_dim - 1)
+         self.rotary_positional_encoding = RotaryPositionalEncoding(d_model)
+         self.encoder_layers = nn.ModuleList(
+             [TransformerBlock(d_model, num_heads, d_ff, num_experts, dropout, top_k) for _ in range(num_layers)]
+         )
+         self.decoder_layers = nn.ModuleList(
+             [TransformerBlock(d_model, num_heads, d_ff, num_experts, dropout, top_k) for _ in range(num_layers)]
+         )
+         self.output_layer = nn.Linear(d_model, output_dim)
+         self.d_model = d_model
+
+     def forward(self, src, tgt, src_mask=None, tgt_mask=None):
+         # Encoder
+         src = self.embedding(src) * math.sqrt(self.d_model)
+         src = src.transpose(0, 1)  # (batch_size, seq_len, d_model) -> (seq_len, batch_size, d_model)
+         src = self.rotary_positional_encoding(src)
+         src = src.transpose(0, 1)  # (seq_len, batch_size, d_model) -> (batch_size, seq_len, d_model)
+         for layer in self.encoder_layers:
+             src = layer(src, src_mask)
+
+         # Decoder
+         tgt = self.embedding(tgt) * math.sqrt(self.d_model)
+         tgt = tgt.transpose(0, 1)
+         tgt = self.rotary_positional_encoding(tgt)
+         tgt = tgt.transpose(0, 1)
+         for layer in self.decoder_layers:
+             tgt = layer(tgt, tgt_mask, src, src_mask)
+         output = self.output_layer(tgt)
+         return output
+
+     def generate_with_beam_search(self, src, tokenizer, beam_size=5, max_length=20, n_tokens_predict=3, temperature=1.0):
+         """
+         Generate sequences using beam search with multi-token prediction.
+
+         Args:
+             src (torch.Tensor): Source input tensor of shape (batch_size, seq_len)
+             tokenizer: Tokenizer to access special tokens
+             beam_size (int): Size of the beam for beam search
+             max_length (int): Maximum length of the generated sequence
+             n_tokens_predict (int): Number of tokens to predict at each step
+             temperature (float): Temperature parameter for softmax
+
+         Returns:
+             List[Tuple[torch.Tensor, float]]: List of (sequence, score) tuples
+         """
+         batch_size = src.size(0)
+         device = src.device
+         vocab_size = self.output_layer.out_features
+
+         # Encode the source
+         src_enc = self.encode(src)
+
+         # Initialize beam
+         beam = [(torch.full((batch_size, 1), tokenizer.bos_token_id, dtype=torch.long, device=device),
+                  0.0,  # log probability
+                  torch.zeros(batch_size, device=device),  # cumulative entropy
+                  torch.zeros(batch_size, device=device))]  # cumulative variance
+
+         for _ in range(max_length // n_tokens_predict):
+             all_candidates = []
+             for seq, score, cum_entropy, cum_variance in beam:
+                 if seq[:, -1].item() == tokenizer.eos_token_id:
+                     all_candidates.append((seq, score, cum_entropy, cum_variance))
+                     continue
+
+                 # Predict next n tokens
+                 logits = self.predict_next_n_tokens(src_enc, seq, n_tokens_predict)
+
+                 # Calculate probabilities, entropy, and variance
+                 probs = F.softmax(logits / temperature, dim=-1)
+                 entropy = -torch.sum(probs * torch.log(probs + 1e-9), dim=-1)
+                 variance = torch.var(probs, dim=-1)
+
+                 # Sample top-k tokens for each position
+                 topk_probs, topk_indices = torch.topk(probs, k=beam_size, dim=-1)
+
+                 # Generate all possible continuations
+                 for i in range(beam_size ** n_tokens_predict):
+                     indices = [i // (beam_size ** j) % beam_size for j in range(n_tokens_predict)]
+                     new_tokens = topk_indices[:, range(n_tokens_predict), indices]
+                     new_seq = torch.cat([seq, new_tokens], dim=-1)
+                     new_score = score + torch.sum(torch.log(topk_probs[:, range(n_tokens_predict), indices]))
+                     new_entropy = cum_entropy + torch.sum(entropy[:, indices])
+                     new_variance = cum_variance + torch.sum(variance[:, indices])
+
+                     all_candidates.append((new_seq, new_score, new_entropy, new_variance))
+
+             # Select top beam_size candidates
+             beam = sorted(all_candidates, key=lambda x: x[1] - 0.1 * x[2] + 0.05 * x[3], reverse=True)[:beam_size]
+
+             # Stop if all beams have ended
+             if all(seq[:, -1].item() == tokenizer.eos_token_id for seq, _, _, _ in beam):
+                 break
+
+         return [(seq, score) for seq, score, _, _ in beam]
+
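+     # Ranking note (a reading of the sort key above, not an extra mechanism):
+     # candidates are ordered by score = logP - 0.1*H + 0.05*Var, i.e. log
+     # probability penalized by cumulative entropy and mildly rewarded for
+     # variance. Illustrative numbers: with equal variance, a beam with
+     # logP=-4.0, H=1.0 scores -4.1 and beats one with logP=-3.8, H=4.0
+     # (score -4.2), so the more confident beam wins despite lower logP.
+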
+     def encode(self, src):
+         src_emb = self.embedding(src) * math.sqrt(self.d_model)
+         src_emb = src_emb.transpose(0, 1)
+         src_emb = self.rotary_positional_encoding(src_emb)
+         src_emb = src_emb.transpose(0, 1)
+         src_enc = src_emb
+         for layer in self.encoder_layers:
+             src_enc = layer(src_enc)
+         return src_enc
+
+     def predict_next_n_tokens(self, src_enc, tgt_seq, n_tokens):
+         tgt_emb = self.embedding(tgt_seq) * math.sqrt(self.d_model)
+         tgt_emb = tgt_emb.transpose(0, 1)
+         tgt_emb = self.rotary_positional_encoding(tgt_emb)
+         tgt_emb = tgt_emb.transpose(0, 1)
+         tgt_dec = tgt_emb
+         for layer in self.decoder_layers:
+             tgt_dec = layer(tgt_dec, None, src_enc, None)
+         output = self.output_layer(tgt_dec[:, -1:])
+         # Note: this repeats the last-step logits n_tokens times rather than
+         # decoding autoregressively; it is a cheap approximation for
+         # multi-token prediction.
+         return output.repeat(1, n_tokens, 1)
+
+ # Objective Functions
+
+ class InfoNCE_Loss(nn.Module):
+     def __init__(self, temperature=0.07):
+         super(InfoNCE_Loss, self).__init__()
+         self.temperature = temperature
+         self.cross_entropy = nn.CrossEntropyLoss()
+
+     def forward(self, z_i, z_j):
+         """
+         Args:
+             z_i (torch.Tensor): Flattened representations from view i, shape (n, embed_dim)
+             z_j (torch.Tensor): Flattened representations from view j, shape (n, embed_dim)
+
+         Returns:
+             torch.Tensor: InfoNCE loss
+         """
+         n = z_i.size(0)
+         z = torch.cat([z_i, z_j], dim=0)  # Shape: (2n, embed_dim)
+
+         z = F.normalize(z, dim=1)
+         similarity_matrix = torch.matmul(z, z.T)  # Shape: (2n, 2n)
+
+         # Create a mask to exclude self-similarity
+         mask = torch.eye(2 * n, device=z.device, dtype=torch.bool)
+         similarity_matrix = similarity_matrix.masked_fill(mask, -1e4)  # Use a manageable negative value
+
+         # Create labels for contrastive learning
+         labels = torch.arange(n, device=z.device)
+         labels = torch.cat([labels + n, labels], dim=0)  # Shape: (2n,)
+
+         # Apply temperature scaling
+         similarity_matrix /= self.temperature
+
+         # Compute cross-entropy loss
+         loss = self.cross_entropy(similarity_matrix, labels)
+         return loss
+
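+ # Label construction in InfoNCE_Loss, traced on a toy batch (illustrative,
+ # n=2): rows of the similarity matrix are ordered [i0, i1, j0, j1], and
+ # labels becomes [2, 3, 0, 1]. So row i0 must score highest against j0, and
+ # row j0 against i0 -- each view's positive is the same sample under the
+ # other view, and every other row is a negative.
+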
+ class CovarianceRegularization(nn.Module):
+     def __init__(self, lambda_reg=1e-3):
+         super(CovarianceRegularization, self).__init__()
+         self.lambda_reg = lambda_reg
+
+     def forward(self, embeddings):
+         """
+         Args:
+             embeddings (torch.Tensor): Embedding tensor, shape (batch_size, embed_dim)
+
+         Returns:
+             torch.Tensor: Covariance regularization loss
+         """
+         batch_size, embed_dim = embeddings.size()
+         mean = embeddings.mean(dim=0)
+         embeddings_centered = embeddings - mean
+         cov = (embeddings_centered.T @ embeddings_centered) / (batch_size - 1)
+         # Penalize only the off-diagonal covariance (decorrelate dimensions)
+         cov_loss = torch.sum(cov ** 2) - torch.sum(torch.diag(cov) ** 2)
+         return self.lambda_reg * cov_loss
+
+ class DynamicsPerformanceLoss(nn.Module):
+     def __init__(self, lambda_var=1e-3):
+         super(DynamicsPerformanceLoss, self).__init__()
+         self.lambda_var = lambda_var
+
+     def forward(self, true_next_state, predicted_next_state):
+         """
+         Args:
+             true_next_state (torch.Tensor): Ground truth next state, shape (batch_size, state_dim)
+             predicted_next_state (torch.Tensor): Predicted next state, shape (batch_size, state_dim)
+
+         Returns:
+             torch.Tensor: Dynamics performance loss
+         """
+         mse_loss = F.mse_loss(predicted_next_state, true_next_state)
+         variance_loss = torch.var(predicted_next_state, dim=0).mean()
+         return mse_loss + self.lambda_var * variance_loss
+
+ class ThoughtConsistencyLoss(nn.Module):
+     def __init__(self):
+         super(ThoughtConsistencyLoss, self).__init__()
+
+     def forward(self, true_next_state, perturbed_next_state):
+         """
+         Args:
+             true_next_state (torch.Tensor): Ground truth next state, shape (batch_size, state_dim)
+             perturbed_next_state (torch.Tensor): Perturbed next state, shape (batch_size, state_dim)
+
+         Returns:
+             torch.Tensor: Thought-consistency loss
+         """
+         return F.mse_loss(true_next_state, perturbed_next_state)
+
+ class PolicyValueJointLoss(nn.Module):
+     def __init__(self, lambda_value=0.5):
+         super(PolicyValueJointLoss, self).__init__()
+         self.lambda_value = lambda_value
+         self.cross_entropy = nn.CrossEntropyLoss()
+         self.mse_loss = nn.MSELoss()
+
+     def forward(self, policy_logits, true_policy, value_pred, true_value):
+         """
+         Args:
+             policy_logits (torch.Tensor): Logits from the policy network, shape (batch_size * seq_len, num_actions)
+             true_policy (torch.Tensor): Ground truth policy, shape (batch_size * seq_len, num_actions)
+             value_pred (torch.Tensor): Predicted values, shape (batch_size * seq_len)
+             true_value (torch.Tensor): Ground truth values, shape (batch_size * seq_len)
+
+         Returns:
+             torch.Tensor: Combined policy and value loss
+         """
+         policy_logits = policy_logits.view(-1, policy_logits.size(-1))
+         true_policy = true_policy.view(-1, true_policy.size(-1))
+         value_pred = value_pred.view(-1)
+         true_value = true_value.view(-1)
+
+         policy_loss = self.cross_entropy(policy_logits, true_policy.argmax(dim=1))
+         value_loss = self.mse_loss(value_pred, true_value)
+         return policy_loss + self.lambda_value * value_loss
+
+ class ActionDiversityReward(nn.Module):
+     def __init__(self, lambda_div=1e-3):
+         super(ActionDiversityReward, self).__init__()
+         self.lambda_div = lambda_div
+
+     def forward(self, action_embeddings):
+         """
+         Args:
+             action_embeddings (torch.Tensor): Embeddings of actions, shape (batch_size, embed_dim)
+
+         Returns:
+             torch.Tensor: Action diversity loss
+         """
+         similarity_matrix = F.cosine_similarity(action_embeddings.unsqueeze(1), action_embeddings.unsqueeze(0), dim=2)
+         # Zero out self-similarity
+         similarity_matrix = similarity_matrix - torch.eye(similarity_matrix.size(0)).to(action_embeddings.device)
+         diversity_loss = torch.sum(similarity_matrix ** 2)
+         return self.lambda_div * diversity_loss
+
+ class ExpectedThoughtValueLoss(nn.Module):
+     def __init__(self):
+         super(ExpectedThoughtValueLoss, self).__init__()
+
+     def forward(self, mcts_best_values):
+         """
+         Args:
+             mcts_best_values (torch.Tensor): Best values from MCTS, shape (batch_size)
+
+         Returns:
+             torch.Tensor: ETV loss
+         """
+         return -mcts_best_values.mean()
+
+ class ExplorationRegularization(nn.Module):
+     def __init__(self, lambda_expl=1e-3):
+         super(ExplorationRegularization, self).__init__()
+         self.lambda_expl = lambda_expl
+
+     def forward(self, visit_counts):
+         """
+         Args:
+             visit_counts (torch.Tensor): Visit counts for actions, shape (batch_size, num_actions)
+
+         Returns:
+             torch.Tensor: Exploration regularization loss
+         """
+         reward = torch.sum(1.0 / (visit_counts + 1), dim=-1)
+         return self.lambda_expl * reward.mean()
+
+ class KL_DivergenceLoss(nn.Module):
+     def __init__(self):
+         super(KL_DivergenceLoss, self).__init__()
+
+     def forward(self, old_policy, new_policy):
+         """
+         Args:
+             old_policy (torch.Tensor): Old policy probabilities, shape (batch_size, num_actions)
+             new_policy (torch.Tensor): New policy probabilities, shape (batch_size, num_actions)
+
+         Returns:
+             torch.Tensor: KL divergence loss
+         """
+         kl_div = F.kl_div(new_policy.log(), old_policy, reduction='batchmean')
+         return kl_div
+
+ # MuZero Components
+
+ class ActionEncoder(nn.Module):
+     def __init__(self, action_vocab_size, embed_dim):
+         super(ActionEncoder, self).__init__()
+         self.embedding = nn.Embedding(action_vocab_size, embed_dim)
+
+     def forward(self, action_indices):
+         """
+         Args:
+             action_indices (torch.Tensor): Tensor of shape (batch_size, seq_len)
+
+         Returns:
+             torch.Tensor: Encoded actions of shape (batch_size, seq_len, embed_dim)
+         """
+         return self.embedding(action_indices)
+
+ class RepresentationNetwork(nn.Module):
+     def __init__(self, vocab_dim, d_model, state_dim):
+         super(RepresentationNetwork, self).__init__()
+         self.proj = nn.Linear(vocab_dim, d_model)  # Project from vocab_dim to d_model
+         self.linear = nn.Linear(d_model, state_dim)  # Project from d_model to state_dim
+         self.norm = nn.LayerNorm(state_dim)
+
+     def forward(self, transformer_output):
+         """
+         Args:
+             transformer_output (torch.Tensor): Shape (batch_size, seq_len, vocab_dim)
+
+         Returns:
+             torch.Tensor: Encoded state of shape (batch_size, seq_len, state_dim)
+         """
+         # First project down from vocab_dim to d_model
+         projected_output = self.proj(transformer_output)  # Shape: (batch_size, seq_len, d_model)
+         # Then project down from d_model to state_dim
+         state = self.linear(projected_output)  # Shape: (batch_size, seq_len, state_dim)
+         state = self.norm(state)  # Shape: (batch_size, seq_len, state_dim)
+         return state
+
+ class DynamicsNetwork(nn.Module):
+     def __init__(self, state_dim, action_dim, hidden_dim):
+         super(DynamicsNetwork, self).__init__()
+         self.rms_norm = nn.LayerNorm(state_dim)
+         self.fc1 = nn.Linear(state_dim + action_dim, hidden_dim)
+         self.activation = nn.GELU()
+         self.fc2 = nn.Linear(hidden_dim, state_dim)
+
+     def forward(self, state, action):
+         """
+         Args:
+             state (torch.Tensor): Current state, shape (batch_size, state_dim)
+             action (torch.Tensor): Action embedding, shape (batch_size, action_dim)
+
+         Returns:
+             torch.Tensor: Predicted next state, shape (batch_size, state_dim)
+         """
+         norm_state = self.rms_norm(state)
+         combined = torch.cat([norm_state, action], dim=-1)
+         hidden = self.activation(self.fc1(combined))
+         next_state = self.fc2(hidden)
+         return next_state
+
+ class PredictionNetwork(nn.Module):
+     def __init__(self, state_dim, action_vocab_size, value_dim):
+         super(PredictionNetwork, self).__init__()
+         self.state_dim = state_dim
+         self.rms_norm = nn.LayerNorm(state_dim)
+         self.policy_head = nn.Linear(state_dim, action_vocab_size)  # Output size is action_vocab_size
+         self.value_head = nn.Linear(state_dim, value_dim)
+
+     def forward(self, state):
+         """
+         Args:
+             state (torch.Tensor): State representation, shape (batch_size, state_dim)
+
+         Returns:
+             Tuple[torch.Tensor, torch.Tensor]: Policy logits and value estimates
+         """
+         norm_state = self.rms_norm(state)
+         policy_logits = self.policy_head(norm_state)  # Shape: (batch_size, action_vocab_size)
+         value_estimates = self.value_head(norm_state).squeeze(-1)  # Shape: (batch_size)
+         return policy_logits, value_estimates
+
+ class MCTSNode:
+     __slots__ = [
+         'state',
+         'parent',
+         'action',
+         'children',
+         'visit_count',
+         'value_sum',
+         'prior',
+         'cached_policy',
+         'cached_value',
+         'thought_node',
+         'entropy',
+         'variance'
+     ]
+
+     def __init__(self, state, thought_node, parent=None, action=None):
+         self.state = state
+         self.thought_node = thought_node
+         self.parent = parent
+         self.action = action
+         self.children = {}
+         self.visit_count = 0
+         self.value_sum = 0.0
+         self.prior = 0.0
+         self.cached_policy = None
+         self.cached_value = None
+         self.entropy = 0.0
+         self.variance = 0.0
+
+     def expand(self, priors):
+         for child_thought_node in self.thought_node.children:
+             action = child_thought_node.name
+             if action not in self.children:
+                 child_state = self.state.apply_action(action)
+                 child_node = MCTSNode(
+                     state=child_state,
+                     thought_node=child_thought_node,
+                     parent=self,
+                     action=action
+                 )
+                 child_node.prior = priors.get(action, 1.0 / len(self.thought_node.children))
+                 self.children[action] = child_node
+
+     def is_leaf(self):
+         return len(self.children) == 0
+
+     def ucb_score(self, total_visits, exploration_constant=math.sqrt(2)):
+         if self.visit_count == 0:
+             return float('inf')  # Ensure unvisited nodes are selected first
+         avg_value = self.value_sum / self.visit_count
+         exploration_term = exploration_constant * self.prior * math.sqrt(total_visits) / (1 + self.visit_count)
+         entropy_term = -0.1 * self.entropy  # Slightly prefer lower entropy
+         variance_term = 0.05 * self.variance  # Slightly prefer higher variance
+         return avg_value + exploration_term + entropy_term + variance_term
+
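+ # The selection rule above, written out (as implemented, a PUCT-style variant
+ # rather than textbook UCB1):
+ #   UCB(s, a) = Q(s, a) + c * P(a) * sqrt(N_total) / (1 + N(s, a))
+ #               - 0.1 * H(a) + 0.05 * Var(a)
+ # where Q is the mean backed-up value, P the prior from the prediction
+ # network, and H/Var the entropy and variance of the policy at that node.
+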
+ class MCTS:
+     def __init__(self, prediction_network, dynamics_network, action_encoder, num_iterations=10, exploration_constant=math.sqrt(2), beam_size=5, n_tokens_predict=3):
+         self.prediction_network = prediction_network
+         self.dynamics_network = dynamics_network
+         self.action_encoder = action_encoder
+         self.num_iterations = num_iterations
+         self.exploration_constant = exploration_constant
+         self.beam_size = beam_size
+         self.n_tokens_predict = n_tokens_predict
+         self.cache = {}
+
+     def search_with_beam(self, root_state):
+         root_node = MCTSNode(state=root_state, thought_node=root_state.thought_node)
+
+         # Evaluate the root node and backpropagate
+         value_estimate = self.evaluate(root_node)  # Evaluate and expand root_node
+         self.backpropagate(root_node, value_estimate)  # Backpropagate the value
+
+         beam = [(root_node, 0.0, 0.0, 0.0, [])]  # (node, score, cum_entropy, cum_variance, action_sequence)
+
+         for iteration in range(self.num_iterations):
+             all_candidates = []
+             for node, score, cum_entropy, cum_variance, action_sequence in beam:
+                 if node.is_leaf():
+                     value_estimate = self.evaluate(node)
+                     self.backpropagate(node, value_estimate)  # Backpropagate after evaluation
+                 if len(node.children) == 0:
+                     continue  # No children to expand
+
+                 total_visits = sum(child.visit_count for child in node.children.values())
+                 # Select top actions based on UCB score
+                 sorted_children = sorted(
+                     node.children.items(),
+                     key=lambda item: item[1].ucb_score(total_visits, self.exploration_constant),
+                     reverse=True
+                 )[:self.beam_size]
+
+                 for selected_action, selected_node in sorted_children:
+                     current_node = selected_node
+                     current_sequence = action_sequence + [selected_action]
+                     current_score = score
+                     current_entropy = cum_entropy + selected_node.entropy
+                     current_variance = cum_variance + selected_node.variance
+
+                     # Predict n_tokens_predict actions
+                     for _ in range(self.n_tokens_predict):
+                         if current_node.is_leaf():
+                             value_estimate = self.evaluate(current_node)
+                             self.backpropagate(current_node, value_estimate)  # Backpropagate after evaluation
+                         if len(current_node.children) == 0:
+                             break  # No more actions
+                         total_visits = sum(child.visit_count for child in current_node.children.values())
+                         next_action, next_node = max(
+                             current_node.children.items(),
+                             key=lambda item: item[1].ucb_score(total_visits, self.exploration_constant)
+                         )
+                         current_sequence.append(next_action)
+
+                         # Guard against division by zero for unvisited nodes
+                         if next_node.visit_count > 0:
+                             current_score += next_node.value_sum / next_node.visit_count
+                         else:
+                             # Unvisited node: contribute no value rather than dividing by zero
+                             current_score += 0.0
+
+                         current_entropy += next_node.entropy
+                         current_variance += next_node.variance
+                         current_node = next_node
+
+                     all_candidates.append((current_node, current_score, current_entropy, current_variance, current_sequence))
+
+             if not all_candidates:
+                 break  # No more candidates to expand
+
+             # Select top beam_size candidates
+             beam = sorted(all_candidates, key=lambda x: x[1] - 0.1 * x[2] + 0.05 * x[3], reverse=True)[:self.beam_size]
+             print(f"Iteration {iteration + 1}: Beam size after sorting: {len(beam)}")  # Debug
+
+         if beam:
+             best_sequence = beam[0][4]
+             return best_sequence
+         else:
+             return []
+
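+     # Usage sketch (hypothetical wiring; the variable names state_seq, dyn,
+     # enc, pred_net, and tree_root are placeholders, and the real setup
+     # happens in infer() further down this file):
+     #   root_state = State(representation=state_seq, dynamics_network=dyn,
+     #                      action_encoder=enc, thought_node=tree_root)
+     #   mcts = MCTS(pred_net, dyn, enc, num_iterations=3, beam_size=5)
+     #   best_actions = mcts.search_with_beam(root_state)  # list of action names
+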
+     def search(self, root_state):
+         root_node = MCTSNode(state=root_state, thought_node=root_state.thought_node)
+
+         for _ in range(self.num_iterations):
+             node = self.select(root_node)
+             value = self.evaluate(node)
+             self.backpropagate(node, value)
+
+         return self.best_action_sequence(root_node)
+
+     def select(self, node):
+         while not node.is_leaf():
+             total_visits = sum(child.visit_count for child in node.children.values())
+             _, node = max(
+                 node.children.items(),
+                 key=lambda item: item[1].ucb_score(total_visits, self.exploration_constant)
+             )
+         return node
+
+     def evaluate(self, node):
+         # Extract the last time step
+         state_representation = node.state.representation[:, -1, :]  # Shape: (batch_size=1, state_dim)
+         print(f"Evaluating node with state_representation shape: {state_representation.shape}")  # Debug
+         policy_logits, value_estimate = self.prediction_network(state_representation)
+         print(f"Policy logits shape: {policy_logits.shape}, Value estimate shape: {value_estimate.shape}")  # Debug
+         value_estimate = value_estimate.item()  # Safe because batch_size=1
+
+         policy_probs = F.softmax(policy_logits, dim=-1).squeeze(0)  # Shape: (action_vocab_size,)
+         print(f"Policy probabilities shape: {policy_probs.shape}")  # Debug
+
+         # action_to_index is a module-level mapping from thought-node names to
+         # action indices, built elsewhere in this file from the Tree of Thought.
+         priors = {}
+         for child in node.thought_node.children:
+             action_name = child.name
+             action_idx = action_to_index.get(action_name, None)
+             if action_idx is not None and action_idx < policy_probs.size(0):
+                 priors[action_name] = policy_probs[action_idx].item()
+             else:
+                 priors[action_name] = 1.0 / len(node.thought_node.children)
+
+         node.expand(priors)
+
+         # Calculate entropy and variance
+         entropy = -torch.sum(policy_probs * torch.log(policy_probs + 1e-9))
+         variance = torch.var(policy_probs)
+         node.entropy = entropy.item()
+         node.variance = variance.item()
+
+         print(f"Node entropy: {node.entropy}, variance: {node.variance}")  # Debug
+
+         return value_estimate  # Return the value estimate for backpropagation
+
+     def backpropagate(self, node, value):
+         while node is not None:
+             node.visit_count += 1
+             node.value_sum += value
+             node = node.parent
+
+     def best_action_sequence(self, root_node):
+         sequences = []
+         self._generate_sequences(root_node, [], sequences)
+
+         # Score sequences based on visit counts, entropy, and variance
+         scored_sequences = []
+         for seq in sequences:
+             score = sum(node.visit_count for node in seq)
+             entropy = sum(node.entropy for node in seq)
+             variance = sum(node.variance for node in seq)
+             adjusted_score = score - 0.1 * entropy + 0.05 * variance
+             scored_sequences.append((seq, adjusted_score))
+
+         # Sort sequences by adjusted score and select the top beam_size
+         best_sequences = sorted(scored_sequences, key=lambda x: x[1], reverse=True)[:self.beam_size]
+
+         # Return the actions of the best sequence
+         best_sequence = best_sequences[0][0]
+         return [node.action for node in best_sequence[1:self.n_tokens_predict + 1]]  # Exclude the root node
+
+     def _generate_sequences(self, node, current_sequence, sequences):
+         current_sequence.append(node)
+         if len(current_sequence) > self.n_tokens_predict or not node.children:
+             sequences.append(current_sequence)
+         else:
+             for child in node.children.values():
+                 self._generate_sequences(child, current_sequence.copy(), sequences)
+
+ class State:
+     def __init__(self, representation, dynamics_network, action_encoder, thought_node):
+         self.representation = representation
+         self.dynamics_network = dynamics_network
+         self.action_encoder = action_encoder
+         self.thought_node = thought_node
+
+     def apply_action(self, action):
+         next_thought_node = None
+         for child in self.thought_node.children:
+             if child.name == action:
+                 next_thought_node = child
+                 break
+         if next_thought_node is None:
+             raise ValueError(f"Action '{action}' is not valid from the current thought node.")
+
+         # Adjust action_index and action_embedding shapes
+         action_index = torch.tensor([action_to_index[action]], device=self.representation.device)
+         action_embedding = self.action_encoder(action_index)  # Shape: (batch_size=1, action_dim)
+
+         # Extract the last time step of the state
+         state = self.representation[:, -1, :]  # Shape: (batch_size, state_dim)
+
+         # Ensure action_embedding matches the state dimension
+         next_state_representation = self.dynamics_network(state, action_embedding)  # Shape: (batch_size, state_dim)
+
+         # Append the new state to the representation history
+         new_representation = torch.cat([self.representation, next_state_representation.unsqueeze(1)], dim=1)  # Shape: (batch_size, seq_len+1, state_dim)
+
+         return State(
+             representation=new_representation,
+             dynamics_network=self.dynamics_network,
+             action_encoder=self.action_encoder,
+             thought_node=next_thought_node
+         )
+
+ class PPOAgent:
+     def __init__(self, policy_network, optimizer, clip_epsilon=0.2, entropy_coef=0.01, value_coef=0.5):
+         self.policy_network = policy_network
+         self.optimizer = optimizer
+         self.clip_epsilon = clip_epsilon
+         self.entropy_coef = entropy_coef
+         self.value_coef = value_coef
+
+     def compute_loss(self, states, old_log_probs, actions, returns, advantages):
+         # Get policy logits and value estimates
+         policy_logits, value_estimates = self.policy_network(states)
+         batch_size, seq_len, num_actions = policy_logits.size()
+
+         # Flatten tensors using reshape
+         policy_logits = policy_logits.reshape(-1, num_actions)  # Shape: (batch_size * seq_len, num_actions)
+         value_estimates = value_estimates.view(-1)
+         actions = actions.reshape(-1)  # Shape: (batch_size * seq_len)
+         old_log_probs = old_log_probs.reshape(-1)  # Shape: (batch_size * seq_len)
+         returns = returns.view(-1)
+         advantages = advantages.reshape(-1)  # Shape: (batch_size * seq_len)
+
+         # Ensure value_estimates and returns are the same size
+         if value_estimates.size() != returns.size():
+             print(f"Shape mismatch: value_estimates shape: {value_estimates.size()}, returns shape: {returns.size()}")
+             value_estimates = value_estimates[:returns.size(0)]
+
+         # Compute new log probabilities
+         new_log_probs_all = F.log_softmax(policy_logits, dim=-1)  # Shape: (batch_size * seq_len, num_actions)
+         new_log_probs = new_log_probs_all.gather(1, actions.unsqueeze(-1)).squeeze(-1)  # Shape: (batch_size * seq_len)
+
+         # Compute probability ratios
+         ratios = torch.exp(new_log_probs - old_log_probs)
+
+         # PPO clipped surrogate loss
+         surr1 = ratios * advantages
+         surr2 = torch.clamp(ratios, 1 - self.clip_epsilon, 1 + self.clip_epsilon) * advantages
+         policy_loss = -torch.min(surr1, surr2).mean()
+
+         # Value loss
+         value_loss = F.mse_loss(value_estimates, returns)
+
+         # Entropy bonus (approximated over the chosen actions only)
+         entropy = -(new_log_probs * torch.exp(new_log_probs)).mean()
+
+         # Total loss
+         total_loss = policy_loss + self.value_coef * value_loss - self.entropy_coef * entropy
+         return total_loss
+
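+ # A one-number trace of the clipped objective above (illustrative, with
+ # clip_epsilon=0.2): if ratio=1.5 and advantage=+1, surr1=1.5 but surr2 is
+ # clamped to 1.2, so min(surr1, surr2)=1.2 -- the update cannot profit from
+ # pushing the ratio past 1.2. With advantage=-1, min(-1.5, -1.2)=-1.5, so a
+ # move that makes a bad action much likelier is penalized in full.
+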
+ # Tree of Thought Components
+
+ class ThoughtNode:
+     def __init__(self, name):
+         self.name = name
+         self.children = []
+         self.parent = None
+
+     def add_child(self, child_node):
+         child_node.parent = self
+         self.children.append(child_node)
+
+ # Function to build the Tree of Thought from the detailed structure below
+ def build_tree_of_thought():
+     # Create the root node
+     root = ThoughtNode('Problem-Solving Process')
+
+     # Level 1 nodes
+     problem_identification = ThoughtNode('Problem Identification')
+     problem_analysis = ThoughtNode('Problem Analysis')
+     solution_generation = ThoughtNode('Solution Generation')
+     implementation = ThoughtNode('Implementation')
+     evaluation_adjustment = ThoughtNode('Evaluation and Adjustment')
+
+     root.add_child(problem_identification)
+     root.add_child(problem_analysis)
+     root.add_child(solution_generation)
+     root.add_child(implementation)
+     root.add_child(evaluation_adjustment)
+
+     # Problem Identification children
+     B1 = ThoughtNode('Define the Problem')
+     B2 = ThoughtNode('Identify Stakeholders')
+     B3 = ThoughtNode('Determine Constraints')
+     B4 = ThoughtNode('Recognize Problem Type')
+     B5 = ThoughtNode('Historical Context')
+     problem_identification.add_child(B1)
+     problem_identification.add_child(B2)
+     problem_identification.add_child(B3)
+     problem_identification.add_child(B4)
+     problem_identification.add_child(B5)
+
+     # Define the Problem children
+     B1a = ThoughtNode('Problem Statement Formulation')
+     B1b = ThoughtNode('Scope Definition')
+     B1c = ThoughtNode('Objective Setting')
+     B1.add_child(B1a)
+     B1.add_child(B1b)
+     B1.add_child(B1c)
+
+     # Identify Stakeholders children
+     B2a = ThoughtNode('Stakeholder Mapping')
+     B2b = ThoughtNode('Interest and Influence Analysis')
+     B2c = ThoughtNode('Engagement Strategy')
+     B2.add_child(B2a)
+     B2.add_child(B2b)
+     B2.add_child(B2c)
+
+     # Determine Constraints children
+     B3a = ThoughtNode('Resource Limitations')
+     B3b = ThoughtNode('Time Constraints')
+     B3c = ThoughtNode('Legal and Regulatory Constraints')
+     B3.add_child(B3a)
+     B3.add_child(B3b)
+     B3.add_child(B3c)
+
+     # Recognize Problem Type children
+     B4a = ThoughtNode('Simple vs Complex')
+     B4b = ThoughtNode('Known vs Unknown')
+     B4c = ThoughtNode('Tame vs Wicked Problems')
+     B4.add_child(B4a)
+     B4.add_child(B4b)
+     B4.add_child(B4c)
+
+     # Historical Context children
+     B5a = ThoughtNode('Previous Attempts')
+     B5b = ThoughtNode('Lessons Learned')
+     B5c = ThoughtNode('Environmental Factors')
+     B5.add_child(B5a)
+     B5.add_child(B5b)
+     B5.add_child(B5c)
+
+     # Problem Analysis children
+     C1 = ThoughtNode('Root Cause Analysis')
+     C2 = ThoughtNode('System Mapping')
+     C3 = ThoughtNode('Data Collection')
+     C4 = ThoughtNode('Impact Assessment')
+     C5 = ThoughtNode('Theoretical Framework')
+     problem_analysis.add_child(C1)
+     problem_analysis.add_child(C2)
+     problem_analysis.add_child(C3)
+     problem_analysis.add_child(C4)
+     problem_analysis.add_child(C5)
+
+     # Root Cause Analysis children
+     C1a = ThoughtNode('5 Whys Technique')
+     C1b = ThoughtNode('Fishbone Diagram')
+     C1c = ThoughtNode('Pareto Analysis')
+     C1.add_child(C1a)
+     C1.add_child(C1b)
+     C1.add_child(C1c)
+
+     # System Mapping children
+     C2a = ThoughtNode('Causal Loop Diagrams')
+     C2b = ThoughtNode('Stock and Flow Models')
+     C2c = ThoughtNode('Network Analysis')
+     C2.add_child(C2a)
+     C2.add_child(C2b)
+     C2.add_child(C2c)
+
+     # Data Collection children
+     C3a = ThoughtNode('Quantitative Data')
+     C3b = ThoughtNode('Qualitative Data')
+     C3c = ThoughtNode('Data Validation')
+     C3.add_child(C3a)
+     C3.add_child(C3b)
+     C3.add_child(C3c)
+
+     # Quantitative Data children
+     C3a1 = ThoughtNode('Surveys and Questionnaires')
+     C3a2 = ThoughtNode('Experimental Data')
+     C3a3 = ThoughtNode('Big Data Analytics')
+     C3a.add_child(C3a1)
+     C3a.add_child(C3a2)
+     C3a.add_child(C3a3)
+
+     # Qualitative Data children
+     C3b1 = ThoughtNode('Interviews')
+     C3b2 = ThoughtNode('Focus Groups')
+     C3b3 = ThoughtNode('Observational Studies')
+     C3b.add_child(C3b1)
+     C3b.add_child(C3b2)
+     C3b.add_child(C3b3)
+
+     # Data Validation children
+     C3c1 = ThoughtNode('Statistical Validation')
+     C3c2 = ThoughtNode('Cross-Validation')
+     C3c3 = ThoughtNode('Expert Review')
+     C3c.add_child(C3c1)
+     C3c.add_child(C3c2)
+     C3c.add_child(C3c3)
+
+     # Impact Assessment children
+     C4a = ThoughtNode('Environmental Impact')
+     C4b = ThoughtNode('Social Impact')
+     C4c = ThoughtNode('Economic Impact')
+     C4.add_child(C4a)
+     C4.add_child(C4b)
+     C4.add_child(C4c)
+
+     # Theoretical Framework children
+     C5a = ThoughtNode('Literature Review')
+     C5b = ThoughtNode('Conceptual Modeling')
+     C5c = ThoughtNode('Hypothesis Formation')
+     C5.add_child(C5a)
+     C5.add_child(C5b)
+     C5.add_child(C5c)
+
+     # Solution Generation children
+     D1 = ThoughtNode('Creative Problem Solving')
+     D2 = ThoughtNode('Analytical Approach')
+     D3 = ThoughtNode('Mathematical Computation')
+     D4 = ThoughtNode('Decision Making')
+     solution_generation.add_child(D1)
+     solution_generation.add_child(D2)
+     solution_generation.add_child(D3)
+     solution_generation.add_child(D4)
+
+     # Implementation children
+     E1 = ThoughtNode('Action Planning')
+     E2 = ThoughtNode('Resource Allocation')
+     E3 = ThoughtNode('Change Management')
+     implementation.add_child(E1)
+     implementation.add_child(E2)
+     implementation.add_child(E3)
+
+     # Evaluation and Adjustment children
+     F1 = ThoughtNode('Verification')
+     F2 = ThoughtNode('Performance Metrics')
+     F3 = ThoughtNode('Feedback Loops')
+     F4 = ThoughtNode('Continuous Improvement')
+     evaluation_adjustment.add_child(F1)
+     evaluation_adjustment.add_child(F2)
+     evaluation_adjustment.add_child(F3)
+     evaluation_adjustment.add_child(F4)
+
+     # Cross-Cutting Considerations node
+     G = ThoughtNode('Cross-Cutting Considerations')
+     root.add_child(G)
+
+     # Cross-Cutting Considerations children
+     G1 = ThoughtNode('Ethical Framework')
+     G2 = ThoughtNode('Stakeholder Management')
+     G3 = ThoughtNode('Interdisciplinary Connections')
+     G4 = ThoughtNode('Technological Integration')
+     G5 = ThoughtNode('Emotional Intelligence')
+     G6 = ThoughtNode('Collaborative Problem Solving')
+     G7 = ThoughtNode('Computational Considerations')  # Assuming H was intended as G7
+     G8 = ThoughtNode('Order of Operations')  # Assuming I was intended as G8
+     G9 = ThoughtNode('Critical Thinking')  # Assuming J was intended as G9
+     G10 = ThoughtNode('Future Perspective')  # Assuming K was intended as G10
+     G11 = ThoughtNode('Learning and Adaptation')  # Assuming L was intended as G11
+     G.add_child(G1)
+     G.add_child(G2)
+     G.add_child(G3)
+     G.add_child(G4)
+     G.add_child(G5)
+     G.add_child(G6)
+     G.add_child(G7)
+     G.add_child(G8)
+     G.add_child(G9)
+     G.add_child(G10)
+     G.add_child(G11)
+
+     # Ethical Framework children
+     G1a = ThoughtNode('Value-based Decision Making')
+     G1b = ThoughtNode('Long-term Consequences')
+     G1.add_child(G1a)
+     G1.add_child(G1b)
+
+     # Value-based Decision Making children
+     G1a1 = ThoughtNode('Ethical Theories Application')
+     G1a2 = ThoughtNode('Moral Dilemma Resolution')
+     G1a.add_child(G1a1)
+     G1a.add_child(G1a2)
+
+     # Long-term Consequences children
+     G1b1 = ThoughtNode('Sustainability Assessment')
+     G1b2 = ThoughtNode('Intergenerational Impact')
+     G1b.add_child(G1b1)
+     G1b.add_child(G1b2)
+
+     # Stakeholder Management children
+     G2a = ThoughtNode('Direct Stakeholders')
+     G2b = ThoughtNode('Indirect Stakeholders')
+     G2c = ThoughtNode('Conflicting Interests')
+     G2.add_child(G2a)
+     G2.add_child(G2b)
+     G2.add_child(G2c)
+
+     # Conflicting Interests children
+     G2c1 = ThoughtNode('Negotiation Strategies')
+     G2c2 = ThoughtNode('Conflict Resolution Techniques')
+     G2c.add_child(G2c1)
+     G2c.add_child(G2c2)
+
+     # Interdisciplinary Connections children
+     G3a = ThoughtNode('Related Fields')
+     G3b = ThoughtNode('Cross-disciplinary Impact')
+     G3.add_child(G3a)
+     G3.add_child(G3b)
+
+     # Related Fields children
+     G3a1 = ThoughtNode('Cross-domain Knowledge Transfer')
+     G3a2 = ThoughtNode('Interdisciplinary Collaboration')
+     G3a.add_child(G3a1)
+     G3a.add_child(G3a2)
+
+     # Cross-disciplinary Impact children
+     G3b1 = ThoughtNode('Synergy Identification')
+     G3b2 = ThoughtNode('Holistic Impact Assessment')
+     G3b.add_child(G3b1)
+     G3b.add_child(G3b2)
+
+     # Technological Integration children
+     G4a = ThoughtNode('AI-assisted Problem Solving')
+     G4b = ThoughtNode('Data-driven Insights')
+     G4c = ThoughtNode('Digital Collaboration Tools')
+     G4.add_child(G4a)
+     G4.add_child(G4b)
+     G4.add_child(G4c)
+
+     # AI-assisted Problem Solving children
+     G4a1 = ThoughtNode('Machine Learning Models')
+     G4a2 = ThoughtNode('Natural Language Processing')
+     G4a.add_child(G4a1)
+     G4a.add_child(G4a2)
+
+     # Data-driven Insights children
+     G4b1 = ThoughtNode('Big Data Analytics')
+     G4b2 = ThoughtNode('Predictive Modeling')
+     G4b.add_child(G4b1)
+     G4b.add_child(G4b2)
+
+     # Digital Collaboration Tools children
+     G4c1 = ThoughtNode('Project Management Platforms')
+     G4c2 = ThoughtNode('Virtual Reality Collaboration')
+     G4c.add_child(G4c1)
+     G4c.add_child(G4c2)
+
+     # Emotional Intelligence children
+     G5a = ThoughtNode('Self-Awareness')
+     G5b = ThoughtNode('Empathy')
+     G5c = ThoughtNode('Stress Management')
+     G5.add_child(G5a)
+     G5.add_child(G5b)
+     G5.add_child(G5c)
+
+     # Self-Awareness children
+     G5a1 = ThoughtNode('Emotional Recognition')
+     G5a2 = ThoughtNode('Personal Bias Identification')
+     G5a.add_child(G5a1)
+     G5a.add_child(G5a2)
+
+     # Empathy children
+     G5b1 = ThoughtNode('Perspective Taking')
+     G5b2 = ThoughtNode('Active Listening')
+     G5b.add_child(G5b1)
+     G5b.add_child(G5b2)
+
+     # Stress Management children
+     G5c1 = ThoughtNode('Mindfulness Techniques')
+     G5c2 = ThoughtNode('Resilience Building')
+     G5c.add_child(G5c1)
+     G5c.add_child(G5c2)
+
+     # Collaborative Problem Solving children
+     G6a = ThoughtNode('Team Dynamics')
+     G6b = ThoughtNode('Communication Strategies')
+     G6c = ThoughtNode('Conflict Resolution')
+     G6.add_child(G6a)
+     G6.add_child(G6b)
+     G6.add_child(G6c)
+
+     # Team Dynamics children
+     G6a1 = ThoughtNode('Team Formation Strategies')
+     G6a2 = ThoughtNode('Role Assignment')
+     G6a.add_child(G6a1)
+     G6a.add_child(G6a2)
+
+     # Communication Strategies children
+     G6b1 = ThoughtNode('Clear Messaging')
+     G6b2 = ThoughtNode('Feedback Mechanisms')
+     G6b.add_child(G6b1)
+     G6b.add_child(G6b2)
+
+     # Conflict Resolution children
+     G6c1 = ThoughtNode('Mediation Techniques')
+     G6c2 = ThoughtNode('Consensus Building')
+     G6c.add_child(G6c1)
+     G6c.add_child(G6c2)
+
+     # Computational Considerations children
+     G7a = ThoughtNode('CPU Operations')
+     G7b = ThoughtNode('GPU Parallelization')
+     G7c = ThoughtNode('Floating-Point Precision')
+     G7.add_child(G7a)
+     G7.add_child(G7b)
+     G7.add_child(G7c)
+
+     # CPU Operations children
+     G7a1 = ThoughtNode('Instruction Set Architecture')
+     G7a2 = ThoughtNode('Pipelining and Parallelism')
+     G7a.add_child(G7a1)
+     G7a.add_child(G7a2)
+
+     # GPU Parallelization children
+     G7b1 = ThoughtNode('CUDA Programming')
+     G7b2 = ThoughtNode('OpenCL Framework')
+     G7b.add_child(G7b1)
+     G7b.add_child(G7b2)
+
+     # Floating-Point Precision children
+     G7c1 = ThoughtNode('IEEE 754 Standard')
+     G7c2 = ThoughtNode('Error Propagation Analysis')
+     G7c.add_child(G7c1)
+     G7c.add_child(G7c2)
+
+     # Order of Operations children
+     G8a = ThoughtNode('Parentheses')
+     G8b = ThoughtNode('Exponents')
+     G8c = ThoughtNode('Multiplication and Division')
+     G8d = ThoughtNode('Addition and Subtraction')
+     G8.add_child(G8a)
+     G8.add_child(G8b)
+     G8.add_child(G8c)
+     G8.add_child(G8d)
+
+     # Critical Thinking children
+     G9a = ThoughtNode('Assumptions Questioning')
+     G9b = ThoughtNode('Bias Recognition')
+     G9.add_child(G9a)
+     G9.add_child(G9b)
+
+     # Assumptions Questioning children
+     G9a1 = ThoughtNode('Socratic Questioning')
+     G9a2 = ThoughtNode('Devil\'s Advocate Approach')
+     G9a.add_child(G9a1)
+     G9a.add_child(G9a2)
+
+     # Bias Recognition children
+     G9b1 = ThoughtNode('Cognitive Bias Identification')
+     G9b2 = ThoughtNode('Debiasing Techniques')
+     G9b.add_child(G9b1)
+     G9b.add_child(G9b2)
+
+     # Future Perspective children
+     G10a = ThoughtNode('Short-term Projections')
+     G10b = ThoughtNode('Long-term Scenarios')
+     G10c = ThoughtNode('Potential Impacts')
+     G10.add_child(G10a)
+     G10.add_child(G10b)
+     G10.add_child(G10c)
+
+     # Short-term Projections children
+     G10a1 = ThoughtNode('Trend Analysis')
+     G10a2 = ThoughtNode('Scenario Planning')
+     G10a.add_child(G10a1)
+     G10a.add_child(G10a2)
+
+     # Long-term Scenarios children
+     G10b1 = ThoughtNode('Futures Wheel')
+     G10b2 = ThoughtNode('Backcasting')
+     G10b.add_child(G10b1)
+     G10b.add_child(G10b2)
+
+     # Potential Impacts children
+     G10c1 = ThoughtNode('Risk Assessment')
+     G10c2 = ThoughtNode('Opportunity Identification')
+     G10c.add_child(G10c1)
+     G10c.add_child(G10c2)
+
+     # Learning and Adaptation children
+     G11a = ThoughtNode('Reflective Practice')
+     G11b = ThoughtNode('Knowledge Transfer')
+     G11c = ThoughtNode('Adaptive Problem Solving')
+     G11.add_child(G11a)
+     G11.add_child(G11b)
+     G11.add_child(G11c)
+
+     # Reflective Practice children
+     G11a1 = ThoughtNode('After Action Review')
+     G11a2 = ThoughtNode('Learning Journals')
+     G11a.add_child(G11a1)
+     G11a.add_child(G11a2)
+
+     # Knowledge Transfer children
+     G11b1 = ThoughtNode('Best Practice Documentation')
+     G11b2 = ThoughtNode('Mentoring Programs')
+     G11b.add_child(G11b1)
+     G11b.add_child(G11b2)
+
+     # Adaptive Problem Solving children
+     G11c1 = ThoughtNode('Iterative Approaches')
+     G11c2 = ThoughtNode('Flexibility in Methodology')
+     G11c.add_child(G11c1)
+     G11c.add_child(G11c2)
+
+     return root
+
+ def traverse_tree(node, action_list):
+     """Depth-first traversal that collects each unique thought-node name into action_list."""
+     if node.name not in action_list:
+         action_list.append(node.name)
+     for child in node.children:
+         traverse_tree(child, action_list)
+
+
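+ # Usage sketch (mirrors the inference setup in main() below): flatten the
+ # Tree of Thought into the action vocabulary used by the prediction network.
+ #
+ #     root = build_tree_of_thought()
+ #     action_list = []
+ #     traverse_tree(root, action_list)
+ #     action_to_index = {action: idx for idx, action in enumerate(action_list)}
+ #     index_to_action = {idx: action for action, idx in action_to_index.items()}
+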
+ def infer(query, world_model_components, root_thought_node, tokenizer, max_length=20, inference_mode='world_model', beam_size=5, n_tokens_predict=3, mcts_iterations=10, exploration_constant=1.414):
+     """
+     Perform inference given a query, utilizing the Tree of Thought and MCTS with multi-token beam search.
+
+     Args:
+         query (str): The input query or prompt.
+         world_model_components (tuple): Tuple containing the model components.
+         root_thought_node (ThoughtNode): The root node of the Tree of Thought.
+         tokenizer (transformers.PreTrainedTokenizer): The tokenizer used.
+         max_length (int): Maximum length of the generated sequence.
+         inference_mode (str): One of 'world_model', 'without_world_model', or 'world_model_tree_of_thought'.
+         beam_size (int): Size of the beam for beam search.
+         n_tokens_predict (int): Number of tokens to predict at each step.
+         mcts_iterations (int): Number of MCTS iterations per search step.
+         exploration_constant (float): Exploration constant used during MCTS selection.
+
+     Returns:
+         List[str] or str: The sequence of actions (thoughts) selected, or the generated text.
+     """
+     representation_network, dynamics_network, prediction_network, action_encoder, ppo_agent, model_transformer = world_model_components
+
+     # Tokenize and encode the query
+     input_ids = tokenizer.encode(query, return_tensors='pt').to(device)
+     attention_mask = (input_ids != tokenizer.pad_token_id).long()  # currently unused below
+
+     if inference_mode == 'without_world_model':
+         # Directly use the transformer model to generate text with beam search.
+         # Note: args.temperature relies on the module-level `args` parsed in main().
+         with torch.no_grad():
+             generated_sequences = model_transformer.generate_with_beam_search(
+                 src=input_ids,
+                 tokenizer=tokenizer,
+                 beam_size=beam_size,
+                 max_length=max_length,
+                 n_tokens_predict=n_tokens_predict,
+                 temperature=args.temperature
+             )
+         best_sequence, best_score = generated_sequences[0]
+         generated_text = tokenizer.decode(best_sequence[0], skip_special_tokens=True)
+         return generated_text
+
+     else:
+         # Use the world model components
+         with torch.no_grad():
+             transformer_output = model_transformer(input_ids, input_ids)
+             # Get the initial state representation
+             initial_representation = representation_network(transformer_output)  # Shape: (batch_size=1, seq_len, state_dim)
+             initial_representation = initial_representation[:, -1, :].unsqueeze(1)  # Shape: (batch_size=1, 1, state_dim)
+             initial_state = State(
+                 representation=initial_representation,
+                 dynamics_network=dynamics_network,
+                 action_encoder=action_encoder,
+                 thought_node=root_thought_node
+             )
+             if inference_mode == 'world_model_tree_of_thought':
+                 # Use MCTS with Tree of Thought and multi-token beam search
+                 mcts = MCTS(prediction_network, dynamics_network, action_encoder, num_iterations=mcts_iterations, exploration_constant=exploration_constant)
+
+                 current_state = initial_state
+                 thought_sequence = []
+
+                 for _ in range(max_length // n_tokens_predict):
+                     best_actions = mcts.search_with_beam(current_state)
+
+                     thought_sequence.extend(best_actions)
+
+                     # Apply the best actions to get the next state
+                     for action in best_actions:
+                         current_state = current_state.apply_action(action)
+
+                     # Check if we've reached a leaf node (no further actions)
+                     if len(current_state.thought_node.children) == 0:
+                         break
+
+                 return thought_sequence
+             else:
+                 # Use the world model without Tree of Thought, but with multi-token beam search.
+                 # Each beam entry is (state, score, cumulative entropy, cumulative variance).
+                 thought_sequence = []  # initialized here; the original referenced it before assignment
+                 beam = [(initial_state, 0.0, torch.zeros(1, device=device), torch.zeros(1, device=device))]
+
+                 for _ in range(max_length // n_tokens_predict):
+                     all_candidates = []
+                     for state, score, cum_entropy, cum_variance in beam:
+                         policy_logits, _ = prediction_network(state.representation)
+                         probs = F.softmax(policy_logits / args.temperature, dim=-1)
+                         entropy = -torch.sum(probs * torch.log(probs + 1e-9), dim=-1)
+                         variance = torch.var(probs, dim=-1)
+
+                         topk_probs, topk_indices = torch.topk(probs, k=beam_size, dim=-1)
+
+                         # Enumerate every combination of top-k choices across the next
+                         # n_tokens_predict positions by decoding i as base-beam_size digits.
+                         for i in range(beam_size ** n_tokens_predict):
+                             indices = [i // (beam_size ** j) % beam_size for j in range(n_tokens_predict)]
+                             new_actions = [index_to_action[topk_indices[0, j, indices[j]].item()] for j in range(n_tokens_predict)]
+                             new_score = score + torch.sum(torch.log(topk_probs[0, range(n_tokens_predict), indices]))
+                             new_entropy = cum_entropy + torch.sum(entropy[0, indices])
+                             new_variance = cum_variance + torch.sum(variance[0, indices])
+
+                             new_state = state
+                             for action in new_actions:
+                                 new_state = new_state.apply_action(action)
+
+                             all_candidates.append((new_state, new_score, new_entropy, new_variance, new_actions))
+
+                     # Select the top beam_size candidates, scoring by log-probability with an
+                     # entropy penalty and a variance bonus (favoring confident, peaked policies)
+                     top_candidates = sorted(all_candidates, key=lambda x: x[1] - 0.1 * x[2] + 0.05 * x[3], reverse=True)[:beam_size]
+                     # Keep only (state, score, entropy, variance) in the beam so the 4-tuple
+                     # unpacking above still works on the next iteration (candidates are 5-tuples)
+                     beam = [(c[0], c[1], c[2], c[3]) for c in top_candidates]
+
+                     # Accumulate actions
+                     if not thought_sequence:
+                         thought_sequence = [list(c[4]) for c in top_candidates]
+                     else:
+                         for i, c in enumerate(top_candidates):
+                             thought_sequence[i].extend(c[4])
+
+                 # Return the top sequence
+                 return thought_sequence[0]
+
+
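+ # Example call (sketch; assumes the models, tokenizer, and tree_root from main()
+ # are already constructed, and the argument values shown are illustrative only):
+ #
+ #     thoughts = infer("How can teams solve hard problems?", world_model_components,
+ #                      tree_root, tokenizer, max_length=20,
+ #                      inference_mode='world_model_tree_of_thought',
+ #                      beam_size=5, n_tokens_predict=3,
+ #                      mcts_iterations=10, exploration_constant=1.414)
+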
+ def train_epoch_world_model(world_model_components, train_loader, optimizer, scheduler, scaler, args, model_transformer, state_dim, embed_dim, input_dim):
+     representation_network, dynamics_network, prediction_network, action_encoder, ppo_agent, _ = world_model_components
+     representation_network.train()
+     dynamics_network.train()
+     prediction_network.train()
+     action_encoder.train()
+     ppo_agent.policy_network.train()
+
+     total_loss = 0.0
+     optimizer.zero_grad()
+     print(f"Starting World Model training epoch with {len(train_loader)} batches...")
+
+     for i, batch in enumerate(train_loader):
+         print(f"Processing batch {i+1}/{len(train_loader)}...")
+
+         # Move batches to the device
+         src_batch = batch['input_ids'].to(device)
+         tgt_batch = batch['labels'].to(device)
+
+         with torch.amp.autocast(device_type='cuda'):
+             print("Forward pass through Transformer (frozen)...")
+             with torch.no_grad():
+                 transformer_output = model_transformer(src_batch, tgt_batch[:, :-1])
+
+             # World Model - Representation
+             state_representation = representation_network(transformer_output)
+
+             # For simplicity, let's assume true actions are provided (e.g., next tokens)
+             true_actions = tgt_batch[:, :-1]
+             action_sequences = true_actions
+
+             # Get action embeddings
+             action_embeddings = action_encoder(action_sequences)
+
+             # Apply dynamics network
+             predicted_next_state_batch = dynamics_network(state_representation, action_embeddings)
+
+             # Prediction Network - Policy logits and value
+             policy_logits, value_estimates = prediction_network(predicted_next_state_batch)
+
+             # Define true_policy and true_value as placeholders on the GPU
+             true_policy = F.one_hot(true_actions, num_classes=input_dim).float()
+             true_value = torch.zeros_like(value_estimates).to(device)
+
+             # Compute individual losses
+             ppo_loss = ppo_agent.compute_loss(
+                 state_representation,
+                 torch.zeros_like(true_actions, dtype=torch.float32).to(device),
+                 true_actions,
+                 torch.zeros_like(value_estimates, dtype=torch.float32).to(device),
+                 torch.zeros_like(value_estimates, dtype=torch.float32).to(device)
+             )
+
+             info_nce = InfoNCE_Loss()(
+                 state_representation.view(-1, state_dim),
+                 F.dropout(state_representation.view(-1, state_dim), p=0.1, training=True)
+             )
+
+             covariance = CovarianceRegularization()(predicted_next_state_batch.view(-1, predicted_next_state_batch.size(-1)))
+             dynamics_loss = DynamicsPerformanceLoss()(state_representation, predicted_next_state_batch)
+
+             perturbed_next_state = predicted_next_state_batch + torch.randn_like(predicted_next_state_batch) * 0.01
+             thought_loss = ThoughtConsistencyLoss()(predicted_next_state_batch, perturbed_next_state)
+
+             pv_loss = PolicyValueJointLoss()(policy_logits, true_policy, value_estimates.squeeze(-1), true_value.squeeze(-1))
+             action_diversity = ActionDiversityReward()(action_embeddings.view(-1, embed_dim))
+
+             mcts_best_values = torch.zeros(true_actions.size(0)).to(device)
+             etv = ExpectedThoughtValueLoss()(mcts_best_values)
+
+             visit_counts = torch.ones(true_actions.size(0), policy_logits.size(-1)).to(device)
+             exploration = ExplorationRegularization()(visit_counts)
+
+             old_policy = F.softmax(policy_logits.detach(), dim=-1)
+             new_policy = F.softmax(policy_logits, dim=-1)
+             kl_loss = KL_DivergenceLoss()(old_policy, new_policy)
+
+             # Total Loss
+             loss = (
+                 ppo_loss +
+                 info_nce +
+                 covariance +
+                 dynamics_loss +
+                 thought_loss +
+                 pv_loss +
+                 action_diversity +
+                 etv +
+                 exploration +
+                 kl_loss
+             )
+             loss = loss / args.accumulation_steps
+
+         print("Backward pass...")
+         scaler.scale(loss).backward()
+
+         if (i + 1) % args.accumulation_steps == 0 or (i + 1) == len(train_loader):
+             print("Gradient clipping...")
+             scaler.unscale_(optimizer)
+             torch.nn.utils.clip_grad_norm_(
+                 [param for group in optimizer.param_groups for param in group['params']],
+                 args.max_grad_norm
+             )
+
+             print("Optimizer step...")
+             scaler.step(optimizer)
+             scaler.update()
+
+             print("Zeroing gradients...")
+             optimizer.zero_grad()
+
+             print("Updating learning rate...")
+             scheduler.step()
+
+         total_loss += loss.item() * args.accumulation_steps
+
+         # Print individual losses and the total loss for this batch
+         print(f"Batch {i+1} completed. Losses:")
+         print(f"  PPO Loss: {ppo_loss.item():.4f}")
+         print(f"  InfoNCE Loss: {info_nce.item():.4f}")
+         print(f"  Covariance Loss: {covariance.item():.4f}")
+         print(f"  Dynamics Loss: {dynamics_loss.item():.4f}")
+         print(f"  Thought Consistency Loss: {thought_loss.item():.4f}")
+         print(f"  Policy-Value Loss: {pv_loss.item():.4f}")
+         print(f"  Action Diversity Loss: {action_diversity.item():.4f}")
+         print(f"  Expected Thought Value Loss: {etv.item():.4f}")
+         print(f"  Exploration Loss: {exploration.item():.4f}")
+         print(f"  KL Divergence Loss: {kl_loss.item():.4f}")
+         print(f"  Total Loss: {loss.item():.4f}")
+
+     avg_loss = total_loss / len(train_loader)
+     print(f"World Model training epoch completed. Average loss: {avg_loss:.4f}")
+     return avg_loss
+
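+ # Note on gradient accumulation (as used in both training loops): the loss is
+ # divided by accumulation_steps before backward(), so summing gradients over
+ # that many batches approximates one update on an effective batch of
+ #     effective_batch_size = batch_size * accumulation_steps,
+ # and total_loss re-multiplies by accumulation_steps to log the unscaled loss.
+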
+ def train_epoch_language_model(model, train_loader, optimizer, scheduler, scaler, args):
+     model.train()
+     total_loss = 0.0
+     optimizer.zero_grad()
+     print(f"Starting Language Model training epoch with {len(train_loader)} batches...")
+
+     for i, batch in enumerate(train_loader):
+         input_ids = batch['input_ids'].to(device)
+         labels = batch['labels'].to(device)
+
+         with autocast():
+             outputs = model(input_ids, input_ids)
+             logits = outputs.view(-1, outputs.size(-1))
+             labels = labels.view(-1)
+             loss = F.cross_entropy(logits, labels, ignore_index=model.embedding.padding_idx)
+             loss = loss / args.accumulation_steps
+
+         scaler.scale(loss).backward()
+
+         if (i + 1) % args.accumulation_steps == 0 or (i + 1) == len(train_loader):
+             scaler.unscale_(optimizer)
+             torch.nn.utils.clip_grad_norm_(
+                 [param for group in optimizer.param_groups for param in group['params']],
+                 args.max_grad_norm
+             )
+             scaler.step(optimizer)
+             scaler.update()
+             optimizer.zero_grad()
+             scheduler.step()
+
+         total_loss += loss.item() * args.accumulation_steps
+         print(f"Batch {i + 1} completed. Current loss: {loss.item():.4f}")
+
+     avg_loss = total_loss / len(train_loader)
+     print(f"Language Model training epoch completed. Average loss: {avg_loss:.4f}")
+     return avg_loss
+
+
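+ # Example invocations (sketch; the flag names assume parse_args() exposes each
+ # args attribute used below, e.g. args.mode and args.inference_mode, as CLI flags):
+ #
+ #     python lightbulb_inf.py --mode train --train_mode world_model
+ #     python lightbulb_inf.py --mode inference --inference_mode world_model_tree_of_thought --query "..."
+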
1709
+ def main():
1710
+ args = parse_args()
1711
+ print("Arguments parsed successfully.")
1712
+
1713
+ # Create save directory
1714
+ os.makedirs(args.save_dir, exist_ok=True)
1715
+ print(f"Save directory created: {args.save_dir}")
1716
+
1717
+ # Load tokenizer
1718
+ print("Loading tokenizer...")
1719
+ tokenizer = AutoTokenizer.from_pretrained(args.model_name)
1720
+ if tokenizer.pad_token is None:
1721
+ tokenizer.pad_token = tokenizer.eos_token
1722
+ print("Tokenizer loaded successfully.")
1723
+
1724
+ # Define padding_idx and input dimension based on tokenizer
1725
+ padding_idx = tokenizer.pad_token_id
1726
+ input_dim = len(tokenizer)
1727
+
1728
+ # Initialize the Transformer model on GPU
1729
+ print("Initializing Transformer model...")
1730
+ model_transformer = Transformer(
1731
+ input_dim=input_dim,
1732
+ d_model=128,
1733
+ num_heads=4,
1734
+ num_layers=4,
1735
+ d_ff=256,
1736
+ num_experts=2,
1737
+ output_dim=input_dim,
1738
+ dropout=0.1,
1739
+ top_k=2
1740
+ ).to(device)
1741
+ model_transformer.train()
1742
+ print("Transformer model initialized on device.")
1743
+
1744
+ # Define model parameters (adjusted for speed)
1745
+ d_model = 128
1746
+ state_dim = 128
1747
+ action_dim = d_model
1748
+ hidden_dim = 256
1749
+ vocab_dim = input_dim
1750
+ embed_dim = d_model
1751
+
1752
+ # Define World Model components
1753
+ representation_network = RepresentationNetwork(vocab_dim, d_model, state_dim).to(device)
1754
+ dynamics_network = DynamicsNetwork(state_dim, action_dim, hidden_dim).to(device)
1755
+ prediction_network = PredictionNetwork(state_dim, input_dim, 1).to(device)
1756
+ action_encoder = ActionEncoder(input_dim, action_dim).to(device)
1757
+
1758
+ # Initialize PPO Agent
1759
+ ppo_agent = PPOAgent(
1760
+ policy_network=prediction_network,
1761
+ optimizer=optim.AdamW(prediction_network.parameters(), lr=args.learning_rate),
1762
+ clip_epsilon=0.2,
1763
+ entropy_coef=0.01,
1764
+ value_coef=0.5
1765
+ )
1766
+
1767
+ # Bundle World Model components
1768
+ world_model_components = (representation_network, dynamics_network, prediction_network, action_encoder, ppo_agent, model_transformer)
1769
+
1770
+ print(f"Current mode: {args.mode}")
1771
+ if args.mode == 'train':
1772
+ print("Loading and preprocessing data...")
1773
+ train_loader, eval_loader = load_data(args, tokenizer)
1774
+ print("Data loaded and preprocessed successfully.")
1775
+
1776
+ # Optimizer and Scheduler
1777
+ optimizer = optim.AdamW(
1778
+ list(representation_network.parameters()) +
1779
+ list(dynamics_network.parameters()) +
1780
+ list(prediction_network.parameters()) +
1781
+ list(action_encoder.parameters()),
1782
+ lr=args.learning_rate, weight_decay=args.weight_decay
1783
+ ) if args.train_mode == 'world_model' else optim.AdamW(model_transformer.parameters(), lr=args.learning_rate)
1784
+ scheduler = CosineAnnealingLR(optimizer, T_max=args.num_epochs)
1785
+ scaler = GradScaler()
1786
+
1787
+ print(f"Starting {args.train_mode} training...")
1788
+
1789
+ for epoch in range(args.num_epochs):
1790
+ if args.train_mode == 'world_model':
1791
+ avg_loss = train_epoch_world_model(
1792
+ world_model_components,
1793
+ train_loader,
1794
+ optimizer,
1795
+ scheduler,
1796
+ scaler,
1797
+ args,
1798
+ model_transformer,
1799
+ state_dim,
1800
+ embed_dim,
1801
+ input_dim
1802
+ )
1803
+ else:
1804
+ avg_loss = train_epoch_language_model(
1805
+ model_transformer,
1806
+ train_loader,
1807
+ optimizer,
1808
+ scheduler,
1809
+ scaler,
1810
+ args
1811
+ )
1812
+
1813
+ print(f"{args.train_mode.capitalize()} training epoch {epoch + 1} completed. Average loss: {avg_loss:.4f}")
1814
+
1815
+ if args.train_mode == 'world_model':
1816
+ save_all_models(model_transformer, representation_network, dynamics_network, prediction_network, action_encoder, args.save_dir, epoch + 1)
1817
+ print(f"Models saved for epoch {epoch + 1}")
1818
+ else:
1819
+ torch.save(model_transformer.state_dict(), os.path.join(args.save_dir, f'language_model_epoch_{epoch + 1}.pt'))
1820
+ print(f"Language model saved for epoch {epoch + 1}")
1821
+
1822
+ print("Training completed.")
1823
+
1824
+ elif args.mode == 'inference':
1825
+ print("Entering inference mode...")
1826
+ # Build Tree of Thought if needed
1827
+ print("Building Tree of Thought...")
1828
+ tree_root = build_tree_of_thought()
1829
+ print("Tree of Thought built successfully.")
1830
+
1831
+ # Generate action list
1832
+ print("Generating action list...")
1833
+ action_list = []
1834
+ traverse_tree(tree_root, action_list)
1835
+ print(f"Action list generated. Total actions: {len(action_list)}")
1836
+
1837
+ # Create mappings
1838
+ global action_to_index, index_to_action
1839
+ action_to_index = {action: idx for idx, action in enumerate(action_list)}
1840
+ index_to_action = {idx: action for action, idx in action_to_index.items()}
1841
+ action_vocab_size = len(action_list)
1842
+ print(f"Action mappings created. Vocabulary size: {action_vocab_size}")
1843
+
1844
+ # Initialize or load models based on the load_model argument
1845
+ if args.load_model:
1846
+ print(f"Loading saved model from {args.load_model}")
1847
+ # Load the saved models
1848
+ model_transformer.load_state_dict(torch.load(os.path.join(args.load_model, 'transformer_model.pt')))
1849
+ representation_network.load_state_dict(torch.load(os.path.join(args.load_model, 'representation_network.pt')))
1850
+ dynamics_network.load_state_dict(torch.load(os.path.join(args.load_model, 'dynamics_network.pt')))
1851
+
1852
+ # Load prediction network and adjust its size if necessary
1853
+ saved_state_dict = torch.load(os.path.join(args.load_model, 'prediction_network.pt'))
1854
+ saved_vocab_size = saved_state_dict['policy_head.weight'].size(0)
1855
+ if saved_vocab_size != action_vocab_size:
1856
+ print(f"Adjusting prediction network size from {saved_vocab_size} to {action_vocab_size}")
1857
+ prediction_network = PredictionNetwork(state_dim, saved_vocab_size, 1).to(device)
1858
+ prediction_network.load_state_dict(saved_state_dict)
1859
+ prediction_network.policy_head = nn.Linear(prediction_network.state_dim, action_vocab_size).to(device)
1860
+ else:
1861
+ prediction_network = PredictionNetwork(state_dim, action_vocab_size, 1).to(device)
1862
+ prediction_network.load_state_dict(saved_state_dict)
1863
+
1864
+ action_encoder.load_state_dict(torch.load(os.path.join(args.load_model, 'action_encoder.pt')))
1865
+ else:
1866
+ print("Using newly initialized models")
1867
+
1868
+ # Prepare the components
1869
+ world_model_components = (representation_network, dynamics_network, prediction_network, action_encoder, ppo_agent, model_transformer)
1870
+
1871
+ print("Starting inference loop...")
1872
+ while True:
1873
+ if args.query:
1874
+ query = args.query
1875
+ args.query = None # Reset query for next iteration
1876
+ else:
1877
+ query = input("Please enter your query (or type 'exit' to quit): ")
1878
+ if query.lower() == 'exit':
1879
+ break
1880
+
1881
+ print(f"Processing query: {query}")
1882
+ result = infer(query, world_model_components, tree_root, tokenizer,
1883
+ max_length=args.max_length,
1884
+ inference_mode=args.inference_mode,
1885
+ beam_size=args.beam_size,
1886
+ n_tokens_predict=args.n_tokens_predict,
1887
+ mcts_iterations=args.mcts_iterations,
1888
+ exploration_constant=args.mcts_exploration_constant)
1889
+
1890
+
1891
+ if args.inference_mode == 'without_world_model':
1892
+ print("Generated Text:")
1893
+ print(result)
1894
+ else:
1895
+ print("Generated Thought Sequence:")
1896
+ for thought in result:
1897
+ print(thought)
1898
+
1899
+ print("\n") # Add a newline for better readability between queries
1900
+
1901
+ print("Inference completed.")
1902
+
1903
+ else:
1904
+ print(f"Invalid mode: {args.mode}. Please choose 'train' or 'inference'.")
1905
+
1906
+ if __name__ == '__main__':
1907
+ main()