Victor Major (vmajor)

AI & ML interests
Application of ML and AI to the classification of physical-world inputs, in particular the ability of models to generalize to unseen real-world data and independently categorize new observations.

Organizations
None yet
Discussions

Qwen3-235B-A22B-128K-Q8_0 does not run with vllm or llama.cpp (4)
#9 opened 6 months ago by vmajor
I am so confused (1)
#1 opened 8 months ago by vmajor
Are the Q4 and Q5 models R1 or R1-Zero (18)
#2 opened 10 months ago by gng2info
Suggestion for censorship disclosure - odd responses from R1 (7)
#25 opened 10 months ago by vmajor
Hardware requirements? (29)
#19 opened 10 months ago by JohnnieB
Advice on running llama-server with Q2_K_L quant (3)
#6 opened 10 months ago by vmajor
llama.cpp cannot load Q6_K model (5)
#3 opened 10 months ago by vmajor
How do I make the model output JSON? (6)
#14 opened 12 months ago by vmajor
add merge tag
#1 opened almost 2 years ago by davanstrien
Pytorch format available? (3)
#7 opened almost 2 years ago by vmajor
Thank you (23)
#9 opened almost 2 years ago by ehartford
Ability to generalise (6)
#1 opened about 2 years ago by vmajor
fp16 version (4)
#2 opened over 2 years ago by vmajor
Is it possible to run this model on the CPU? (1)
#20 opened over 2 years ago by vmajor
COVID-19? (2)
#1 opened over 2 years ago by vmajor
Still referring to COVID-19? (2)
#1 opened over 2 years ago by vmajor
stable-vicuna-13B-GPTQ-4bit.compat.no-act-order.safetensors not compatible with "standard" settings (25)
#1 opened over 2 years ago by vmajor
Loading and interacting with Stable-vicuna-13B-GPTQ through python without webui (22)
#6 opened over 2 years ago by AbdouS