Nasir
Your-Zeshan
AI & ML interests
Hardware: MacBook Pro (M4) with 48 GB RAM; typically 36 GB is allocated to the VM running LM Studio/MLX/local inference for best model performance.
Preferences: High-performance Apple Silicon-optimized models, particularly in quantized formats (GGUF/MLX) that run efficiently under the MLX framework. 4-bit and 8-bit quantizations are preferred to keep memory usage within the 36 GB VM allocation (see the memory-budget sketch below).
Local Apps: Jan, MLX LM, LM Studio, Ollama, all configured for this hardware (a minimal MLX LM example follows).
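As a rough sanity check for whether a quantized model fits the 36 GB VM allocation, weight memory can be estimated from parameter count and bits per weight. This is a minimal sketch under the assumption that weights dominate usage; the KV cache and activations add overhead on top, and the model sizes in the loop are illustrative.

```python
# Rough weight-memory estimate for quantized models.
# Assumption: weights dominate memory; KV cache and activations add extra overhead.

def est_weight_gib(params_billion: float, bits_per_weight: float) -> float:
    """Approximate weight memory in GiB for a given parameter count and quantization."""
    total_bytes = params_billion * 1e9 * bits_per_weight / 8
    return total_bytes / (1024 ** 3)

BUDGET_GIB = 36.0  # VM allocation noted above

for params_b, bits in [(80, 8), (80, 4), (80, 3), (30, 4)]:
    size = est_weight_gib(params_b, bits)
    verdict = "fits" if size <= BUDGET_GIB else "too large"
    print(f"{params_b}B @ {bits}-bit ~ {size:.1f} GiB -> {verdict}")
```

By this estimate an 80B model needs roughly 37 GiB of weights at 4-bit but only about 28 GiB at 3-bit, which is consistent with favoring the 3-bit Qwen3-Next-80B quant listed under Recent Activity.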
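For the MLX LM entry, here is a minimal sketch of loading and running one of the liked MLX quants with the mlx-lm Python package (assumed installed via pip install mlx-lm). Exact generate() keyword arguments can vary between mlx-lm versions, so treat this as illustrative rather than definitive.

```python
# Minimal local-inference sketch with the mlx-lm Python package.
# The repo ID is one of the liked MLX quants above; generate() keywords may
# differ slightly across mlx-lm versions.
from mlx_lm import load, generate

# Downloads (or reuses a cached copy of) the quantized model from the Hub.
model, tokenizer = load("NexVeridian/Qwen3-Next-80B-A3B-Instruct-3bit")

reply = generate(
    model,
    tokenizer,
    prompt="Summarize the MLX framework in one sentence.",
    max_tokens=64,
)
print(reply)
```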
Recent Activity
Liked a model: nightmedia/Qwen3-Next-80B-A3B-Instruct-qx86n-mlx (3 days ago)
Liked a model: NexVeridian/Qwen3-Next-80B-A3B-Instruct-3bit (15 days ago)
Upvoted a collection: ERNIE 4.5 (about 2 months ago)
Organizations
None yet
