Nasir
Your-Zeshan
AI & ML interests
Hardware: MacBook Pro (M4 chip) with 48 GB RAM; typically 36 GB is allocated to the VM used for LM Studio/MLX/local inference to keep model performance optimal.
Preferences: High-performance models optimized for Apple Silicon, particularly quantized formats (GGUF/MLX) and builds tuned for the MLX framework. Prefer 4-bit and 8-bit quantized models so memory usage stays within the 36 GB VM allocation.
Local Apps: Jan, MLX LM, LM Studio, Ollama, all configured for the MacBook Pro (M4) hardware above (see the sketch below).
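A minimal sketch of the kind of local MLX workflow this setup targets, assuming the mlx-lm Python package is installed; the model id is a placeholder for any 4-bit MLX quantization small enough for the 36 GB allocation:

```python
# Minimal sketch: load a 4-bit MLX-quantized model and run a prompt locally on Apple Silicon.
# The model id below is a placeholder (assumption); substitute any 4-bit MLX build
# that fits comfortably within the 36 GB VM allocation.
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/Qwen2.5-7B-Instruct-4bit")  # placeholder model id

prompt = "Summarize the trade-offs of 4-bit vs 8-bit quantization."
response = generate(model, tokenizer, prompt=prompt, max_tokens=200)
print(response)
```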
Recent Activity
liked a Space (14 days ago): baidu/ERNIE-4.5-VL-28B-A3B-Thinking
liked a model (about 1 month ago): nightmedia/Qwen3-Next-80B-A3B-Instruct-qx86n-mlx
liked a model (about 2 months ago): NexVeridian/Qwen3-Next-80B-A3B-Instruct-3bit
Organizations
None yet