llama_model_loader: loaded meta data with 34 key-value pairs and 723 tensors from Reflection-Llama-3.1-70B-IMat-GGUF/Reflection-Llama-3.1-70B.Q8_0.gguf.hardlink.gguf (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.type str = model
llama_model_loader: - kv 2: general.name str = Meta Llama 3 70B Instruct
llama_model_loader: - kv 3: general.organization str = Meta Llama
llama_model_loader: - kv 4: general.finetune str = Instruct
llama_model_loader: - kv 5: general.basename str = Meta-Llama-3
llama_model_loader: - kv 6: general.size_label str = 70B
llama_model_loader: - kv 7: general.license str = llama3.1
llama_model_loader: - kv 8: general.base_model.count u32 = 1
llama_model_loader: - kv 9: general.base_model.0.name str = Meta Llama 3.1 70B Instruct
llama_model_loader: - kv 10: general.base_model.0.organization str = Meta Llama
llama_model_loader: - kv 11: general.base_model.0.repo_url str = https://huggingface.co/meta-llama/Met...
llama_model_loader: - kv 12: general.tags arr[str,1] = ["text-generation"]
llama_model_loader: - kv 13: llama.block_count u32 = 80
llama_model_loader: - kv 14: llama.context_length u32 = 8192
llama_model_loader: - kv 15: llama.embedding_length u32 = 8192
llama_model_loader: - kv 16: llama.feed_forward_length u32 = 28672
llama_model_loader: - kv 17: llama.attention.head_count u32 = 64
llama_model_loader: - kv 18: llama.attention.head_count_kv u32 = 8
llama_model_loader: - kv 19: llama.rope.freq_base f32 = 500000.000000
llama_model_loader: - kv 20: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 21: general.file_type u32 = 7
llama_model_loader: - kv 22: llama.vocab_size u32 = 128262
llama_model_loader: - kv 23: llama.rope.dimension_count u32 = 128
llama_model_loader: - kv 24: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 25: tokenizer.ggml.pre str = llama-bpe
llama_model_loader: - kv 26: tokenizer.ggml.tokens arr[str,128262] = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 27: tokenizer.ggml.token_type arr[i32,128262] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 28: tokenizer.ggml.merges arr[str,280147] = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv 29: tokenizer.ggml.bos_token_id u32 = 128000
llama_model_loader: - kv 30: tokenizer.ggml.eos_token_id u32 = 128009
llama_model_loader: - kv 31: tokenizer.ggml.padding_token_id u32 = 128009
llama_model_loader: - kv 32: tokenizer.chat_template str = {% set loop_messages = messages %}{% ...
llama_model_loader: - kv 33: general.quantization_version u32 = 2
llama_model_loader: - type f32: 161 tensors
llama_model_loader: - type q8_0: 562 tensors
llm_load_vocab: special tokens cache size = 262
llm_load_vocab: token to piece cache size = 0.8000 MB
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = llama
llm_load_print_meta: vocab type = BPE
llm_load_print_meta: n_vocab = 128262
llm_load_print_meta: n_merges = 280147
llm_load_print_meta: vocab_only = 0
llm_load_print_meta: n_ctx_train = 8192
llm_load_print_meta: n_embd = 8192
llm_load_print_meta: n_layer = 80
llm_load_print_meta: n_head = 64
llm_load_print_meta: n_head_kv = 8
llm_load_print_meta: n_rot = 128
llm_load_print_meta: n_swa = 0
llm_load_print_meta: n_embd_head_k = 128
llm_load_print_meta: n_embd_head_v = 128
llm_load_print_meta: n_gqa = 8
llm_load_print_meta: n_embd_k_gqa = 1024
llm_load_print_meta: n_embd_v_gqa = 1024
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-05
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale = 0.0e+00
llm_load_print_meta: n_ff = 28672
llm_load_print_meta: n_expert = 0
llm_load_print_meta: n_expert_used = 0
llm_load_print_meta: causal attn = 1
llm_load_print_meta: pooling type = 0
llm_load_print_meta: rope type = 0
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 500000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn = 8192
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: ssm_d_conv = 0
llm_load_print_meta: ssm_d_inner = 0
llm_load_print_meta: ssm_d_state = 0
llm_load_print_meta: ssm_dt_rank = 0
llm_load_print_meta: ssm_dt_b_c_rms = 0
llm_load_print_meta: model type = 70B
llm_load_print_meta: model ftype = Q8_0
llm_load_print_meta: model params = 70.55 B
llm_load_print_meta: model size = 69.82 GiB (8.50 BPW)
llm_load_print_meta: general.name = Meta Llama 3 70B Instruct
llm_load_print_meta: BOS token = 128000 '<|begin_of_text|>'
llm_load_print_meta: EOS token = 128009 '<|eot_id|>'
llm_load_print_meta: PAD token = 128009 '<|eot_id|>'
llm_load_print_meta: LF token = 128 'Ä'
llm_load_print_meta: EOT token = 128009 '<|eot_id|>'
llm_load_print_meta: max token length = 256
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:
  Device 0: NVIDIA GeForce RTX 4090, compute capability 8.9, VMM: yes
llm_load_tensors: ggml ctx size = 0.68 MiB
llm_load_tensors: offloading 25 repeating layers to GPU
llm_load_tensors: offloaded 25/81 layers to GPU
llm_load_tensors: CPU buffer size = 71494.38 MiB
llm_load_tensors: CUDA0 buffer size = 21676.56 MiB
....................................................................................................
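As a quick cross-check of the load report above, the derived attention and size figures can be reproduced from the raw metadata. A minimal Python sketch, using only constants copied from the log (nothing is read from the GGUF file itself):

    # Sanity-check the derived numbers in the llm_load_print_meta report above.
    n_head        = 64    # llama.attention.head_count
    n_head_kv     = 8     # llama.attention.head_count_kv
    n_embd        = 8192  # llama.embedding_length
    n_embd_head_k = 128   # llama.rope.dimension_count

    n_gqa = n_head // n_head_kv                # grouped-query attention factor
    n_embd_k_gqa = n_embd_head_k * n_head_kv   # per-layer K width after GQA
    assert n_embd // n_head == n_embd_head_k   # head dim is consistent
    assert n_gqa == 8 and n_embd_k_gqa == 1024 # matches the log

    # Model size from the reported parameter count and bits-per-weight.
    params = 70.55e9                           # "model params = 70.55 B"
    bpw    = 8.50                              # "(8.50 BPW)"
    size_gib = params * bpw / 8 / 2**30
    print(f"{size_gib:.2f} GiB")               # ~69.81 GiB vs. 69.82 GiB reported

The small gap between 69.81 and 69.82 GiB is expected, since the parameter count is rounded to 70.55 B in the log.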
llama_new_context_with_model: n_ctx = 512
llama_new_context_with_model: n_batch = 512
llama_new_context_with_model: n_ubatch = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base = 500000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init: CUDA_Host KV buffer size = 110.00 MiB
llama_kv_cache_init: CUDA0 KV buffer size = 50.00 MiB
llama_new_context_with_model: KV self size = 160.00 MiB, K (f16): 80.00 MiB, V (f16): 80.00 MiB
llama_new_context_with_model: CUDA_Host output buffer size = 0.49 MiB
llama_new_context_with_model: CUDA0 compute buffer size = 1331.19 MiB
llama_new_context_with_model: CUDA_Host compute buffer size = 17.01 MiB
llama_new_context_with_model: graph nodes = 2566
llama_new_context_with_model: graph splits = 609
system_info: n_threads = 25 (n_threads_batch = 25) / 32 | AVX = 1 | AVX_VNNI = 0 | AVX2 = 1 | AVX512 = 1 | AVX512_VBMI = 1 | AVX512_VNNI = 1 | AVX512_BF16 = 1 | FMA = 1 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 |
compute_imatrix: tokenizing the input ..
compute_imatrix: tokenization took 43.174 ms
compute_imatrix: computing over 125 chunks with batch_size 512
compute_imatrix: 5.89 seconds per pass - ETA 12.25 minutes
[1]5.9985,[2]4.4764,[3]3.8989,[4]4.7385,[5]4.7986,[6]4.0443,[7]4.1299,[8]4.5462,[9]4.8014,
save_imatrix: stored collected data after 10 chunks in Reflection-Llama-3.1-70B-IMat-GGUF/imatrix.dat
[10]4.4081,[11]4.8686,[12]5.3513,[13]5.8343,[14]6.2641,[15]6.5275,[16]6.8746,[17]7.0103,[18]6.7349,[19]6.4256,
save_imatrix: stored collected data after 20 chunks in Reflection-Llama-3.1-70B-IMat-GGUF/imatrix.dat
[20]6.4326,[21]6.5717,[22]6.5525,[23]6.7670,[24]6.7271,[25]7.0591,[26]7.0410,[27]6.6238,[28]6.2935,[29]6.3051,
save_imatrix: stored collected data after 30 chunks in Reflection-Llama-3.1-70B-IMat-GGUF/imatrix.dat
[30]6.2833,[31]5.9767,[32]5.7087,[33]5.5869,[34]5.4917,[35]5.5808,[36]5.6471,[37]5.6256,[38]5.7035,[39]5.8553,
save_imatrix: stored collected data after 40 chunks in Reflection-Llama-3.1-70B-IMat-GGUF/imatrix.dat
[40]5.9525,[41]5.7458,[42]5.5510,[43]5.3823,[44]5.2344,[45]5.2025,[46]5.1741,[47]5.2933,[48]5.3845,[49]5.4845,
save_imatrix: stored collected data after 50 chunks in Reflection-Llama-3.1-70B-IMat-GGUF/imatrix.dat
[50]5.4510,[51]5.5448,[52]5.6486,[53]5.7359,[54]5.8005,[55]5.8907,[56]5.9464,[57]6.0125,[58]6.0514,[59]6.0705,
save_imatrix: stored collected data after 60 chunks in Reflection-Llama-3.1-70B-IMat-GGUF/imatrix.dat
[60]6.0508,[61]6.0632,[62]6.1140,[63]6.1803,[64]6.1305,[65]6.1261,[66]6.1501,[67]6.1475,[68]6.1542,[69]6.1526,
save_imatrix: stored collected data after 70 chunks in Reflection-Llama-3.1-70B-IMat-GGUF/imatrix.dat
[70]6.1726,[71]6.1811,[72]6.2016,[73]6.1905,[74]6.1620,[75]6.1710,[76]6.1825,[77]6.1710,[78]6.1771,[79]6.2167,
save_imatrix: stored collected data after 80 chunks in Reflection-Llama-3.1-70B-IMat-GGUF/imatrix.dat
[80]6.2429,[81]6.2421,[82]6.2574,[83]6.2922,[84]6.2218,[85]6.2174,[86]6.2301,[87]6.2520,[88]6.2912,[89]6.3056,
save_imatrix: stored collected data after 90 chunks in Reflection-Llama-3.1-70B-IMat-GGUF/imatrix.dat
[90]6.2753,[91]6.2366,[92]6.1995,[93]6.1766,[94]6.1462,[95]6.1181,[96]6.0893,[97]6.1151,[98]6.1628,[99]6.2287,
save_imatrix: stored collected data after 100 chunks in Reflection-Llama-3.1-70B-IMat-GGUF/imatrix.dat
[100]6.2908,[101]6.3385,[102]6.4423,[103]6.4692,[104]6.5022,[105]6.4584,[106]6.4757,[107]6.4469,[108]6.3831,[109]6.3132,
save_imatrix: stored collected data after 110 chunks in Reflection-Llama-3.1-70B-IMat-GGUF/imatrix.dat
[110]6.3544,[111]6.4023,[112]6.4253,[113]6.4320,[114]6.4691,[115]6.5127,[116]6.5334,[117]6.5611,[118]6.5905,[119]6.5512,
save_imatrix: stored collected data after 120 chunks in Reflection-Llama-3.1-70B-IMat-GGUF/imatrix.dat
[120]6.4765,[121]6.3970,[122]6.3229,[123]6.2496,[124]6.2033,[125]6.1428,
save_imatrix: stored collected data after 125 chunks in Reflection-Llama-3.1-70B-IMat-GGUF/imatrix.dat

llama_print_timings: load time = 28195.97 ms
llama_print_timings: sample time = 0.00 ms / 1 runs (0.00 ms per token, inf tokens per second)
llama_print_timings: prompt eval time = 697528.68 ms / 64000 tokens (10.90 ms per token, 91.75 tokens per second)
llama_print_timings: eval time = 0.00 ms / 1 runs (0.00 ms per token, inf tokens per second)
llama_print_timings: total time = 720919.57 ms / 64001 tokens

Final estimate: PPL = 6.1428 +/- 0.08656
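For reference, the KV-cache and throughput numbers in this run are internally consistent. A back-of-the-envelope sketch in Python, assuming the f16 KV layout (2 bytes per element) and that the cache splits in proportion to the 25 of 80 offloaded repeating layers:

    # Cross-check the context/KV-cache sizes and the imatrix run totals above.
    n_layer, n_ctx = 80, 512
    n_embd_kv      = 1024          # n_embd_k_gqa == n_embd_v_gqa
    bytes_f16      = 2
    kv_total_mib = 2 * n_layer * n_ctx * n_embd_kv * bytes_f16 / 2**20
    assert kv_total_mib == 160.0   # "KV self size = 160.00 MiB"

    # 25/80 layers on GPU -> 50 MiB on CUDA0, 110 MiB on the host.
    gpu_mib  = kv_total_mib * 25 / n_layer
    host_mib = kv_total_mib * 55 / n_layer
    assert (gpu_mib, host_mib) == (50.0, 110.0)

    # 125 chunks x 512 tokens = 64000 tokens, matching the prompt-eval count.
    chunks, chunk_len = 125, 512
    tokens = chunks * chunk_len
    prompt_eval_ms = 697528.68
    print(tokens, f"{tokens / (prompt_eval_ms / 1000):.2f} tok/s")  # 64000, ~91.75

The final PPL of 6.1428 also matches the running value printed at chunk [125], as expected for a cumulative estimate over all 125 chunks.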