main: build = 2998 (9588f196)
main: built with cc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0 for x86_64-linux-gnu
main: seed = 1716657411
llama_model_loader: loaded meta data with 23 key-value pairs and 291 tensors from RoLlama2-7b-Instruct-IMat-GGUF/RoLlama2-7b-Instruct.gguf (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.name str = RoLlama2-7b-Instruct
llama_model_loader: - kv 2: llama.block_count u32 = 32
llama_model_loader: - kv 3: llama.context_length u32 = 4096
llama_model_loader: - kv 4: llama.embedding_length u32 = 4096
llama_model_loader: - kv 5: llama.feed_forward_length u32 = 11008
llama_model_loader: - kv 6: llama.attention.head_count u32 = 32
llama_model_loader: - kv 7: llama.attention.head_count_kv u32 = 32
llama_model_loader: - kv 8: llama.rope.freq_base f32 = 10000.000000
llama_model_loader: - kv 9: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 10: general.file_type u32 = 0
llama_model_loader: - kv 11: llama.vocab_size u32 = 32004
llama_model_loader: - kv 12: llama.rope.dimension_count u32 = 128
llama_model_loader: - kv 13: tokenizer.ggml.model str = llama
llama_model_loader: - kv 14: tokenizer.ggml.pre str = default
llama_model_loader: - kv 15: tokenizer.ggml.tokens arr[str,32004] = ["<unk>", "<s>", "</s>", "<0x00>", "<...
llama_model_loader: - kv 16: tokenizer.ggml.scores arr[f32,32004] = [0.000000, 0.000000, 0.000000, 0.0000...
llama_model_loader: - kv 17: tokenizer.ggml.token_type arr[i32,32004] = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ...
llama_model_loader: - kv 18: tokenizer.ggml.bos_token_id u32 = 1
llama_model_loader: - kv 19: tokenizer.ggml.eos_token_id u32 = 2
llama_model_loader: - kv 20: tokenizer.ggml.add_bos_token bool = true
llama_model_loader: - kv 21: tokenizer.ggml.add_eos_token bool = false
llama_model_loader: - kv 22: general.quantization_version u32 = 2
llama_model_loader: - type f32: 291 tensors
llm_load_vocab: special tokens definition check successful ( 263/32004 ).
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = llama
llm_load_print_meta: vocab type = SPM
llm_load_print_meta: n_vocab = 32004
llm_load_print_meta: n_merges = 0
llm_load_print_meta: n_ctx_train = 4096
llm_load_print_meta: n_embd = 4096
llm_load_print_meta: n_head = 32
llm_load_print_meta: n_head_kv = 32
llm_load_print_meta: n_layer = 32
llm_load_print_meta: n_rot = 128
llm_load_print_meta: n_embd_head_k = 128
llm_load_print_meta: n_embd_head_v = 128
llm_load_print_meta: n_gqa = 1
llm_load_print_meta: n_embd_k_gqa = 4096
llm_load_print_meta: n_embd_v_gqa = 4096
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-05
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale = 0.0e+00
llm_load_print_meta: n_ff = 11008
llm_load_print_meta: n_expert = 0
llm_load_print_meta: n_expert_used = 0
llm_load_print_meta: causal attn = 1
llm_load_print_meta: pooling type = 0
llm_load_print_meta: rope type = 0
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 10000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_yarn_orig_ctx = 4096
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: ssm_d_conv = 0
llm_load_print_meta: ssm_d_inner = 0
llm_load_print_meta: ssm_d_state = 0
llm_load_print_meta: ssm_dt_rank = 0
llm_load_print_meta: model type = 7B
llm_load_print_meta: model ftype = all F32
llm_load_print_meta: model params = 6.74 B
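The reported 6.74 B parameter count is consistent with the hyperparameters printed above (n_layer = 32, n_embd = 4096, n_ff = 11008, n_vocab = 32004). A quick sanity check, sketched in Python; the breakdown into embedding, attention, FFN, and norm weights assumes the standard Llama-2 architecture rather than anything stated in the log:

```python
# Reconstruct the Llama-2-7B parameter count from the metadata above.
n_layer, n_embd, n_ff, n_vocab = 32, 4096, 11008, 32004

embed = n_vocab * n_embd            # token embedding table
attn = 4 * n_embd * n_embd          # Wq, Wk, Wv, Wo (per layer; n_head == n_head_kv here)
ffn = 3 * n_embd * n_ff             # gate, up, down projections (per layer)
norms = 2 * n_embd                  # attn_norm + ffn_norm (per layer)
head = n_vocab * n_embd + n_embd    # output head + final norm

total = embed + n_layer * (attn + ffn + norms) + head
print(f"{total / 1e9:.2f} B")  # 6.74 B, matching "model params = 6.74 B"
```

At 32 bits per weight (ftype "all F32"), those ~6.74e9 parameters also account for the 25.10 GiB model size on the next line of the log.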
llm_load_print_meta: model size = 25.10 GiB (32.00 BPW)
llm_load_print_meta: general.name = RoLlama2-7b-Instruct
llm_load_print_meta: BOS token = 1 '<s>'
llm_load_print_meta: EOS token = 2 '</s>'
llm_load_print_meta: UNK token = 0 '<unk>'
llm_load_print_meta: LF token = 13 '<0x0A>'
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
ggml_cuda_init: CUDA_USE_TENSOR_CORES: yes
ggml_cuda_init: found 1 CUDA devices:
  Device 0: NVIDIA GeForce RTX 4090, compute capability 8.9, VMM: yes
llm_load_tensors: ggml ctx size = 0.30 MiB
llm_load_tensors: offloading 20 repeating layers to GPU
llm_load_tensors: offloaded 20/33 layers to GPU
llm_load_tensors: CPU buffer size = 25705.14 MiB
llm_load_tensors: CUDA0 buffer size = 15440.62 MiB
...................................................................................................
llama_new_context_with_model: n_ctx = 512
llama_new_context_with_model: n_batch = 512
llama_new_context_with_model: n_ubatch = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base = 10000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init: CUDA_Host KV buffer size = 96.00 MiB
llama_kv_cache_init: CUDA0 KV buffer size = 160.00 MiB
llama_new_context_with_model: KV self size = 256.00 MiB, K (f16): 128.00 MiB, V (f16): 128.00 MiB
llama_new_context_with_model: CUDA_Host output buffer size = 0.12 MiB
llama_new_context_with_model: CUDA0 compute buffer size = 570.57 MiB
llama_new_context_with_model: CUDA_Host compute buffer size = 17.01 MiB
llama_new_context_with_model: graph nodes = 1030
llama_new_context_with_model: graph splits = 136
system_info: n_threads = 25 / 32 | AVX = 1 | AVX_VNNI = 0 | AVX2 = 1 | AVX512 = 1 | AVX512_VBMI = 1 | AVX512_VNNI = 1 | AVX512_BF16 = 1 | FMA = 1 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 |
compute_imatrix: tokenizing the input ..
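The KV-cache numbers above follow directly from the context parameters: with n_ctx = 512, n_layer = 32, n_embd_k_gqa = n_embd_v_gqa = 4096, and f16 storage (2 bytes per element), each layer's K + V slab is 8 MiB, and the 20/12 GPU/CPU layer split explains the 160 MiB / 96 MiB buffer sizes. A small check of that arithmetic:

```python
# KV cache size from the context parameters in the log (f16 = 2 bytes/element).
n_ctx, n_layer = 512, 32
n_embd_k_gqa = n_embd_v_gqa = 4096
bytes_f16 = 2

per_layer = n_ctx * (n_embd_k_gqa + n_embd_v_gqa) * bytes_f16  # K + V for one layer
total_mib = n_layer * per_layer / 2**20
print(total_mib)  # 256.0 -> "KV self size = 256.00 MiB"

# 20 of 32 repeating layers are offloaded, so the cache splits accordingly:
gpu_mib = 20 * per_layer / 2**20  # 160.0 -> CUDA0 KV buffer
cpu_mib = 12 * per_layer / 2**20  #  96.0 -> CUDA_Host KV buffer
```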
compute_imatrix: tokenization took 134.534 ms
compute_imatrix: computing over 234 chunks with batch_size 512
compute_imatrix: 1.07 seconds per pass - ETA 4.15 minutes
[1]6.7005,[2]4.6847,[3]4.5108,[4]5.0395,[5]5.6345,[6]5.7096,[7]5.1835,[8]5.5952,[9]5.7962,
save_imatrix: stored collected data after 10 chunks in RoLlama2-7b-Instruct-IMat-GGUF/imatrix.dat
[10]6.1092,[11]6.1191,[12]5.5838,[13]5.7159,[14]5.6740,[15]6.1152,[16]6.2804,[17]6.6363,[18]6.8269,[19]7.0312,
save_imatrix: stored collected data after 20 chunks in RoLlama2-7b-Instruct-IMat-GGUF/imatrix.dat
[20]7.2110,[21]7.2486,[22]7.3765,[23]7.0451,[24]6.8337,[25]6.8506,[26]6.5745,[27]6.4531,[28]6.2121,[29]6.2059,
save_imatrix: stored collected data after 30 chunks in RoLlama2-7b-Instruct-IMat-GGUF/imatrix.dat
[30]6.3211,[31]6.4225,[32]6.4934,[33]6.4554,[34]6.4658,[35]6.4658,[36]6.2056,[37]6.0473,[38]5.9893,[39]5.9557,
save_imatrix: stored collected data after 40 chunks in RoLlama2-7b-Instruct-IMat-GGUF/imatrix.dat
[40]5.9206,[41]5.8352,[42]5.8871,[43]5.9102,[44]5.9600,[45]6.0277,[46]6.1043,[47]6.1477,[48]6.2849,[49]6.3972,
save_imatrix: stored collected data after 50 chunks in RoLlama2-7b-Instruct-IMat-GGUF/imatrix.dat
[50]6.5215,[51]6.6117,[52]6.7180,[53]6.7125,[54]6.6487,[55]6.5960,[56]6.6775,[57]6.7240,[58]6.7473,[59]6.8159,
save_imatrix: stored collected data after 60 chunks in RoLlama2-7b-Instruct-IMat-GGUF/imatrix.dat
[60]6.8965,[61]6.9310,[62]6.9934,[63]7.0350,[64]7.0991,[65]7.1198,[66]7.1559,[67]7.1915,[68]7.2275,[69]7.2732,
save_imatrix: stored collected data after 70 chunks in RoLlama2-7b-Instruct-IMat-GGUF/imatrix.dat
[70]7.3077,[71]7.3522,[72]7.3969,[73]7.3381,[74]7.2846,[75]7.2409,[76]7.2044,[77]7.2001,[78]7.1674,[79]7.1217,
save_imatrix: stored collected data after 80 chunks in RoLlama2-7b-Instruct-IMat-GGUF/imatrix.dat
[80]7.0613,[81]7.0511,[82]7.0086,[83]6.9737,[84]6.9908,[85]7.0142,[86]7.0291,[87]7.0622,[88]7.0617,[89]7.0343,
save_imatrix: stored collected data after 90 chunks in RoLlama2-7b-Instruct-IMat-GGUF/imatrix.dat
[90]7.0282,[91]7.0298,[92]7.0163,[93]7.0261,[94]7.0124,[95]7.0157,[96]7.0354,[97]7.0555,[98]7.0258,[99]6.9884,
save_imatrix: stored collected data after 100 chunks in RoLlama2-7b-Instruct-IMat-GGUF/imatrix.dat
[100]6.9991,[101]7.0214,[102]7.0149,[103]6.9958,[104]6.9613,[105]6.9460,[106]6.9636,[107]6.9742,[108]6.9615,[109]6.9532,
save_imatrix: stored collected data after 110 chunks in RoLlama2-7b-Instruct-IMat-GGUF/imatrix.dat
[110]6.9334,[111]6.9471,[112]6.9669,[113]6.9714,[114]6.9936,[115]6.9930,[116]6.9924,[117]6.9886,[118]7.0000,[119]6.9776,
save_imatrix: stored collected data after 120 chunks in RoLlama2-7b-Instruct-IMat-GGUF/imatrix.dat
[120]6.9771,[121]6.9669,[122]6.9392,[123]6.9593,[124]6.9526,[125]6.9559,[126]6.9442,[127]6.9412,[128]6.9540,[129]6.9283,
save_imatrix: stored collected data after 130 chunks in RoLlama2-7b-Instruct-IMat-GGUF/imatrix.dat
[130]6.9012,[131]6.8814,[132]6.8839,[133]6.8495,[134]6.8527,[135]6.8238,[136]6.7985,[137]6.7638,[138]6.7283,[139]6.6945,
save_imatrix: stored collected data after 140 chunks in RoLlama2-7b-Instruct-IMat-GGUF/imatrix.dat
[140]6.6647,[141]6.6382,[142]6.6132,[143]6.6142,[144]6.6093,[145]6.5929,[146]6.5679,[147]6.5633,[148]6.5580,[149]6.5536,
save_imatrix: stored collected data after 150 chunks in RoLlama2-7b-Instruct-IMat-GGUF/imatrix.dat
[150]6.5441,[151]6.5279,[152]6.5201,[153]6.5088,[154]6.4984,[155]6.5112,[156]6.4883,[157]6.4857,[158]6.4982,[159]6.4920,
save_imatrix: stored collected data after 160 chunks in RoLlama2-7b-Instruct-IMat-GGUF/imatrix.dat
[160]6.4920,[161]6.5029,[162]6.5026,[163]6.5179,[164]6.5230,[165]6.5400,[166]6.5462,[167]6.5466,[168]6.5516,[169]6.5563,
save_imatrix: stored collected data after 170 chunks in RoLlama2-7b-Instruct-IMat-GGUF/imatrix.dat
[170]6.5812,[171]6.5788,[172]6.5882,[173]6.6165,[174]6.6283,[175]6.6511,[176]6.6671,[177]6.6859,[178]6.6988,[179]6.7259,
save_imatrix: stored collected data after 180 chunks in RoLlama2-7b-Instruct-IMat-GGUF/imatrix.dat
[180]6.7385,[181]6.7897,[182]6.8050,[183]6.8292,[184]6.8312,[185]6.8364,[186]6.8479,[187]6.8537,[188]6.8407,[189]6.8462,
save_imatrix: stored collected data after 190 chunks in RoLlama2-7b-Instruct-IMat-GGUF/imatrix.dat
[190]6.8543,[191]6.8644,[192]6.8716,[193]6.9020,[194]6.8978,[195]6.8769,[196]6.9122,[197]6.9455,[198]6.9734,[199]7.0272,
save_imatrix: stored collected data after 200 chunks in RoLlama2-7b-Instruct-IMat-GGUF/imatrix.dat
[200]7.0753,[201]7.0835,[202]7.0873,[203]7.0552,[204]7.0582,[205]7.0700,[206]7.0962,[207]7.0864,[208]7.0836,[209]7.0864,
save_imatrix: stored collected data after 210 chunks in RoLlama2-7b-Instruct-IMat-GGUF/imatrix.dat
[210]7.0981,[211]7.1154,[212]7.1139,[213]7.1097,[214]7.1161,[215]7.1363,[216]7.1500,[217]7.1572,[218]7.1493,[219]7.1459,
save_imatrix: stored collected data after 220 chunks in RoLlama2-7b-Instruct-IMat-GGUF/imatrix.dat
[220]7.1379,[221]7.1379,[222]7.1357,[223]7.1600,[224]7.1470,[225]7.1489,[226]7.1394,[227]7.1735,[228]7.2152,[229]7.2590,
save_imatrix: stored collected data after 230 chunks in RoLlama2-7b-Instruct-IMat-GGUF/imatrix.dat
[230]7.2989,[231]7.3122,[232]7.2983,[233]7.2814,[234]7.2620,
save_imatrix: stored collected data after 234 chunks in RoLlama2-7b-Instruct-IMat-GGUF/imatrix.dat

llama_print_timings: load time = 2800.86 ms
llama_print_timings: sample time = 0.00 ms / 1 runs ( 0.00 ms per token, inf tokens per second)
llama_print_timings: prompt eval time = 230095.46 ms / 119808 tokens ( 1.92 ms per token, 520.69 tokens per second)
llama_print_timings: eval time = 0.00 ms / 1 runs ( 0.00 ms per token, inf tokens per second)
llama_print_timings: total time = 232816.11 ms / 119809 tokens

Final estimate: PPL = 7.2620 +/- 0.07084
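The bracketed numbers in the log are running perplexity estimates: after chunk k, the printed value is exp of the mean negative log-likelihood over all tokens processed so far, which is why the last chunk value [234]7.2620 coincides with the final estimate. A minimal sketch of that running computation, using synthetic NLL values rather than the actual data, plus a check of the reported throughput:

```python
import math

def running_perplexity(token_nlls):
    """Running PPL after each token: exp of the mean negative log-likelihood so far."""
    out, total = [], 0.0
    for i, nll in enumerate(token_nlls, start=1):
        total += nll
        out.append(math.exp(total / i))
    return out

# Synthetic example: if every token has NLL = ln 2, the running PPL stays at ~2.
ppls = running_perplexity([math.log(2)] * 4)

# Throughput check against the timing lines: 119808 tokens in 230095.46 ms.
tok_per_s = 119808 / (230095.46 / 1000)
print(f"{tok_per_s:.2f}")  # 520.69, matching the log
```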