main: build = 2998 (9588f196)
main: built with cc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0 for x86_64-linux-gnu
main: seed = 1716650944
llama_model_loader: loaded meta data with 23 key-value pairs and 291 tensors from RoLlama2-7b-Base-IMat-GGUF/RoLlama2-7b-Base.gguf (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.name str = RoLlama2-7b-Base
llama_model_loader: - kv 2: llama.block_count u32 = 32
llama_model_loader: - kv 3: llama.context_length u32 = 4096
llama_model_loader: - kv 4: llama.embedding_length u32 = 4096
llama_model_loader: - kv 5: llama.feed_forward_length u32 = 11008
llama_model_loader: - kv 6: llama.attention.head_count u32 = 32
llama_model_loader: - kv 7: llama.attention.head_count_kv u32 = 32
llama_model_loader: - kv 8: llama.rope.freq_base f32 = 10000.000000
llama_model_loader: - kv 9: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 10: general.file_type u32 = 0
llama_model_loader: - kv 11: llama.vocab_size u32 = 32000
llama_model_loader: - kv 12: llama.rope.dimension_count u32 = 128
llama_model_loader: - kv 13: tokenizer.ggml.model str = llama
llama_model_loader: - kv 14: tokenizer.ggml.pre str = default
llama_model_loader: - kv 15: tokenizer.ggml.tokens arr[str,32000] = ["<unk>", "<s>", "</s>", "<0x00>", "<...
llama_model_loader: - kv 16: tokenizer.ggml.scores arr[f32,32000] = [-1000.000000, -1000.000000, -1000.00...
llama_model_loader: - kv 17: tokenizer.ggml.token_type arr[i32,32000] = [4, 4, 4, 6, 6, 6, 6, 6, 6, 6, 6, 6, ...
llama_model_loader: - kv 18: tokenizer.ggml.bos_token_id u32 = 1
llama_model_loader: - kv 19: tokenizer.ggml.eos_token_id u32 = 2
llama_model_loader: - kv 20: tokenizer.ggml.add_bos_token bool = true
llama_model_loader: - kv 21: tokenizer.ggml.add_eos_token bool = false
llama_model_loader: - kv 22: general.quantization_version u32 = 2
llama_model_loader: - type f32: 291 tensors
llm_load_vocab: special tokens definition check successful ( 259/32000 ).
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = llama
llm_load_print_meta: vocab type = SPM
llm_load_print_meta: n_vocab = 32000
llm_load_print_meta: n_merges = 0
llm_load_print_meta: n_ctx_train = 4096
llm_load_print_meta: n_embd = 4096
llm_load_print_meta: n_head = 32
llm_load_print_meta: n_head_kv = 32
llm_load_print_meta: n_layer = 32
llm_load_print_meta: n_rot = 128
llm_load_print_meta: n_embd_head_k = 128
llm_load_print_meta: n_embd_head_v = 128
llm_load_print_meta: n_gqa = 1
llm_load_print_meta: n_embd_k_gqa = 4096
llm_load_print_meta: n_embd_v_gqa = 4096
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-05
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale = 0.0e+00
llm_load_print_meta: n_ff = 11008
llm_load_print_meta: n_expert = 0
llm_load_print_meta: n_expert_used = 0
llm_load_print_meta: causal attn = 1
llm_load_print_meta: pooling type = 0
llm_load_print_meta: rope type = 0
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 10000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_yarn_orig_ctx = 4096
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: ssm_d_conv = 0
llm_load_print_meta: ssm_d_inner = 0
llm_load_print_meta: ssm_d_state = 0
llm_load_print_meta: ssm_dt_rank = 0
llm_load_print_meta: model type = 7B
llm_load_print_meta: model ftype = all F32
llm_load_print_meta: model params = 6.74 B
llm_load_print_meta: model size = 25.10 GiB (32.00 BPW)
llm_load_print_meta: general.name = RoLlama2-7b-Base
llm_load_print_meta: BOS token = 1 '<s>'
llm_load_print_meta: EOS token = 2 '</s>'
llm_load_print_meta: UNK token = 0 '<unk>'
llm_load_print_meta: LF token = 13 '<0x0A>'
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
ggml_cuda_init: CUDA_USE_TENSOR_CORES: yes
ggml_cuda_init: found 1 CUDA devices:
  Device 0: NVIDIA GeForce RTX 4090, compute capability 8.9, VMM: yes
llm_load_tensors: ggml ctx size = 0.30 MiB
llm_load_tensors: offloading 20 repeating layers to GPU
llm_load_tensors: offloaded 20/33 layers to GPU
llm_load_tensors: CPU buffer size = 25705.02 MiB
llm_load_tensors: CUDA0 buffer size = 15440.62 MiB
...................................................................................................
llama_new_context_with_model: n_ctx = 512
llama_new_context_with_model: n_batch = 512
llama_new_context_with_model: n_ubatch = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base = 10000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init: CUDA_Host KV buffer size = 96.00 MiB
llama_kv_cache_init: CUDA0 KV buffer size = 160.00 MiB
llama_new_context_with_model: KV self size = 256.00 MiB, K (f16): 128.00 MiB, V (f16): 128.00 MiB
llama_new_context_with_model: CUDA_Host output buffer size = 0.12 MiB
llama_new_context_with_model: CUDA0 compute buffer size = 570.50 MiB
llama_new_context_with_model: CUDA_Host compute buffer size = 17.01 MiB
llama_new_context_with_model: graph nodes = 1030
llama_new_context_with_model: graph splits = 136
system_info: n_threads = 25 / 32 | AVX = 1 | AVX_VNNI = 0 | AVX2 = 1 | AVX512 = 1 | AVX512_VBMI = 1 | AVX512_VNNI = 1 | AVX512_BF16 = 1 | FMA = 1 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 |
compute_imatrix: tokenizing the input ..
compute_imatrix: tokenization took 125.811 ms
compute_imatrix: computing over 234 chunks with batch_size 512
compute_imatrix: 1.01 seconds per pass - ETA 3.95 minutes
[1]6.5112,[2]4.4562,[3]4.2933,[4]4.8976,[5]5.5800,[6]5.7003,[7]5.1407,[8]5.5090,[9]5.7099,
save_imatrix: stored collected data after 10 chunks in RoLlama2-7b-Base-IMat-GGUF/imatrix.dat
[10]5.9901,[11]5.9894,[12]5.4802,[13]5.6410,[14]5.6296,[15]6.0470,[16]6.2778,[17]6.6477,[18]6.8520,[19]7.0643,
save_imatrix: stored collected data after 20 chunks in RoLlama2-7b-Base-IMat-GGUF/imatrix.dat
[20]7.2517,[21]7.3158,[22]7.4136,[23]7.0787,[24]6.8746,[25]6.9046,[26]6.5979,[27]6.4651,[28]6.2251,[29]6.2063,
save_imatrix: stored collected data after 30 chunks in RoLlama2-7b-Base-IMat-GGUF/imatrix.dat
[30]6.3143,[31]6.3944,[32]6.4643,[33]6.4125,[34]6.4368,[35]6.4331,[36]6.1717,[37]6.0020,[38]5.9439,[39]5.9147,
save_imatrix: stored collected data after 40 chunks in RoLlama2-7b-Base-IMat-GGUF/imatrix.dat
[40]5.8779,[41]5.8027,[42]5.8540,[43]5.8879,[44]5.9478,[45]6.0150,[46]6.1004,[47]6.1635,[48]6.2985,[49]6.4025,
save_imatrix: stored collected data after 50 chunks in RoLlama2-7b-Base-IMat-GGUF/imatrix.dat
[50]6.5184,[51]6.5962,[52]6.6901,[53]6.6712,[54]6.5976,[55]6.5375,[56]6.6040,[57]6.6458,[58]6.6603,[59]6.7214,
save_imatrix: stored collected data after 60 chunks in RoLlama2-7b-Base-IMat-GGUF/imatrix.dat
[60]6.7964,[61]6.8227,[62]6.8727,[63]6.9066,[64]6.9666,[65]6.9723,[66]6.9924,[67]7.0200,[68]7.0479,[69]7.1045,
save_imatrix: stored collected data after 70 chunks in RoLlama2-7b-Base-IMat-GGUF/imatrix.dat
[70]7.1573,[71]7.1911,[72]7.2446,[73]7.1923,[74]7.1487,[75]7.1168,[76]7.0949,[77]7.0958,[78]7.0658,[79]7.0225,
save_imatrix: stored collected data after 80 chunks in RoLlama2-7b-Base-IMat-GGUF/imatrix.dat
[80]6.9714,[81]6.9669,[82]6.9332,[83]6.9034,[84]6.9238,[85]6.9416,[86]6.9542,[87]6.9856,[88]6.9754,[89]6.9436,
save_imatrix: stored collected data after 90 chunks in RoLlama2-7b-Base-IMat-GGUF/imatrix.dat
[90]6.9388,[91]6.9316,[92]6.9052,[93]6.9047,[94]6.8704,[95]6.8598,[96]6.8683,[97]6.8744,[98]6.8523,[99]6.8122,
save_imatrix: stored collected data after 100 chunks in RoLlama2-7b-Base-IMat-GGUF/imatrix.dat
[100]6.8280,[101]6.8493,[102]6.8454,[103]6.8313,[104]6.7957,[105]6.7833,[106]6.8049,[107]6.8225,[108]6.8166,[109]6.8088,
save_imatrix: stored collected data after 110 chunks in RoLlama2-7b-Base-IMat-GGUF/imatrix.dat
[110]6.7904,[111]6.8092,[112]6.8310,[113]6.8367,[114]6.8622,[115]6.8613,[116]6.8605,[117]6.8584,[118]6.8650,[119]6.8409,
save_imatrix: stored collected data after 120 chunks in RoLlama2-7b-Base-IMat-GGUF/imatrix.dat
[120]6.8407,[121]6.8334,[122]6.8168,[123]6.8380,[124]6.8310,[125]6.8360,[126]6.8229,[127]6.8227,[128]6.8378,[129]6.8105,
save_imatrix: stored collected data after 130 chunks in RoLlama2-7b-Base-IMat-GGUF/imatrix.dat
[130]6.7825,[131]6.7693,[132]6.7773,[133]6.7399,[134]6.7478,[135]6.7146,[136]6.6873,[137]6.6523,[138]6.6196,[139]6.5866,
save_imatrix: stored collected data after 140 chunks in RoLlama2-7b-Base-IMat-GGUF/imatrix.dat
[140]6.5604,[141]6.5322,[142]6.5060,[143]6.5056,[144]6.5018,[145]6.4858,[146]6.4614,[147]6.4538,[148]6.4476,[149]6.4416,
save_imatrix: stored collected data after 150 chunks in RoLlama2-7b-Base-IMat-GGUF/imatrix.dat
[150]6.4331,[151]6.4143,[152]6.4082,[153]6.3941,[154]6.3841,[155]6.3965,[156]6.3716,[157]6.3683,[158]6.3783,[159]6.3709,
save_imatrix: stored collected data after 160 chunks in RoLlama2-7b-Base-IMat-GGUF/imatrix.dat
[160]6.3711,[161]6.3793,[162]6.3778,[163]6.3884,[164]6.3915,[165]6.4053,[166]6.4116,[167]6.4109,[168]6.4165,[169]6.4187,
save_imatrix: stored collected data after 170 chunks in RoLlama2-7b-Base-IMat-GGUF/imatrix.dat
[170]6.4411,[171]6.4371,[172]6.4493,[173]6.4749,[174]6.4882,[175]6.5136,[176]6.5301,[177]6.5492,[178]6.5616,[179]6.5917,
save_imatrix: stored collected data after 180 chunks in RoLlama2-7b-Base-IMat-GGUF/imatrix.dat
[180]6.6051,[181]6.6538,[182]6.6720,[183]6.6970,[184]6.6986,[185]6.7028,[186]6.7188,[187]6.7233,[188]6.7161,[189]6.7217,
save_imatrix: stored collected data after 190 chunks in RoLlama2-7b-Base-IMat-GGUF/imatrix.dat
[190]6.7322,[191]6.7441,[192]6.7508,[193]6.7804,[194]6.7786,[195]6.7564,[196]6.7951,[197]6.8311,[198]6.8616,[199]6.9126,
save_imatrix: stored collected data after 200 chunks in RoLlama2-7b-Base-IMat-GGUF/imatrix.dat
[200]6.9532,[201]6.9634,[202]6.9688,[203]6.9288,[204]6.9336,[205]6.9453,[206]6.9705,[207]6.9605,[208]6.9563,[209]6.9660,
save_imatrix: stored collected data after 210 chunks in RoLlama2-7b-Base-IMat-GGUF/imatrix.dat
[210]6.9813,[211]6.9957,[212]6.9933,[213]6.9930,[214]7.0022,[215]7.0195,[216]7.0291,[217]7.0319,[218]7.0229,[219]7.0206,
save_imatrix: stored collected data after 220 chunks in RoLlama2-7b-Base-IMat-GGUF/imatrix.dat
[220]7.0145,[221]7.0166,[222]7.0143,[223]7.0363,[224]7.0224,[225]7.0250,[226]7.0196,[227]7.0529,[228]7.0876,[229]7.1273,
save_imatrix: stored collected data after 230 chunks in RoLlama2-7b-Base-IMat-GGUF/imatrix.dat
[230]7.1639,[231]7.1754,[232]7.1593,[233]7.1410,[234]7.1214,
save_imatrix: stored collected data after 234 chunks in RoLlama2-7b-Base-IMat-GGUF/imatrix.dat

llama_print_timings: load time = 2722.80 ms
llama_print_timings: sample time = 0.00 ms / 1 runs ( 0.00 ms per token, inf tokens per second)
llama_print_timings: prompt eval time = 225723.02 ms / 119808 tokens ( 1.88 ms per token, 530.77 tokens per second)
llama_print_timings: eval time = 0.00 ms / 1 runs ( 0.00 ms per token, inf tokens per second)
llama_print_timings: total time = 228410.17 ms / 119809 tokens

Final estimate: PPL = 7.1214 +/- 0.06868
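
The log above comes from llama.cpp's imatrix tool (build 2998 predates the llama-* binary rename). As a minimal sketch, an invocation of the following shape would produce this kind of run; calibration.txt is an assumed filename, since the calibration data set is not named in the log, while the model path, output path, and the 20 offloaded layers are taken directly from the log itself.

# sketch only: calibration.txt is an assumption; the other arguments mirror the log above
./imatrix \
    -m RoLlama2-7b-Base-IMat-GGUF/RoLlama2-7b-Base.gguf \
    -f calibration.txt \
    -o RoLlama2-7b-Base-IMat-GGUF/imatrix.dat \
    -ngl 20

The imatrix.dat that the save_imatrix lines report writing every 10 chunks is the importance matrix that llama.cpp's quantization tool can later consume when producing the quantized GGUF variants.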