icefall-asr-librispeech-zipformer-2023-05-15/decoding_result/fast_beam_search/log-decode-epoch-50-avg-25-beam-20.0-max-contexts-8-max-states-64-use-averaged-model-2023-10-08-18-58-59
2023-10-08 18:58:59,577 INFO [decode.py:670] Decoding started
2023-10-08 18:58:59,578 INFO [decode.py:676] Device: cuda:0
2023-10-08 18:58:59,583 INFO [decode.py:686] {'best_train_loss': inf, 'best_valid_loss': inf, 'best_train_epoch': -1, 'best_valid_epoch': -1, 'batch_idx_train': 0, 'log_interval': 50, 'reset_interval': 200, 'valid_interval': 3000, 'feature_dim': 80, 'subsampling_factor': 4, 'warm_step': 2000, 'env_info': {'k2-version': '1.24.1', 'k2-build-type': 'Release', 'k2-with-cuda': True, 'k2-git-sha1': 'c51a0b9684442a88ee37f3ce0af686a04b66855b', 'k2-git-date': 'Mon May 1 21:38:03 2023', 'lhotse-version': '1.12.0.dev+git.891bad1.clean', 'torch-version': '1.10.0+cu102', 'torch-cuda-available': True, 'torch-cuda-version': '10.2', 'python-version': '3.8', 'icefall-git-branch': 'new-zipformer-onnx-decode', 'icefall-git-sha1': 'a53d7102-dirty', 'icefall-git-date': 'Fri Jun 30 11:38:26 2023', 'icefall-path': '/ceph-zw/workspace/zipformer/icefall_zipformer', 'k2-path': '/ceph-zw/workspace/k2/k2/k2/python/k2/__init__.py', 'lhotse-path': '/ceph-zw/workspace/share/lhotse/lhotse/__init__.py', 'hostname': 'de-74279-k2-train-9-0208143539-7dbf569d4f-r7nrb', 'IP address': '10.177.13.150'}, 'epoch': 50, 'iter': 0, 'avg': 25, 'use_averaged_model': True, 'exp_dir': PosixPath('zipformer/exp'), 'bpe_model': 'data/lang_bpe_500/bpe.model', 'lang_dir': PosixPath('data/lang_bpe_500'), 'decoding_method': 'fast_beam_search', 'beam_size': 4, 'beam': 20.0, 'ngram_lm_scale': 0.01, 'max_contexts': 8, 'max_states': 64, 'context_size': 2, 'max_sym_per_frame': 1, 'num_paths': 200, 'nbest_scale': 0.5, 'num_encoder_layers': '2,2,3,4,3,2', 'downsampling_factor': '1,2,4,8,4,2', 'feedforward_dim': '512,768,1024,1536,1024,768', 'num_heads': '4,4,4,8,4,4', 'encoder_dim': '192,256,384,512,384,256', 'query_head_dim': '32', 'value_head_dim': '12', 'pos_head_dim': '4', 'pos_dim': 48, 'encoder_unmasked_dim': '192,192,256,256,256,192', 'cnn_module_kernel': '31,31,15,15,15,31', 'decoder_dim': 512, 'joiner_dim': 512, 'causal': False, 'chunk_size': '16,32,64,-1', 'left_context_frames': '64,128,256,-1', 'use_transducer': True, 'use_ctc': False, 'full_libri': True, 'mini_libri': False, 'manifest_dir': PosixPath('data/fbank'), 'max_duration': 600, 'bucketing_sampler': True, 'num_buckets': 30, 'concatenate_cuts': False, 'duration_factor': 1.0, 'gap': 1.0, 'on_the_fly_feats': False, 'shuffle': True, 'drop_last': True, 'return_cuts': True, 'num_workers': 2, 'enable_spec_aug': True, 'spec_aug_time_warp_factor': 80, 'enable_musan': True, 'input_strategy': 'PrecomputedFeatures', 'res_dir': PosixPath('zipformer/exp/fast_beam_search'), 'suffix': 'epoch-50-avg-25-beam-20.0-max-contexts-8-max-states-64-use-averaged-model', 'blank_id': 0, 'unk_id': 2, 'vocab_size': 500}
2023-10-08 18:58:59,584 INFO [decode.py:688] About to create model
2023-10-08 18:59:00,536 INFO [decode.py:755] Calculating the averaged model over epoch range from 25 (excluded) to 50
2023-10-08 18:59:11,877 INFO [decode.py:791] Number of model parameters: 65549011
2023-10-08 18:59:11,878 INFO [asr_datamodule.py:465] About to get test-clean cuts
2023-10-08 18:59:11,884 INFO [asr_datamodule.py:472] About to get test-other cuts
2023-10-08 18:59:15,110 INFO [decode.py:562] batch 0/?, cuts processed until now is 43
2023-10-08 18:59:52,622 INFO [decode.py:562] batch 20/?, cuts processed until now is 1430
2023-10-08 19:00:10,479 INFO [zipformer.py:1728] name=None, attn_weights_entropy = tensor([3.2918, 3.3261, 4.0328, 5.8494], device='cuda:0')
2023-10-08 19:00:32,724 INFO [decode.py:562] batch 40/?, cuts processed until now is 2561
2023-10-08 19:00:33,939 INFO [decode.py:580] The transcripts are stored in zipformer/exp/fast_beam_search/recogs-test-clean-beam_20.0_max_contexts_8_max_states_64-epoch-50-avg-25-beam-20.0-max-contexts-8-max-states-64-use-averaged-model.txt
2023-10-08 19:00:34,053 INFO [utils.py:562] [test-clean-beam_20.0_max_contexts_8_max_states_64] %WER 2.21% [1160 / 52576, 125 ins, 92 del, 943 sub ]
2023-10-08 19:00:34,355 INFO [decode.py:593] Wrote detailed error stats to zipformer/exp/fast_beam_search/errs-test-clean-beam_20.0_max_contexts_8_max_states_64-epoch-50-avg-25-beam-20.0-max-contexts-8-max-states-64-use-averaged-model.txt
2023-10-08 19:00:34,358 INFO [decode.py:610]
For test-clean, WER of different settings are:
beam_20.0_max_contexts_8_max_states_64 2.21 best for test-clean | |
2023-10-08 19:00:37,030 INFO [decode.py:562] batch 0/?, cuts processed until now is 52
2023-10-08 19:01:11,751 INFO [decode.py:562] batch 20/?, cuts processed until now is 1647
2023-10-08 19:01:50,594 INFO [decode.py:562] batch 40/?, cuts processed until now is 2870
2023-10-08 19:01:51,784 INFO [decode.py:580] The transcripts are stored in zipformer/exp/fast_beam_search/recogs-test-other-beam_20.0_max_contexts_8_max_states_64-epoch-50-avg-25-beam-20.0-max-contexts-8-max-states-64-use-averaged-model.txt
2023-10-08 19:01:51,903 INFO [utils.py:562] [test-other-beam_20.0_max_contexts_8_max_states_64] %WER 4.82% [2523 / 52343, 235 ins, 230 del, 2058 sub ]
2023-10-08 19:01:52,214 INFO [decode.py:593] Wrote detailed error stats to zipformer/exp/fast_beam_search/errs-test-other-beam_20.0_max_contexts_8_max_states_64-epoch-50-avg-25-beam-20.0-max-contexts-8-max-states-64-use-averaged-model.txt
2023-10-08 19:01:52,218 INFO [decode.py:610]
For test-other, WER of different settings are:
beam_20.0_max_contexts_8_max_states_64 4.82 best for test-other | |
2023-10-08 19:01:52,218 INFO [decode.py:822] Done!
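Note (not part of the original log): the %WER figures above follow directly from the bracketed error counts, where WER = (insertions + deletions + substitutions) / reference word count. A minimal sketch checking that arithmetic against the two numbers reported in the log; the helper name `wer_percent` is illustrative, not an icefall function:

```python
# Re-derive the WER percentages reported in the log from their error counts.
# WER = (insertions + deletions + substitutions) / reference word count.
def wer_percent(ins: int, dels: int, subs: int, ref_words: int) -> float:
    return 100.0 * (ins + dels + subs) / ref_words

# test-clean: %WER 2.21% [1160 / 52576, 125 ins, 92 del, 943 sub]
assert 125 + 92 + 943 == 1160
print(f"test-clean WER: {wer_percent(125, 92, 943, 52576):.2f}%")   # 2.21%

# test-other: %WER 4.82% [2523 / 52343, 235 ins, 230 del, 2058 sub]
assert 235 + 230 + 2058 == 2523
print(f"test-other WER: {wer_percent(235, 230, 2058, 52343):.2f}%")  # 4.82%
```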