icefall_asr_tal-csasr_pruned_transducer_stateless5/log/greedy_search/log-decode-epoch-30-avg-24-context-2-max-sym-per-frame-1-use-averaged-model-2022-06-23-15-03-06
2022-06-23 15:03:06,191 INFO [decode.py:536] Decoding started
2022-06-23 15:03:06,191 INFO [decode.py:542] Device: cuda:0
2022-06-23 15:03:06,298 INFO [lexicon.py:176] Loading pre-compiled data/lang_char/Linv.pt
2022-06-23 15:03:06,315 INFO [decode.py:552] {'best_train_loss': inf, 'best_valid_loss': inf, 'best_train_epoch': -1, 'best_valid_epoch': -1, 'batch_idx_train': 0, 'log_interval': 50, 'reset_interval': 100, 'valid_interval': 2000, 'feature_dim': 80, 'subsampling_factor': 4, 'model_warm_step': 1000, 'env_info': {'k2-version': '1.15.1', 'k2-build-type': 'Release', 'k2-with-cuda': True, 'k2-git-sha1': 'f8d2dba06c000ffee36aab5b66f24e7c9809f116', 'k2-git-date': 'Thu Apr 21 12:20:34 2022', 'lhotse-version': '1.4.0.dev+git.94e9ed9.clean', 'torch-version': '1.11.0', 'torch-cuda-available': True, 'torch-cuda-version': '10.2', 'python-version': '3.8', 'icefall-git-branch': 'pruned-rnnt5-recipe-for-tal-csasr', 'icefall-git-sha1': 'c1c893b-dirty', 'icefall-git-date': 'Thu Jun 16 19:19:00 2022', 'icefall-path': '/ceph-meixu/luomingshuang/icefall', 'k2-path': '/ceph-ms/luomingshuang/k2_latest/k2/python/k2/__init__.py', 'lhotse-path': '/ceph-meixu/luomingshuang/anaconda3/envs/k2-python/lib/python3.8/site-packages/lhotse-1.4.0.dev0+git.94e9ed9.clean-py3.8.egg/lhotse/__init__.py', 'hostname': 'de-74279-k2-train-1-0307195509-54c966b95f-rtpfq', 'IP address': '10.177.22.9'}, 'epoch': 30, 'iter': 0, 'avg': 24, 'use_averaged_model': True, 'exp_dir': PosixPath('pruned_transducer_stateless5/exp'), 'lang_dir': 'data/lang_char', 'decoding_method': 'greedy_search', 'beam_size': 4, 'beam': 4, 'max_contexts': 4, 'max_states': 8, 'context_size': 2, 'max_sym_per_frame': 1, 'num_encoder_layers': 24, 'dim_feedforward': 1536, 'nhead': 8, 'encoder_dim': 384, 'decoder_dim': 512, 'joiner_dim': 512, 'manifest_dir': PosixPath('data/fbank_new'), 'max_duration': 800, 'bucketing_sampler': True, 'num_buckets': 300, 'concatenate_cuts': False, 'duration_factor': 1.0, 'gap': 1.0, 'on_the_fly_feats': False, 'shuffle': True, 'drop_last': True, 'return_cuts': True, 'num_workers': 2, 'enable_spec_aug': True, 'spec_aug_time_warp_factor': 80, 'enable_musan': True, 'input_strategy': 'PrecomputedFeatures', 'res_dir': PosixPath('pruned_transducer_stateless5/exp/greedy_search'), 'suffix': 'epoch-30-avg-24-context-2-max-sym-per-frame-1-use-averaged-model', 'blank_id': 0, 'vocab_size': 7341}
2022-06-23 15:03:06,315 INFO [decode.py:554] About to create model
2022-06-23 15:03:06,919 INFO [decode.py:621] Calculating the averaged model over epoch range from 6 (excluded) to 30
2022-06-23 15:03:13,988 INFO [decode.py:643] Number of model parameters: 102139163
2022-06-23 15:03:13,988 INFO [asr_datamodule.py:425] About to get dev cuts
2022-06-23 15:03:13,991 INFO [asr_datamodule.py:360] About to create dev dataset
2022-06-23 15:03:14,289 INFO [asr_datamodule.py:381] About to create dev dataloader
2022-06-23 15:03:14,289 INFO [asr_datamodule.py:432] About to get test cuts
2022-06-23 15:03:14,884 INFO [asr_datamodule.py:407] About to create test dataloader
2022-06-23 15:03:16,776 INFO [decode.py:447] batch 0/?, cuts processed until now is 78
2022-06-23 15:03:45,168 INFO [decode.py:464] The transcripts are stored in pruned_transducer_stateless5/exp/greedy_search/recogs-dev-greedy_search-epoch-30-avg-24-context-2-max-sym-per-frame-1-use-averaged-model.txt
2022-06-23 15:03:45,297 INFO [utils.py:410] [dev-greedy_search] %WER 7.30% [8318 / 113916, 1384 ins, 1921 del, 5013 sub ]
2022-06-23 15:03:45,639 INFO [decode.py:477] Wrote detailed error stats to pruned_transducer_stateless5/exp/greedy_search/errs-dev-greedy_search-epoch-30-avg-24-context-2-max-sym-per-frame-1-use-averaged-model.txt
2022-06-23 15:03:45,639 INFO [decode.py:494]
For dev, WER of different settings are: | |
greedy_search 7.3 best for dev | |
2022-06-23 15:03:47,602 INFO [decode.py:447] batch 0/?, cuts processed until now is 82
2022-06-23 15:04:28,459 INFO [decode.py:447] batch 50/?, cuts processed until now is 6627
2022-06-23 15:05:08,790 INFO [decode.py:447] batch 100/?, cuts processed until now is 14092
2022-06-23 15:05:14,097 INFO [decode.py:464] The transcripts are stored in pruned_transducer_stateless5/exp/greedy_search/recogs-test-greedy_search-epoch-30-avg-24-context-2-max-sym-per-frame-1-use-averaged-model.txt
2022-06-23 15:05:14,484 INFO [utils.py:410] [test-greedy_search] %WER 7.39% [24750 / 335012, 4023 ins, 5746 del, 14981 sub ]
2022-06-23 15:05:15,485 INFO [decode.py:477] Wrote detailed error stats to pruned_transducer_stateless5/exp/greedy_search/errs-test-greedy_search-epoch-30-avg-24-context-2-max-sym-per-frame-1-use-averaged-model.txt
2022-06-23 15:05:15,485 INFO [decode.py:494]
For test, WER of different settings are: | |
greedy_search 7.39 best for test | |
2022-06-23 15:05:15,486 INFO [decode.py:680] Done!
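
As a quick check, the %WER figures reported by utils.py above follow from the bracketed counts: WER = (insertions + deletions + substitutions) / reference tokens. A minimal sketch in plain Python (just arithmetic over the numbers in this log, not code from the icefall recipe):

    def wer_percent(ins: int, dels: int, subs: int, ref_tokens: int) -> float:
        """Word error rate as a percentage: (ins + del + sub) / ref_tokens."""
        return 100.0 * (ins + dels + subs) / ref_tokens

    # dev:  %WER 7.30% [8318 / 113916, 1384 ins, 1921 del, 5013 sub]
    print(f"dev  WER: {wer_percent(1384, 1921, 5013, 113916):.2f}%")
    # test: %WER 7.39% [24750 / 335012, 4023 ins, 5746 del, 14981 sub]
    print(f"test WER: {wer_percent(4023, 5746, 14981, 335012):.2f}%")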