repo | tasks | titles | dependencies | readme | __index_level_0__ |
---|---|---|---|---|---|
AngeLouCN/DC-UNet | ['medical image segmentation', 'semantic segmentation'] | ['DC-UNet: Rethinking the U-Net Architecture with Dual Channel Efficient CNN for Medical Images Segmentation'] | main.py model.py saveModel evaluateModel tversky_loss jacard iou_loss dice_coef_loss focal_tversky dice_coef tversky trainStep trans_conv2d_bn DCUNet conv2d_bn ResPath DCBlock flatten sum flatten sum flatten sum tversky makedirs write save to_json open saveModel round open subplot str imshow title savefig sum range predict close read suptitle print reshape makedirs write figure ravel len evaluateModel format print range fit int conv2d_bn add concatenate conv2d_bn range add concatenate ResPath Model conv2d_bn Input DCBlock | # DC-UNet: Rethinking the U-Net Architecture with Dual Channel Efficient CNN for Medical Images Segmentation <div align=center><img src="https://github.com/AngeLouCN/DC-UNet/blob/main/results/result.PNG" width="784" height="462" alt="Result"/></div> This repository contains the implementation of a new version U-Net (DC-UNet) used to segment different types of biomedical images. This is a binary classification task: the neural network predicts if each pixel in the biomedical images is either a region of interests (ROI) or not. The neural network structure is described in this [**paper**](https://arxiv.org/abs/2006.00414). ## Architecture of DC-UNet <div align=center><img src="https://github.com/AngeLouCN/DC-UNet/blob/main/model_architecture/DC-block.jpg" width="250" height="250" alt="DC-Block"/><img src="https://github.com/AngeLouCN/DC-UNet/blob/main/model_architecture/res_path.jpg" width="600" height="250" alt="Res-path"/></div> <div align=center><img src="https://github.com/AngeLouCN/DC-UNet/blob/main/model_architecture/dcunet.jpg" width="850" height="250" alt="DC-UNet"/></div> ## Dataset In this project, we test three datasets: - [x] Infrared Breast Dataset | 100 |
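The DC-UNet row above lists `dice_coef` and `tversky` among its loss utilities for binary ROI segmentation. A minimal pure-Python sketch of the Dice coefficient on flattened binary masks — a hypothetical illustration, not the repository's Keras implementation:

```python
def dice_coef(y_true, y_pred, smooth=1.0):
    # Dice = 2|A∩B| / (|A| + |B|); `smooth` keeps the ratio defined on
    # empty masks (a common convention -- the repo may use another constant).
    inter = sum(t * p for t, p in zip(y_true, y_pred))
    return (2.0 * inter + smooth) / (sum(y_true) + sum(y_pred) + smooth)

mask = [0, 1, 1, 1]
print(dice_coef(mask, mask))          # 1.0 for a perfect prediction
print(dice_coef(mask, [1, 0, 0, 0]))  # 0.2 for a fully disjoint prediction
```

A Dice loss for training is then simply `1 - dice_coef(...)`.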
AngryCai/BS-Nets | ['hyperspectral image classification'] | ['BS-Nets: An End-to-End Framework For Band Selection of Hyperspectral Image'] | BS_Net_FC.py utility.py Preprocessing.py BS_Net_Conv.py Helper.py __init__.py BS_Net_Conv BS_Net_FC Dataset cal_mean_spectral_angle eval_band_cv cal_mean_spectral_divergence eval_band maxabs_scale accuracy_score Processor KNN predict fit save_res_4kfolds_cv len Processor append train_test_split range predict fit asarray entropy reshape shape histogram append sum range arccos reshape dot shape sum range | # BS-Nets # Band selection (feature selection) of hyperspectral image using neural networks with attention mechanism. **Requirement** - TensorFlow-1.6 Please cite the following paper. > > [Y. Cai, X. Liu, and Z. Cai, "BS-Nets: An End-to-End Framework for Band Selection of Hyperspectral Image," IEEE Transactions on Geoscience and Remote Sensing, pp. 1-16, 2019.](https://ieeexplore.ieee.org/document/8907858 "BS-Nets: An End-to-End Framework for Band Selection of Hyperspectral Image") # Citing # @ARTICLE{8907858, | 101 |
Animadversio/FloodFillNetwork-Notes | ['boundary detection', 'semantic segmentation'] | ['Flood-Filling Networks'] | ffn/training/augmentation.py compute_partitions.py agglomeration_graph_gen.py ffn/utils/ortho_plane_visualization.py analysis_script/visualize_seed.py analysis_script/visualize_segmentation.py ffn/inference/align.py ffn/inference/inference_flags.py ffn/inference/executor.py ffn/inference/resegmentation.py ffn/training/model.py analysis_script/image_preprocess.py examples/kasthuri11_1_proc.py ffn/inference/resegmentation_pb2.py ffn/inference/consensus_pb2.py ffn/training/optimizer.py ffn/inference/resegmentation_analysis.py ffn/utils/vector_pb2.py parallel_misc/mnist_replica.py analysis_script/utils_format_convert.py neuroglancer_segment_visualize.py ffn/inference/storage.py ffn/utils/proofreading.py resegment_seed_generation_Center_Mass.py proc_segmentation_script.py train.py visualize_segmentation_script.py tissue_classify/Unet_model.py compute_partitions_parallel.py parallel_misc/asynch_training_example.py analysis_script/read_segmentation_results.py run_inference.py analysis_script/subvolume_stitching.py build_coordinates.py parallel_misc/train_multi_GPU_sync.py ffn/inference/inference_utils.py ffn/inference/inference.py ffn/utils/png_to_h5.py ffn/utils/bounding_box_pb2.py parallel_misc/cifar10_multi_gpu.py run_tc_train.py ffn/training/inputs.py ffn/utils/bounding_box.py tissue_classify/pixel_classifier2D.py analysis_script/neuroglancer_agglomeration.py tissue_classify/data_prep.py ffn/training/models/convstack_3d.py ffn/inference/seed.py ffn/training/mask.py parallel_misc/cifar10.py examples/IPL_proc.py generate_h5_file.py analysis_script/resegment_seed_generation.py analysis_script/DataFormat_convert.py run_resegment_script.py examples/p11_6_proc.py parallel_misc/cifar10_input.py run_consensus.py run_resegment.py run_inference_from_seed.py ffn/training/import_util.py ffn/training/variables.py ffn/inference/inference_pb2.py 
run_inference_script.py parallel_misc/train_multi_GPUs.py resegment_agglomeration.py ffn/utils/geom_utils.py connect_with_knossos.py ffn/inference/movement.py resegment_seed_generation_ED.py ffn/inference/consensus.py run_tc_inference.py ffn/inference/segmentation.py worker_func main _int64_feature _bytes_feature compute_partitions load_mask _query_summed_volume adjust_bboxes _summed_volume_table main load_mask compute_partitions_parallel_new _query_summed_volume compute_partitions_parallel adjust_bboxes _summed_volume_table main worker_func main merge_seg_dicts generate_seg_dict_from_dir GraphUpdater_show neuroglancer_visualize generate_seg_dict_from_dir_list main worker_func _sel symmetrize_pair_array find_projection_point _sel worker_func symmetrize_pair_array run_save_consensus main main clear_small_island proc_img_dir proc_vol zero_corrected_countless train_canvas_size save_flags get_example train_eval_size _get_offset_and_scale_map _get_permutable_axes train_ffn train_labels_size define_data_input fixed_offsets fov_moves get_batch EvalTracker main run_training_step _get_reflectable_axes train_image_size max_pred_offsets prepare_ffn main normalize_img_stack_with_mask down_sample_img_stack foo ManualAgglomeration visualize_supervoxel_size_dist export_segmentation_to_VAST export_composite_image load_segmentation_output seed_regularize worker_func _sel symmetrize_pair_array stitich_subvolume_grid merge_segment _overlap_selection subvolume_path read_image_vol_from_h5 read_segmentation_from_h5 convert_image_stack_to_h5 normalize_img_stack convert_raw_seg_stack_to_h5 show_img_with_scatter visualize_mask visualize_seed Aligner Alignment compute_consensus_for_segmentations compute_consensus ThreadingBatchExecutor BatchExecutor visualize_state DynamicImage no_halt Canvas Runner _cmap_rgb1 self_prediction_halt request_from_flags options_from_flags TimedIter StatCounter Counters compute_histogram_lut match_histogram timer_counter get_policy_fn BaseMovementPolicy 
FaceMaxMovementPolicy MovementRestrictor get_scored_move_offsets get_starting_location get_target_path get_canvas process process_point IncompleteResegmentationError evaluate_pair_resegmentation compute_iou InvalidBaseSegmentatonError evaluate_endpoint_resegmentation evaluate_segmentation_result parse_resegmentation_filename PolicyGrid3d PolicyPeaks PolicyInvertOrigins BaseSeedPolicy PolicyPeaks2d PolicyMax PolicyGrid2d relabel_volume make_labels_contiguous _get_index_dtype clear_dust clean_up split_segmentation_by_intersection reduce_id_bits split_disconnected_components dequantize_probability legacy_segmentation_path checkpoint_path threshold_segmentation object_prob_path clip_subvolume_to_bounds build_mask legacy_subvolume_path atomic_file legacy_object_prob_path load_segmentation_from_source quantize_probability load_origins segmentation_path get_existing_subvolume_path load_segmentation subvolume_path get_corner_from_path get_existing_corners save_subvolume decorated_volume reflection permute_axes xy_transpose PermuteAndReflect import_symbol soften_labels load_from_numpylike create_filename_queue ravel_lom_dims ravel_zyx_dims get_offset_scale offset_and_scale_patches unravel_lom_dims lom_radius redundant_lom lom_dims unravel_zyx_dims load_patch_coordinates_from_filename_queue load_patch_coordinates make_seed crop update_at crop_and_pad FFNModel optimizer_from_flags FractionTracker _predict_object_mask ConvStack3DFFNModel BoundingBox OrderlyOverlappingCalculator containing intersections intersection ToVector3j To3Tuple ToNumpy3Vector concat_ortho_planes cut_ortho_planes normalize_image ObjectReview GraphUpdater ObjectClassification Base main train inputs _add_loss_summaries _activation_summary inference distorted_inputs _variable_with_weight_decay _variable_on_cpu loss maybe_download_and_extract distorted_inputs inputs read_cifar10 _generate_image_and_label_batch average_gradients main train tower_loss main main run_training_step train_ffn average_gradients 
train_ffn_multi tower_loss pixel_classify_data_proc image_stack_to_vol pixel_classify_data_generator pixel_classifier_2d inference_on_image dilate_pool myUnet merge evaluate_pair_resegmentation items list defaultdict TFRecordOptions GZIP concatenate shuffle split partition_volumes info max values enumerate cumsum astype int32 masks _summed_volume_table build_mask BoundingBox clear_dust load_mask _query_summed_volume set shape prod _summed_volume_table unique info zeros sum array enumerate len append adjusted_by all array adjust_bboxes BoundingBox c_uint8 copyto Pool c_int16 clear_dust load_mask map shape RawArray sum prod set unique info time print reshape zeros array len print _query_summed_volume _summed_volume_table enumerate len BoundingBox c_uint8 copyto Pool c_int16 clear_dust load_mask map shape RawArray sum prod set unique info time print reshape zeros array len stack_n print beg_n path name_pattern convert_image_stack_to_h5 normalize_img_stack output_name load items uint64 list subvolume_path print read_image_vol_from_h5 tuple close Viewer append join sorted int subvolume_path isdir search group append listdir exists enumerate items list append join generate_seg_dict_from_dir export_composite_image seg_export_dir seg_dir read_image_vol_from_h5 render_dir visualize_supervoxel_size_dist imageh5_dir export_segmentation_to_VAST load_segmentation_output visualize sort unique zeros range len mean array nonzero reshape argmin sqrt shape nonzero unravel_index zeros sum array range list argmin astype sqrt shape unravel_index zeros sum range Parse segmentation_output_dir make_labels_contiguous ConsensusRequest segmentation_path save_subvolume append compute_consensus BoundingBox Parse segmentation_output_dir join dump bounding_box request_from_flags dictConfig start Runner stop_executor MakeDirs run object_prob_path save_segmentation log_info corner segment_at literal_eval segmentation_path sum downsample_factor unique seed_list make_canvas savez inference_request 
InferenceRequest fromarray sorted imresize glob print tuple convert crop any split save resize float array inference_on_image merge fromarray join imresize print tuple File close convert merge shape any save create_dataset zeros float inference_on_image enumerate split setdiff1d print reshape copy shape unique label append ndindex append run image_offset_scale_map split soften_labels load_from_numpylike constant batch_size train_coords reshape ones tolist train_image_size logical_and transform_axes train_labels_size offset_and_scale_patches PermuteAndReflect shuffle_batch equal load_patch_coordinates split list float32 placeholder define_tf_graph chain input_image_size sorted popleft train_image_size extend set add any crop_and_pad deque get_scored_move_offsets array logit load_example get_offsets add_patch crop_and_pad make_seed _batch range zip concatenate MakeDirs train_dir seed int time task train_ffn model_name import_symbol stat_only join str print File close shape create_dataset block_reduce array append percentile join print reshape File astype close flatten shape create_dataset append array clip input eval load subvolume_path show join xlabel print close ylabel title hist log10 figure unique savefig warning makedirs show join imresize print divmod close imshow warning figure zeros range imsave makedirs show join divmod close imshow figure zeros range imsave makedirs unique label seed_regularize center_of_mass int uint64 list print divmod set _overlap_selection array unique append argmax max merge_segment tuple exists str list segmentation_path append range add_edges_from Graph close unique load relabel_volume connected_components subvolume_path print min extend add_nodes_from save_subvolume zeros len join print File astype axis close imshow create_dataset zeros imread range enumerate join print File axis close imshow unique create_dataset fromfile zeros range len File File percentile join print reshape File astype close shape create_dataset clip show close 
imshow title scatter savefig figure show imshow close figure show imshow close figure split_segmentation_by_intersection split_min_size reduce_id_bits Process load_segmentation_from_source segmentation2 getpid unique info compute_consensus_for_segmentations segmentation1 sqrt power pi sin fromarray concat_ortho_planes expit ndarray as_strided isinstance concatenate reshape cut_ortho_planes strides shape UpdateFromPIL deltas scored_coords _cmap_rgb1 array inference_options Parse InferenceOptions Parse inference_request InferenceRequest time uint8 cumulative_distribution equalize_adapthist tolist astype array range cumulative_distribution range zeros insert set add shape unravel_index argmax array enumerate get logit move_threshold movement_policy_args movement_policy_name loads import_symbol unravel_index tuple argmax shape update str join id_a id_b point md5 Exists output_directory MakeDirs info error array log_info _deregister_client points info process_point range len int all array float sum max info int sum CopyFrom list ComputeOverlapCounts items ravel segmentation_radius start EndpointSegmentationResult parse_resegmentation_filename int max evaluate_segmentation_result PairResegmentationResult point radius compute_iou from_a segmentation_radius from_b float sum array parse_resegmentation_filename origin dict arange zeros_like csr_matrix reshape size unique len zeros_like csr_matrix reshape size unique array reshape unique shape max label any list clear_dust copy dict unique zip ravel split_disconnected_components uint64 setdefault bitwise_and dict remap_input unique zip zeros ravel bitwise_or enumerate len HasField split Rename digitize linspace nan astype float32 MakeDirs reduce_id_bits dirname tuple basename search append get_corner_from_path join Glob legacy_segmentation_path Exists checkpoint_path segmentation_path legacy_object_prob_path object_prob_path get_existing_subvolume_path shape BoundingBox intersection invert Alignment expression reshape 
channels SerializeToString mask align_and_crop logical_not shape WhichOneof clip_subvolume_to_bounds eval fatal expand_bounds decorated_volume zeros values get_existing_subvolume_path min_size threshold HasField split_cc mask convert_to_tensor transpose as_list set_shape rsplit import_module getattr info int Glob search group parse_single_example TFRecordOptions GZIP read dtype list _num_channels iter next array values set_shape py_func array tuple array list tuple extend pad zip append array tuple list full array learning_rate conv3d conv relu minimum BoundingBox isinstance end size maximum start any extend minimum list end map maximum start Vector3j isinstance ndarray isinstance list rollaxis copy append array enumerate zeros swapaxes isnan zeros sigmoid shape Server ClusterSpec name zero_fraction sub histogram scalar multiply add_to_collection _variable_on_cpu l2_loss truncated_normal_initializer join use_fp16 data_dir float16 cast join use_fp16 data_dir float16 cast lrn max_pool sparse_softmax_cross_entropy_with_logits int64 reduce_mean cast add_to_collection name get_collection apply average ExponentialMovingAverage scalar int trainable_variables batch_size name _add_loss_summaries apply_gradients histogram ExponentialMovingAverage exponential_decay scalar join urlretrieve print data_dir extractall stat makedirs read height decode_raw uint8 reshape CIFAR10Record transpose strided_slice cast int32 width depth FixedLengthRecordReader image shuffle_batch batch string_input_producer name get_collection sub add_n inference TOWER_NAME loss scalar concat reduce_mean zip append expand_dims Exists DeleteRecursively train maybe_download_and_extract train_dir num_gpus download_only data_dir len exit job_name task_index read_data_sets sorted glob print astype zeros imread enumerate len int print squeeze predict shape pad ceil zeros argmax array append enumerate | # Flood-Filling Networks Flood-Filling Networks (FFNs) are a class of neural networks designed for instance 
segmentation of complex and large shapes, particularly in volume EM datasets of brain tissue. For more details, see the related publications: * https://arxiv.org/abs/1611.00421 * https://doi.org/10.1101/200675 This is not an official Google product. # Installation No installation is required. To install the necessary dependencies, run: | 102 |
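As an intuition for the readme above: a flood-filling network grows an object mask outward from a seed, much like classical flood fill, except that a CNN — rather than an equal-value test — decides which neighboring voxels join the object. A toy breadth-first flood fill for comparison (purely illustrative, not the FFN algorithm itself):

```python
from collections import deque

def flood_fill(grid, seed):
    # Grow a mask from `seed` over 4-connected cells with the same value.
    # An FFN replaces the equality test below with a learned mask predictor.
    rows, cols = len(grid), len(grid[0])
    target = grid[seed[0]][seed[1]]
    mask, queue = {seed}, deque([seed])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and (nr, nc) not in mask and grid[nr][nc] == target):
                mask.add((nr, nc))
                queue.append((nr, nc))
    return mask

grid = [[1, 1, 0],
        [0, 1, 0],
        [0, 0, 1]]
print(sorted(flood_fill(grid, (0, 0))))  # [(0, 0), (0, 1), (1, 1)]
```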
AnirudhMukherjee/story-generation | ['story generation'] | ['Hierarchical Neural Story Generation'] | storygeneration/beam.py storygeneration/tests/test_example.py storygeneration/model.py storygeneration/multipara.py storygeneration/utils.py storygeneration/sample.py storygeneration/tests/test_beam.py storygeneration/sentiment.py flask_storygeneration.py storygeneration/train.py storygeneration/tests/test_train.py storygeneration/tests/test_utils.py forms.py storygeneration/combined.py results theme ThemeForm BeamSearch get_tweets_for_model checkSent remove_noise generateStory storyGen get_all_words Model sample main remove_noise get_all_words get_tweets_for_model main train TextLoader TestBeamMethods naive_predict TestStringMethods TestUtilsMethods TestUtilsMethods generateStory data validate_on_submit ThemeForm TextAreaField SelectField lemmatize WordNetLemmatizer pos_tag lower sub startswith append remove_noise classify word_tokenize dict sample strip checkSent split storyGen sample parse_args add_argument ArgumentParser parse_args Model add_argument ArgumentParser train TextLoader log_dir batch_size get_checkpoint_state data_dir merge_all FileWriter Model init_from vocab_size GPUOptions input_encoding seq_length | # Hierarchical Story Generation ## Abstract <p>Story generation involves developing a system that can write stories in a manner such that the similarity between the story written by the system is close to stories written by a human. The story generation system that we are working on generates a well-structured, coherent, and semantically correct short story. Our system sees to it that coherency is maintained between sentences as well as paragraphs alike and the plot as well as thematic ideas are carried along throughout the story. Certain characters as well as subplots are introduced as the story progresses. The generated story relies heavily on the input sentences as given by the user. 
The user gives a few introductory lines to the story as an input, based on which a coherent story is churned out. Keywords from the user input such as the characters and settings are extracted by the system and fed into the sequence-to-sequence model which generates the story. The user can also select a certain theme to be maintained throughout the story. The theme could be anything, for example comedy, based on which the entire mood of the story gets decided and likewise, sentences are generated to evoke a sense of light heartedness or comedy. Thus our system relies on the input provided by the user as well as the theme selected by them as a starting point to be taken into consideration while generating the story. It must be kept in mind that stories must stick to their narrative and not deviate from its intended idea. A basic text generation system might part ways with the main idea of a bunch of texts and deviate off topic altogether by shifting its focus on some unimportant pieces of text. Our system on the other hand does not deviate from the main idea of the story. We achieve this by training our system in a hierarchical fashion. Our system first generates prompts from the user input. A prompt is a short sentence or sentences which conveys the idea of input text. Our system sticks to this prompt while generating the output text. Hence, by making use of a hierarchical fashion to | 103 |
Anna996/Neural-Style-Transfer-Project | ['style transfer'] | ['A Neural Algorithm of Artistic Style'] | project.py style_layers get_loss_and_grads deprocess_image style_loss_of_layer content_loss style_loss gram_matrices_of_style_image astype clip sum function square style_layers function transpose matmul batch_flatten permute_dimensions append transpose square matmul batch_flatten shape permute_dimensions sum style_layers enumerate len calc_gradient reshape | # Neural-Style-Transfer-Project The project is based on Deep Neural Networks, which creates artistic images by learning the content of one image and the style of the other image. The project was written in Python in Colab (by Google), using TensorFlow and VGG16 - a convolutional neural network model. ![res5](https://user-images.githubusercontent.com/51260694/94795833-96a9bd80-03e6-11eb-8c19-3da633d4973f.jpg) ![res6](https://user-images.githubusercontent.com/51260694/94795855-9e696200-03e6-11eb-8367-9e703026d5e8.jpg) Credits for knowledge and inspiration: - A Neural Algorithm of Artistic Style – https://arxiv.org/pdf/1508.06576.pdf - Neural Style Transfer on Real Time Video – https://towardsdatascience.com/neural-style-transfer-on-real-time-video-with-full-implementable-code-ac2dbc0e9822 | 104 |
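The style-transfer readme above relies on Gram matrices of feature maps (`gram_matrices_of_style_image` in the listed functions). A minimal sketch of a Gram matrix over a `channels x positions` feature list — a simplified illustration, not the repository's Keras backend code:

```python
def gram_matrix(features):
    # G[i][j] is the inner product of channels i and j: which feature
    # channels co-activate, the style statistic used by Gatys et al.
    return [[sum(a * b for a, b in zip(fi, fj)) for fj in features]
            for fi in features]

feats = [[1, 2],   # channel 0 over 2 spatial positions
         [3, 4]]   # channel 1
print(gram_matrix(feats))  # [[5, 11], [11, 25]]
```

Style loss then compares the Gram matrices of the generated and style images, layer by layer.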
Annbless/DUTCode | ['video stabilization'] | ['DUT: Learning Video Stabilization by Simply Watching Unstable Videos'] | utils/image_utils.py models/DIFRINT/flowlib.py scripts/DUTStabilizer.py utils/IterativeSmooth.py scripts/StabNetStabilizer.py models/DIFRINT/models.py models/StabNet/v2_93.py models/StabNet/model.py models/DUT/MotionPro.py models/correlation/correlation.py models/StabNet/pytorch_resnet_v2_50.py models/DUT/DUT.py configs/config.py models/DUT/rf_det_module.py scripts/DIFRINTStabilizer.py utils/math_utils.py utils/MedianFilter.py models/DIFRINT/pwcNet.py models/DUT/PWCNet.py models/DUT/rf_det_so.py models/DUT/Smoother.py utils/ProjectionUtils.py utils/WarpUtils.py _FunctionCorrelation ModuleCorrelation cupy_launch FunctionCorrelation cupy_kernel make_color_wheel disp_to_flowfile read_flow read_disp_png read_image scale_image flow_error segment_flow evaluate_flow warp_image flow_to_image write_flow compute_color evaluate_flow_file show_flow visualize_flow read_flow_png UNet2 ResNet2 UNet1 ResNet UNet3 DIFNet3 DIFNet2 ResNet3 UNetFlow Discriminator estimate PwcNet KLT DUT MotionEstimation motionPropagate RFDetection KeypointDetction JacobiSolver MotionPro estimate backwarp Network RFDetModule RFDetSO Smoother transformer stabNet load_weights KitModel parse_args generateStable warpRevBundle2 cvt_img2train make_dirs filtbordmask nms topk_map filter_border soft_nms_3d im_rescale get_gauss_filter_weight soft_max_and_argmax_1d warp clip_patch generateSmooth gauss pairwise_distances ptCltoCr distance_matrix_vector MSD L2Norm SingleMotionPropagate MedianPool2d MultiMotionPropagate multiHomoEstimate MotionDistanceMeasure singleHomoEstimate HomoCalc HomoProj warpListImage mesh_warp_frame int str join replace size search group stride split flow_to_image imshow show read_flow show arctan2 hsv_to_rgb pi sqrt imshow flow_to_image zeros max print close float32 int32 resize fromfile open list Reader zeros asDirect range len tofile array close open zeros mean sqrt eps 
print min sqrt repeat compute_color max flow_error read_flow flow_error list Reader zeros asDirect range len tofile close dstack zeros array open array open minimum griddata uint8 concatenate reshape astype maximum imshow logical_or zeros range min astype float32 array max uint8 arctan2 size astype pi logical_not isnan shape sqrt floor zeros make_color_wheel range zeros transpose floor arange int view FloatTensor cpu size copy_ floor interpolate ceil cuda grid_sample expand new_ones cuda cat _transform3 item add_argument ArgumentParser unsqueeze resize VideoWriter HEIGHT VideoWriter_fourcc release WIDTH transpose append expand_dims imread range format astype join uint8 print reshape min write float32 warpListImage tqdm numpy len fromarray int COLOR_BGR2GRAY reshape BILINEAR array resize crop cvtColor makedirs INTER_LINEAR remap resize new_full int view clamp size matmul stack repeat unsqueeze eye device meshgrid gather float long cat new_full view grid_sample size matmul div unsqueeze repeat permute device meshgrid to cat new_full pad size filtbordmask zeros_like size where pad unsqueeze tensor ge range cat view size repeat device to numpy float exp size exp max permute exp size unsqueeze device to sum L2Norm len isinstance resize sum zeros_like repeat unsqueeze device to abs range cat t clamp mm sqrt transpose mm view view clamp size chunk clone matmul index_select new_tensor unsqueeze repeat permute gather float long cat norm sqrt float sum arange zeros_like RANSAC unsqueeze numpy device view shape permute meshgrid to range cat medfilt sqrt stack nonzero float MedianPool2d clone findHomography median PIXELS array arange zeros_like RANSAC unsqueeze device HomoProj view transpose MotionDistanceMeasure shape permute meshgrid to expand_dims sum range cat ones_like astype medfilt PIXELS mean sqrt stack nonzero float fit_predict MedianPool2d reshape float32 findHomography HomoCalc median numpy array bmm ones_like zeros_like view shape unsqueeze inverse permute device 
to HEIGHT WIDTH device to PIXELS long arange zeros_like RANSAC unsqueeze device HEIGHT HomoProj WIDTH view transpose MotionDistanceMeasure permute meshgrid expand_dims to sum cat FLOWC ones_like astype mean stack float fit_predict reshape float32 repeat findHomography HomoCalc PIXELS array arange RANSAC unsqueeze device HEIGHT WIDTH view transpose permute meshgrid to cat FLOWC stack float repeat findHomography PIXELS array abs sqrt rot sum append WIDTH grid_sample stack permute device HomoCalc float HEIGHT range cat concatenate transpose astype float32 from_numpy expand_dims | <h1 align="left">DUT: Learning Video Stabilization by Simply Watching Unstable Videos <a href="https://arxiv.org/pdf/2011.14574.pdf"><img src="https://img.shields.io/badge/arXiv-Paper-<COLOR>.svg" ></a> <a href="https://colab.research.google.com/drive/1i-cK-6uFKbWRjxF26uxUqHHSEvq1Ln8h?usp=sharing"> <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/> </a> </h1> <p align="center"> <a href="#introduction">Introduction</a> | <a href="#results-demo">Results Demo</a> | <a href="#installation">Installation</a> | <a href="#inference-code">Inference Code</a> | <a href="#news">News</a> | <a href="#statement">Statement</a> | | 105 |
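DUT learns to smooth unstable camera trajectories (note `IterativeSmooth` and `gauss` among the listed utilities). As a rough stand-in for that learned smoother, a simple moving average over a 1-D trajectory — illustrative only, not the repository's method:

```python
def smooth_trajectory(path, radius=1):
    # Moving average over a window of 2*radius+1 samples, shrunk at edges;
    # warping each frame by (smoothed - original) would stabilize the video.
    out = []
    for i in range(len(path)):
        lo, hi = max(0, i - radius), min(len(path), i + radius + 1)
        out.append(sum(path[lo:hi]) / (hi - lo))
    return out

shaky = [0, 10, 0, 10, 0]          # zig-zag camera path
print(smooth_trajectory(shaky))    # averaged toward a flatter path
```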
AnonymousDLMA/MI_with_DA | ['data augmentation'] | ['How Does Data Augmentation Affect Privacy in Machine Learning?'] | models/resnet_cifar.py models/twolayer_convnet.py mi_attack.py models/wrn16_8.py models/__init__.py mi_utils.py custom_augs.py utils.py cifar_train.py test adjust_learning_rate train_vs_test train checkpoint train_classifier verify_unlearning trn_or_test load_all_stat load_array Net get_ground_truth test_classifier softmax get_files get_best_boundary compute_probs get_moments get_confidence random_crop random_translate generate_transform random_rotate accuracy random_flip MyCIFAR100 random_shear random_erase MyCIFAR10 ResNet conv3x3 BasicBlock resnet110 conv3x3 ConvNet convnet ResNet conv3x3 BasicBlock wrn16_8 eval data time backward print float zero_grad loss_func step max net enumerate eval seed str sess mkdir save arch param_groups lr load load_array reshape append concatenate DataLoader Compose CIFAR100 CIFAR10 reshape exp sum append array range arange trn_or_test append range mean power deepcopy binary_cross_entropy backward zero_grad loss_func SGD choice range parameters test_classifier cuda step max net int item sum max net range append compute_probs range print seed list ToTensor sample Normalize random_erase append range randint uniform randint randint randint randint topk size t eq mul_ expand_as append sum max ResNet ConvNet ResNet | # Code for Neurips2020 submission "Membership Inference with Privately Augmented Data Endorses the Benign while Suppresses the Adversary" ## Dependency This code is tested with [torch 1.5](https://github.com/pytorch/pytorch) and [numpy 1.14](https://numpy.org/). We use benchmark datasets [CIFAR10 and CIFAR100](https://www.cs.toronto.edu/~kriz/cifar.html). The program will download the dataset automatically at the first run. ## Training target model We use random seed to generate individual transformation. Each integer seed corresponds to a different transformation. 
Therefore, the random seeds chosen during training can be used to re-generate augmented instances that the model is trained on. The following command trains a ResNet110 model with 10 augmented instances for each image. ``` CUDA_VISIBLE_DEVICES=0 python cifar_train.py --arch resnet110 --aug_instances 10 --sess resnet110_N10 ``` | 106 |
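The key trick described above — re-generating the exact augmented instances from the integer seeds chosen during training — can be sketched with a deterministic RNG. The shift transform below is hypothetical; the repository uses image crops, flips, rotations, and similar augmentations:

```python
import random

def augment(image, seed):
    # A deterministic "random" transform: the same integer seed always
    # reproduces the same augmented instance (hypothetical pixel shift).
    rng = random.Random(seed)
    shift = rng.randint(-2, 2)
    return [pixel + shift for pixel in image]

img = [10, 20, 30]
assert augment(img, seed=7) == augment(img, seed=7)  # same seed, same instance
```

This is what lets a membership-inference attacker (or auditor) query the model on exactly the instances it was trained on.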
Anou9531/Laplacian | ['graph learning'] | ['Learning Laplacian Matrix in Smooth Graph Signal Representations'] | code/main.py code/data_loader.py synthetic_data_gen get_a_vec get_precision_rnd get_precision_er_L get_MSE create_dup_matrix create_A_mat create_b_mat get_precision_er create_static_matrices_for_L_opt get_T_mat create_G_mat get_prec_recall_rnd_L get_f_score get_recall_er_L gl_sig_model get_u_vec T norm ones reshape squeeze inv qp dot create_static_matrices_for_L_opt trace eye matrix ravel array range T create_dup_matrix create_b_mat create_A_mat dot create_G_mat zeros zeros zeros zeros range get_T_mat get_u_vec append zeros arange cumsum zeros get_a_vec range zeros arange cumsum delete append zeros range range fill_diagonal fill_diagonal range fill_diagonal get_precision_rnd | # Learn-Graph-Laplacian This is an implementation of the paper Learning Laplacian Matrix in Smooth Graph Signal Representations https://arxiv.org/pdf/1406.7842.pdf The original code can be found on the authors website [web.media.mit.edu/~xdong/pub.html](https://web.media.mit.edu/~xdong/pub.html) # Tests - Precision, Recall, F-measure are comparable to those mentioned in the paper for both ER graph and Gaussian RBF. For BA graph, it is less. - No parameters are changed for Gaussian RBF. For ER the threshold is slightly increased. If you spot any bugs feel free to open an issue or mail me: ark.sadhu2904@gmail.com | 107 |
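The repository above learns a graph Laplacian under a smoothness prior on the observed signals. A minimal sketch of the two quantities involved — the combinatorial Laplacian L = D − W and the smoothness measure xᵀLx (illustrative helpers, not the repository's QP-based solver):

```python
def laplacian(W):
    # Combinatorial Laplacian L = D - W for a symmetric weighted adjacency
    # matrix W with zero diagonal; the degree D[i][i] is the row sum of W.
    n = len(W)
    return [[(sum(W[i]) if i == j else 0) - W[i][j] for j in range(n)]
            for i in range(n)]

def smoothness(W, x):
    # x^T L x = 1/2 * sum_ij w_ij (x_i - x_j)^2: small for smooth signals.
    L = laplacian(W)
    return sum(x[i] * sum(L[i][j] * x[j] for j in range(len(x)))
               for i in range(len(x)))

W = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]  # path graph on 3 nodes
print(smoothness(W, [1, 1, 1]))  # 0: constant signals are perfectly smooth
print(smoothness(W, [1, 0, 1]))  # 2: the signal jumps across both edges
```

Learning reverses this: given signals x, the method searches for the L that makes them smooth while keeping L a valid Laplacian.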
AntonioCarta/mslmn | ['speech recognition'] | ['Incremental Training of a Recurrent Neural Network Exploiting a Multi-Scale Dynamic Memory'] | mslmn/cannon/tasks/__init__.py mslmn/incremental_train.py mslmn/iamondb.py mslmn/cannon/utils.py mslmn/cannon/model_selection.py mslmn/cannon/regularizers.py mslmn/models.py mslmn/cannon/laes/__init__.py mslmn/cannon/tasks/interface.py mslmn/task_trainer.py mslmn/cannon/laes/big_svd.py mslmn/cannon/laes/svd_la.py mslmn/cannon/rnn_jit.py mslmn/cannon/experiment.py mslmn/cannon/torch_trainer.py mslmn/cannon/__init__.py mslmn/container.py mslmn/cannon/callbacks.py mslmn/train_iamondb.py mslmn/cannon/rnn.py JitItemClassifier ItemClassifier IAMOnDB construct_stroke_paths parse_ascii_file construct_ascii_path fetch_iamondb_file parse_strokes fetch_iamondb_from_list fetch_iamondb IncrementalTrainingCallback ClockworkRNNCell MultiScaleLMN MultiScaleLMNCell ClockworkRNN SequentialTaskTrainer BidirectionalRNN train_foo ModelCheckpoint save_training_checkpoint OrthogonalInit LearningCurveCallback LRDecayCallback TrainingCallback EarlyStoppingCallback Config Experiment ParamListTrainer RandomSampler soft_ortho_constraint OrthogonalPenalty ActivationNormPreservingRegularizer LSTMLayer SequenceClassifier LSTMCell MIDILanguageModel LMNDetachCell LSTMDetachLayer LMNDetachLayer LinearMemoryNetwork LMNLayer LSTMLayer SequenceClassifier DiscreteRNN RNNLayer LSTMCell RNNCell LinearMemoryNetwork LMNLayer GenericTaskTrainer TorchTrainer build_default_logger SequentialTaskTrainer load_dir_results is_nan set_gpu set_allow_cuda assert_equals cuda_move cosine_similarity gradient_clipping assert_relative_equals standard_init Svd_single_column SvdForBigData KeCSVD indirectSVD get_Xi_block LinearAutoencoder Svd_single_column xi_data_matvec xi_data_rmatvec indirectSVD build_xhi_matrix get_Xi_range KeCoSVD build_R get_la_weights xi_seq_rmatvec xi_seq_matvec build_R_block get_Xi_block vt_R_v_block_multiplication Dataset isalpha join join sorted isalpha 
filter listdir list print parse_ascii_file parse_strokes tokenize_ind range enumerate list print extend fetch_iamondb_file zip enumerate fetch_iamondb_from_list LSTMLayer ClockworkRNN SequentialTaskTrainer BidirectionalRNN IAMOnDB fit Adam parameters compute_metrics cuda_move ItemClassifier info MultiScaleLMN append_hyperparam_dict best_result LMNLayer log_dir _train_dict save model transpose eye setFormatter addHandler StreamHandler Formatter DEBUG setLevel INFO FileHandler xavier_normal_ print print str new_query format is_available clamp_ named_parameters norm sum abs sum isdir print name scandir append len zeros sum range enumerate max svd list print hstack min reversed append KeCSVD get_Xi_block range int list print hstack reversed ceil KeCSVD range svd todense dot qr csc_matrix list transpose reversed pinv sqrt indirectSVD range shape prange zeros reshape asarray xi_seq_rmatvec shape zeros max range len shape range zeros reshape asarray prange shape append zeros xi_seq_matvec sum range len append get_Xi_block range floor get_Xi_range KeCoSVD list transpose reversed pinv sqrt indirectSVD range print min zeros sum max range enumerate len append range T diag copy vt_R_v_block_multiplication append range T build_R_block zeros_like | # Incremental Training of a Recurrent Neural Network Exploiting a Multi-Scale Dynamic Memory MSLMN code for IAM-OnDB experiments and incremental training. This codebase implements our recurrent model based on a hierarchical recurrent neural network architecture. The model is trained incrementally by dynamically expanding the architecture to capture longer dependencies during training. Each new module is pretrained to maximize its memory capacity. 
## References This work is based on our paper published @ ECML 2020: [https://arxiv.org/abs/2006.16800](https://arxiv.org/abs/2006.16800) If you find this useful consider citing: ``` @inproceedings{carta2020incremental, title={Incremental Training of a Recurrent Neural Network Exploiting a Multi-Scale Dynamic Memory}, author={Antonio Carta and Alessandro Sperduti and Davide Bacciu}, booktitle={ECML/PKDD}, | 108 |
Anunay1234/Sentiment-Analysis-using-LSTM | ['stochastic optimization'] | ['Adam: A Method for Stochastic Optimization'] | imdbReviews.py imdb.py imdb_bidirectional_lstm.py load_data extract_words build_dict load_data main grab_data seed load endswith close shuffle array zip append max open join rstrip replace words strip translate lower sub maketrans split append len list extract_words chdir getcwd print glob len dict sum keys enumerate values split extract_words chdir getcwd glob split enumerate len remove_unk join dump print build_dict close open grab_data len | # Sentiment-Analysis-using-LSTM # LSTM-sentiment-analysis Due to the computational cost of the LSTM method, we use only two LSTM layers in our classification model. These two LSTM layers are bidirectional, comprising a forward LSTM and a backward LSTM. Feature extraction was done by reading all training reviews and tokenizing all English words, as well as removing stop words using the `nltk` package. Training an LSTM RNN involves two steps. First, run the neural network forward; this sets the cell states. Then, go backwards computing derivatives. This uses the cell states (what the network knows at a given point in time) to figure out how to change the network's weights. When the LSTM updates cell states, we use the default `Adam` optimizer (http://arxiv.org/abs/1412.6980v8), a method for stochastic optimization. The optimizer minimizes the loss function, which here is the mean square error between the expected and actual output. The input matrix shape is (number of samples x maxlen). `number_of_samples` here is 25000 reviews. All reviews are transformed into sequences of word vectors. `maxlen` is the max length of each sequence; i.e., if a review has more than `maxlen` words, it will be truncated, while if a review has fewer than `maxlen` words, the sequence will be padded with 0's to a regular shape. `max_features` is the dictionary size.
The dictionary was created before the data was fed into the LSTM RNN. Dictionary keys are purified words, and dictionary values are the indices, which range from 2 to 90000, such that the most frequent word has the lowest index value. Rarely occurring words have large indices, so we can use `max_features` to filter out uncommon words. | 109
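The indexing scheme just described — most frequent word gets the lowest index, indices start at 2, a `max_features` cutoff drops rare words, and sequences are zero-padded/truncated to `maxlen` — can be sketched in plain Python. The helper names and the choice of reserved indices 0 (padding) and 1 (out-of-dictionary) are my own illustration, not this repository's exact code:

```python
from collections import Counter

def build_word_index(tokenized_reviews, start_index=2):
    counts = Counter(w for review in tokenized_reviews for w in review)
    # Most frequent word -> lowest index (indices start at start_index).
    return {w: i + start_index
            for i, (w, _) in enumerate(counts.most_common())}

def to_padded_sequence(review, word_index, maxlen, max_features, oov_index=1):
    seq = [word_index.get(w, oov_index) for w in review]
    seq = [i if i < max_features else oov_index for i in seq]  # drop rare words
    seq = seq[:maxlen]                        # truncate long reviews
    return [0] * (maxlen - len(seq)) + seq    # left-pad short reviews with 0s

reviews = [["good", "movie"], ["good", "bad", "movie"], ["movie"]]
idx = build_word_index(reviews)               # {"movie": 2, "good": 3, "bad": 4}
padded = to_padded_sequence(["good", "movie", "again"], idx,
                            maxlen=4, max_features=100)
```

Left-padding matches the default behavior of Keras-style `pad_sequences`, which such pipelines typically rely on.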
ApGa/adversarial_deepfakes | ['face swapping'] | ['Adversarial Perturbations Fool Deepfake Detectors'] | utils/parallelized_classifier.py adv_examples.py evaluation.py dip_template.py utils/crop.py utils/HHReLU.py classifier.py cw_attack.py generate_dataset.py utils/dip_utils.py scaled_bim test_softmax_batch bim test test_softmax cw_greedy_round ImageFolderWithPaths fgsm carlini save_batch generate_adversarial_examples ifgsm add_noise make_model_HHReLU train_model lipshitz_regularization _var2numpy L2Adversary from_tanh_space atanh to_tanh_space closure defense_pred show_torch_img load_image save_image input_gradient evaluate Crop load get_image np_to_torch torch_to_np optimize get_noisy_image get_noise plot_image_grid get_image_grid pil_to_np np_to_pil fill_noise get_params crop_image HHReLU add_noise make_model_HHReLU train_model lipshitz_regularization L2Adversary backward to model zero_ float loss save_image group range list tqdm attack append save_batch to list model tqdm zip to max list model tqdm zip to max list model tqdm zip to max range HHReLU len stack repeat Variable normal_ cauchy_ uniform_ add_noise deepcopy time format print to zero_grad tqdm eval double load_state_dict train step range state_dict COLOR_BGR2RGB transpose resize imread array cvtColor expit transpose unsqueeze resize to numpy imshow transpose fromarray uint8 transpose astype save str torch_to_np defense_pred backward print mse plot_image_grid copy_ normal_ parameters item zip save_image cuda net compare_psnr detach stack repeat expit exp criterion model float tqdm input_gradient numpy append to BCEWithLogitsLoss max CrossEntropyLoss astype float32 np_to_pil crop split make_grid show transpose get_image_grid imshow figure max open load isinstance ANTIALIAS pil_to_np BICUBIC resize uniform_ normal_ np_to_torch arange isinstance concatenate fill_noise meshgrid zeros float transpose array transpose uint8 astype closure print zero_grad Adam LBFGS step range | # adversarial_deepfakes 
Deepfakes with an adversarial twist. This repository provides code and additional materials for the paper: "Adversarial perturbations fool deepfake detectors", Apurva Gandhi and Shomik Jain, To Appear in IJCNN 2020. The paper uses adversarial perturbations to enhance deepfake images and fool common deepfake detectors. We also explore two improvements to deepfake detectors: (i) Lipschitz regularization, and (ii) Deep Image Prior (DIP). Link to preprint: https://arxiv.org/abs/2003.10596. ## Files: - adv_examples.py: Adversarial Examples Creation - classifier.py: Deepfake Detector Creation - cw.py: Carlini-Wagner L2 Norm Attack | 110 |
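FGSM, one of the attacks listed above, perturbs the input by `eps` in the sign of the loss gradient. A toy numpy sketch on a logistic-regression "detector" with an analytic gradient — a stand-in I chose so the example is self-contained, not the paper's PyTorch deepfake-detector models:

```python
import numpy as np

def fgsm(x, w, b, y, eps):
    # Logistic model: P(fake | x) = sigmoid(w.x + b)
    p = 1.0 / (1.0 + np.exp(-(w @ x + b)))
    grad_x = (p - y) * w          # d(binary cross-entropy)/dx for this model
    # Step each feature by eps in the direction that increases the loss.
    return x + eps * np.sign(grad_x)

w = np.array([1.0, -2.0])
x = np.array([0.5, 0.5])
x_adv = fgsm(x, w, b=0.0, y=1.0, eps=0.1)   # pushes x toward a misclassification
```

The iterative variants in the repository (BIM/iFGSM) apply this same step repeatedly with a small `eps`, clipping after each step.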
AppleHolic/source_separation | ['speech enhancement'] | ['Phase-aware Speech Enhancement with Deep Complex U-Net'] | source_separation/settings.py source_separation/train_jointly.py source_separation/synthesize.py source_separation/train.py setup.py source_separation/modules.py source_separation/hyperopt_run.py source_separation/dataset.py source_separation/__init__.py source_separation/models.py source_separation/trainer.py get_requirements get_concated_datasets AugmentSpeechDataset get_datasets main _main RefineSpectrogramUnet ComplexConvBlock SpectrogramUnet ComplexActLayer _ComplexConvNd ComplexConv1d ComplexTransposedConv1d refine_unet_base refine_unet_larger refine_unet_larger_add spec_unet_comp validate test_dir __load_model test_worker WaveDataset run main handle_cases LossMixingTrainer Wave2WaveTrainer main join AugmentSpeechDataset SpeechDataLoader meta_cls join meta_cls ConcatDataset AugmentSpeechDataset SpeechDataLoader zip append join Adam get_datasets MultiStepLR parameters DataParallel dataset_func cuda run update refine_unet_larger print eval load_state_dict get_loadable_checkpoint cuda load preemphasis write_wav print __load_model astype float32 lowpass cuda clip resample cuda clip write_wav squeeze __load_model get_datasets append format zip enumerate join print inv_preemphasis tqdm pesq numpy SAMPLE_RATE makedirs join write_wav basename replace inv_preemphasis squeeze makedirs list print glob __load_model map DataParallel DataLoader WaveDataset makedirs Adam MultiStepLR parameters DataParallel handle_cases cuda run DSD100Meta VoiceBankMeta get_datasets dataset_func MUSDB18Meta list get_concated_datasets map zip | # Source Separation [![Python 3.6](https://img.shields.io/badge/python-3.6-blue.svg)](https://www.python.org/downloads/release/python-360/) [![Hits](https://hits.seeyoufarm.com/api/count/incr/badge.svg?url=https%3A%2F%2Fgithub.com%2FAppleholic%2Fsource_separation)](https://hits.seeyoufarm.com) [![Synthesis Example On Colab 
Notebook](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/Appleholic/source_separation/blob/master/assets/Source_Separation_first_notebook.ipynb) --- ## Introduction *Source Separation* is a repository for extracting speech from various recorded sounds. It focuses on adapting more realistic datasets for training models. ### Main components, different things The latest model in this repository is built on spectrogram-based models. Mainly, [Phase-aware Speech Enhancement with Deep Complex U-Net](https://arxiv.org/abs/1903.03107) is implemented with modifications. | 111
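The repository's synthesis path applies a pre-emphasis filter before processing and inverts it afterwards (the `preemphasis`/`inv_preemphasis` names appear in its symbol list above). A minimal numpy sketch; the coefficient 0.97 is a conventional default I am assuming, not a value read from this code:

```python
import numpy as np

def preemphasis(wav, coeff=0.97):
    # y[0] = x[0], y[n] = x[n] - coeff * x[n-1]: boosts high frequencies
    return np.append(wav[0], wav[1:] - coeff * wav[:-1])

def inv_preemphasis(wav, coeff=0.97):
    # Undo the filter recursively: x[n] = y[n] + coeff * x[n-1]
    out = np.zeros_like(wav)
    out[0] = wav[0]
    for n in range(1, len(wav)):
        out[n] = wav[n] + coeff * out[n - 1]
    return out

x = np.array([0.1, 0.2, 0.3, 0.4])
roundtrip = inv_preemphasis(preemphasis(x))   # recovers x exactly
```

The round-trip identity holds by induction on `n`, which is why the pair can bracket the spectrogram model without distorting the output.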
ArantxaCasanova/ralis | ['active learning', 'semantic segmentation'] | ['Reinforced active learning for image segmentation'] | utils/logger.py data/gtav.py data/data_utils.py utils/replay_buffer.py data/camvid_al.py models/fpn_bayesian.py run.py models/model_utils.py utils/transforms.py train_supervised.py utils/progressbar.py utils/parser.py utils/joint_transforms.py data/cityscapes.py data/cityscapes_al_splits.py data/cityscapes_al.py models/query_network.py models/fpn.py data/camvid.py utils/final_utils.py main train_classif main make_dataset colorize_mask Camvid make_dataset colorize_mask Camvid_al make_dataset CityScapes colorize_mask make_dataset CityScapes_al colorize_mask make_dataset CityScapes_al_splits colorize_mask get_data get_transforms make_dataset colorize_mask GTAV FPN FPN50 ResNet Bottleneck conv3x3 Upsample FPN101 ResNet FPN101_bayesian Bottleneck FPN50_bayesian conv3x3 Upsample FPN_bayesian create_models apply_dropout create_feature_vector_3H_region_kl count_parameters select_action compute_state get_region_candidates compute_entropy_seg create_feature_vector_3H_region_kl_sim load_models optimize_model_conv add_labeled_images compute_bald add_kl_pool2 QueryNetworkDQN validate get_training_stage check_mkdir create_and_load_optimizers evaluate set_training_stage compute_set_jacc final_test test confusion_matrix_pytorch get_logfile train RandomHorizontallyFlip SlidingCropOld CenterCrop RandomSizedCrop FreeScale RandomRotate Compose RandomCropRegion Scale CropRegion RandomCrop ComposeRegion SlidingCrop RandomSized Logger LoggerMonitor save_arguments get_arguments format_time progress_bar ReplayMemory DeNormalize FreeScale RandomVerticalFlip FlipChannels MaskToTensorOneHot MaskToTensor epoch_num save_arguments get_data rl_pool ckpt_path get_logfile save cuda push exp_name open seed str create_models list create_and_load_optimizers num_each_iter load_models load_state_dict train_classif append range state_dict manual_seed_all snapshot final_test 
patience compute_state get_region_candidates copy add_index close test eval optimize_model_conv manual_seed set_training_stage ReplayMemory join checkpointer ExponentialLR read int check_mkdir namedtuple print rl_buffer only_last_labeled select_action reset add_labeled_images get_candidates isfile train split str validate epoch_num num_classes print set_training_stage append train step range validate num_classes step convert putpalette print join append listdir sort Camvid print CityScapes_al_splits CityScapes_al DataLoader GTAV CityScapes get_transforms Camvid_al str print Compose Scale ComposeRegion MaskToTensor tolist loadmat len FPN FPN FPN_bayesian FPN_bayesian print str count_parameters cuda sum load join list items print OrderedDict get_logfile load_state_dict isfile cuda pop int time get_random_unlabeled_region_image set_unlabeled_regions str print choice get_unlabeled_regions get_num_unlabeled_regions get_num_unlabeled_regions_image append len data state_subset unsqueeze compute_bald max str view FloatTensor apply randperm append range cat detach get_subset_state create_feature_vector_3H_region_kl compute_entropy_seg mean eval item type net add_kl_pool2 time print sort reshape create_feature_vector_3H_region_kl_sim cpu get_specific_item len exp view Variable print random num_each_iter choice rl_pool eval cpu max join str print tuple write add_index get_num_labeled_regions ckpt_path append enumerate exp_name open train mean entropy transpose range histogram zeros sum Tensor cat int list sum entropy size tolist transpose repeat histogram unique zeros balance_cl len int size tolist unique zeros exp view log_softmax size eval net state zero_grad state_subset ckpt_path gather cuda exp_name open view progress_bar range cat detach policy_net LongTensor close sample type join Transition backward print smooth_l1_loss write cpu train step len mkdir load join print RMSprop SGD load_state_dict last_epoch str join set_names int resume_epoch print Logger isfile append 
range join isfile ckpt_path exp_name join ckpt_path exp_name mean sum astype cuda range squeeze_ num_classes view evaluate eval confusion_matrix_pytorch numpy cuda net enumerate clip_grad_norm_ zero_grad numpy cuda squeeze_ num_classes view progress_bar confusion_matrix_pytorch net enumerate collect criterion backward print evaluate parameters step len save ckpt_path cuda exp_name squeeze_ num_classes view progress_bar confusion_matrix_pytorch state_dict eval item net enumerate join criterion evaluate print numpy len squeeze_ num_classes criterion view evaluate print progress_bar eval confusion_matrix_pytorch item numpy cuda net enumerate len DataLoader ckpt_path CityScapes MaskToTensor exp_name open list num_classes OrderedDict load_state_dict append range Camvid Compose test eval load join items print write isfile add_argument ArgumentParser update str join print getattr ckpt_path exp_name int time join format_time write append range len int | # Reinforced Active Learning for Image Segmentation (RALIS) Code for the paper [Reinforced Active Learning for Image Segmentation](https://arxiv.org/abs/2002.06583) ## Dependencies - python 3.6.5 - numpy 1.14.5 - scipy 1.1.0 - Pytorch 0.4.0 ## Scripts The folder 'scripts' contains the different bash scripts that could be used to train the same models used in the paper, for both Camvid and Cityscapes datasets. - launch_supervised.sh: To train the pretrained segmentation models. | 112 |
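The acquisition idea underlying RALIS-style active learning — score unlabeled image regions by the segmentation network's predictive uncertainty and label the most informative ones — can be sketched with per-pixel entropy. This is a deliberately simplified toy of mine; the actual method learns the selection policy with a DQN rather than ranking by raw entropy:

```python
import numpy as np

def region_entropy(probs):
    # probs: (num_classes, H, W) softmax output for one region
    eps = 1e-12
    pixel_entropy = -np.sum(probs * np.log(probs + eps), axis=0)
    return pixel_entropy.mean()

confident = np.zeros((2, 4, 4)); confident[0] = 1.0   # entropy ~ 0
uncertain = np.full((2, 4, 4), 0.5)                   # entropy = log 2 per pixel
regions = {"a": confident, "b": uncertain}
best = max(regions, key=lambda k: region_entropy(regions[k]))  # picks "b"
```

In the full pipeline this per-region score is one of several features the RL agent's state is built from.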
ArashRabbani/DeePore | ['physical simulations'] | ['DeePore: a deep learning workflow for rapid and comprehensive characterization of porous materials'] | Demo1.py Demo2.py Demo5.py Demo3.py DeePore.py Demo4.py DeePore7 check_get gener ecl_distance DeePore6 DeePore2 WMSE splitdata WBCE parfor loadmodel create_compact_dataset writeh5slice show_feature_maps makeblocks hdf_shapes modelmake normalize DeePore3 showentry predict shuf normal DeePore5 testmodel DeePore8 DeePore4 prettyresult nowstr trainmodel DeePore1 feedsampledata prep readh5slice mat2np now DeePore9 slicevol calc print eval urlretrieve input shuffle ones multiply square float32 tile ones multiply square float32 tile compile Model Input RMSprop compile Model Input RMSprop compile Model Input RMSprop compile Model Input RMSprop compile Model Input RMSprop compile Model Input RMSprop Model Input compile Model Input compile Model Input compile print save microseconds seconds datetime days now DeePore7 DeePore5 DeePore6 DeePore2 DeePore8 DeePore4 DeePore9 DeePore3 DeePore1 load str gener hdf_shapes save_weights load_weights modelmake ModelCheckpoint nowstr fit load str load_weights modelmake int32 gener add_subplot tick_params max str multiply set_yscale ylabel ylim scatter savefig next range predict update xlim load set_xscale evaluate print reshape xlabel text min savemat figure len loadmat zeros squeeze load threshold print makeblocks squeeze ecl_distance int8 THRESH_BINARY mean shape stack zeros imread range len print reshape str len min load reshape multiply mean append range reshape squeeze float64 float32 distance zeros range min show subplot normal squeeze axis sqrt imshow savefig figure ceil range int asarray arange delete linspace argwhere unique ceil append str replace print squeeze strip write close open round range len writeh5slice readh5slice ecl_distance hdf_shapes range slicevol subplot set_title squeeze axis viridis imshow savefig figure len testmodel trainmodel | # DeePore: Deep learning 
for rapid characterization of porous materials ## Summary DeePore is a deep learning workflow for rapid estimation of a wide range of porous material properties based on the binarized micro–tomography images. By combining naturally occurring porous textures we generated 17,700 semi–real 3–D micro–structures of porous geo–materials with size of 256^3 voxels and 30 physical properties of each sample are calculated using physical simulations on the corresponding pore network models. Next, a designed feed–forward convolutional neural network (CNN) is trained based on the dataset to estimate several morphological, hydraulic, electrical, and mechanical characteristics of the porous material in a fraction of a second. In order to fine–tune the CNN design, we tested 9 different training scenarios and selected the one with the highest average coefficient of determination (*R*2) equal to 0.885 for 1418 testing samples. Additionally, 3 independent synthetic images as well as 3 realistic tomography images have been tested using the proposed method and results are compared with pore network modelling and experimental data, respectively. Tested absolute permeabilities had around 13% relative error compared to the experimental data which is noticeable considering the accuracy of the direct numerical simulation methods such as Lattice Boltzmann and Finite Volume. The workflow is compatible with any physical size of the images due to its dimensionless approach and can be used to characterize large–scale 3–D images by averaging the model outputs for a sliding window that scans the whole geometry. The present repository is corresponded to this published paper: Rabbani, A., Babaei, M., Shams, R., Da Wang, Y., & Chung, T. (2020). DeePore: a deep learning workflow for rapid and comprehensive characterization of porous materials. *Advances in Water Resources*, *146*, 103787. 
<br/>[Link to the paper on arxiv](https://arxiv.org/abs/2005.03759) [Link to the paper on sciencedirect](https://www.sciencedirect.com/science/article/pii/S0309170820304590) The required packages to use this python repository are: 'numpy', 'scipy', 'h5py', 'tensorflow', 'matplotlib', 'cv2', and 'urllib'. I recommend using Anaconda, which has all these packages installed except cv2 and tensorflow, both of which you can easily install from pip. Additionally, in Demo#4 the 'joblib' parallel computing library is used to improve the training time of the different scenarios, but you can skip it if you are not planning to use parallelization. <br/> Here is a visual summary of the data workflow in DeePore to make the ground truth data and train the model: | 113
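The "sliding window that scans the whole geometry" step described above can be sketched as follows. This is a hypothetical helper of mine: the real workflow feeds each 256³ window to the trained CNN and averages its 30 property outputs, whereas here `estimate` is a stand-in callable:

```python
import numpy as np

def sliding_window_average(volume, win, stride, estimate):
    # Scan a binary 3-D volume with cubic windows; average per-window estimates.
    scores = []
    Z, Y, X = volume.shape
    for z in range(0, Z - win + 1, stride):
        for y in range(0, Y - win + 1, stride):
            for x in range(0, X - win + 1, stride):
                scores.append(estimate(volume[z:z+win, y:y+win, x:x+win]))
    return float(np.mean(scores))

vol = np.zeros((4, 4, 4)); vol[:2] = 1        # half pore (1), half solid (0)
porosity = sliding_window_average(vol, win=2, stride=2, estimate=np.mean)
```

Averaging window-level outputs is what makes the approach dimensionless: the same fixed-size model characterizes arbitrarily large images.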
ArashRahnama/Adversarial-Explainations-for-Artificial-Intelligence-systems-AXAI | ['speech recognition', 'adversarial attack'] | ['An Adversarial Approach for Explaining the Predictions of Deep Neural Networks'] | AXAI.py AXAI | # Adversarial-Explanations-for-Artificial-Intelligence-Systems-AXAI This is the codebase for our AXAI explainability algorithm from our paper "An Adversarial Approach for Explaining the Predictions of Deep Neural Networks" under review for NeurIPS 2020 and available @https://arxiv.org/abs/2005.10284. | 114 |
ArashRahnama/Adversarial-Explanations-for-Artificial-Intelligence-Systems-AXAI | ['speech recognition', 'adversarial attack'] | ['An Adversarial Approach for Explaining the Predictions of Deep Neural Networks'] | AXAI.py AXAI | # Adversarial-Explanations-for-Artificial-Intelligence-Systems-AXAI This is the codebase for our AXAI explainability algorithm from our paper "An Adversarial Approach for Explaining the Predictions of Deep Neural Networks" under review for NeurIPS 2020 and available @https://arxiv.org/abs/2005.10284. | 115 |
Ardibid/ArtisticStyleRoboticPainting | ['style transfer'] | ['Artistic Style in Robotic Painting; a Machine Learning Approach to Learning Brushstroke from Human Artists'] | python_files/vae_models.py python_files/vaes_encoder.py python_files/vae_main.py python_files/vae_loss.py python_files/vae_recons_interps.py python_files/vae_plots.py python_files/vae_dataset.py python_files/vaes_generators.py python_files/vae_train.py Net Encoder_MLP Encoder Reshape Net Generator_MLP Generator Reshape MyDataset partition_dataset loss_function train_procedure save_results ConvVAE MLP_VAE show_samples_ visualize_one_batch imshow savefig plot_vae_training_plot interpolate reconstruct train train_epochs test print int shuffle BCE BCEWithLogitsLoss calc_kl mean train_epochs reconstruct reshape dict DataLoader stack interpolate sample format show_samples_ print makedirs fn dirname plot_vae_training_plot show squeeze numpy format print imshow shape zip numpy enumerate show tight_layout dirname makedirs show make_grid FloatTensor print axis imshow title savefig figure permute arange plot xlabel ylabel title savefig figure linspace legend len view numpy iter to next to next numpy iter zero_grad dataset list OrderedDict append encoder to format mean item sample_training enumerate items decoder backward print loss_function step len OrderedDict eval get str list format decoder print train Adam test extend parameters save append encoder to keys range | # Artistic Style Robotic Painting by: [Ardavan Bidgoli](ardavan.io), [Manuel Rodriguez Ladrón de Guevara](https://github.com/manuelladron) , [Cinnie Hsiung](https://github.com/cinniehsiung?tab=overview&from=2017-01-01&to=2017-01-31), [Jean Oh](https://github.com/jeanoh) , [Eunsu Kang](https://github.com/kangeunsu) #### [arXiv](https://arxiv.org/abs/2007.03647) | [YouTube](https://www.youtube.com/watch?v=UUFIJr9iQuA) **Artistic Style in Robotic Painting: a Machine Learning Approach to Learning Brushstroke from Human Artists** Robotic painting 
has been a subject of interest among both artists and roboticists since the 1970s. Researchers and interdisciplinary artists have employed various painting techniques and human-robot collaboration models to create visual mediums on canvas. One of the challenges of robotic painting is to apply a desired artistic style to the painting. Style transfer techniques with machine learning models have helped us address this challenge with the visual style of a specific painting. However, other manual elements of style, i.e., painting techniques and brushstrokes of an artist have not been fully addressed. We propose a method to integrate an artistic style to the brushstrokes and the painting process through collaboration with a human artist. In this paper, we describe our approach to 1) collect brushstrokes and hand-brush motion samples from an artist, and 2) train a generative model to generate brushstrokes that pertains to the artist's style, and 3) integrate the learned model on a robot arm to paint on a canvas. In a preliminary study, 71% of human evaluators find our robot's paintings pertaining to the characteristics of the artist's style. This project aims to develop a method to integrate an artistic style to the brushstrokes and the painting process through collaboration with a human artist. In this paper, we describe our approach to 1) collect brushstrokes and hand-brush motion samples from an artist, and 2) train a generative model to generate brushstrokes that pertains to the artist's style, and 3) integrate the learned model on a robot arm to paint on a canvas. **Table of Contents** - [Status](#Status) - [Installation](#installation) - [Dependencies](#Dependencies) | 116 |
Arnaud15/CS236_Neural_Processes_For_Image_Completion | ['gaussian processes'] | ['Conditional Neural Processes'] | NP.py test.py models.py utils.py NP_CIFAR10.py VectorAttentionAggregator ContextEncoder Decoder ContextToLatentDistributionCIFAR QueryAttentionAggregator ContextToLatentDistribution DecoderCIFAR ContextEncoderCIFAR MeanAgregator main train all_forward compute_loss main train all_forward compute_loss main log_normal save_model random_mask_uniform kl_normal make_mesh_grid display_images save_images_batch load_models sample_z display_images_CIFAR random_mask aggregator context_to_dist view size transpose expand context_encoder cat decoder size expand mean sample_z cat save_model zero_grad compute_loss tensor view len iter to next range cat format random_mask_uniform size enumerate add_image time backward print display_images all_forward step random_mask add_scalar DataLoader device models_path ContextToLatentDistribution seed list VectorAttentionAggregator Adam load_models resume_file to ContextEncoder QueryAttentionAggregator manual_seed is_available MNIST log_dir Decoder parameters train epochs MeanAgregator makedirs decoder display_images_CIFAR ContextToLatentDistributionCIFAR ContextEncoderCIFAR DecoderCIFAR CIFAR10 permutation bsize unsqueeze save max show exp quick context_encoder sum range cat format plot choice autoregressive item enumerate context_to_dist decoder print Subset zeros sample_z len ones_like make_grid zeros_like size min expand cat ones_like make_grid zeros_like size min expand cpu range make_grid view transpose numpy imsave join format print save to makedirs load load_state_dict meshgrid float linspace float rand view expand zeros array range view pow exp log sum sqrt to exp device sum | ### CS236 Deep Generative Processes ## Neural Processes for Image Completion #### Amaury Sabran, Arnaud Autef, Benjamin Petit In this project, we develop image completion techniques based on Neural Processes , a recently proposed class of models 
that uses neural networks to describe distributions over functions. We show that the Neural Process model seamlessly applies to the problem of image completion, explore different approaches to this task and discuss their performance on two well-known datasets: MNIST and CIFAR10. For more details about Neural Processes, see https://arxiv.org/abs/1807.01613 and https://arxiv.org/abs/1807.01622. | 117 |
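The defining step of the (conditional) Neural Process models mentioned above is aggregating encoded context points into a single permutation-invariant representation that conditions the decoder. A toy numpy sketch with a trivial encoder — my own illustration, not the repository's MLP encoder:

```python
import numpy as np

def aggregate_context(coords, values, encode):
    # Encode each observed (pixel coordinate, value) pair, then mean-pool:
    # the result is independent of the order of the context points.
    encodings = [encode(c, v) for c, v in zip(coords, values)]
    return np.mean(encodings, axis=0)

encode = lambda c, v: np.concatenate([c, [v]])   # trivial stand-in encoder
r1 = aggregate_context([np.array([0., 0.]), np.array([1., 1.])], [0.2, 0.8], encode)
r2 = aggregate_context([np.array([1., 1.]), np.array([0., 0.])], [0.8, 0.2], encode)
```

For image completion, `coords` are the observed pixel locations and `values` their intensities; the decoder then maps `(r, target_coord)` to a predicted pixel distribution.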
Artaches/SSAN-self-attention-sentiment-analysis-classification | ['sentiment analysis'] | ['Self-Attention: A Better Building Block for Sentiment Analysis Neural Network Classifiers'] | Utils/Datasets.py Utils/Representations.py ave.py bow.py cnn.py joint.py Utils/MyMetrics.py lstm_bilstm.py retrofit.py Utils/twokenize.py Utils/WordVecs.py Utils/Semeval_2013_Dataset.py Utils/SenTube_Dataset.py Utils/emoticons.py san.py get_best_C print_prediction print_results test_embeddings main test print_prediction bow print_results main get_W convert_dataset create_cnn idx_sent write_vecs print_prediction add_unknown_words print_results main test_embeddings get_dev_params get_best_C print_prediction print_results test_embeddings main get_W convert_dataset create_LSTM create_BiLSTM idx_sent write_vecs print_prediction add_unknown_words print_results main test_embeddings get_dev_params get_best_C print_prediction print_results test_embeddings main concatenate_heads printNumberOfParams multi_head_attention idx_sent write_vecs relative_attention_inner add_unknown_words encoder_layer scaled_dot_product transformerClassifier convert_dataset generate_relative_positions_matrix positional_encoding project_qkv print_results main split_heads run_model_on_datasets_with_embeddings createAndTrainTransformer get_W printMetrics dot_product_attention_relative predictInBatches generate_relative_positions_embeddings Stanford_Sentiment_Dataset General_Dataset analyze_tweet MyMetrics words idx_vecs sum_vecs ave_vecs getMyData rem_mentions_urls conv_tweet Semeval_Dataset words SenTube_Dataset post_process optional align AlignmentFailed neg_lookahead simple_tokenize unicodify squeeze_whitespace edge_punct_munge Tokenization pos_lookahead regexify_abbrev unprotected_tokenize regex_or tokenize WordVecs sorted format _ytrain print _Xdev fit write predict set LogisticRegression _Xtrain _ydev f1_score enumerate flush len _ytest Stanford_Sentiment_Dataset WordVecs Semeval_Dataset sorted _ytrain list 
MyMetrics LogisticRegression append predict format General_Dataset set mean zip get_scores print get_best_C _Xtrain print_prediction vector_size _Xtest fit print test_embeddings tabulate print add_argument ArgumentParser print_results vars parse_args zeros len _ytest Stanford_Sentiment_Dataset Semeval_Dataset list _ytrain sorted MyMetrics len LogisticRegression append predict format _Xdev General_Dataset set mean zip get_scores print _Xtrain print_prediction _ydev _Xtest fit test int sorted format arange print MyMetrics create_cnn set isfile get_scores float argmax range predict fit uniform dict zeros len Embedding Sequential add Model Dense append Input compile Dropout pad_sequences _Xdev _Xtrain array _Xtest sorted keys argmax str load_model predict_classes _w2idx add_unknown_words range convert_dataset _Xdev create_cnn float listdir keys get_dev_params enumerate get_W join sub _ydev ModelCheckpoint len array create_LSTM create_BiLSTM Embedding Sequential add Dense Dropout LSTM compile len Bidirectional Embedding Sequential add Dense Dropout LSTM compile len seed create_LSTM std create_BiLSTM array cos sin reshape transpose clip_by_value tile range split_last_dimension_then_transpose self_attention_heads model_dim self_attention_heads softmax matmul model_dim reshape transpose matmul get_shape max_relative_positions softmax relative_attention_inner assert_is_compatible_with generate_relative_positions_embeddings dense concatenate_heads dot_product_attention_relative dropout self_attention_sublayer_bias_and_activation print self_attention_heads self_attention_sublayer_dropout model_dim project_qkv qkv_projections_bias_and_activation use_relative_positions split_heads scaled_dot_product dense self_attention_sublayer_residual_and_norm ffnn_sublayer dropout ffnn_sublayer_dropout multi_head_attention add model_dim layer_norm num_layers reduce_mean include_positional_encoding input_emb_apply_dropout range dict empty_like run print get_shape sum trainable_variables 
printNumberOfParams indices squeeze placeholder transformerClassifier format one_hot sparse_softmax_cross_entropy_with_logits softmax ConfigProto minimize print min float32 reduce_mean int32 global_variables_initializer placeholder_with_default len print format Stanford_Sentiment_Dataset assign WordVecs reset_default_graph run_exps_amount Semeval_Dataset list std placeholder add_unknown_words append range format convert_dataset _Xdev General_Dataset mean get_scores keys createAndTrainTransformer get_W constant printMetrics print Variable float32 now _Xtrain vector_size array _Xtest len now run_model_on_datasets_with_embeddings now search zeros vector_size array split len zeros vector_size array split append split append open words min mean array append max append startswith join list range len squeeze_whitespace unicodify align Tokenization post_process end search span edge_punct_munge append range finditer len append search sub sub | # SSAN-self-attention-sentiment-analysis-classification Code for the paper "Self-Attention: A Better Building Block for Sentiment Analysis Neural Network Classifiers": http://aclweb.org/anthology/W18-6219, https://arxiv.org/abs/1812.07860 . This paper was published in WASSA 2018 (9th Workshop on Computational Approaches to Subjectivity, Sentiment & Social Media Analysis), held in conjuction with EMNLP 2018. Contact: aambarts@sfu.ca The paper builds upon the work of the paper "Assessing State-of-the-art Sentiment Models on State-of-the-art Sentiment Datasets", Barnes et. al. This repository is a fork of their implementation for the said paper: https://github.com/jbarnesspain/sota_sentiment. ## Abstract Sentiment Analysis has seen much progress in the past two decades. For the past few years, neural network approaches, primarily RNNs and CNNs, have been the most successful for this task. 
Recently, a new category of neural networks, self-attention networks (SANs), has emerged, which uses the attention mechanism as the basic building block. Self-attention networks have been shown to be effective for sequence modeling tasks while having no recurrence or convolutions. In this work we explore the effectiveness of SANs for sentiment analysis. We demonstrate that SANs are superior in performance to their RNN and CNN counterparts by comparing their classification accuracy on six datasets, as well as their model characteristics such as training speed and memory consumption. Finally, we explore the effects of various SAN modifications such as multi-head attention, as well as two methods of incorporating sequence position information into SANs.
## Run Self-Attention models
To run the work we've done, simply unzip the Google word embeddings in the /embeddings folder (or use your own) and run ```python san.py -emb embeddings/google.txt```. To choose which of the self-attention architectures discussed in the paper to run, see the hparams dictionary object in san.py. Using the values in that dictionary you can configure the san.py script to run SSAN, the Transformer Encoder, the RPR or PE positional information techniques, etc.
To run the baseline models, follow the instructions from: https://github.com/jbarnesspain/sota_sentiment | 118 |
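The SSAN readme above builds sentiment classifiers from self-attention; a single scaled dot-product self-attention layer — the basic building block it refers to — can be sketched in NumPy (an illustrative single-head sketch, not the repo's TensorFlow code):

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Single-head scaled dot-product self-attention over a token sequence X."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)      # (seq_len, seq_len) attention logits
    weights = softmax(scores, axis=-1)   # each row is a distribution over tokens
    return weights @ V, weights

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 8))              # 5 tokens, 8-dim embeddings
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
out, attn = self_attention(X, Wq, Wk, Wv)
```

Each output row is a weighted mix of all token values, which is what lets the layer model a whole sequence with no recurrence or convolutions.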
Au3C2/GVS | ['style transfer', 'anomaly detection'] | ['Generator Versus Segmentor: Pseudo-healthy Synthesis'] | unet/__init__.py utils/split_cases_lits.py unet/unet_model.py utils/ms_ssim.py utils/nii2npy_brats.py unet/unet_parts.py utils/dataset.py unet/networks.py test.py utils/nii2npy_lits.py utils/init_logging.py utils/split_cases_brats.py main.py train_net get_args get_args predict get_norm_layer PixelDiscriminator Identity GANLoss ResnetGenerator ResnetBlock define_D UnetGenerator UnetSkipConnectionBlock init_weights get_scheduler init_net NLayerDiscriminator cal_gradient_penalty define_G Segmenter_3layer Segmenter_lite Segmenter_32channel Reconstucter Segmenter Up DoubleConv_rec DoubleConv Down Up_rec Down_rec OutConv LiverDataset BrainDataset init_logging MS_SSIM create_window ms_ssim _gaussian_filter ssim slice_ct find995 get_boundingbox slice_ct get_boundingbox zero_grad where DataLoader unsqueeze save reconstucter set_postfix_str StepLR view squeeze Adam MSELoss to CrossEntropyLoss range detach ones_like format segmenter mean info item enumerate ce_loss backward tqdm parameters BrainDataset train step len add_argument ArgumentParser imwrite model DataLoader device abs max COLOR_GRAY2RGB str load_state_dict COLORMAP_JET to range define_G format concatenate astype eval mkdir info enumerate load uint8 applyColorMap min output tqdm BrainDataset numpy gpu cvtColor len BatchNorm2d partial InstanceNorm2d LambdaLR CosineAnnealingLR ReduceLROnPlateau StepLR print apply init_weights cuda ResnetGenerator UnetGenerator get_norm_layer NLayerDiscriminator PixelDiscriminator get_norm_layer view size rand grad mean requires_grad_ netD setFormatter getLogger addHandler StreamHandler Formatter mkdir setLevel INFO FileHandler exp arange repeat conv2d transpose _gaussian_filter pow mean clamp_min clamp_min avg_pool2d stack unsqueeze append ssim prod range list bbox append label remove_small_objects regionprops len print astype zeros range len list format glob print 
sitkFloat32 makedirs where tqdm ReadImage GetArrayFromImage find995 sitkUInt8 save sum range len get_boundingbox zoom sort set mkdir sitkInt16 | ## 📝 Table of Contents - [About](#about) - [Getting Started](#getting_started) - [Usage](#usage) - [Authors](#authors) - [Acknowledgments](#acknowledgement) ## 🧐 About <a name = "about"></a> This is the anonymous code of GVS, which mainly includes training details, pretrained model and the synthetic images of one volume. ## 🏁 Getting Started <a name = "getting_started"></a> These instructions will get you a copy of the project up and running on your local machine for development and testing purposes. | 119 |
AustinDoolittle/Pytorch-Gain | ['object localization', 'semantic segmentation'] | ['Tell Me Where to Look: Guided Attention Inference Network'] | gain.py data.py models.py transform.py main.py mx_to_cv write_image cv_to_mx scale_to_range ImageDataset RawDataset load_image AttentionGAIN tile_images scalar train_handler infer_handler set_available_gpus model_info_handler parse_args ResidualBlock Darknet53 model_to_str get_model GreyNet19 Affine Translate DropoutAndAffine TransformerBase Dropout imwrite dirname makedirs imread resize min max scale_to_range uint8 astype COLOR_GRAY2RGB COLOR_RGB2GRAY transpose astype float32 scale_to_range expand_dims cvtColor int str print expand_dims shape resize ceil zeros float range len str join isinstance batch_size input_channels dataset_path output_dir input_dims name strftime transformer model_type update AttentionGAIN num_epochs load join weights_file gpus print set_available_gpus serialization_format RawDataset train imwrite input_channels output_dir input_dims FloatTensor len waitKey strftime expand image_path imshow heatmap_label load_image generate_heatmap load time weights_file print labels index zeros makedirs load weights_file print model_to_str model_type get_model add_argument add_parser ArgumentParser set_defaults add_subparsers models getattr model named_modules | # Pytorch-Gain An implementation of GAIN heatmap network in pytorch. Original paper: https://arxiv.org/abs/1802.10171 | 120 |
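The Pytorch-Gain row above implements the GAIN heatmap network from "Tell Me Where to Look". Its core attention-mining step — softly masking out the regions the attention map highlights, so the classifier can be penalized if it still recognizes the class from what remains — can be sketched roughly as follows (an illustrative NumPy sketch; the `sigma`/`omega` thresholding values are assumptions, not taken from this repo):

```python
import numpy as np

def mask_out_attended(image, attention, sigma=0.25, omega=10.0):
    """Suppress the image regions that the attention map highlights.
    T is a soft (sigmoid) threshold of the attention map; the returned
    image keeps only the regions the network did NOT attend to."""
    T = 1.0 / (1.0 + np.exp(-omega * (attention - sigma)))
    return image * (1.0 - T)

image = np.ones((4, 4))
attention = np.zeros((4, 4))
attention[1:3, 1:3] = 1.0            # attention concentrated in the center
masked = mask_out_attended(image, attention)
```

In the masked image the attended center is nearly erased while the background survives, which is the signal the guided-attention loss exploits.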
AutoML-4Paradigm/S2E | ['automl'] | ['Searching to Exploit Memorization Effect in Learning from Corrupted Labels'] | space/co_mnist_main.py space/rbf_main.py space/co_main.py space/sin_100_main.py space/co_100_main.py heng_100_main.py alg/loss.py alg/model.py space/random_mnist_main.py alg/grad_main.py loss.py random_100_main.py random_mnist_main.py alg/band_main.py space/random_100_main.py space/rbf_mnist_main.py random_main.py heng_main.py space/sin_main.py space/sin_mnist_main.py space/mlp_100_main.py alg/bayes_main.py space/mlp_mnist_main.py space/rbf_100_main.py space/loss.py heng_mnist_main.py space/random_main.py data/utils.py alg/heng_main.py model.py space/model.py data/mnist.py space/mlp_main.py alg/ng_main.py alg/share_main.py data/cifar.py evaluate accuracy black_box_function adjust_learning_rate main train evaluate accuracy black_box_function adjust_learning_rate main train evaluate accuracy black_box_function adjust_learning_rate main train loss_selfteaching loss_3teaching loss_curve loss_coteaching loss_curriculum loss_softcoteaching CNN MLP CNN_large call_bn CNN_co evaluate accuracy black_box_function adjust_learning_rate main train evaluate accuracy black_box_function adjust_learning_rate main train evaluate accuracy black_box_function adjust_learning_rate main train evaluate accuracy black_box_function adjust_learning_rate main train evaluate accuracy black_box_function adjust_learning_rate main train evaluate accuracy black_box_function adjust_learning_rate main train evaluate accuracy black_box_function adjust_learning_rate main train loss_selfteaching loss_3teaching loss_curve loss_coteaching loss_curriculum loss_softcoteaching CNN MLP CNN_large call_bn CNN_co evaluate accuracy black_box_function adjust_learning_rate main train evaluate accuracy black_box_function adjust_learning_rate main train CIFAR100 CIFAR10 MNIST read_label_file get_int read_image_file noisify_pairflip noisify list_files download_url check_integrity 
noisify_multiclass_symmetric list_dir multiclass_noisify evaluate accuracy black_box_function adjust_learning_rate main train evaluate accuracy black_box_function adjust_learning_rate main train evaluate accuracy black_box_function adjust_learning_rate main train loss_selfteaching loss_3teaching loss_curve loss_coteaching loss_curriculum loss_softcoteaching evaluate accuracy black_box_function adjust_learning_rate main train evaluate accuracy black_box_function adjust_learning_rate main train evaluate accuracy black_box_function adjust_learning_rate main train CNN MLP CNN_large call_bn CNN_co evaluate accuracy black_box_function adjust_learning_rate main train evaluate accuracy black_box_function adjust_learning_rate main train evaluate accuracy black_box_function adjust_learning_rate main train evaluate accuracy black_box_function adjust_learning_rate main train evaluate accuracy black_box_function adjust_learning_rate main train evaluate accuracy black_box_function adjust_learning_rate main train evaluate accuracy black_box_function adjust_learning_rate main train evaluate accuracy black_box_function adjust_learning_rate main train evaluate accuracy black_box_function adjust_learning_rate main train param_groups topk size t eq softmax mul_ expand_as append sum max backward print float transpose min zero_grad accuracy loss_coteaching model1 append model2 step cuda enumerate data max print float eval softmax model1 model2 cuda exp arange evaluate print CNN_large Adam range parameters adjust_learning_rate n_epoch power train sum cuda log len psi log seed svd ones delta polygamma range n_iter n_samples black_box_function T print inv maximum argsort dot beta zeros diag CNN MLP max int sum one_hot tolist min cpu tensor float argmax cuda cross_entropy len int arange float sum cuda cross_entropy len one_hot tolist tensor argmax cuda cross_entropy len int cpu float sum cuda cross_entropy len data int argsort cross_entropy len int intersect1d float sum cuda cross_entropy 
len arange adjust_learning_rate cuda exp CNN_large Adam sum copy power evaluate parameters n_epoch train len CNN MLP int rand floor test_epoch ceil int eta Trials fmin fisher_samples md5 hexdigest join urlretrieve print expanduser makedirs expanduser list expanduser list RandomState arange print ones copy assert_array_almost_equal sum print mean eye range multiclass_noisify ones print mean range multiclass_noisify noisify_multiclass_symmetric noisify_pairflip minimum randn outer copy random minimum | # S2E ICML'20: Searching to Exploit Memorization Effect in Learning from Corrupted Labels (PyTorch implementation). ======= This is the code for the paper: [Searching to Exploit Memorization Effect in Learning from Corrupted Labels](https://arxiv.org/abs/1911.02377) Quanming Yao, Hansi Yang, Bo Han, Gang Niu, James T. Kwok. ## Requirements Python = 3.7, PyTorch = 1.3.1, NumPy = 1.18.5, SciPy = 1.4.1 All packages can be installed by Conda. ## Running S2E on benchmark dataset with synthetic noise (MNIST, CIFAR-10 and CIFAR-100) Example usage for MNIST with 50% symmetric noise | 121 |
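The S2E row above concerns learning from corrupted labels; the `loss_coteaching`-style functions in its dependency list rest on the small-loss trick, where two networks each keep their smallest-loss samples and hand them to the peer for its update. A rough sketch (illustrative NumPy code, not the repo's PyTorch implementation; the forget rate here is arbitrary):

```python
import numpy as np

def small_loss_selection(losses_a, losses_b, forget_rate):
    """Co-teaching-style exchange: each network's lowest-loss samples
    (treated as likely-clean) are used to update the *other* network."""
    n_keep = int((1.0 - forget_rate) * len(losses_a))
    keep_for_b = np.argsort(losses_a)[:n_keep]   # A's clean-looking samples train B
    keep_for_a = np.argsort(losses_b)[:n_keep]   # B's clean-looking samples train A
    return keep_for_a, keep_for_b

losses_a = np.array([0.1, 2.5, 0.3, 1.9, 0.2, 3.0])
losses_b = np.array([0.2, 2.2, 0.1, 2.8, 0.4, 0.3])
keep_a, keep_b = small_loss_selection(losses_a, losses_b, forget_rate=0.5)
```

Scheduling how the forget rate grows over epochs is exactly the kind of hyper-parameter the S2E search targets.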
Autonise-AI/Text-Recognition | ['scene text detection', 'text classification', 'instance segmentation', 'semantic segmentation'] | ['PixelLink: Detecting Scene Text via Instance Segmentation'] | src/model/generic_model.py src/loader/dete_loader.py src/Dlmodel/TrainTestR.py src/prepare_metadata/prepare_metadata.py src/prepare_metadata/meta_synth.py src/model/crnn.py src/loader/reco_loader.py src/model/model_loader.py src/prepare_metadata/meta_artificial.py src/loader/mnist.py src/prepare_metadata/meta_ic15.py src/model/u_net_resnet_50_encoder.py src/model/resnet_own.py src/helper/utils.py src/Dlmodel/TestRD.py src/Dlmodel/TestOneImageR.py src/helper/profiler.py src/model/unet.py src/model/trial.py src/prepare_metadata/meta_coco.py src/loader/square.py src/Dlmodel/TrainTestD.py src/helper/read_yaml.py src/Dlmodel/TestOneImageD.py src/Dlmodel/TestOneImageRD.py main.py src/model/unet_parts.py src/pipeline_manager.py src/model/densenet.py src/model/u_net_resnet_50_parts.py src/loader/scale_two.py src/loader/art.py src/Dlmodel/Dlmodel.py src/helper/logger.py src/prepare_metadata/meta_own.py src/loader/generic_dataloader.py src/prepare_metadata/meta_ic13.py prepare_metadata test_one_r train_r train_d fscore test_entire_folder_d test_one_rd test_entire_folder_r test_entire_folder_rd main test_one_d test_d test_r prepare_metadata test_one_r train_r train_d test_entire_folder_d test_one_rd PipelineManager test_entire_folder_r test_entire_folder_rd test_one_d test_d test_r Dlmodel TestOneImageDClass TestOneImageRClass TestOneImageRDClass TrainTestR TrainTestD TrainTestR Logger Profiler read_yaml scores line one_hot overlap_remove inside_point get_connected_components remove_small_boxes FocalLoss line_intersection get_rotated_bbox get_f_score homographic_rotation precision_recall_fscore strLabelConverter intersection_union ArtificialGen DeteLoader own_DataLoader trainLoader RecoDataloader scale_two square CRNN_orig CRNN_resnet BidirectionalLSTM densenet161 DenseNet densenet169 
densenet201 _DenseLayer _DenseBlock _Transition densenet121 model load_weights load_model conv1x1 resnext50_32x4d ResNet resnet50 resnext101_32x8d Bottleneck resnet152 conv3x3 resnet34 resnet18 BasicBlock resnet101 own UNet Up DoubleConv Inconv Down JustUp OutConv UNetWithResnet50Encoder UpBlockForUNetWithResNet50 ConvBlock Bridge MetaArtificial MetaCoco MetaIC13 MetaIC15 MetaOwn MetaSynth float get_f_score dump plot profiler info dump plot profiler info start_testing profiler start_testing profiler test_one_image_r profiler print test_one_image_r profiler open dump test_one_image_rd profiler mkdir profiler info test_one_image_d profiler print mkdir profiler create_annot profiler info set int64 astype int64 astype pointPolygonTest float64 astype inside_point int64 line_intersection convexHull contourArea range float64 astype bool sum range intersection_union len zeros array range intersection_union reshape astype int64 append minAreaRect range len get_rotated_bbox contourArea append range len FILLED overlap_remove where RETR_LIST remove_small_boxes clf list exp shape pad range imsave add_edges_from Graph findContours astype copy mkdir zip flip uint8 connected_components drawContours CHAIN_APPROX_SIMPLE print reshape float32 add_nodes_from zeros len load scores endswith print lower append listdir exists enumerate open isinstance view size get_device to deg2rad where getPerspectiveTransform show squeeze perspectiveTransform shape imshow append range warpAffine astype T uint8 print reshape min float32 dot int32 array len list DenseNet group load_url match load_state_dict keys compile list DenseNet group load_url match load_state_dict keys compile list DenseNet group load_url match load_state_dict keys compile list DenseNet group load_url match load_state_dict keys compile OrderedDict items list load_state_dict load load_weights cuda CRNN load_url ResNet load_state_dict load_url ResNet load_state_dict load_url ResNet load_state_dict load_url ResNet load_state_dict 
load_url ResNet load_state_dict ResNet ResNet format print Sequential MaxPool2d add_module summary convRelu cuda | # Pytorch Implementation of [Pixel-LINK](https://arxiv.org/pdf/1801.01315.pdf) ## A brief abstract of your project including the problem statement and solution approach We are attempting to detect all kinds of text in the wild. The technique used for text detection is based on the paper PixelLink: Detecting Scene Text via Instance Segmentation (https://arxiv.org/abs/1801.01315) by Deng et al. The text instances present in the scene images lie very close to each other, and it is challenging to distinguish them using semantic segmentation. So, there is a need of instance segmentation. The approach consists of two key steps: a) Linking of pixels in the same text instance - Segmentation step, b) Text bounding box extraction using the linking done. There are two kinds of predictions getting done here at each pixel level in the image: a) Text/non-text prediction, b) Link prediction. This approach sets it apart from other kinds of methodologies used so far for text detection. Before PixelLink, the SOTA approaches on text detection does two kinds of prediction: a) Text/non-text prediction, b) Location Regression. Here both of these predictions are made at one go taking many fewer number of iterations and less training data. | 122 |
Awesome-AutoAug-Algorithms/AWS-OHL-AutoAug | ['data augmentation'] | ['Online Hyper-parameter Learning for Auto-Augmentation Strategy'] | distm/torch.py distm/__init__.py agent/reinforce.py pipeline/ohl.py aug_op/__init__.py models/wresnet.py utils/config.py utils/misc.py distm/local.py agent/ppo.py distm/base.py models/__init__.py pipeline/__init__.py pipeline/aws.py agent/base.py utils/file.py aug_op/registry.py scheduler/__init__.py utils/dist.py pipeline/base.py main.py criterion/__init__.py scheduler/scheduler.py aug_op/ops.py utils/data.py criterion/labelsmooth.py agent/__init__.py models/resnet.py main BasicAgent get_descretized_operations PPOAgent Reinforce agent_entry Posterize Rotate Solarize Contrast TranslateY Brightness AutoContrast Invert ShearX ShearY Sharpness Color TranslateX Equalize Registry LabelSmoothCELoss criterion_entry BasicDistManager LocalManager dist_entry ResNet Res18CIFAR BasicBlock Bottleneck WRN40_2 WideResNet WRN28_10 conv3x3 WideBasic model_entry AWSPipeline BasicPipeline OHLPipeline pipeline_entry LRScheduler _WarmUpLRScheduler StepLRScheduler StepDecayLRScheduler CosineLRScheduler ConstantLRScheduler scheduler_entry parse_raw_config update_op_sc_cfg update_model_and_criterion_cfgs _DatasetHelper get_num_classes get_train_val_set_size get_dataset_settings DistInfiniteBatchSampler collate_fn_for_autoaug InfiniteBatchSampler create_dataloaders sync_vals DistLogger DistModule create_logger attach_param hsigmoid ppo1_kld Cutout get_af ints_ceil clear_grads AverageMeter accuracy detach_param swish reduce_grads hswish kld time_str filter_params SwishAutoFn init_params main BasicAgent get_descretized_operations PPOAgent Reinforce agent_entry Posterize Rotate Solarize Contrast TranslateY Brightness AutoContrast Invert ShearX ShearY Sharpness Color TranslateX Equalize Registry LabelSmoothCELoss criterion_entry BasicDistManager LocalManager dist_entry ResNet Res18CIFAR BasicBlock Bottleneck WRN40_2 WideResNet WRN28_10 conv3x3 WideBasic 
model_entry AWSPipeline BasicPipeline OHLPipeline pipeline_entry LRScheduler _WarmUpLRScheduler StepLRScheduler StepDecayLRScheduler CosineLRScheduler ConstantLRScheduler scheduler_entry parse_raw_config update_op_sc_cfg update_model_and_criterion_cfgs _DatasetHelper get_num_classes get_train_val_set_size get_dataset_settings DistInfiniteBatchSampler collate_fn_for_autoaug InfiniteBatchSampler create_dataloaders sync_vals DistLogger DistModule create_logger attach_param hsigmoid ppo1_kld Cutout get_af ints_ceil clear_grads AverageMeter accuracy detach_param swish reduce_grads hswish kld time_str filter_params SwishAutoFn init_params join dist_entry main_py_rel_path parse_raw_config getcwd chdir add_argument dist is_master print barrier pipeline_entry finalize mkdir ArgumentParser exp_dirname parse_args copytree OrderedDict items sorted op_class_type linspace linspace linspace linspace linspace linspace int astype linspace linspace linspace linspace WideResNet WideResNet get pop EasyDict get_num_classes EasyDict pop warmup_divisor iters step_times step_epochs base_lr_divisor warmup_lr warmup_epochs warmup_ratio round base_lr kwargs dict _DatasetHelper _DatasetHelper list stack LongTensor zip val_set_size ToTensor clz train_set_size abspath Cutout expanduser get_train_val_set_size Compose lower Normalize info setattr type join time Subset get_dataset_settings split world_size zeros allreduce setFormatter getLogger addHandler StreamHandler Formatter setLevel FileHandler output numel bias normal_ any kaiming_normal_ modules append weight __name__ constant_ enumerate parameters zero_ sum float add_ parameters detach_ parameters requires_grad_ parameters topk t eq item expand_as append sum max lower named_modules list defaultdict bias named_parameters append weight keys __name__ | # Automatic Augmentation Zoo An integration of several popular automatic augmentation methods, including OHL ([Online Hyper-Parameter Learning for Auto-Augmentation 
Strategy](http://openaccess.thecvf.com/content_ICCV_2019/papers/Lin_Online_Hyper-Parameter_Learning_for_Auto-Augmentation_Strategy_ICCV_2019_paper.pdf)) and AWS ([Improving Auto Augment via Augmentation Wise Weight Sharing](https://arxiv.org/abs/2009.14737)) by Sensetime Research. We will post updates regularly so you can star 🌟 or watch 👓 this repository for the latest. ## Introduction This repository provides the official implementations of [OHL](http://openaccess.thecvf.com/content_ICCV_2019/papers/Lin_Online_Hyper-Parameter_Learning_for_Auto-Augmentation_Strategy_ICCV_2019_paper.pdf) and [AWS](https://arxiv.org/abs/2009.14737), and will also integrate some other popular auto-aug methods (like [Auto Augment](https://openaccess.thecvf.com/content_CVPR_2019/papers/Cubuk_AutoAugment_Learning_Augmentation_Strategies_From_Data_CVPR_2019_paper.pdf), [Fast AutoAugment](http://papers.nips.cc/paper/8892-fast-autoaugment.pdf) and [Adversarial autoaugment](https://arxiv.org/pdf/1912.11188)) in pure PyTorch. We use `torch.distributed` to conduct the distributed training. The model checkpoints will be upload to GoogleDrive or OneDrive soon. <!-- ## Our Trained Model / Checkpoint --> <!-- + OneDrive: [Link](https://1drv.ms/u/s!Am_mmG2-KsrnajesvSdfsq_cN48?e=aHVppN) --> ## Dependencies It would be recommended to conduct experiments under: | 123 |
AyanKumarBhunia/on-the-fly-FGSBIR | ['sketch based image retrieval', 'image retrieval', 'cross modal retrieval'] | ['Sketch Less for More: On-the-Fly Fine-Grained Sketch Based Image Retrieval', 'Sketch Less for More: On-the-Fly Fine-Grained Sketch-Based Image Retrieval'] | dataset_chairv2.py RL_Networks.py render_sketch_chairv2.py main_chairv2.py Net_Basic_V1.py Environment_SBIR.py get_ransform CreateDataset_Sketchy Environment main_train Net_Basic redraw_Quick2RGB mydrawPNG Preprocess_QuickDraw_redraw Policy backbone_network extend Train Environment clip_grad_norm_ zero_grad calculate_loss DataLoader save Adam append to sum range state_dict format get_reward mean Sketch_Array_Train item enumerate backward print select_action CreateDataset_Sketchy parameters niter train step int list bresenham append zeros round range binary_dilation len float round astype array mydrawPNG Preprocess_QuickDraw_redraw | # Sketch Less for More: On-the-Fly Fine-Grained Sketch Based Image Retrieval, CVPR 2020 (Oral) **Ayan Kumar Bhunia**, Yongxin Yang, Timothy M. Hospedales, Tao Xiang, Yi-Zhe Song, “Sketch Less for More: On-the-Fly Fine-Grained Sketch Based Image Retrieval”, IEEE Conf. on Computer Vision and Pattern Recognition (**CVPR**), 2020. https://arxiv.org/abs/2002.10310 ## Abstract Fine-grained sketch-based image retrieval (FG-SBIR) addresses the problem of retrieving a particular photo instance given a user's query sketch. Its widespread applicability is however hindered by the fact that drawing a sketch takes time, and most people struggle to draw a complete and faithful sketch. In this paper, we reformulate the conventional FG-SBIR framework to tackle these challenges, with the ultimate goal of retrieving the target photo with the least number of strokes possible. We further propose an on-the-fly design that starts retrieving as soon as the user starts drawing. 
To accomplish this, we devise a reinforcement learning based cross-modal retrieval framework that directly optimizes the rank of the ground-truth photo over a complete sketch drawing episode. Additionally, we introduce a novel reward scheme that circumvents the problems related to irrelevant sketch strokes, and thus provides us with a more consistent rank list during retrieval. We achieve superior early-retrieval efficiency over state-of-the-art methods and alternative baselines on two publicly available fine-grained sketch retrieval datasets.
## Framework
![Framework](Framework.jpg)
Figure: (a) A conventional FG-SBIR framework trained using triplet loss. (b) Our proposed reinforcement learning based framework that takes into account a complete sketch rendering episode. Key locks signify that particular weights are fixed during RL training.
## Illustrative Example
![Example](example.jpg) | 124 |
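The figure caption in the row above mentions the conventional FG-SBIR framework trained with triplet loss, which pulls a sketch's matching photo closer than any other photo in embedding space. A minimal illustration (toy 2-D embeddings and an assumed margin, not the repo's implementation):

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Hinge on the distance gap: the positive should be closer to the
    anchor than the negative, by at least `margin`."""
    d_pos = np.linalg.norm(anchor - positive)
    d_neg = np.linalg.norm(anchor - negative)
    return max(d_pos - d_neg + margin, 0.0)

sketch = np.array([1.0, 0.0])          # query sketch embedding
photo_match = np.array([0.9, 0.1])     # its ground-truth photo
photo_other = np.array([-1.0, 0.5])    # some other gallery photo
loss = triplet_loss(sketch, photo_match, photo_other)
```

Here the matching photo is already much closer than the distractor, so the hinge is inactive; swapping the positive and negative produces a positive loss.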
Ayush1651999/Stock-Price-predictor | ['time series'] | ['Real-valued (Medical) Time Series Generation with Recurrent Conditional GANs'] | preprocessing.py main.py | new_dataset append range len | # Stock-Price-predictor This is the repo for the course project of GNR652 - Machine Learning for Remote Sensing, where we programmed an LSTM network for stock price prediction.
Some possible modifications:
Denoising the price signal to remove the stochastic component using the wavelet denoising method.
Increasing the data size by training an RCGAN on this dataset. For more information on this part, please check: https://arxiv.org/abs/1706.02633 | 125 |
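The `new_dataset` helper in the row's dependency list suggests the usual sliding-window preprocessing for an LSTM price predictor; a generic sketch (the function body and the window size are assumptions for illustration, not taken from the repo):

```python
import numpy as np

def new_dataset(prices, window=5):
    """Hypothetical sliding-window split: each sample is `window` past
    prices and its label is the next price (common LSTM preprocessing)."""
    X, y = [], []
    for i in range(len(prices) - window):
        X.append(prices[i:i + window])
        y.append(prices[i + window])
    return np.array(X), np.array(y)

prices = np.arange(10.0)       # stand-in for a closing-price series
X, y = new_dataset(prices, window=5)
```

The resulting `X` would typically be reshaped to `(samples, timesteps, features)` before being fed to an LSTM.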
B-Yassine/Carlini-and-Wagner_InceptionV3_Imagenet | ['adversarial attack'] | ['Towards Evaluating the Robustness of Neural Networks'] | Inception_v3.py adversarial_generation.py | # C&W_InceptionV3_Imagenet A simple implementation of the C&W attack on a pre-trained Keras InceptionV3 on ImageNet.
To generate the adversarial image, simply run: python adversarial_generation.py
To test the classification, run: python Inception_v3.py
# Adversarial Examples Adversarial examples are inputs that have been slightly modified so that the change is imperceptible to a human yet causes a misclassification.
A formalization often used: for a clean input x, an input x’ is an adversarial example if it is misclassified and d(x, x’) < eps. | 126 |
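The formalization at the end of the row's readme can be checked mechanically; below is a toy sketch of the misclassification-plus-`d(x, x') < eps` test, using the L-infinity distance as one common choice of `d` (the one-dimensional "classifier" is made up for illustration, not the repo's Keras InceptionV3 pipeline):

```python
import numpy as np

def is_adversarial(model, x, x_adv, true_label, eps):
    """x_adv is adversarial for `model` iff it is misclassified AND
    stays within an eps-ball of the clean input x (L-inf norm here)."""
    misclassified = model(x_adv) != true_label
    close_enough = np.max(np.abs(x_adv - x)) < eps
    return bool(misclassified and close_enough)

# Toy "classifier": label 1 iff the mean pixel value exceeds 0.5.
toy_model = lambda x: int(x.mean() > 0.5)

x = np.full(4, 0.52)       # clean input, correctly labelled 1
x_adv = x - 0.05           # tiny perturbation drags the mean below 0.5
```

The C&W attack searches for such an `x_adv` by optimization rather than by a hand-picked perturbation, but the acceptance test is the same.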
BCV-Uniandes/ISINet | ['semantic segmentation'] | ['ISINet: An Instance-Based Approach for Surgical Instrument Segmentation'] | maskrcnn/maskrcnn_benchmark/modeling/matcher.py maskrcnn/maskrcnn_benchmark/modeling/roi_heads/mask_head/roi_mask_feature_extractors.py maskrcnn/maskrcnn_benchmark/data/collate_batch.py maskrcnn/maskrcnn_benchmark/modeling/backbone/backbone.py maskrcnn/maskrcnn_benchmark/engine/inference.py maskrcnn/maskrcnn_benchmark/modeling/detector/generalized_rcnn.py maskrcnn/maskrcnn_benchmark/modeling/roi_heads/box_head/inference.py maskrcnn/maskrcnn_benchmark/structures/boxlist_ops.py temp_consistency_module/utils/warp_utils.py maskrcnn/maskrcnn_benchmark/data/datasets/evaluation/__init__.py maskrcnn/maskrcnn_benchmark/config/paths_catalog.py maskrcnn/demo/webcam.py maskrcnn/maskrcnn_benchmark/modeling/rpn/anchor_generator.py maskrcnn/maskrcnn_benchmark/data/build.py maskrcnn/maskrcnn_benchmark/layers/batch_norm.py maskrcnn/setup.py maskrcnn/maskrcnn_benchmark/modeling/roi_heads/mask_head/roi_mask_predictors.py temp_consistency_module/networks/FlowNetSD.py maskrcnn/maskrcnn_benchmark/modeling/roi_heads/mask_head/loss.py maskrcnn/maskrcnn_benchmark/modeling/roi_heads/mask_head/inference.py maskrcnn/maskrcnn_benchmark/layers/_utils.py maskrcnn/maskrcnn_benchmark/data/datasets/evaluation/voc/voc_eval.py maskrcnn/tools/train_net.py maskrcnn/docker/docker-jupyter/jupyter_notebook_config.py maskrcnn/maskrcnn_benchmark/modeling/roi_heads/roi_heads.py maskrcnn/maskrcnn_benchmark/data/transforms/build.py maskrcnn/maskrcnn_benchmark/utils/model_serialization.py maskrcnn/maskrcnn_benchmark/utils/cv2_util.py maskrcnn/maskrcnn_benchmark/structures/bounding_box.py maskrcnn/maskrcnn_benchmark/engine/trainer.py temp_consistency_module/networks/FlowNetS.py maskrcnn/maskrcnn_benchmark/structures/image_list.py maskrcnn/maskrcnn_benchmark/config/defaults.py maskrcnn/maskrcnn_benchmark/data/samplers/iteration_based_batch_sampler.py 
maskrcnn/maskrcnn_benchmark/modeling/box_coder.py maskrcnn/maskrcnn_benchmark/data/samplers/grouped_batch_sampler.py temp_consistency_module/networks/channelnorm_package/setup.py maskrcnn/maskrcnn_benchmark/modeling/backbone/__init__.py temp_consistency_module/datasets.py maskrcnn/maskrcnn_benchmark/utils/miscellaneous.py maskrcnn/maskrcnn_benchmark/modeling/rpn/loss.py maskrcnn/maskrcnn_benchmark/modeling/rpn/__init__.py maskrcnn/maskrcnn_benchmark/modeling/make_layers.py maskrcnn/maskrcnn_benchmark/data/__init__.py maskrcnn/maskrcnn_benchmark/data/datasets/coco.py maskrcnn/maskrcnn_benchmark/solver/build.py maskrcnn/maskrcnn_benchmark/utils/metric_logger.py maskrcnn/maskrcnn_benchmark/modeling/rpn/inference.py maskrcnn/demo/predictor.py data/organize2018.py maskrcnn/maskrcnn_benchmark/data/datasets/evaluation/coco/coco_eval.py maskrcnn/maskrcnn_benchmark/utils/env.py temp_consistency_module/models.py maskrcnn/maskrcnn_benchmark/modeling/poolers.py maskrcnn/maskrcnn_benchmark/data/transforms/__init__.py maskrcnn/maskrcnn_benchmark/data/samplers/__init__.py maskrcnn/maskrcnn_benchmark/solver/__init__.py maskrcnn/maskrcnn_benchmark/modeling/registry.py temp_consistency_module/networks/FlowNetFusion.py maskrcnn/maskrcnn_benchmark/modeling/roi_heads/box_head/box_head.py maskrcnn/maskrcnn_benchmark/data/datasets/__init__.py maskrcnn/maskrcnn_benchmark/data/datasets/concat_dataset.py maskrcnn/maskrcnn_benchmark/data/transforms/transforms.py maskrcnn/maskrcnn_benchmark/utils/logger.py maskrcnn/maskrcnn_benchmark/modeling/backbone/fpn.py data/organize2017.py maskrcnn/maskrcnn_benchmark/modeling/detector/__init__.py maskrcnn/maskrcnn_benchmark/utils/comm.py maskrcnn/maskrcnn_benchmark/data/datasets/voc.py maskrcnn/maskrcnn_benchmark/solver/lr_scheduler.py temp_consistency_module/networks/correlation_package/setup.py data/visualize_coco_annotations.py maskrcnn/tools/test_net.py maskrcnn/maskrcnn_benchmark/data/datasets/evaluation/coco/__init__.py 
maskrcnn/maskrcnn_benchmark/layers/nms.py maskrcnn/tools/trim_detectron_model.py temp_consistency_module/networks/channelnorm_package/channelnorm.py data/robotseg_to_coco.py maskrcnn/maskrcnn_benchmark/layers/misc.py temp_consistency_module/networks/resample2d_package/setup.py maskrcnn/tools/cityscapes/instances2dict_with_polygons.py temp_consistency_module/networks/resample2d_package/resample2d.py maskrcnn/maskrcnn_benchmark/modeling/rpn/rpn.py maskrcnn/maskrcnn_benchmark/modeling/roi_heads/box_head/roi_box_predictors.py maskrcnn/maskrcnn_benchmark/modeling/roi_heads/box_head/roi_box_feature_extractors.py maskrcnn/tests/test_data_samplers.py maskrcnn/maskrcnn_benchmark/utils/collect_env.py maskrcnn/maskrcnn_benchmark/modeling/balanced_positive_negative_sampler.py maskrcnn/tools/cityscapes/convert_cityscapes_to_coco.py maskrcnn/tests/checkpoint.py maskrcnn/maskrcnn_benchmark/utils/model_zoo.py maskrcnn/maskrcnn_benchmark/utils/registry.py maskrcnn/maskrcnn_benchmark/utils/imports.py maskrcnn/maskrcnn_benchmark/utils/c2_model_loading.py temp_consistency_module/utils/tools.py maskrcnn/tests/test_metric_logger.py temp_consistency_module/main.py maskrcnn/maskrcnn_benchmark/layers/roi_align.py maskrcnn/maskrcnn_benchmark/config/__init__.py temp_consistency_module/utils/frame_utils.py temp_consistency_module/utils/param_utils.py maskrcnn/maskrcnn_benchmark/utils/checkpoint.py maskrcnn/maskrcnn_benchmark/data/samplers/distributed.py maskrcnn/maskrcnn_benchmark/layers/roi_pool.py maskrcnn/maskrcnn_benchmark/data/datasets/evaluation/voc/__init__.py maskrcnn/maskrcnn_benchmark/modeling/roi_heads/mask_head/mask_head.py temp_consistency_module/utils/flow_utils.py maskrcnn/maskrcnn_benchmark/modeling/utils.py temp_consistency_module/networks/correlation_package/correlation.py maskrcnn/maskrcnn_benchmark/modeling/backbone/resnet.py maskrcnn/maskrcnn_benchmark/modeling/detector/detectors.py temp_consistency_module/convert.py temp_consistency_module/networks/submodules.py 
maskrcnn/maskrcnn_benchmark/data/datasets/list_dataset.py maskrcnn/maskrcnn_benchmark/layers/__init__.py maskrcnn/maskrcnn_benchmark/engine/__init__.py maskrcnn/maskrcnn_benchmark/modeling/roi_heads/box_head/loss.py maskrcnn/maskrcnn_benchmark/__init__.py maskrcnn/maskrcnn_benchmark/structures/segmentation_mask.py temp_consistency_module/utils/__init__.py temp_consistency_module/networks/FlowNetC.py maskrcnn/maskrcnn_benchmark/layers/smooth_l1_loss.py get_binary_mask parse_args get_cat_id crop_image get_class_num get_bw_masks filter_for_frame parse_args label_ann filter_for_annotations filter_for_jpeg filter_for_png main parse_args parse_args get_extensions COCODemo main DatasetCatalog ModelCatalog make_data_sampler _quantize make_data_loader make_batch_data_sampler build_dataset _compute_aspect_ratios BatchCollator COCODataset ConcatDataset ListDataset PascalVOCDataset evaluate COCOResults check_expected_results prepare_for_coco_segmentation evaluate_predictions_on_coco do_coco_evaluation evaluate_box_proposals prepare_for_coco_detection coco_evaluation calc_detection_voc_ap do_voc_evaluation calc_detection_voc_prec_rec eval_detection_voc voc_evaluation DistributedSampler GroupedBatchSampler IterationBasedBatchSampler build_transforms Compose ToTensor Resize Normalize RandomHorizontalFlip compute_on_dataset inference _accumulate_predictions_from_multiple_gpus do_train reduce_loss_dict FrozenBatchNorm2d _NewEmptyTensorOp interpolate ConvTranspose2d Conv2d ROIAlign _ROIAlign _ROIPool ROIPool smooth_l1_loss _load_C_extensions BalancedPositiveNegativeSampler BoxCoder conv_with_kaiming_uniform make_conv3x3 get_group_gn make_fc group_norm Matcher LevelMapper Pooler cat build_backbone build_resnet_fpn_backbone build_resnet_backbone LastLevelMaxPool FPN StemWithGN ResNetHead _make_stage ResNet BottleneckWithGN Bottleneck StemWithFixedBatchNorm BottleneckWithFixedBatchNorm BaseStem build_detection_model GeneralizedRCNN CombinedROIHeads build_roi_heads build_roi_box_head 
ROIBoxHead PostProcessor make_roi_box_post_processor make_roi_box_loss_evaluator FastRCNNLossComputation make_roi_box_feature_extractor FPNXconv1fcFeatureExtractor FPN2MLPFeatureExtractor ResNet50Conv5ROIFeatureExtractor FPNPredictor make_roi_box_predictor FastRCNNPredictor paste_mask_in_image expand_boxes Masker make_roi_mask_post_processor MaskPostProcessorCOCOFormat expand_masks MaskPostProcessor make_roi_mask_loss_evaluator MaskRCNNLossComputation project_masks_on_boxes keep_only_positive_boxes ROIMaskHead build_roi_mask_head MaskRCNNFPNFeatureExtractor make_roi_mask_feature_extractor MaskRCNNC4Predictor make_roi_mask_predictor AnchorGenerator generate_anchors _scale_enum _whctrs make_anchor_generator _ratio_enum _generate_anchors BufferList _mkanchors make_rpn_postprocessor RPNPostProcessor RPNLossComputation make_rpn_loss_evaluator build_rpn RPNModule RPNHead make_optimizer make_lr_scheduler WarmupMultiStepLR BoxList cat_boxlist boxlist_iou boxlist_nms remove_small_boxes _cat ImageList to_image_list Mask SegmentationMask Polygons _rename_basic_resnet_weights load_resnet_c2_format load_c2_format _rename_weights_for_resnet _load_c2_pickled_weights _rename_fpn_weights DetectronCheckpointer Checkpointer collect_env_info get_pil_version synchronize get_world_size reduce_dict all_gather get_rank is_main_process findContours setup_environment setup_custom_environment import_file setup_logger SmoothedValue MetricLogger mkdir strip_prefix_if_present load_state_dict align_and_update_state_dicts cache_url _register_generic Registry TestCheckpointer SubsetSampler TestGroupedBatchSampler TestIterationBasedBatchSampler TestMetricLogger main main train test removekey parse_args convert_coco_stuff_mat convert_cityscapes_instance_only getLabelID instances2dict_with_polygons main inference Model FlowNet2 FlowNetC FlowNetFusion FlowNetS FlowNetSD save_grad i_conv deconv predict_flow tofp16 init_deconv_bilinear conv tofp32 ChannelNorm ChannelNormFunction Correlation 
CorrelationFunction Resample2dFunction Resample2d writeFlow readFlow read_gen parse_flownets parse_flownetfusion parse_flownetsonly parse_flownetsd parse_flownetc gpumemusage datestr add_arguments_for_module format_dictionary_of_losses TimerBlock kwargs_from_args IteratorTimer save_checkpoint module_to_dict update_hyperparameter_schedule match_candidates warp_candidates warp_bw_annotations load_all_bw_anns calculate_distances load_json update_annotations generate_ann jaccard_matrix split_anns filter_for_videos warp_ann get_video_frames compute_mask_IU calculate_jaccard_matrix crop_image match extract_candidates uncrop_image load_anns compute_mask_IU AverageMeter add_argument ArgumentParser imread shape range zeros len format replace imsave findall join format uint8 group replace join join join int uint8 basename filter_for_annotations sort size create_image_info group astype tqdm create_annotation_info filter_for_png append walk open glob join dirname abspath merge_from_file VideoCapture time read run_on_opencv_image format print config_file add_argument freeze COCODemo merge_from_list imshow ArgumentParser opts parse_args destroyAllWindows get ConcatDataset getattr append factory SequentialSampler RandomSampler list sorted copy get_img_info append float range len BatchSampler IterationBasedBatchSampler GroupedBatchSampler _quantize _compute_aspect_ratios format import_file make_data_sampler getLogger IMS_PER_BATCH PATHS_CATALOG MAX_ITER get_world_size NUM_WORKERS BatchCollator DataLoader warning make_batch_data_sampler SIZE_DIVISIBILITY build_transforms build_dataset DatasetCatalog append PascalVOCDataset isinstance COCODataset dict __name__ items list format join COCOResults check_expected_results getLogger prepare_for_coco_segmentation item info save evaluate_box_proposals prepare_for_coco_detection convert tolist extend resize enumerate decode Masker tolist extend masker expand tqdm resize get_field enumerate arange zeros_like resize max boxlist_iou append sum 
loadAnns range cat getAnnIds mean float enumerate reshape sort convert min zeros as_tensor len accumulate summarize evaluate COCOeval error format info getLogger get_img_info format info get_groundtruth eval_detection_voc resize append enumerate calc_detection_voc_ap calc_detection_voc_prec_rec list defaultdict cumsum astype extend copy keys numpy array unique zip append zeros argmax max arange concatenate empty nan sum max range len warning info getLogger TO_BGR255 MIN_SIZE_TEST Compose MIN_SIZE_TRAIN Normalize MAX_SIZE_TRAIN MAX_SIZE_TEST update int zip narrow_copy tqdm eval device append Tensor to cat enumerate update list sorted getLogger warning all_gather keys str time format join getLogger synchronize device _accumulate_predictions_from_multiple_gpus timedelta dict save info compute_on_dataset dataset len get_world_size getLogger model zero_grad save str MetricLogger to sum update format timedelta info reduce_loss_dict enumerate time backward global_avg train step len _output_size tuple abs where join glob extend dirname abspath EPSILON DIM_PER_GP NUM_GROUPS group_norm Conv2d bias normal_ kaiming_normal_ ReLU append weight constant_ kaiming_uniform_ bias weight constant_ Linear OrderedDict ResNet Sequential FPN ResNet Sequential OrderedDict OUT_CHANNELS RES2_OUT_CHANNELS append transformation_module range append MASK_ON CombinedROIHeads BoxCoder DETECTIONS_PER_IMG PostProcessor BBOX_REG_WEIGHTS USE_FPN NMS SCORE_THRESH POSITIVE_FRACTION FG_IOU_THRESHOLD BATCH_SIZE_PER_IMAGE BoxCoder BalancedPositiveNegativeSampler BBOX_REG_WEIGHTS BG_IOU_THRESHOLD Matcher FastRCNNLossComputation zeros_like float new_zeros int uint8 expand_masks min float32 expand interpolate zeros to max POSTPROCESS_MASKS POSTPROCESS_MASKS_THRESHOLD Masker MaskPostProcessor zip convert device resize append to crop FG_IOU_THRESHOLD MaskRCNNLossComputation BG_IOU_THRESHOLD Matcher RESOLUTION get_field squeeze append AnchorGenerator STRADDLE_THRESH ANCHOR_SIZES ANCHOR_STRIDE USE_FPN 
ASPECT_RATIOS vstack _ratio_enum array hstack sqrt _whctrs round _mkanchors _whctrs _mkanchors NMS_THRESH FPN_POST_NMS_TOP_N_TRAIN POST_NMS_TOP_N_TRAIN RPNPostProcessor POST_NMS_TOP_N_TEST MIN_SIZE PRE_NMS_TOP_N_TRAIN FPN_POST_NMS_TOP_N_TEST PRE_NMS_TOP_N_TEST POSITIVE_FRACTION FG_IOU_THRESHOLD RPNLossComputation BATCH_SIZE_PER_IMAGE BalancedPositiveNegativeSampler BG_IOU_THRESHOLD Matcher WEIGHT_DECAY_BIAS SGD named_parameters BASE_LR BIAS_LR_FACTOR WEIGHT_DECAY convert _box_nms get_field bbox mode squeeze unbind bbox clamp min area max len add_field size set BoxList _cat fields mode int list isinstance tuple copy_ zero_ zip ceil Tensor enumerate max _rename_basic_resnet_weights sorted format getLogger OrderedDict from_numpy info keys _rename_fpn_weights CONV_BODY _load_c2_pickled_weights replace _rename_weights_for_resnet get_pretty_env_info _send_and_wait get_world_size get_rank from_buffer dumps get_world_size loads zip append to max cat get_world_size startswith get setup_custom_environment setup_environment import_file spec_from_file_location exec_module module_from_spec setFormatter join getLogger addHandler StreamHandler Formatter DEBUG setLevel FileHandler makedirs max list sorted format view getLogger tuple tolist shape info keys enumerate len items sorted list OrderedDict keys strip_prefix_if_present align_and_update_state_dicts state_dict join basename format replace synchronize write search group getenv path _download_url_to_file expanduser urlparse makedirs make_data_loader OUTPUT_DIR collect_env_info set_device MASK_ON get_rank to inference TEST DEVICE build_detection_model init_process_group synchronize setup_logger WEIGHT mkdir info zip enumerate load join DetectronCheckpointer local_rank len DEVICE make_optimizer load update build_detection_model CHECKPOINT_PERIOD make_data_loader WEIGHT DistributedDataParallel DetectronCheckpointer do_train device to OUTPUT_DIR make_lr_scheduler join zip synchronize MASK_ON inference mkdir make_data_loader 
empty_cache OUTPUT_DIR module TEST enumerate len test distributed train dict pop format print exit print_help print join len load join print endswith len zip append walk open hasInstances uint8 format toDict print Instance findContours len astype RETR_EXTERNAL copy unique abspath append CHAIN_APPROX_NONE array flush open instances2dict_with_polygons threshold unsqueeze argmax cuda DataFrame inference_batch_size num_classes fill_ squeeze transpose algorithm dirname append range cat imsave save_predictions inf replace csv_path size compute_mask_IU astype close reversed mean eval unique zip item float enumerate pop uint8 weighted_mode task print sort writeFlow write to_csv tqdm match mode zeros numpy array makedirs fill_ size from_numpy ceil zeros abs range tofile write close shape zeros open imread from_numpy copy from_numpy copy enumerate from_numpy copy enumerate from_numpy copy enumerate from_numpy copy enumerate timezone now getargspec format add_argument_group print add_argument capitalize parse_known_args __init__ module_to_dict __name__ enumerate join str int replace ceil range len param_groups float maximum copyfile join save sum zeros float astype nonzero append expand_dims inf print calculate_descriptor mean sqrt full range len uint8 grid_sample float size squeeze astype transpose expand shape permute cpu tensor expand_dims squeeze from_numpy shape zeros range uint8 squeeze astype uncrop_image zeros binary_fill_holes crop_image enumerate compute_mask_IU astype float32 zeros range compute_mask_IU astype float32 zeros range arange inf jaccard_matrix argmax full enumerate uint8 arange squeeze astype calculate_jaccard_matrix argmax range len findall join sorted filter_for_videos zeros unique join uint8 format imsave squeeze astype zeros range imread split_anns ImageCollection join uint8 asarray concatenate match_candidates astype shape extract_candidates split_anns load_json zeros imsave | # ISINet This is the Pytorch implementation of [ISINet: An 
Instance-Based Approach for Surgical Instrument Segmentation](https://arxiv.org/abs/2007.05533) published at [MICCAI2020](https://www.miccai2020.org/en/).
## Installation
Requirements:
- Python >= 3.6
- PyTorch == 1.4
- numpy
- scikit-image
- tqdm
- scipy == 1.1 | 127
BCV-Uniandes/query-objseg | ['instance segmentation', 'semantic segmentation'] | ['Dynamic Multimodal Instance Segmentation guided by natural language queries'] | dmn_pytorch/models/dpn/__init__.py dmn_pytorch/visdom_display.py dmn_pytorch/train.py dmn_pytorch/utils/misc_utils.py dmn_pytorch/models/dpn/adaptive_avgmax_pool.py dmn_pytorch/models/tests/test_dmn.py dmn_pytorch/__init__.py dmn_pytorch/models/dpn/model_factory.py dmn_pytorch/utils/word_utils.py setup.py dmn_pytorch/utils/__init__.py dmn_pytorch/utils/losses.py dmn_pytorch/models/__init__.py dmn_pytorch/models/dmn.py dmn_pytorch/referit_loader.py dmn_pytorch/models/dpn/dpn_model.py dmn_pytorch/utils/transforms.py get_description get_version ReferDataset DatasetNotFoundError train compute_mask_IU evaluate visualization BaseDMN DMN UpsamplingModule AdaptiveAvgMaxPool2d adaptive_avgmax_pool2d pooling_factor dpn68b DPN DualPathBlock InputBlock dpn68 dpn98 CatBnAct BnActConv2d dpn131 dpn107 dpn92 get_transforms_eval create_model vgg16 LeNormalize dmn_fixture_lowres test_dmn_forward_lowres loader_fixture IoULoss reporthook VisdomWrapper ToNumpy ResizeAnnotation ResizePad CropResize ResizeImage Dictionary Corpus AverageMeter join strip literal_eval map startswith split save_folder zero_grad save dataset cuda squeeze len plot_line state_dict update format size avg net enumerate join time backward Variable print AverageMeter reset accum_iters step split sum data max cuda str squeeze tolist plot_line range format compute_mask_IU float net enumerate int time Variable print tqdm sigmoid zeros train numpy len num_images cuda target_transform list squeeze iter untokenize_word_vector append next range format size eval net join Variable print clone sigmoid images numpy print max_pool2d avg_pool2d cat load_url DPN load_state_dict load_url DPN load_state_dict load_url DPN load_state_dict load_url DPN load_state_dict load_url DPN load_state_dict load_url DPN load_state_dict Sequential Conv2d load_url load_state_dict vgg 
state_dict net_definition int floor Normalize LeNormalize join Compose ReferDataset BaseDMN unsqueeze cuda range int time min write flush | # dmn-pytorch [![License](https://img.shields.io/badge/license-MIT-blue.svg)](./LICENSE) [![Codacy Badge](https://api.codacy.com/project/badge/Grade/992bf5adf488489d8ea55998895793c7)](https://www.codacy.com?utm_source=github.com&utm_medium=referral&utm_content=andfoy/query-objseg&utm_campaign=Badge_Grade) <!-- [![Build Status](http://157.253.243.11/job/query-objseg/job/master/badge/icon)](http://157.253.243.11/job/query-objseg/job/master/) --> PyTorch code for [Dynamic Multimodal Instance Segmentation guided by natural language queries](http://openaccess.thecvf.com/content_ECCV_2018/papers/Edgar_Margffoy-Tuay_Dynamic_Multimodal_Instance_ECCV_2018_paper.pdf), ECCV 2018. [Project Page](https://biomedicalcomputervision.uniandes.edu.co/index.php/research?id=19) | ![horses](./examples/horses.png) | |:--:| | *A dark horse between three lighter horses* | | 128 |
BJTUJia/person_reID_DualNorm | ['person re identification'] | ['Frustratingly Easy Person Re-Identification: Generalizing Person Re-ID in Practice'] | person_reID_DualNorm/test_grid.py person_reID_DualNorm/train.py person_reID_DualNorm/models/MobileNet_IFN.py person_reID_DualNorm/models/__init__.py person_reID_DualNorm/test_viper.py person_reID_DualNorm/models/ResNet_IFN.py person_reID_DualNorm/losses.py person_reID_DualNorm/models/ResNet_o.py person_reID_DualNorm/test_prid2011.py person_reID_DualNorm/test_ilids.py person_reID_DualNorm/models/MobileNet_o.py DeepSupervision RingLoss ConfidencePenalty CrossEntropyLabelSmooth TripletLoss CenterLoss load_network extract_feature fliplr load_network extract_feature fliplr load_network extract_feature fliplr load_network extract_feature get_id fliplr train_model save_network load_state_dict MobileNetV2_IFN Bottleneck weights_init_classifier ConvBlock weights_init_kaiming Bottleneck ConvBlock MobileNetV2 ResNet resnet50_IFN init_pretrained_weights resnet50_fc512 Bottleneck resnet152 conv3x3 resnet34 resnet18 BasicBlock resnet101 ResNet resnet50 init_pretrained_weights resnet50_fc512 Bottleneck resnet152 conv3x3 resnet34 resnet18 BasicBlock resnet101 init_model get_names load join which_epoch load_state_dict index_select long norm FloatTensor print Variable size model div cuda zero_ expand_as cpu PCB fliplr range cat append int data model zero_grad max load_state_dict append range detach state_dict format size save_network time criterion backward print Variable train step join save is_available cuda state_dict load affine bias kaiming_normal_ weight __name__ constant_ bias normal_ weight __name__ constant_ update format print load_url load_state_dict state_dict ResNet init_pretrained_weights ResNet init_pretrained_weights ResNet init_pretrained_weights ResNet init_pretrained_weights ResNet init_pretrained_weights ResNet init_pretrained_weights ResNet init_pretrained_weights | # person_reID_DualNorm This is the pytorch 
implementation of our BMVC 2019 paper "Frustratingly Easy Person Re-Identification: Generalizing Person Re-ID in Practice".
The trained models are available at https://pan.baidu.com/s/1gHHJBF9IgBKlWcItZemmBg code:5000 or https://drive.google.com/open?id=1Gy96vKH60ML9fk2znnZmz5LghS08-M0R (Google Drive).
The testing datasets are available at https://pan.baidu.com/s/1AKaOBvu3CRHhRGDgtWZSEg code:lax1 or https://drive.google.com/open?id=1-5JqXxqQ14MCngGLMPqiA-zbI2CUBFxV (Google Drive). | 129
BUTSpeechFIT/BrnoLM | ['text augmentation', 'speech recognition', 'data augmentation'] | ['Text Augmentation for Language Models in High Error Recognition Scenario'] | scripts/sample-from-lm.py test/test_data_pipeline/test_multistream.py scripts/oov-clustering/reference-matrix-by-word-alignment.py test/test_oov_alignment_lib.py scripts/oov-clustering/plot-det.py scripts/rescoring/score-combiner.py scripts/train/train-pero.py brnolm/investigate-ivecs.py scripts/train/train.py scripts/train/train-no-epoch.py scripts/oov-clustering/compute-wc-covariance.py scripts/oov-clustering/evaluate-embeddings-large-scale.py test/test_runtime_util.py brnolm/data_pipeline/multistream.py brnolm/language_models/encoders.py scripts/train/train-independent.py brnolm/rmn-activation-plotter.py brnolm/oov_clustering/oov_alignment_lib.py test/test_language_models/test_lstm.py brnolm/runtime/model_statistics.py test/test_language_models/test_decoders.py scripts/train/train-chime-robust-v6.py scripts/model-building/build-shallow-nn-with-ivec.py scripts/migrator.py brnolm/files-to-bow.py brnolm/lm-info.py scripts/oov-clustering/apply-linear-transform.py brnolm/oov_clustering/embeddings_io.py setup.py brnolm/smm_itf/pack-smm.py test/test_smm_ivec_extractor.py brnolm/data_pipeline/threaded.py test/test_ivec_appenders.py test/test_language_models/test_language_model.py test/test_det.py brnolm/language_models/ffnn_models.py brnolm/oov_clustering/embeddings_computation.py brnolm/smm_itf/ivec_appenders.py test/test_analysis.py test/test_runtime/test_evaluation.py brnolm/language_models/language_model.py scripts/get-char-vocab.py scripts/oov-clustering/evaluate-embeddings.py scripts/model-info.py brnolm/analyze-ivec-changes.py scripts/oov-clustering/collect-embeddings.py test/test_model_statistics.py test/common.py brnolm/smm_itf/xtract-ivecs-example.py scripts/eval/eval-ivecs-domain-adaptation.py brnolm/data_pipeline/masked.py scripts/display-augmented-data.py scripts/eval/eval-chime-v2.py 
brnolm/analysis.py scripts/model-building/build-transformer.py brnolm/kaldi_itf.py brnolm/language_models/decoders.py brnolm/data_pipeline/pipeline_factories.py scripts/eval/eval-ivecs-oracle.py brnolm/language_models/transformer.py scripts/eval/eval-ivecs-partial.py brnolm/runtime/tensor_reorganization.py scripts/oov-clustering/compare-references.py test/test_data_pipeline/test_masked.py brnolm/multifile-ml-unigram-tranfer-ppl.py scripts/train/logger.py scripts/oov-clustering/process-hybrid-paths.py brnolm/runtime/evaluation.py scripts/oov-clustering/evaluate-embeddings-selective.py scripts/train/train-multifile.py test/utils.py test/test_data_pipeline/test_reading.py brnolm/data_pipeline/temporal_splitting.py scripts/eval/eval-noivecs-domain-adaptation.py brnolm/language_models/vocab.py brnolm/runtime/loggers.py brnolm/data_pipeline/split_corpus_dataset.py scripts/eval/eval-chime.py brnolm/rescoring/segment_scoring.py brnolm/runtime/runtime_utils.py scripts/model-building/build-lstmp.py scripts/eval/eval-independent.py brnolm/multifile-ml-unigram-ppl.py scripts/rescoring/plot-2d.py brnolm/language_models/lstm_model.py scripts/train/train-ivecs-oracle.py scripts/model-building/build-lstm.py brnolm/srilm-debug2.py brnolm/plotting.py brnolm/data_pipeline/flexible_pipeline.py scripts/train/train-flat.py brnolm/runtime/runtime_multifile.py scripts/oov-clustering/predict-embeddings.py scripts/rescoring/pick-best.py brnolm/runtime/reporting.py scripts/eval/eval-multifile.py scripts/rescoring/rescoring-combine-scores.py brnolm/multifile-ivec-unigram-ppl.py test/test_data_pipeline/test_split_corpus_dataset.py test/test_language_models/test_vocab.py scripts/model-building/build-shallow-nn.py brnolm/rmn-grad-plotter.py scripts/train/train-ivecs-partial.py brnolm/rmn-plotter.py scripts/oov-clustering/compute-edit-distance.py brnolm/data_pipeline/aug_paper_pipeline.py brnolm/smm_itf/smm_ivec_extractor.py brnolm/oov_clustering/det.py 
scripts/rescoring/rescore-kaldi-latts-continuous.py scripts/migrator-batch-first.py brnolm/data_pipeline/reading.py scripts/export-torchscript.py brnolm/data_pipeline/augmentation.py scripts/eval/eval.py scripts/oov-clustering/insert-oovs.py scripts/corpus-stats.py brnolm/analyze-ivec-distribution.py test/test_tensor_reorganization.py get_long_desc categorical_cross_entropy categorical_entropy categorical_kld length analyze_document euclidean_distance cosine_similarity DummyDict split_nbest_key bows_to_ps bows_to_ps bows_to_ent bow_from_documents bows_to_ps bows_to_ent documents_from_fn grid_plot _flip_ord main per_word_logprobs Substitutor Deletor Corruptor StatisticsCorruptor Sampler Corruptor cut_counts SampleCache Confuser TargetCorruptor form_input_targets LazyBatcher TemplSplitterClean CleanStreamsProvider InputTargetCorruptor BatchingSlicingIterator FileReadingHead SequenceReadingHead StreamingCorruptor masked_tensor_from_sentences LineTooLongError Batcher BatchBuilder batchify streaming_corruptor_factory plain_factory yaml_factory_noepoch corruptor_factory NoCorruptionUnpacker yaml_factory plain_factory_noepoch tokens_from_fn get_independent_lines tokens_from_file WordIdProvider char_splitter word_splitter TokenizedSplitFFMultiTarget DomainAdaptationSplitFFBase TokenizedSplitSingleTarget TokenizedSplit DomainAdaptationSplitFFMultiTarget DomainAdaptationSplit TokenizedSplitFFBase TemporalSplits DataCreator OndemandDataProvider CustomLossFullSoftmaxDecoder LabelSmoothedNLLLoss FullSoftmaxDecoder plain_nll_loss FlatEmbedding BengioModel BengioModelIvecInput torchscript_import UnreadableModelError LanguageModel torchscript_export split_batch_hidden_state detach_hidden_state LSTMLanguageModel LSTMPLanguageModel generate_square_subsequent_mask PositionalEncoding TransformerLM Vocabulary quoted_vocab_from_kaldi_wordlist vocab_from_kaldi_wordlist_base IndexGenerator vocab_from_kaldi_wordlist eer subsampling_indices det_points_from_score_tg subsample_list DETCurve 
pick area_under_curve tensor_from_words str_from_embedding emb_line_iterator all_embs_from_file all_embs_by_key number_of_errors word_ali_from_index_ali path_from_local_costs align local_costs_from_strings word_distance insertion_mismatch equal_lenght_mismatch path_from_moves ind_ali_from_path single_pair_mismatch extract_mismatch find_in_mismatches SegmentScorer SegmentScoringResult lstm_h0_provider EnblockEvaluator EvaluationReport SubstitutionalEnblockEvaluator_v2 get_oov_additional_cost OovCostApplicator SubstitutionalEnblockEvaluator IndependentLinesEvaluator GradLogger BaseLogger ProgressLogger NoneLogger InfinityLogger scaled_int_str ModelStatistics ValidationWatcher evaluate_ evaluate prepare_inputs train_no_transpose train_ evaluate_no_transpose train repackage_hidden filelist_to_tokenized_splits BatchFilter TransposeWrapper filenames_file_to_filenames init_seeds CudaStream filelist_to_objects epoch_summary InfiniNoneType reorg_single TensorReorganizer Singleton ParalelIvecAppender HistoryIvecAppender CheatingIvecAppender load IvecExtractor translate get_oovs main main main load_module_extra main MyUnpickler main NextIndexProducer get_sampler get_max LineWriter Unbuffered sample main main ivec_ts_from_file ivec_ts_from_file ts_from_file temp_splits_from_fn ivec_ts_from_file main embs_from_words levenshtein_distance only_differ_in_suffix extract_unique_scores trial_scores_list relevant_prefix emb_from_string words_from_idx parse_oov_id intersection read_latt read_pick parse_line translate_latt_to_model main spk_sess select_hidden_state_to_pass write_best dict_argmin Logger main main main ivec_ts_from_file temp_splits_from_fn main lstm_output_from_hidden train run_tests is_iterable skipIfNoLapack get_cpu_type get_gpu_type TestCase suppress_warnings download_file to_gpu freeze_rng_state iter_indices parse_set_seed_once CategoricalCrossEntropyTests CategoricalEntropyTests CategoricalKLDTests ListSubsamplingTests DetPointTests EerComputationTests 
AreaComputationTests HistoryIvecAppenderTests CheatingIvecAppenderTests ParalelIvecAppenderTests ScaledIntRepreTests AlignTest MismatchExtractionTest FindingInMismatchesTest NumberOfErrorsTest TensorReorganizerTests IvecExtractorTests DummySMM Dummy_lstm TensorReorganizerTests_SRN Dummy_srn TensorReorganizerTests getStream MaskedDataCreationTests BatcherTests BatchBuilderTest get_stream IndependentSentecesTests DomainAdaptationSplitTests DomainAdaptationSplitFFMultiTargetTests TokenizedSplitTests TokenizedSplitSingleTargetTests FullSoftmaxDecoderTests CUDA_BatchNLLCorrectnessTests CustomInitialHiddenStateTestsBase FakeModel TorchFaceTests FakeDecoder CPU_BatchNLLCorrectnessTests CPU_CustomInitialHiddenStateTests BatchNLLCorrectnessTestsBase OutputExtractionTests VocabFromKaldiTests IndexGeneratorTest VocabularyTests OovCostApplicatorTests OovCostTests masked_fill_ log masked_fill_ log zeros_like length print size len stack unroll euclidean_distance cosine_similarity unroll_steps ivec_extractor split join split t sum categorical_entropy bows_to_ps sum filenames_file_to_filenames long enumerate append range len show subplots set_title _flip_ord axis colorbar set_window_title imshow flat zip numpy_accessor len t size init_hidden view load stdin format exp zip add_argument write lm per_word_logprobs eval ArgumentParser sb append parse_args item enumerate add_boundaries split detach ones int64 zeros tensor max range cat len size cuda narrow contiguous get vocab tokens_from_fn corruptor_factory TransposeWrapper LazyBatcher TemplSplitterClean CleanStreamsProvider len get vocab BatchingSlicingIterator st_size WordIdProvider float StreamingCorruptor len vocab int StatisticsCorruptor Corruptor Confuser TargetCorruptor confusions float InputTargetCorruptor len list extend shuffle tokenizer split append tensor split Tensor isinstance script decoder model float transpose masked_fill int Vocabulary values group fullmatch max compile enumerate deepcopy list insert append range 
len float abs range len append sorted sum len int list range len subsampling_indices len asarray split append emb_line_iterator append key_transform emb_line_iterator stack append int append word_distance zeros enumerate min full range append local_costs_from_strings path_from_local_costs ind_ali_from_path path_from_moves insertion_mismatch tuple equal_lenght_mismatch reversed append deepcopy extend single_pair_mismatch t size contiguous repackage_hidden hs_reorganizer model prepare_inputs TensorReorganizer neg_log_prob eval init_hidden repackage_hidden data model zero_grad numpy histo_summary hierarchical_scalar_summary log list hs_reorganizer replace prepare_inputs TensorReorganizer clip_grad_norm items next_step backward neg_log_prob named_parameters parameters train step init_hidden scalar_summary train_ train_ Tensor isinstance filenames_file_to_filenames filenames_file_to_filenames seed manual_seed exp format index_select cat view size scatter_ zero_ float seek write getvalue TemporaryFile defaultdict tokenizer data StatisticsCorruptor del_rate provide basicConfig i2w statistics Confuser confusions nb_tokens range vocab tokens_from_fn subs_rate form_input_targets Corruptor ins_rate len force_cpu torchscript_export frozen_lm source target save load_module_extra module_from_spec exec_module find_spec ModelStatistics print model_path Categorical NextIndexProducer get_sampler stdout seed_text Unbuffered seed evaluate loss_per_token prefix init_seeds cuda IndependentLinesEvaluator total_loss vocab target_seq_len in_len DomainAdaptationSplitFFMultiTarget TokenizedSplitFFBase vocab tokens_from_file EnblockEvaluator batch_size target_seq_len t size init_hidden model list min append range enumerate len commonprefix append range len index vocab data relevant_prefix model size t unk_oi init_hidden tensor_from_words append unk_oi i2w unk join readline split tuple split join list model_from info write dict_argmin val_loss_fn repackage_hidden valid ValidationWatcher 
nb_updates model zero_grad SGD OndemandDataProvider log_training_update workdir target_subs_rate clip log list epochs val_interval TransposeWrapper lr ProgressLogger TemplSplitterClean clip_grad_norm InputTargetCorruptor log_interval backward time_since_creation neg_log_prob parameters LazyBatcher epoch_summary label_smoothing train step init_hidden plain_factory to param_groups train_yaml yaml_factory sum max_softmaxes InfinityLogger shuffle max_batch_size time Batcher yaml_factory_noepoch plain_factory_noepoch data model backward size contiguous zero_grad log neg_log_prob t parameters unsqueeze clip_grad_norm step init_hidden cat seed manual_seed_all add_argument parse_known_args ArgumentParser manual_seed is_available main parse_set_seed_once get is_tensor data is_storage isinstance get_gpu_type type is_available set_rng_state get_rng_state iter assertRaisesRegex join read basename dirname exists join write StringIO seek write StringIO seek | # BrnoLM A neural language modeling toolkit built on PyTorch. This is a scientific piece of code, so expect rough edges. BrnoLM has so far powered language modeling in the following papers: * Beneš et al. [Text Augmentation for Language Models in High Error Recognition Scenario](https://arxiv.org/pdf/2011.06056.pdf) * Žmolíková et al. [BUT System for CHiME-6 Challenge](https://www.fit.vutbr.cz/research/groups/speech/publi/2020/zmolikova_CHiME_2020_abstract.pdf) * Beneš et al. [i-vectors in language modeling: An efficient way of domain adaptation for feed-forward models](http://www.fit.vutbr.cz/research/groups/speech/publi/2018/benes_interspeech2018_1070.pdf) * Beneš et al. [Unsupervised Language Model Adaptation for Speech Recognition with no Extra Resources](http://www.fit.vutbr.cz/research/groups/speech/publi/2019/benes_DAGA_2019.pdf) ## Installation To install, clone this repository and exploit the provided `setup.py`, e.g.: | 130 |
Babylonpartners/corrsim | ['word embeddings', 'semantic textual similarity'] | ['Correlations between Word Vector Sets', 'Correlation Coefficients and Semantic Textual Similarity'] | evaluation/corrset_eval.py similarity/__init__.py evaluation/conf_intervals.py evaluation/wordsim_eval.py evaluation/utils.py evaluation/infosim_eval.py senteval/engine.py senteval/__init__.py similarity/correlation.py setup.py senteval/utils.py similarity/mi.py evaluation/corrsim_eval.py similarity/baseline.py similarity/cka.py senteval/sts.py batcher prepare batcher prepare batcher prepare batcher prepare print_frac_normal get_word_vec_path_by_name get_wordvec create_dictionary apsynp vector_correlations get_winner to_table get_wordvec correlation statistic max_ci_idxs to_ci_table apply_sim calculate_shapiro compute_ci cosine max_idxs calculate_similarities load_dataset load_file SE STS12Eval STS14Eval STS15Eval STSEval STS13Eval STS16Eval dotdict cosine create_dictionary cosine avg_cosine dcorr linear_kernel centering_matrix cka_factory gaussian_kernel apsynp pearson kendall _apsynp max_spearman apsyn spearman ksg_factory get_similarity_by_name get_wordvec word2id word_vec_name create_dictionary get_word_vec_path_by_name append zeros wvec_dim get items sorted list append enumerate print_frac_normal list format info values len debug sum len rankdata power mean read_csv glob load_file append sim_func print apply len set unique values shapiro combinations list items print zip ci spearmanr set get_wordvec compute_ci calculate_shapiro calculate_similarities correlation append float enumerate append float list items join list items max_idxs append get_winner join items list append get __delitem__ __setitem__ mean exp linear_kernel sqrt median diag mean mean mean mean power rankdata max | > **Note** > This repository is no longer actively maintained by Babylon Health. For further assistance, reach out to the paper authors. 
# CorrSim
**CorrSim** is an evaluation framework and a collection of statistical similarity measures for word vectors described in Vitalii Zhelezniak, Aleksandar Savkov, April Shen, and Nils Y. Hammerla. *Correlation Coefficients and Semantic Textual Similarity, NAACL-HLT 2019.*
**CorrSet** is a collection of multivariate statistical similarity measures for word vectors described in Vitalii Zhelezniak, April Shen, Daniel Busbridge, Aleksandar Savkov, and Nils Y. Hammerla. *Correlations between Word Vector Sets, EMNLP-IJCNLP 2019.*
**InfoSim** is a collection of mutual information similarity measures for word vectors described in Vitalii Zhelezniak, Aleksandar Savkov, and Nils Y. Hammerla. *Estimating Mutual Information Between Dense Word Embeddings, ACL 2020.*
## Dependencies | 131
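To make the correlation-based similarity idea concrete, here is a minimal sketch of one such measure: the Pearson correlation between mean-pooled sentence vectors. This is an illustration only, not the framework's exact API — the repository's real implementations live under `similarity/` (e.g. `correlation.py`, `cka.py`, `mi.py`), and the function name and array shapes below are assumptions.

```python
import numpy as np

def avg_pearson(x, y):
    """Pearson correlation between mean-pooled sentence vectors.

    x, y: arrays of word vectors with shape (n_words, dim).
    Illustrative sketch of a correlation-based similarity measure.
    """
    u = x.mean(axis=0)          # mean-pooled sentence vector
    v = y.mean(axis=0)
    u = u - u.mean()            # center each vector by its own mean
    v = v - v.mean()
    # Pearson r = cosine similarity of the centered vectors
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))
```

Treating the coordinates of each pooled vector as paired samples, this reduces to the cosine similarity of the centered vectors, which is exactly the Pearson correlation coefficient.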
Babylonpartners/fuzzymax | ['word embeddings', 'semantic textual similarity'] | ["Don't Settle for Average, Go for the Max: Fuzzy Sets and Max-Pooled Word Vectors"] | similarity/ablation.py evaluation/wmd.py senteval/__init__.py senteval/sts.py similarity/__init__.py evaluation/classical.py similarity/fuzzy.py senteval/utils.py evaluation/fuzzy_universes.py similarity/soft_card.py evaluation/constants.py evaluation/conf_intervals.py evaluation/fuzzy_eval.py evaluation/utils.py evaluation/sif.py senteval/engine.py similarity/classical.py setup.py similarity/baseline.py batcher prepare batcher prepare batcher prepare batcher prepare batcher prepare load_wordvec_matrix get_wordvec _get_word_weight compute_pc create_dictionary _get_word_freq_map get_word_vec_path_by_name batcher prepare SE STS12Eval STS14Eval STS15Eval STSEval STS13Eval STS16Eval dotdict cosine create_dictionary sum_jaccard max_cosine avg_jaccard dynamax_cosine cosine avg_cosine set_jaccard bag_jaccard max_jaccard dynamax_jaccard fuzzify dynamax_otsuka dynamax_dice fbow_jaccard_factory soft_cardinality munion sc_jaccard union ssum get_similarity_by_name norm get_wordvec word2id word_vec_name create_dictionary word_count_path get_word_vec_path_by_name append zeros wvec_dim batcher compute_pc T mean dot vstack pc get items sorted list append enumerate _get_word_freq_map len format info get len format info TruncatedSVD fit get __delitem__ __setitem__ max maximum vstack fuzzify mean sum maximum minimum minimum sum maximum mean set Counter dot T max maximum minimum maximum fuzzify vstack sum minimum sum vstack fuzzify minimum sum vstack fuzzify minimum sum max maximum munion soft_cardinality norm T dot power clip Counter | > **Note** > This repository is no longer actively maintained by Babylon Health. For further assistance, reach out to the paper authors. 
# FuzzyMax FuzzyMax is an evaluation framework and a collection of fuzzy set similarity measures for word vectors described in Vitalii Zhelezniak, Aleksandar Savkov, April Shen, Francesco Moramarco, Jack Flann, and Nils Y. Hammerla, [*Don't Settle for Average, Go for the Max: Fuzzy Sets and Max-Pooled Word Vectors, ICLR 2019.*](https://openreview.net/forum?id=SkxXg2C5FX) ## Similarity Measures Word vectors alone are sufficient to achieve excellent performance on the semantic textual similarity tasks (STS) when sentence representations and similarity measures are derived using the ideas from fuzzy set theory. The two important special cases described in the paper are **MaxPool-Jaccard** ```python import numpy as np | 132 |
Babylonpartners/rgat | ['graph attention'] | ['Relational Graph Attention Networks'] | rgat/ops/sparse_ops.py rgat/ops/math_ops.py rgat/layers/relational_graph_convolution.py rgat/layers/__init__.py examples/rdf/example.py examples/batching/example_eager.py examples/batching/example_static.py rgat/layers/relational_graph_attention_logits.py rgat/utils/graph_utils.py rgat/layers/basis_decomposition_dense.py setup.py examples/rdf/inputs.py rgat/layers/graph_utils.py rgat/layers/relational_graph_attention.py rgat/datasets/rdf.py main _built_relational_support _build_support get_batch_of_features_supports_values _built_relational_support _build_support main get_architecture get_batch_of_features_supports_values RGATNModel model_fn RGCNModel main get_relations_classes get_input_fn get_splits sp2tfsp _build_support get_dataset get_graph_file_path get_dataset_file_paths _read_tsv normalise_matrix _get_rdf_dataset_helper to_unicode RDFReader BasisDecompositionDense AttentionStyles AttentionModes HeadAggregation RelationalGraphAttention RelationalGraphAttentionLogits RelationalGraphConv batched_sparse_tensor_to_sparse_block_diagonal get_shape batched_sparse_dense_matmul indices_expand sparse_diagonal_matrix indices_expand_0 sparse_squeeze_0 _indices triple_from_coo batch_of_relational_supports_to_support relational_supports_to_support uniform cast int32 info seed items list format batch_of_relational_supports_to_support SparseTensor sparse_reorder rgat_layer RGAT allclose concat set_random_seed info triple_from_coo get_batch_of_features_supports_values len astype sparse_reorder rgat_layer RGAT sparse_placeholder placeholder info concatenate run global_variables_initializer get_architecture Session get_dataset get_global_step model_class sparse_concat model minimize sparse_softmax_cross_entropy AdamOptimizer gather argmax RunConfig train_and_evaluate Estimator get_input_fn HParams get_relations_classes transpose tocoo validate sp2tfsp get_dataset get_splits argmax format 
todense shape info zeros array len join join format info join format isfile get_graph_file_path get_dataset_file_paths _get_rdf_dataset_helper dirname save info join sorted format tocsr ones csr_matrix transpose dict triples normalise_matrix split info empty array enumerate len list sorted format lil_matrix tocsr _build_support tolist _get_indices_names identity set RDFReader info relationList union len flatten zeros diags shape isinstance as_list enumerate len type format ValueError list format ValueError OrderedDict type keys | > **Note** > This repository is no longer actively maintained by Babylon Health. For further assistance, reach out to the paper authors. # Relational Graph Attention Networks A TensorFlow implementation of Relational Graph Attention Networks for semi-supervised node classification and graph classification tasks introduced in our paper [Relational Graph Attention Networks](https://openreview.net/forum?id=Bklzkh0qFm). It is compatible with static and eager execution modes. Contact [dan.busbridge@babylonhealth.com](mailto:dan.busbridge@babylonhealth.com) for comments and questions. <img align="left" src="images/argat.png"> ## Installation To install `rgat`, run: ``` $ pip install git+git://github.com/Babylonpartners/rgat.git | 133 |
Baileyswu/pytorch-hmm-vae | ['speech recognition', 'noisy speech recognition', 'distant speech recognition'] | ['The PyTorch-Kaldi Speech Recognition Toolkit'] | kaldi_decoding_scripts/utils/nnet/make_lstm_proto.py data_io.py core.py myDNN.py kaldi_decoding_scripts/utils/nnet/gen_hamm_mat.py kaldi_decoding_scripts/utils/nnet/make_nnet_proto.py kaldi_decoding_scripts/utils/filt.py vae.py kaldi_decoding_scripts/utils/nnet/make_blstm_proto.py save_raw_fea.py utils.py kaldi_decoding_scripts/utils/nnet/gen_dct_mat.py quaternion_neural_networks.py tune_hyperparameters.py neural_networks.py kaldi_decoding_scripts/utils/reverse_arpa.py run_exp.py plot_acc_and_loss.py kaldi_decoding_scripts/utils/nnet/gen_splice.py kaldi_decoding_scripts/utils/nnet/make_cnn_proto.py kaldi_decoding_scripts/utils/nnet/make_cnn2d_proto.py extract_data_from_shared_list convert_numpy_to_torch run_nn run_nn_refac01 read_next_chunk_into_shared_list_with_subprocess load_counts UnknownMatrixHeader _read_vec_flt_binary open_or_fd _read_mat_ascii read_vec_int_ark context_window_old read_vec_flt_scp UnknownVectorHeader read_cntime_ark load_chunk read_vec_flt_ark write_mat read_cntime UnsupportedDataType write_vec_int BadInputFormat read_post_ark SubprocessFailed write_vec_flt read_vec_int read_mat load_dataset BadSampleSize read_vec_flt read_post_rxspec read_post_scp read_ali_ark read_lab_fea_refac01 _read_vec_flt_riff _read_mat_binary read_key read_cnet_ark _read_compressed_mat read_segments_as_bool_vec read_lab_fea context_window popen read_mat_scp read_post read_mat_ark PositionwiseFeedForward MultiHeadAttention ResidualFeedForwardNet MultiHeadCrossAttention ScaledDotProductAttention ResidualFeedFowardBlock CNN RNN liGRU SincNet LayerNorm LSTM_cudnn GRU_cudnn channel_averaging SincConv_fast GRU FusionLinearConv minimalGRU act_fun MLP PASE flip fusionRNN_jit SincConv SRU liGRU_layer LSTM RNN_cudnn logMelFb QuaternionLinearAutograd get_k QLSTM act_fun unitary_init random_init QuaternionLinearFunction 
quaternion_linear check_input quaternion_init get_r affect_init get_i QuaternionLinear flip get_j _max_nr_of_parallel_forwarding_processes _is_first_validation _run_forwarding_in_subprocesses nth_replace_string run_command compute_cw_max create_curves run_shell read_args_command_line check_cfg parse_model_field create_lists get_chunks_after_which_to_validate optimizer_init create_block_diagram run_shell_display dump_epoch_results is_sequential export_loss_acc_to_txt split_chunks write_cfg_chunk expand_section_proto get_val_cfg_file_path expand_str_ep create_block_connection get_val_lst_file_path compute_avg_performance check_consistency_with_proto parse_fea_field change_lr_cfg check_field expand_section get_all_archs get_val_info_file_path forward_model terminal_node_detection model_init create_configs parse_lab_field dict_fea_lab_arch is_sequential_dict do_validation_after_chunk _get_val_file_name_base shift check_cfg_fields cfg_item2sec list_fea_lab_arch forward_model_refac01 compute_n_chunks progress VAE test loss_function reparameterize train fix_filt_step Glorot start Thread join float cuda view _optimization_step load_counts strtobool convert_numpy_to_torch write_mat _prepare_input log _write_info_file list _get_dim_from_data_set len map _read_chunk_specific_config sum range detach extract_data_from_shared_list close _get_batch_size_from_config _get_batch_config read_next_chunk_into_shared_list_with_subprocess forward_model join time _save_model is_sequential_dict _initialize_random_seed shift _update_progress_bar _open_forward_output_files_and_get_file_handles _load_model_and_optimizer numpy split load_counts strtobool open_or_fd zero_grad DataParallel numpy save write_mat round cuda max log seed str list optimizer_init set_device len exit map load_state_dict sum current_device range detach state_dict Thread replace ConfigParser close start manual_seed item float keys model_init forward_model load int read join time backward is_sequential_dict contiguous 
shift write randint step progress split _read_features_and_labels_with_kaldi _match_feature_and_label_sequence_lengths _chunk_features_and_labels _concatenate_features_and_labels _input_is_wav_file flatten empty range concatenate empty range roll min mean context_window load_dataset std column_stack _reorder_data_set _append_to_shared_list _read_features_and_labels _read_from_config _read_chunk_specific_config update int read list dict_fea_lab_arch ConfigParser is_sequential_dict write exit shuffle load_chunk compute_cw_max keys append column_stack rsplit int seek search popen split open start Popen open decode strip read_vec_int open_or_fd read_key decode remove read open_or_fd close frombuffer array split pack char write open_or_fd encode range len read_vec_flt open_or_fd split read_vec_flt open_or_fd read_key decode remove open_or_fd close array split read frombuffer unpack decode read frombuffer pack char write open_or_fd encode tobytes read_mat open_or_fd split read_mat open_or_fd read_key decode _read_mat_ascii _read_mat_binary open_or_fd decode read reshape startswith frombuffer decode vstack append array split dtype read reshape zeros frombuffer array pack char write open_or_fd encode tobytes print exit startswith open_or_fd read_post split open_or_fd read_post read_key decode read tolist open_or_fd close append frombuffer range read_cntime open_or_fd read_key decode read tolist open_or_fd close frombuffer loadtxt repeat astype size view contiguous check_input check_input check_input check_input matmul cat tuple reshape sqrt uniform prod range reshape tuple sqrt uniform prod rvs normal RandomState tuple reshape cos sqrt uniform sin randint prod range data init_func size type_as strtobool get_chunks_after_which_to_validate _get_nr_of_valid_per_epoch_from_config decode readline print append Popen decode write Popen flush communicate wait Popen int str findall group write exit nth_replace_string split append range compile len read ConfigParser mean append 
float sum int list write exit map float split append sections read list add_section ConfigParser remove_section set sections append keys range values len ConfigParser read list set list write exit any sections keys read ConfigParser exit check_cfg_fields expand_section open rstrip strtobool values open run_shell str list sorted parse_model_field len exit create_block_diagram append sum range replace check_consistency_with_proto parse_fea_field sections join items parse_lab_field int read write split findall makedirs write sections exit append range len list _partition_chunks append get_chunks_after_which_to_validate _get_nr_of_valid_per_epoch_from_config format _get_val_lst_file_name _get_val_info_file_name _get_val_cfg_file_name strtobool max open str check_cfg list exit log10 ceil append range write_cfg_chunk expand_str_ep get_val_cfg_file_path format get_val_lst_file_path replace close get_all_archs float get_val_info_file_path keys int items do_validation_after_chunk write split compute_n_chunks len __add__ max open seed str sorted list map log10 reverse writelines ceil append split_chunks range format get_val_lst_file_path parse_fea_field close shuffle _get_validation_data_for_chunks _shuffle_forward_data int do_validation_after_chunk cfg_item2sec split len add_section str list sorted remove_section append range replace check_consistency_with_proto ConfigParser glob remove_option sections keys int read join items cfg_item2sec findall len sorted write exit sub append split write exit sub append split glob int sorted format read list replace ConfigParser len write exit findall float range append open list str list index append range len run_shell str read list remove replace create_block_connection ConfigParser findall append list replace strtobool len map cfg_item2sec findall range append split list replace strtobool len map cfg_item2sec findall range append split strtobool list keys strtobool int list max append keys NLLLoss list out_dim strtobool nn_class set 
eval import_module getattr train cuda list strtobool map SGD Adam RMSprop parameters float keys split list exp _get_network_output _get_labels_from_input randn_like mean _add_input_features_to_outs_dict shape _compute_layer_values float sum cat len list exp bool view isinstance randn_like mean shape long float sum keys cat len str list int print write close keys log10 ceil max open int write float round flush str asarray ndarray readlines makedirs savetxt split append float range len arange axis str use exit ylabel title savefig legend append export_loss_acc_to_txt range plot readlines clear print loadtxt xlabel write amax len find str read list ConfigParser set keys int write extend exit append float split exp randn_like exp view binary_cross_entropy pow sum format model backward print dataset zero_grad loss_function item to step enumerate len print eval format range with_glorot | # The PyTorch-Kaldi Speech Recognition Toolkit <img src="pytorch-kaldi_logo.png" width="220" img align="left"> PyTorch-Kaldi is an open-source repository for developing state-of-the-art DNN/HMM speech recognition systems. The DNN part is managed by PyTorch, while feature extraction, label computation, and decoding are performed with the Kaldi toolkit. This repository contains the last version of the PyTorch-Kaldi toolkit (PyTorch-Kaldi-v1.0). To take a look into the previous version (PyTorch-Kaldi-v0.1), [click here](https://bitbucket.org/mravanelli/pytorch-kaldi-v0.0/src/master/). If you use this code or part of it, please cite the following paper: *M. Ravanelli, T. Parcollet, Y. Bengio, "The PyTorch-Kaldi Speech Recognition Toolkit", [arXiv](https://arxiv.org/abs/1811.07453)* ``` @inproceedings{pytorch-kaldi, title = {The PyTorch-Kaldi Speech Recognition Toolkit}, author = {M. Ravanelli and T. Parcollet and Y. Bengio}, | 134 |
Bala93/Context-aware-segmentation | ['medical image segmentation', 'semantic segmentation'] | ['A context based deep learning approach for unbalanced medical image segmentation'] | UNet-based-segmentation/prostate/losses.py GAN-based-segmentation/prostate/img_mask_transform.py dataset_creation/prepare_dataset_prostrate_dynamic_gan.py UNet-based-segmentation/prostate/utils.py GAN-based-segmentation/cardiac/train_global.py UNet-based-segmentation/prostate/dataset.py UNet-based-segmentation/cardiac/dataset.py UNet-based-segmentation/prostate/models.py GAN-based-segmentation/prostate/models.py UNet-based-segmentation/prostate/train_local.py GAN-based-segmentation/cardiac/dataset.py dataset_creation/prepare_dataset_cardiac_dynamic_gan.py GAN-based-segmentation/cardiac/models.py UNet-based-segmentation/prostate/train_local_dyn.py UNet-based-segmentation/cardiac/train_local_dyn.py GAN-based-segmentation/cardiac/train_local.py GAN-based-segmentation/prostate/dataset.py GAN-based-segmentation/prostate/train_global.py GAN-based-segmentation/prostate/train_local.py UNet-based-segmentation/cardiac/train_local.py GAN-based-segmentation/prostate/train_local_dyn.py UNet-based-segmentation/prostate/train_global.py UNet-based-segmentation/cardiac/losses.py UNet-based-segmentation/cardiac/models.py GAN-based-segmentation/cardiac/train_local_dyn.py UNet-based-segmentation/cardiac/train_global.py UNet-based-segmentation/cardiac/utils.py GAN-based-segmentation/prostate/prepare_dataset_prostate_dynamic_gan.py get_file_dim_list get_file_dim_list TrainDataDynamicLocal TrainDataStaticLocal TrainData ValidData DynamicLocalDiscriminator DynamicContextDiscriminator UNetUpBlock UNet GlobalDiscriminator Discriminator StaticLocalDiscriminator UNetConvBlock StaticContextDiscriminator Concatenate save_model load_model evaluate build_model build_optim make_one_hot create_arg_parser train_epoch create_datasets build_discriminator create_data_loaders main visualize save_model load_model evaluate 
build_model build_optim make_one_hot create_arg_parser train_epoch create_datasets build_discriminator create_data_loaders main visualize save_model load_model evaluate build_model build_optim make_one_hot create_arg_parser train_epoch create_datasets build_discriminator create_data_loaders main visualize TrainDataDynamicLocal TrainDataStaticLocal TrainData ValidData RandomRotation hflip RandomVerticalFlip rotate RandomHorizontalFlip vflip DynamicLocalDiscriminator DynamicContextDiscriminator UNetUpBlock UNet GlobalDiscriminator Discriminator StaticLocalDiscriminator UNetConvBlock StaticContextDiscriminator Concatenate get_file_dim_list save_model load_model evaluate build_model build_optim make_one_hot create_arg_parser train_epoch create_datasets build_discriminator create_data_loaders main visualize save_model load_model evaluate build_model build_optim make_one_hot create_arg_parser train_epoch create_datasets build_discriminator create_data_loaders main visualize save_model load_model evaluate build_model build_optim make_one_hot create_arg_parser train_epoch create_datasets build_discriminator create_data_loaders main visualize DatasetImageMaskGlobal DatasetImageMaskLocal FocalLoss LossMulti ConvRelu UNetModule Conv3BN UNet conv3x3 evaluate visualize DatasetImageMaskGlobal DatasetImageMaskLocal FocalLoss LossMulti ConvRelu UNetModule Conv3BN UNet conv3x3 evaluate visualize uint8 basename value glob File astype tqdm pad bbox append array regionprops train_path TrainData ValidData validation_path DataLoader create_datasets zero_grad criterionD device BCEWithLogitsLoss modelD len criterionG to detach perf_counter item float long enumerate NLLLoss backward modelG tqdm parameters train step add_scalar eval perf_counter eval save to device parameters to SGD device load data_parallel build_optim build_model parameters DataParallel build_discriminator load_state_dict lr Adam data zero_ scatter_ build_optim batch_size lr_gamma DataParallel visualize StepLR load_model 
range SummaryWriter format data_parallel build_model close resume mkdir info num_epochs checkpoint evaluate print train_epoch parameters lr_step_size exp_dir build_discriminator create_data_loaders step add_argument ArgumentParser TrainDataStaticLocal exp squeeze append range int save_model TrainDataDynamicLocal pad | # Context-aware-segmentation > [A context based deep learning approach for unbalanced medical image segmentation](https://arxiv.org/abs/2001.02387) | 135 |
Bala93/Recon-GLGAN | ['mri reconstruction'] | ['Recon-GLGAN: A Global-Local context based Generative Adversarial Network for MRI Reconstruction'] | models/gan/train.py common/utils.py data/mri_data.py models/gan/run.py common/evaluate.py models/recon_glgan/train.py models/recon_glgan/model.py models/recon_glgan/run.py models/gan/model.py evaluate psnr mse nmse Metrics ssim save_reconstructions SliceData SliceDataDev UNetUpBlock UNetConvBlock Discriminator UNet load_model build_model create_arg_parser create_data_loaders main run save_model load_model evaluate build_disc build_optim create_arg_parser build_generator train_epoch create_datasets create_data_loaders main visualize UNetUpBlock LDiscriminator UNet Discriminator UNetConvBlock ContextDiscriminator Concatenate load_model build_model create_arg_parser create_data_loaders main run save_model load_model evaluate build_disc build_optim create_arg_parser train_epoch create_datasets create_data_loaders build_gen main visualize Metrics iterdir roi_size items list SliceDataDev val_path DataLoader acceleration to device load data_parallel build_model DataParallel load_state_dict eval defaultdict load_model print create_data_loaders out_dir save_reconstructions checkpoint run add_argument ArgumentParser val_path train_path SliceDataDev acceleration SliceData create_datasets L1Loss model zero_grad device adv_weight BCEWithLogitsLoss len netD to detach criterion_bce criterion_L1 perf_counter item float enumerate backward tqdm parameters train step add_scalar eval perf_counter eval copyfile save to device Discriminator SGD parameters device to build_optim build_disc build_generator parameters lr Adam save_model build_optim lr_gamma DataParallel visualize StepLR build_disc range SummaryWriter data_parallel close resume mkdir info num_epochs evaluate build_generator train_epoch parameters lr_step_size exp_dir step append range roi_size int roi_size to device build_gen build_gen | ## [Recon-GLGAN: A Global-Local context 
based Generative Adversarial Network for MRI Reconstruction (Accepted at Machine Learning in Medical Image Reconstruction (MLMI), MICCAI Workshop)](https://arxiv.org/abs/1908.09262) ## ReconGLGAN illustration: ![](figures/Recon_ROI_illustration.jpg) ## ReconGLGAN architecture: ![](figures/ROI_GAN.jpg) ## Reconstruction qualitative comparison: From Left to Right: Ground Truth FS image, ZF image, GAN reconstructed image, Recon-GLGAN reconstructed image, ZF reconstruction error, GAN reconstruction error and Recon-GLGAN reconstruction error. From Top to Bottom: Images corresponding to different acceleration factors: 2x, 4x and 8x. | 136
BaoWangMath/DNN-DataDependentActivation | ['adversarial defense', 'data augmentation', 'adversarial attack'] | ['Adversarial Defense via Data Dependent Activation Function and Total Variation Minimization'] | Cifar10-Natural/WNLL.py Cifar10-Robust/main_PGD_ResNet_WNLL.py pyflann/build/lib.linux-x86_64-2.7/pyflann/util/weave_tools.py Cifar10-Robust/Attack_ResNet_WNLL_PGD.py pyflann/build/lib.linux-x86_64-2.7/pyflann/exceptions.py Cifar10-Natural/Cifar10.py pyflann/build/lib.linux-x86_64-2.7/pyflann/util/__init__.py pyflann/build/lib.linux-x86_64-2.7/pyflann/io/dat_dataset.py pyflann/build/lib.linux-x86_64-2.7/pyflann/bindings/__init__.py pyflann/build/lib.linux-x86_64-2.7/pyflann/io/dataset.py Cifar10-Natural/utils.py Cifar10-Natural/resnet.py pyflann/build/lib.linux-x86_64-2.7/pyflann/io/npy_dataset.py pyflann/build/lib.linux-x86_64-2.7/pyflann/index.py MNIST-Robust/Attack_CNN_WNLL_PGD.py pyflann/build/lib.linux-x86_64-2.7/pyflann/io/hdf5_dataset.py Cifar10-Robust/utils.py pyflann/build/lib.linux-x86_64-2.7/pyflann/io/__init__.py pyflann/build/lib.linux-x86_64-2.7/pyflann/__init__.py MNIST-Robust/WNLL.py MNIST-Robust/utils.py pyflann/TestPYFlann.py pyflann/build/lib.linux-x86_64-2.7/pyflann/bindings/flann_ctypes.py pyflann/setup.py MNIST-Robust/PGD_CNN_WNLL.py Cifar10-Robust/WNLL.py pyflann/build/lib.linux-x86_64-2.7/pyflann/io/binary_dataset.py ResidualBlock ResNet56 BasickBlock ResNetCifar10 Unfreeze_layer progress_bar freeze_layer freeze_All format_time init_params Unfreeze_All weight_GL weight_ann resnet110 resnet20 AttackPGD ResNet_Cifar Bottleneck conv3x3 resnet56 BasicBlock resnet110 resnet20 AttackPGD ResNet_Cifar Bottleneck conv3x3 resnet56 BasicBlock Unfreeze_layer progress_bar freeze_layer freeze_All format_time init_params Unfreeze_All weight_GL weight_ann CNN CNN AttackPGD Unfreeze_layer progress_bar freeze_layer freeze_All format_time init_params Unfreeze_All weight_GL weight_ann FLANNException CommandException FLANN set_distance_type define_functions 
FLANNParameters CustomStructure ensure_2d_array FlannLib load_flann_library load check save load save load check is_number save __missing_h5py load check save CStruct CModule normal constant isinstance kaiming_normal Conv2d bias modules BatchNorm2d weight Linear int time join format_time write append range flush len int parameters parameters parameters parameters list T exp arange todense print reshape len shape repmat array nonzero append nn max range tocsc FLANN T bicgstab shape zeros float sum range tocsc len ResNet_Cifar ResNet_Cifar ResNet_Cifar flann_set_distance_type property RandomState dirname abspath exec reshape require size read open tofile fromfile isfile list check values splitext float split savetxt loadtxt save | # DNN-DataDependentActivation This repository consists of PyTorch code for deep neural networks with graph interpolating function as output activation function ## External dependency: pyflann (https://github.com/primetang/pyflann) Place the pyflann library in your current directory to replace the pyflann folder ### Cifar10-Natural Code for reproducing results of naturally trained ResNets on the Cifar10 ### Cifar10-Robust Code for reproducing results of PGD adversarial training for ResNets on the Cifar10 ### MNIST-Robust Code for reproducing results of PGD adversarial training for Small-CNN on the MNIST | 137
BaoWangMath/DP-LSSGD | ['stochastic optimization'] | ['DP-LSSGD: A Stochastic Optimization Method to Lift the Utility in Privacy-Preserving ERM'] | RDP-Accountant/svrg_dp.py Logistic-Regression/Logistic_NonconstantLR_Eps10_Sigma0.py Logistic-Regression/Logistic_NonconstantLR_Eps10_Sigma1.py Logistic-Regression/LS_SGD.py RDP-Accountant/lssgd_dp.py RDP-Accountant/model_mnist.py RDP-Accountant/train_dpsgd_mnist.py RDP-Accountant/train_dpsvrg_mnist.py RDP-Accountant/svrg_dp1.py Logistic-Regression/utils.py RDP-Accountant/train_dplssgd_mnist.py Logistic-Regression/Grad_optimizer.py RDP-Accountant/utils.py RDP-Accountant/rdp_accountant_nn.py RDP-Accountant/sgd_dp.py Optimizer Linear Linear LSSGD format_time init_params progress_bar lssgd_dp ConvNet _log_print _log_sub _compute_log_a _compute_delta compute_rdp _compute_log_a_int _compute_rdp _compute_log_a_frac _log_erfc get_privacy_spent _compute_eps _log_add sgd_dp svrg_dp svrg_dp1 main main main format_time init_params progress_bar get_mean_and_std normal constant isinstance kaiming_normal Conv2d bias modules BatchNorm2d weight Linear int time join format_time write append range flush len int data rfft zeros_like zero_grad grad_dp_lay max cuda view BATCH_SIZE progress_bar numel normal_ size eval net enumerate parameters loss_function zeros step irfft len _log_add range log binom _log_sub binom _log_erfc sqrt abs log _log_add is_integer atleast_1d argmin exp atleast_1d nanargmin log isinf _compute_rdp array isscalar _compute_eps _compute_delta data max BATCH_SIZE progress_bar zero_grad grad_dp_lay normal_ parameters eval loss_function step cuda net enumerate len data zero_grad grad_dp_lay max cuda BATCH_SIZE progress_bar normal_ LARGE_BATCH_NUMBER range CrossEntropyLoss grad_dp_lay_full eval net enumerate deepcopy parameters loss_function step len data grad_full_cal zero_grad grad_mini_cal max cuda BATCH_SIZE progress_bar normal_ LARGE_BATCH_NUMBER CrossEntropyLoss eval zip net enumerate deepcopy parameters 
loss_function step len EPOCH BATCH_SIZE print SGD lssgd_dp parameters cuda compute_epsilon range LR_SGD CrossEntropyLoss sgd_dp clock NOISE_MULTIPLIER svrg_dp print DataLoader div_ zeros range len | # DP-LSSGD | 138 |
BaoWangMath/EnResNet | ['adversarial defense', 'adversarial attack'] | ['ResNets Ensemble via the Feynman-Kac Formalism to Improve Natural and Robust Accuracies'] | ResNet20/main_pgd_enresnet5_20.py WideResNet34-10/main_pgd_wideresnet34_10_Validation.py ResNet20/resnet_cifar.py ResNet20/Attack_PGD_EnResNet_5_20.py WideResNet34-10/utils.py ResNet20/utils.py WideResNet34-10/resnet_cifar.py WideResNet34-10/Attack_PGD_WideResNet.py | en_preactresnet20_cifar PreActBasicBlock en_preactresnet44_cifar conv3x3 Ensemble_PreAct_ResNet_Cifar PreActBottleneck PreAct_ResNet_Cifar en_preactresnet20_cifar PreActBasicBlock AttackPGD conv3x3 Ensemble_PreAct_ResNet_Cifar PreActBottleneck PreAct_ResNet_Cifar en_preactresnet20_cifar PreActBasicBlock en_preactresnet32_cifar en_preactresnet44_cifar conv3x3 Ensemble_PreAct_ResNet_Cifar PreActBottleneck en_preactresnet110_cifar PreAct_ResNet_Cifar Unfreeze_layer progress_bar freeze_layer freeze_All format_time init_params Unfreeze_All BasicBlock NetworkBlock WideResNet BasicBlock NetworkBlock AttackPGD WideResNet en_preactresnet20_cifar PreActBasicBlock en_preactresnet32_cifar en_preactresnet44_cifar conv3x3 Ensemble_PreAct_ResNet_Cifar PreActBottleneck en_preactresnet110_cifar PreAct_ResNet_Cifar Unfreeze_layer progress_bar freeze_layer freeze_All format_time init_params Unfreeze_All Ensemble_PreAct_ResNet_Cifar Ensemble_PreAct_ResNet_Cifar Ensemble_PreAct_ResNet_Cifar Ensemble_PreAct_ResNet_Cifar normal constant isinstance kaiming_normal Conv2d bias modules BatchNorm2d weight Linear int time join format_time write append range flush len int parameters parameters parameters parameters | # EnResNet This repository consists of PyTorch code for the paper: Bao Wang, Binjie Yuan, Zuoqiang Shi, Stanley J. Osher. EnResNet: ResNet Ensemble via the Feynman-Kac Formalism, arXiv:1811.10745, 2018 (https://arxiv.org/abs/1811.10745) The repo contains two subfolders for PGD adversarial training of ensembles of ResNet20 and WideResNet34-10, respectively. 
We interpret the adversarial vulnerability of ResNets as irregularity of the solution of the transport equation, and we propose to improve regularity of the decision boundary by adding diffusion to the transport equation. Please refer to Figure 4 of our [paper](https://arxiv.org/abs/1811.10745) for more details. <p align="center"> <img src="fig4.png" height="600"> </p> The resulting convection-diffusion equation can be solved by using the Feynman-Kac formula, which can be approximated by an ensemble of modified ResNets. <p align="center"> | 139
BaoWangMath/Graph-Structured-Recurrent-Neural-Nets- | ['adversarial defense', 'adversarial attack'] | ['ResNets Ensemble via the Feynman-Kac Formalism to Improve Natural and Robust Accuracies'] | src/srnn_tf.py src/FeatureGeneration.py src/Flu_Prediction.py ConvertSeriesToMatrix generate_data partition_to_three_parts partition_to_one_node_per_state load_graph_and_normalize DataReaderUSFlu edge_cell four_layer_lstm RNNPrediction edge_cell_2 node_cell img_enlarge SRNNBase _default_edge_cell TrainValSplit SRNNUndirected _default_node_cell InverseTransform SRNNDirected T read_csv int list print min floor read_csv zeros float sum max range enumerate list range read_csv len loadtxt transpose sum max range len asarray reshape append range len MultiRNNCell MultiRNNCell print InverseTransform MultiRNNCell int permutation astype flatten shape reshape | BaoWangMath/Graph-Structured-Recurrent-Neural-Nets- | 140 |
BardOfCodes/Seg-Unravel | ['semantic segmentation'] | ['Per-Pixel Feedback for improving Semantic Segmentation'] | utils.py seg_fix.py demo.py global_variables.py main set_caffe_path seg_fix embed_fixations_gif get_blob embed_fixations get_heatmap imwrite image ArgumentParser get_top_fixations get_heatmap open list parse_args TEST seg_fix dump get_blob close Net embed_fixations set_caffe_path unique embed_fixations_gif join remove add_argument get_fixations_at_all_layers network insert astype copy expand_dims pad swapaxes imread array imread FONT_HERSHEY_SIMPLEX copy shape pad imread axis set_visible heatmap addWeighted shape pad imshow savefig append imread sum Axes range asarray astype square copy add_axes sqrt empty int Heatmap set_axis_off figure zeros | BardOfCodes/Seg-Unravel | 141 |
BardOfCodes/pytorch_deeplab_large_fov | ['semantic segmentation'] | ['Semantic Image Segmentation with Deep Convolutional Nets and Fully Connected CRFs'] | deeplab_large_fov.py test.py converter.py train_v1.py utils.py train_v2.py Net blur get_data_from_chunk_v2 get_parameters resize_label_batch crop get_test_data_from_chunk_v2 rotate adjust_learning_rate read_file chunker flip modules isinstance Conv2d Variable transpose from_numpy UpsamplingBilinear2d interp zeros int uniform zeros int warpAffine cos pi copy getRotationMatrix2D shape sin zeros abs int resize_label_batch transpose crop astype uniform cuda zeros imread flip enumerate transpose astype cuda zeros imread crop enumerate | BardOfCodes/pytorch_deeplab_large_fov | 142 |
Bartzi/kiss | ['scene text recognition'] | ['KISS: Keeping It Simple for Scene Text Recognition'] | common/datasets/text_recognition_image_dataset.py common/utils.py train_utils/create_video.py train_utils/disable_chain.py datasets/text_recognition/combine_npz_datasets.py optimizers/radam.py datasets/text_recognition/filter_word_length.py common/datasets/text_recognition_eval_dataset.py text/lstm_text_localizer.py transformer/encoder_decoder.py train_utils/tensorboard_utils.py datasets/text_recognition/balance_dataset.py train_utils/module_loading.py train_text_recognition.py updaters/transformer_text_updater.py datasets/text_recognition/merge_char_maps.py insights/visual_backprop.py train_utils/datatypes.py evaluation/text_recognition_evaluator.py datasets/text_recognition/json_to_npz.py common/datasets/__init__.py datasets/text_recognition/focused_scene_text_to_ours.py evaluation/custom_mean_evaluator.py evaluation/rotation_detection_evaluator.py functions/rotation_droput.py insights/bbox_plotter.py common/datasets/test_augmentations.py text/transformer_recognizer.py evaluate.py insights/tensorboard_gradient_histogram.py transformer/decoder.py image_manipulation/image_masking.py transformer/position_wise_feed_forward.py train_utils/dataset_utils.py text/text_localizer.py datasets/text_recognition/extract_char_map.py datasets/text_recognition/jaderberg_to_ours.py text/transformer_text_localizer.py transformer/encoder.py datasets/text_recognition/mat_to_json.py datasets/text_recognition/crop_words_from_oxford.py train_utils/show_progress.py run_eval_on_all_datasets.py common/datasets/text_recognition_char_crop_dataset.py train_utils/backup.py train_utils/logger.py transformer/embedding.py transformer/positional_encoding.py transformer/utils.py datasets/text_recognition/create_train_val_splits.py config/recognition_config.py train_utils/autocopy.py train_utils/match_bbox.py iou/bbox.py common/datasets/sub_dataset.py text/text_recognizer.py transformer/__init__.py 
transformer/attention.py datasets/text_recognition/combine_json_datasets.py datasets/text_recognition/convert_words_in_gt_to_chars.py datasets/text_recognition/filter_non_alpha_numeric.py train_utils/updater_trigger.py data_server.py common/dataset_management/dataset_server.py datasets/text_recognition/filter_bad_images.py insights/text_recognition_bbox_plotter.py common/datasets/image_dataset.py commands/interactive_train.py resnet/resnet_gn.py datasets/text_recognition/h5_to_json.py main Evaluator plot_eval_results determine_snapshot_names find_best_result main load_pretrained_model InteractiveTrain open_interactive_prompt TransformParameterRegressionLossCalculator MaxAreaLossCalculator AspectRatioLossCalculator LossCalculator DirectionLossCalculator SmoothIOUCalculator random_pairs PairWiseOverlapLossCalculator MinAreaLossCalculator OutOfImageLossCalculator IOUCalculator BaseImageDataset BaseNPZImageDataset aspect_ratio_preserving_resize ImageDataset prepare_image SubDataset main TextRecognitionEvenCharCropDataset TextRecognitionImageCharCropDataset TextRecognitionEvaluationDataset TextRecognitionImageDataset scatter_dataset DatasetServer SharedNPZMemoryManager DatasetClient parse_config balance_json_file main balance_npz_file main main main main main split_text get_bboxes get_relative_box_position main main test_image filter_npz_file main filter_npz_file main filter_json_file main save_image split_text get_bboxes build_file_name main can_item_be_copied split_text get_bboxes main CustomMeanEvaluator RotationMAPEvaluator TextRecognitionEvaluator TextRecognitionTensorboardEvaluator TextRecognitionTestFunction TextRecognitionEvaluatorFunction rotation_dropout RotationDropout ImageMasker InhibitionOfReturnImageMasker BBOXPlotter get_next_color ObjectnessBBOXPlotter TensorboardGradientPlotter TextRecognitionBBoxPlotter VisualBackprop AxisAlignedBBox ComparableMixin Vec2 BBox RAdamRule _learning_rate RAdam AdamHyperparameter _get_intermediate_dtype _inplace_axpby 
BasicA BottleNeckB BasicB ResNet BottleNeckBlock BottleNeckA BasicBlock LSTMTextLocalizer SliceLSTMLocalizer TextLocalizer TextRecognizer TransformerTextRecognizer TransformerTextLocalizer maybe_copy change_device_of recurse_copy DeviceChanger restore_backup get_import_info get_module get_backup_info get_definition_filepath get_definition_filename get_filter create_gif run_animation_process make_video create_video concatenate_gifs concatenate_videos retry_get_example disable_chains ChainUpdateDisabler Logger get_bbox_corners get_aabb_corners bbox_coords_to_feature_coords LayerExtractor calculate_receptive_fields get_class load_module ProgressWindow ImageDataHandler ImageServer TensorboardEvaluator UpdaterTrigger MultiHeadedAttention Decoder DecoderLayer Embedding Encoder EncoderLayer EncoderDecoder PositionalEncoding PositionwiseFeedForward subsequent_mask SublayerConnection get_conv_feature_encoder_decoder build_decoder get_encoder_decoder build_transform_param_decoder TransformerRecognizerOnlyUpdater TransformerTextRecognitionUpdater DatasetServer load_data items list defaultdict join plot savefig legend zip get_next_color sorted list map set zip config TextRecognitionEvaluatorFunction TextRecognitionBBoxPlotter batch_size TextRecognitionImageDataset Trainer bos_token num_chars_per_word parse_config MultithreadIterator TensorboardGradientPlotter Logger ArgumentParser GradientClipping run LSTMTextLocalizer log_name num_classes resume_recognizer Size setup use_memory_manager TransformerTextRecognitionUpdater TextRecognitionTensorboardEvaluator connect scatter_dataset num_words_per_image RAdam dump_graph getattr dirname add_hook append parse_args load_image test_image load_pretrained_model SummaryWriter format snapshot char_map relpath TransformerTextRecognizer open_interactive_prompt create_multi_node_optimizer target_size ProgressBar realpath resume save_gradient_information isoformat ExponentialShift create_communicator __name__ DatasetClient PrintReport join 
log_interval log_dir blank_label print resume_localizer add_argument snapshot_object extend intra_rank gpu makedirs InteractiveTrain start Thread pop_random height LANCZOS new min width paste resize mode LANCZOS aspect_ratio_preserving_resize get_dtype resize array fromarray list num_iterations BaseNPZImageDataset transpose astype dest tqdm get_example save npz_file range permutation size len send_obj SubDataset bcast_obj range recv_obj int read ConfigParser zip setattr conversion_func pop defaultdict insert tqdm pprint append balance_json_file balance_npz_file datasets defaultdict gt_files pop int val_ratio shuffle gt_file len append _make transpose to_aabb left Vec2 top split_text zip insert get_bboxes loadmat add_bos_token splitext print defaultdict filter_npz_file pop insert npz max_word_length func parent relative_to strip icdar_gt icdar_image_dir Path join splitext split fromarray build_file_name save items isinstance dict keys array set char_maps union values pow beta2 beta1 mdarray inplace_axpby isinstance HyperparameterProxy copy get_module get_backup_info join exec eval getattr abspath load_module setattr extract_number ImageData animation_creator max list sorted exit mkdtemp NamedTemporaryFile append get_filter format close concatenate_function listdir compile join print path filter join sorted print listdir run print join run join print close NamedTemporaryFile flush run height shape get_item width float clip minimum height maximum shape get_item width float clip data floor clip ceil pw get_receptive_field isinstance sy ph sx spec_from_file_location exec_module module_from_spec join format eval abspath load_module uint8 astype deepcopy PositionwiseFeedForward Embedding Decoder MultiHeadedAttention PositionalEncoding DecoderLayer deepcopy PositionwiseFeedForward Decoder MultiHeadedAttention PositionalEncoding DecoderLayer deepcopy DecoderLayer PositionwiseFeedForward Embedding Decoder Sequential Encoder MultiHeadedAttention PositionalEncoding 
EncoderDecoder EncoderLayer deepcopy DecoderLayer PositionwiseFeedForward Embedding Decoder Sequential Encoder MultiHeadedAttention PositionalEncoding EncoderDecoder EncoderLayer | # KISS Code for the paper [KISS: Keeping it Simple for Scene Text Recognition](https://arxiv.org/abs/1911.08400). This repository contains the code you can use in order to train a model based on our paper. You will also find instructions on how to access our model and also how to evaluate the model. # Pretrained Model You can find the pretrained model [here](https://bartzi.de/research/kiss). Download the zip and put into any directory. We will refer to this directory as `<model_dir>`. # Prepare for using the Code - make sure you have at least Python **3.7** installed on your system | 143 |
Bartzi/see | ['scene text detection', 'scene text recognition'] | ['SEE: Towards Semi-Supervised End-to-End Scene Text Recognition'] | utils/create_video.py chainer/insights/svhn_bbox_plotter.py chainer/functions/disable_shearing.py chainer/train_svhn.py datasets/fsns/transform_gt.py chainer/insights/lstm_per_step_plotter.py datasets/svhn/filter_large_images.py chainer/train_mnist.py chainer/metrics/svhn_ctc_metrics.py chainer/datasets/concatenated_dataset.py chainer/utils/crop_images.py chainer/metrics/ctc_metrics.py chainer/utils/baby_step_curriculum.py datasets/fsns/slice_fsns_dataset.py chainer/train_text_recognition.py datasets/fsns/download_fsns.py chainer/utils/dict_eval.py datasets/svhn/create_svhn_dataset.py datasets/fsns/extract_words.py datasets/svhn/create_svhn_csv_gt.py datasets/svhn/create_svhn_dataset_4_images.py chainer/text_recognition_demo.py datasets/svhn/prepare_svhn_crops.py chainer/insights/text_rec_bbox_plotter.py chainer/utils/logger.py chainer/commands/interactive_train.py datasets/fsns/swap_classes.py chainer/insights/fsns_bbox_plotter.py datasets/fsns/render_text_on_signs.py chainer/insights/visual_backprop.py datasets/svhn/svhn_dataextract_tojson.py chainer/metrics/softmax_metrics.py chainer/optimizers/multi_net_optimizer.py chainer/datasets/sub_dataset.py utils/show_progress.py chainer/metrics/lstm_per_step_metrics.py chainer/evaluation/evaluator.py chainer/utils/multi_accuracy_classifier.py chainer/models/fsns_resnet.py datasets/fsns/change_file_names.py chainer/functions/disable_translation.py chainer/utils/create_gif.py datasets/fsns/transform_back_to_single_line.py chainer/models/text_recognition.py chainer/insights/textrec_bbox_plotter.py chainer/utils/plotting.py chainer/models/fsns.py chainer/utils/train_utils.py chainer/train_fsns.py datasets/fsns/tfrecord_to_image.py chainer/functions/rotation_droput.py chainer/evaluate.py chainer/insights/bbox_plotter.py chainer/fsns_demo.py chainer/datasets/file_dataset.py 
chainer/metrics/loss_metrics.py chainer/metrics/textrec_metrics.py chainer/models/svhn.py chainer/metrics/svhn_softmax_metrics.py chainer/utils/datatypes.py chainer/utils/intelligent_attribute_shifter.py chainer/models/ic_stn.py strip_prediction extract_bbox get_class_and_module build_fusion_net create_network load_module build_recognition_net load_image build_localization_net strip_prediction extract_bbox get_class_and_module build_fusion_net create_network load_module build_recognition_net load_image build_localization_net log_postprocess mnist_accuracy mnist_loss log_postprocess log_postprocess InteractiveTrain open_interactive_prompt ConcatenatedDataset OpencvTextRecFileDataset TextRecFileDataset FileBasedDataset PaddableSubDataset split_dataset_random split_dataset_n split_dataset split_dataset_n_random SVHNEvaluator TextRecognitionEvaluator Evaluator FSNSEvaluator DisableShearing disable_shearing DisableTranslation disable_translation rotation_dropout RotationDropout BBOXPlotter FSNSBBOXPlotter LSTMPerStepBBOXPlotter SVHNBBoxPlotter TextRectBBoxPlotter TextRecBBOXPlotter VisualBackprop CTCMetrics LossMetrics PerStepLSTMMetric SoftmaxMetrics SVHNCTCMetrics SVHNSoftmaxMetrics TextRecCTCMetrics TextRectMetrics TextRecSoftmaxMetrics FSNSMultipleSTNLocalizationNet FSNSNet ResnetBlock FSNSRecognitionNet FSNSResnetReuseNet FSNSSoftmaxRecognitionNet FSNSSoftmaxRecognitionResNet FSNSSingleSTNLocalizationNet FSNSResNetLayers FSNSRecognitionResnet InverseCompositionalLocalizationNet SVHNRecognitionNet SVHNCTCRecognitionNet SVHNNet SVHNLocalizationNet TextRecognitionNet TextRecNet MultiNetOptimizer BabyStepCurriculum make_gif makedelta create_loop_header intToBin IntelligentAttributeShifter Logger Classifier LogPlotter AttributeUpdater get_definition_filepath EarlyStopIntervalTrigger add_default_arguments get_trainer FastEvaluatorBase get_concat_and_pad_examples concat_and_pad_examples TwoStateLearningRateShifter get_fast_evaluator get_definition_filename 
extract_words_from_gt get_image random_crop get_image_paths get_labels save_image find_font_size find_way_to_common_dir intersects intersects_bbox GaussianSVHNCreator intersection is_close overlap SVHNCreator BBox SVHNDatasetCreator get_images merge_bboxes enlarge_bbox DigitStructFile get_filter create_video make_video ProgressWindow ImageDataHandler ImageServer spec_from_file_location exec_module module_from_spec join format get_class_and_module model_dir eval build_fusion_net abspath load_module build_recognition_net to_gpu gpu build_localization_net append empty hstack reshape height clip width full update data reshape timesteps flatten shape get_array_module zip append split_axis separate softmax_cross_entropy data reshape accuracy timesteps flatten shape get_array_module zip append split_axis separate InteractiveTrain start Thread PaddableSubDataset len permutation len len permutation len int getdata write subtract_modulo copy getbbox crop join sorted print _make append compile snapshot observe_lr extend ProgressBar EarlyStopIntervalTrigger Trainer dump_graph Logger PrintReport add_argument join tqdm add split walk extend truetype multiline_textsize choice list height min choice width range join str format get_subdir save makedirs append extend len append split height width left overlap top height min extend width left label max top height width get_filter join extract_number list sorted ImageData print mkdtemp close NamedTemporaryFile path filter create_video append listdir max compile run join print close NamedTemporaryFile flush run | # SEE: Towards Semi-Supervised End-to-End Scene Text Recognition Code for the AAAI 2018 publication "SEE: Towards Semi-Supervised End-to-End Scene Text Recognition". You can read a preprint on [Arxiv](http://arxiv.org/abs/1712.05404) # Installation You can install the project directly on your PC or use a Docker container ## Directly on your PC 1. Make sure to use Python 3 2. 
It is a good idea to create a virtual environment ([example for creating a venv](http://docs.python-guide.org/en/latest/dev/virtualenvs/)) 3. Make sure you have the latest version of [CUDA](https://developer.nvidia.com/cuda-zone) (>= 8.0) installed 4. Install [CUDNN](https://developer.nvidia.com/cudnn) (> 6.0) 5. Install [NCCL](https://developer.nvidia.com/nccl) (> 2.0) [installation guide](https://docs.nvidia.com/deeplearning/sdk/nccl-archived/nccl_2212/nccl-install-guide/index.html) | 144 |
Bartzi/stn-ocr | ['optical character recognition', 'scene text detection', 'scene text recognition'] | ['STN-OCR: A single Neural Network for Text Detection and Text Recognition'] | mxnet/utils/extract_last_image_from_gif.py datasets/fsns/transform_gt.py mxnet/initializers/spn_initializer.py mxnet/metrics/ctc_metrics.py datasets/svhn/filter_large_images.py mxnet/utils/create_video.py mxnet/networks/text_rec.py mxnet/train_fsns.py mxnet/utils/datatypes.py datasets/fsns/slice_fsns_dataset.py mxnet/operations/ones.py mxnet/eval_text_recognition_model.py datasets/fsns/download_fsns.py mxnet/metrics/test_ctc_loss.py mxnet/data_io/fsns_file_iter.py datasets/svhn/create_svhn_dataset.py mxnet/operations/disable_shearing.py mxnet/utils/extract_images.py datasets/svhn/create_svhn_dataset_4_images.py mxnet/utils/create_gif.py mxnet/train_utils.py mxnet/data_io/file_iter.py mxnet/train_text_recognition.py mxnet/eval_svhn_model.py mxnet/metrics/base.py mxnet/symbols/lstm.py datasets/fsns/swap_classes.py datasets/fsns/tfrecord_utils/tfrecord_to_image.py mxnet/eval_fsns_model.py mxnet/networks/svhn.py mxnet/operations/debug.py mxnet/utils/create_record_io_file.py mxnet/utils/reorder_lst_file.py mxnet/train_svhn.py mxnet/utils/plot_log.py mxnet/utils/bbox_utils.py datasets/svhn/svhn_dataextract_tojson.py mxnet/networks/fsns.py mxnet/metrics/fsns_metrics.py mxnet/callbacks/create_checkpoint.py mxnet/utils/show_progress.py mxnet/metrics/ctc/setup.py mxnet/utils/create_text_rec_gt.py mxnet/data_io/lstm_iter.py mxnet/callbacks/fsns_bbox_plotter.py mxnet/callbacks/save_bboxes.py datasets/fsns/render_text_on_signs.py get_image random_crop get_image_paths get_labels save_image find_font_size find_way_to_common_dir intersects intersects_bbox GaussianSVHNCreator intersection is_close overlap SVHNCreator BBox SVHNDatasetCreator get_images DigitStructFile plot_bboxes get_model plot_bboxes get_model plot_bboxes get_model parse_args init_logging fit get_create_checkpoint_callback 
FSNSBBOXPlotter BBOXPlotter FileBasedIter _load_worker _load_image FSNSFileIter DataIterDecorator InitStateLSTMIter LSTMIter SPNInitializer ShapeAgnosticLoad STNCrossEntropy STNAccuracy CTCSTNAccuracy remove_blank CTCLoss strip_prediction tile_batch_size FSNSPretrainAccuracy FSNSPretrainCrossEntropy softmax FSNSNetwork SVHNLocalizationNetwork SVHNMultiLineResNetNetwork SVHNMultiLineCTCNetwork SVHNMultiLineResNetNetwork SVHNMultiLineCTCNetwork LocalizationNetwork Debug DebugProp DisableShearingProp DisableShearing ProvideOnes ProvideOnesProp lstm lstm_unroll meshgrid get_sampling_grid make_gif makedelta create_loop_header intToBin get_gt_item make_video LogPlotter split_list ProgressWindow ImageDataHandler ImageServer walk extend truetype multiline_textsize choice list height min choice width range join str format get_subdir save makedirs append extend len append split height width left overlap top set_params model_epoch Module bind load_checkpoint model_prefix get_network Group zeros asnumpy save_extracted_regions get_area_data get_outputs join blank_label argmax strip_prediction add_argument ArgumentParser str setFormatter log_dir int info getLogger addHandler StreamHandler upper Formatter rank mkdir log_file setLevel FileHandler progressbar batch_size model_prefix kv_store Speedometer create clip_gradient get_create_checkpoint_callback dirname init_logging append checkpoint_interval ProgressBar load_epoch FactorScheduler load join log_dir FeedForward save_model_prefix makedirs get put array split append list zip shape tile c SliceChannel Activation FullyConnected list SliceChannel FullyConnected Reshape lstm h LSTMState LSTMParam reversed Concat append range height ones shape vstack linspace width prod meshgrid shape reshape matmul int getdata write subtract_modulo copy getbbox crop join sorted print _make append compile enumerate join sorted ImageData print filter append listdir compile enumerate len | # STN-OCR: A single Neural Network for Text Detection and 
Text Recognition This repository contains the code for the paper: [STN-OCR: A single Neural Network for Text Detection and Text Recognition](https://arxiv.org/abs/1707.08831) # Please note that we refined our approach and released new source code. You can find the code [here](https://github.com/Bartzi/see) Please use the new code if you want to experiment with FSNS-like data and our approach. It should also be easy to redo the text recognition experiments with the new code, although we did not release any code for that. # Structure of the repository The folder `datasets` contains code related to datasets used in the paper. `datasets/svhn` contains several scripts that can be used to create SVHN-based ground truth files as used in our experiments reported in Section 4.2; please see the readme in this folder on how to use the scripts. `datasets/fsns` contains scripts that can be used to first download the fsns dataset, second extract the images from the downloaded files and third restructure the contained gt files. The folder `mxnet` contains all code used for training our networks. # Installation | 145 |
BattashB/Adaptive-and-Iteratively-Improving-Recurrent-Lateral-Connections | ['action recognition'] | ['Adaptive and Iteratively Improving Recurrent Lateral Connections'] | resnet.py trainer20.py resnetdy20.py resnet110 resnet20 ResNet LambdaLayer resnet44 test resnet1202 resnet56 resnet32 _weights_init BasicBlock weight kaiming_normal_ __name__ print | # Adaptive-and-Iteratively-Improving-Recurrent-Lateral-Connections An official PyTorch implementation of "Adaptive and Iteratively Improving Recurrent Lateral Connections" https://arxiv.org/abs/1910.11105 <p align="center"> <img src="BasicFeedback.png" alt="smiley" height="350px" width="600px"/> </p> ## Prerequisites - ubuntu18.04 - python 3.6 - torch==1.2 - torchvision==0.4 - numpy==1.17.4 | 146 |
BeautyOfWeb/AffinityNet | ['few shot learning', 'type prediction', 'graph attention'] | ['AffinityNet: semi-supervised few-shot learning for disease type prediction'] | models/dense_factor_conv.py utils/sampler.py utils/utils.py affinitynet/test_graph_attention.py models/densenet.py models/factor_graph.py utils/gen_conv_params.py affinitynet/graph_attention.py utils/solver.py models/transformer.py WeightedFeature get_iterator GraphAttentionLayer GraphAttentionGroup DenseLinear get_partial_model GraphAttentionModel FeatureExtractor FineTuneModel WeightedView MultiviewAttention clustering plot_scatter split_train_test test_regression split_train_val_test test_GraphAttentionGroup test_clustering test_MultiviewAttention eval_acc randperm split_data example_learning test_WeightedFeature visualize_val pca cal_nmi test_GraphAttentionLayer construct_linear_model _DenseLayer _Transition DenseNet _DenseBlock DenseFactorBlock Block1d Factor1d FactorBlock FactorConv TinyConv Factor1d Transformer DecoderAttention get_uniq_topk MultiheadAttention get_target EncoderAttention StackedEncoder reduce_projections join_dict assert_int_or_list squaredims get_itemset get_iter gen_conv_params cal_padding_size BatchLoader RepeatedBatchSampler balanced_sampler BatchSequentialSampler Solver AverageMeter dist check_acc ImageFolder pil_loader update load_state_dict state_dict show isinstance plot print tolist PCA copy title IncrementalPCA fit show sorted isinstance contiguous BASE_COLORS pca close title scatter savefig figure model_ numpy makedirs format isinstance print confusion_matrix adjusted_mutual_info_score accuracy_score numpy spectral_clustering format isinstance model print reshape confusion_matrix copy adjusted_mutual_info_score accuracy_score format plot_scatter reshape size eval_acc f1_score max predict data model zero_grad reduce dist show step Adam ylabel title savefig append range format plot param_groups close eval_acc item backward print xlabel makedirs named_parameters 
figure loss_fn numpy split manual_seed isinstance len difference randperm sorted numpy data isinstance item list size randperm append range str Sequential len add_module WeightedView range Linear data model construct_linear_model str sorted split_train_test model_head sum range format concatenate size BASE_COLORS knn_graph add_module get_partial_model eval_acc mean FineTuneModel unique test_regression type cal_nmi Linear int remove plot_scatter isinstance print Variable numpy array len data sorted tolist reset_out_indices Model size BASE_COLORS get_partial_model eval_acc FineTuneModel unique test_regression Linear remove plot_scatter print numpy array len Variable WeightedFeature softmax test_regression type detach model_true isinstance Variable MODEL test_regression type detach model_true Variable MODEL test_regression type detach model show sorted list tolist reset_out_indices Model scatter append range plot concatenate BASE_COLORS astype get_partial_model eval_acc FineTuneModel zip test_regression type enumerate Linear int multivariate_normal isinstance Variable print figure array diag len data Variable zip append cat ceil sqrt floor assert_int_or_list assert_int_or_list list isinstance len max isinstance append get_iter next cal_padding_size len list get_itemset keys range len seed defaultdict reshape min len Tensor topk isinstance size mul_ append sum max | # AffinityNet AffinityNet with stacked kNN attention pooling layers for few-shot semi-supervised learning This repository is associated with the paper: AffinityNet: semi-supervised few-shot learning for disease type prediction For any questions about the code, please contact tianlema@buffalo.edu | 147 |
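The "stacked kNN attention pooling layers" named in the AffinityNet entry above can be sketched schematically: each sample's pooled feature is a softmax-weighted average over its k most similar neighbours. This is an illustrative NumPy toy — the function name, the raw dot-product similarity, and the inclusion of the sample itself among its neighbours are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def knn_attention_pool(features, k=2):
    """Schematic kNN attention pooling: replace each row of `features`
    with a softmax-weighted average of its k nearest neighbours
    (the sample itself is included among the neighbours)."""
    sims = features @ features.T                 # pairwise dot-product similarity
    pooled = np.empty_like(features)
    for i in range(features.shape[0]):
        idx = np.argsort(-sims[i])[:k]           # indices of the top-k neighbours
        w = np.exp(sims[i, idx] - sims[i, idx].max())  # stable softmax weights
        w /= w.sum()
        pooled[i] = w @ features[idx]            # attention-weighted average
    return pooled
```

With k=1 each sample attends only to itself, so pooling is the identity; larger k smooths each feature toward its neighbourhood, which is the semi-supervised intuition behind the layer.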
BenevolentAI/RELVM | ['relation extraction'] | ['Learning Informative Representations of Biomedical Relations with Latent Variable Models'] | run/unsup/__init__.py models/unsup/recognition/__init__.py models/sup/__init__.py tests/test_classification_pair.py models/unsup/generative/__init__.py exp_unsup.py tests/test_unsup.py trainers/sup/__init__.py run/__init__.py models/__init__.py trainers/__init__.py tests/test_classification_mention.py exp_classification_pair.py exp_classification_mention.py run/sup/__init__.py trainers/unsup/__init__.py models/sup/classification/__init__.py models/unsup/__init__.py Classification ConditionalInfiniteMixtureLSTMAutoregressive GaussianBiLSTM MaximumLikelihoodPairLevel precision_recall_f1 optimise_threshold MaximumLikelihoodMentionLevel Run SupMentionTest SupPairTest UnsupTest MaximumLikelihoodPairLevel MaximumLikelihoodMentionLevel SGVB range len append precision_recall_f1 argmax linspace | # RELVM This repository contains the code accompanying the paper *"Learning Informative Representations of Biomedical Relations with Latent Variable Models", Harshil Shah and Julien Fauqueur, EMNLP SustaiNLP 2020*, (https://arxiv.org/abs/2011.10285). ## Requirements - Python 3.7 - Numpy >= 1.17.2 - Tensorflow >= 2.0.0 ## Instructions ### Introduction The code in this repository is for training a latent variable generative model of pairs of entities and the contexts (i.e. sentences) in which the entities occur. The representations from this model can then be used to perform both mention-level and pair-level classification. Throughout the code, the following conventions are used: | 148 |
BengaliAI/graphemePrepare | ['optical character recognition', 'multi label classification'] | ['A Large Multi-Target Dataset of Common Bengali Handwritten Graphemes'] | data/extracted/pack.py data/extracted/purge.py data/scanned/transcribeGui.py data/packed/labelXGui.py labelXGui rightKey renameSave leftKey get configure set PhotoImage crop renameSave len get configure set PhotoImage crop renameSave | # Bengali.AI Computer Vision Challenge: Handwritten Bengali Grapheme Classification This repo contains code to extend/replicate the dataset present in the Kaggle [Bengali.AI Handwritten Grapheme Classification](www.kaggle.com/c/bengaliai-cv19). For the dataset, codes, discussions and leaderboards, visit the Kaggle competition page. The paper describing the dataset, protocols and future directions can be found [here](https://arxiv.org/abs/2010.00170) or [here](https://github.com/BengaliAI/graphemePrepare/blob/master/paper/paper_10292020.pdf). ## Common Handwritten Graphemes in Context #### Project Structure ``` . - data -- scanned -- extracted -- error | 149 |
BerkeleyAutomation/dvrk-vismpc | ['video prediction'] | ['VisuoSpatial Foresight for Multi-Step, Multi-Task Fabric Manipulation'] | run.py call_network/process_dvrk_camera_image.py analysis_folding.py call_network/load_config.py call_network/load_net.py analysis.py tests/test_03_checkerboard_pick_test.py dvrkClothSim.py ZividCapture.py config.py dvrkArm.py tests/test_01_positions.py utils.py analysis_plots.py analyze_single analyze_group analyze_icra_submission _criteria analyze plot stats plot dvrkArm dvrkClothSim action_correction run test_action_mapping get_sorted_imgs transform_CB2PSM move_p_from_net_output inpaint_depth_image save_image_numbers depth_3ch_to_255 normalize crop_then_resize calculate_coverage debug_print_img _adjust_gamma single_means get_date load_mapping_table get_net_results print_means call_wait_key deg_to_rad rad_to_deg depth_to_3ch continuously_view_2d ZividCapture get_and_process_zc_imgs get_net_results get_sorted_imgs run_test NetLoader process _calibrate basename split join format imwrite print zip enumerate len imwrite max sorted defaultdict list std append format replace mean zip keys enumerate join print min makedirs zfill array len print join analyze_group format join norm format imwrite print len min zfill zip append imread array enumerate makedirs legendHandles format subplots set_title print set_xlabel set_xlim tight_layout set_linewidth hist set_ylabel savefig legend tick_params range sorted format norm print min extend mean array append max len threshold paste destroyAllWindows fromarray new waitKey shape imshow sum format COLOR_BGR2GRAY copy mean cvtColor int norm line Draw print THRESH_BINARY argwhere array circle destroyAllWindows use_color move_p_from_net_output defaultdict use_rgbd shape imshow DVRK_IMG_PATH sleep append imread prod range format calculate_coverage compare_ssim hstack single_means get_net_results special join norm time print loadtxt zfill call_wait_key action_correction array len sorted sorted format std 
print min mean median max eps norm join format imwrite print zfill len format strftime print waitKey exit resize format print reshape min flatten shape zeros max range equalizeHist min array max range data time format inpaint print DepthImage shape fromarray format Draw new size logical_and waitKey imshow paste float sum array destroyAllWindows threshold paste destroyAllWindows fromarray new logical_and waitKey shape imshow sum format COLOR_BGR2GRAY size copy mean cvtColor float int norm line Draw print THRESH_BINARY argwhere array circle loadtxt zeros range range format print move_pose_pickup array transform_CB2PSM format print mean zeros std len print format std mean astype imshow waitKey capture_2Dimage imwrite resize count_nonzero dtype inpaint_depth_image fastNlMeansDenoising shape depth_3ch_to_255 prod crop_then_resize format debug_print_img fastNlMeansDenoisingColored float listdir get_c_d_img join print zfill depth_to_3ch len join format replace print TEST_IMAGE_FILES act_to_coords imread forward_pass makedirs format imwrite replace print ones float32 filter2D bilateralFilter resize imread print get_current_pose format | # Da Vinci Research Kit (dVRK) Code for Fabrics and Visual MPC *Update May 2020*: this is the code we used for the physical fabrics experiments with the dVRK. The master branch has the code for our RSS 2020 paper "VisuoSpatial Foresight (VSF) for Multi-Step, Multi-Task Fabric Manipulation": ``` @inproceedings{fabric_vsf_2020, author = {Ryan Hoque and Daniel Seita and Ashwin Balakrishna and Aditya Ganapathi and Ajay Tanwani and Nawid Jamali and Katsu Yamane and Soshi Iba and Ken Goldberg}, title = {{VisuoSpatial Foresight for Multi-Step, Multi-Task Fabric Manipulation}}, booktitle = {Robotics: Science and Systems (RSS)}, | 150 |
BestActionNow/SemiSupBLI | ['bilingual lexicon induction'] | ['Semi-Supervised Bilingual Lexicon Induction with Two-way Interaction'] | model/CSSBli.py model/PSSBli.py sinkhorn/sinkhorn_loss.py model/baseBli.py IO/__init__.py evaluation/eval.py sinkhorn/__init__.py model/__init__.py utils.py main.py evaluation/__init__.py IO/data.py load_batcher adaptLanguage make_directory to_cuda print_metrics read_from_yaml setup_logger setup_output_dir to_numpy disp_params write_to_yaml CSLS Evaluator _csls_test get_mean_similarity get_faiss_nearest_neighbours normalize MonoDictionary Batcher Language CrossLingualDictionary WordDictionary bliMethod Evaluator LinearTrans Hubness CSSBli PSSBli Prior_sinkhorn load join load_from_supervised Batcher Language append format print setFormatter basicConfig getLogger addHandler Formatter getattr FileHandler make_directory int join communicate strip setup_logger startswith split listdir max Popen write_to_yaml makedirs format info getLogger format int GpuIndexFlatIP GpuIndexFlatConfig astype add StandardGpuResources IndexFlatIP get_faiss_nearest_neighbours load CSLS print get_closest_csls_matches Language embeddings norm exp format zeros_like ones reshape size mm print t div shape device empty max log einsum | # Semi-Supervised Bilingual Lexicon Induction with Two-Way Message Passing Mechanisms In this repository, We present the implementation of our two poposed semi-supervised approches **CSS** and **PSS** for BLI. ## Dependencies * python 3.7 * Pytorch * Numpy * Faiss ## How to get the datasets You need to download the **MUSE** dataset from [here](https://github.com/facebookresearch/MUSE) to the **./muse_data** directory. You need to download the **VecMap** dataset from [here](https://github.com/artetxem/vecmap) to the **./vecmap_data** directory. | 151 |
BestSonny/SSTD | ['scene text detection'] | ['Single Shot Text Detector with Regional Attention'] | python/caffe/io.py python/caffe/test/test_python_layer.py python/caffe/net_spec.py examples/text/setup.py python/caffe/coord_map.py python/caffe/test/test_net.py tools/extra/resize_and_crop_images.py python/draw_net.py python/caffe/test/test_net_spec.py src/caffe/test/test_data/generate_sample_data.py python/caffe/draw.py python/caffe/pycaffe.py tools/extra/extract_seconds.py python/classify.py examples/text/demo_test.py python/caffe/model_libs.py python/caffe/test/test_solver.py python/caffe/classifier.py python/caffe/test/test_io.py python/caffe/test/test_python_layer_with_param_str.py tools/extra/parse_log.py python/caffe/__init__.py examples/text/nms/py_cpu_nms.py python/caffe/test/test_layer_type_list.py python/caffe/detector.py python/detect.py python/caffe/test/test_coord_map.py tools/extra/summarize.py nms worker get_labelname add_path clip xcycwh_angle_to_x1y1x2y2x3y3x4y4 find_in_path customize_compiler_for_nvcc custom_build_ext locate_cuda py_cpu_nms main main main parse_args Classifier coord_map UndefinedMapException conv_params coord_map_from_to AxisMismatchException inverse crop_params compose crop Detector get_edge_label draw_net get_layer_label get_pydot_graph choose_color_by_layertype get_pooling_types_dict draw_net_to_file Transformer blobproto_to_array datum_to_array array_to_blobproto array_to_datum resize_image arraylist_to_blobprotovector_str blobprotovector_str_to_arraylist load_image oversample make_if_not_exist VGGNetBody UnpackVariable InceptionTower ConvBNLayer Inception CreateAnnotatedDataMaskLayer Inception_ONE CreateMultiBoxRotationHead CreateMultiBoxHead ConvBNLayerMXN ResNet152Body ResNet101Body check_if_exist InceptionV3Body ResBody CreateAnnotatedDataLayer ZFNetBody Layers Function Parameters Top NetSpec assign_proto param_name_dict to_proto _Net_blobs _Net_forward_all _Net_set_input_arrays _Net_backward _Net_params _Net_forward 
_Net_outputs _Net_forward_backward_all _Net_blob_loss_weights _Net_batch _Net_get_id_name _Net_inputs TestCoordMap coord_net_spec TestBlobProtoToArray TestArrayToDatum TestLayerTypeList TestLevels TestStages simple_net_file TestNet TestAllInOne lenet TestNetSpec silent_net anon_lenet exception_net_file parameter_net_file SimpleLayer phase_net_file TestPythonLayer ParameterLayer PhaseLayer python_net_file ExceptionLayer SimpleParamLayer TestLayerWithParam python_param_net_file TestSolver get_start_time extract_seconds extract_datetime_from_line get_log_created_year write_csv parse_log fix_initial_nan_learning_rate save_csv_files main parse_args parse_line_for_net_output ResizeCropImagesMapper PILResizeCrop OpenCVResizeCrop print_table printed_len summarize_net main read_net format_param insert cos sin display_name item append range len Transformer round max release str set_device acquire set_raw_scale append range get set_mean preprocess task_done clock int set_channel_swap print reshape min set_mode_gpu set_transpose array pathsep pjoin exists split find_in_path items list pjoin pathsep dirname sep append _compile compiler_so append maximum minimum model_def endswith ArgumentParser save mean_file channel_swap output_file dirname expanduser parse_args input_file predict Classifier set_mode_cpu load time isdir print add_argument set_mode_gpu pretrained_model gpu len DataFrame Detector format to_hdf detect_selective_search mean set_index to_csv detect_windows read_csv add_argument ArgumentParser read NetParameter output_image_file rankdir Merge TRAIN draw_net_to_file TEST get params array get params array crop_params conv_params pop collect_bottoms add fn coord_map compose coord_map_from_to items list DESCRIPTOR batch_size str num_output get_pooling_types_dict add_edge get_edge_label list Dot exclude get_layer_label add_node values choose_color_by_layertype Edge Node bottom append type layer include top data array diff shape BlobProto extend flat extend 
BlobProtoVector ParseFromString BlobProtoVector extend tostring shape Datum flat data len astype float32 tile zoom tuple resize fill empty array concatenate shape tile empty array makedirs append range Pooling format ConvBNLayer Concat ConvBNLayerMXN Pooling format ConvBNLayer Concat ConvBNLayerMXN get update format Convolution Bias Scale ReLU BatchNorm update format UnpackVariable Convolution Bias Scale ReLU BatchNorm int format ConvBNLayer Eltwise ReLU update format ConvBNLayer Pooling update AnnotatedDataMask pool2 Softmax fc7 ReLU conv1 relu7 Pooling pool1 conv2 norm2 list relu6 norm1 update conv5 conv3 Convolution fc6 relu2 keys relu1 fc8 LRN relu4 relu3 relu5 pool5 InnerProduct conv4 Dropout conv4_1 relu2_2 relu3_2 relu2_1 relu1_2 conv5_1 fc7 ReLU relu5_1 relu7 Pooling list conv4_2 relu5_3 conv3_2 relu4_2 conv4_3 relu6 relu5_2 update Convolution conv2_2 conv1_2 fc6 relu4_3 relu1_1 keys conv5_3 int conv3_3 relu4_1 conv2_1 relu3_1 relu3_3 pool5 conv3_1 InnerProduct conv1_1 conv5_2 Dropout Pooling format ConvBNLayer res5c range ResBody conv1 Pooling format ConvBNLayer res5c range ResBody conv1 Pooling InnerProduct format Softmax InceptionTower ConvBNLayer Concat pool_3 softmax AVE append MAX range update ResBody list format Permute PriorBox ConvBNLayer Concat Normalize append keys range Flatten len update ResBody list format Permute PriorBox ConvBNLayer Concat Normalize append keys range Flatten len LayerParameter list NetParameter _to_proto extend Counter OrderedDict values iteritems hasattr isinstance extend add getattr setattr list OrderedDict _blobs _blob_names zip list _blob_loss_weights OrderedDict _blob_names zip OrderedDict list keys list keys iteritems layers index set outputs _forward len iteritems _backward layers inputs index set len iteritems asarray extend copy next _batch itervalues forward len iteritems asarray backward extend copy next _batch itervalues zip_longest zip forward len ascontiguousarray concatenate itervalues zeros next range len 
data Pooling pool Convolution NetSpec Deconvolution conv Input NamedTemporaryFile str close write data Pooling pool1 conv2 pool2 ip1 relu1 SoftmaxWithLoss Convolution NetSpec DummyData ip2 ReLU InnerProduct label conv1 Pooling SoftmaxWithLoss Convolution DummyData ReLU InnerProduct data NetSpec DummyData Silence data2 int rfind datetime split getctime year strip extract_datetime_from_line get_start_time total_seconds strip write get_log_created_year close extract_datetime_from_line open float get_log_created_year compile fix_initial_nan_learning_rate search group OrderedDict append float join basename write_csv print excel parse_log save_csv_files output_dir logfile_path NetParameter decay_mult format name lr_mult append print zip len get join str format convolution_param list setdefault param kernel_size map set top bottom append type module layer enumerate print_table filename summarize_net read_net | [![License](https://img.shields.io/badge/license-BSD-blue.svg)](LICENSE) # Single Shot Text Detector with Regional Attention ## Introduction **SSTD** is initially described in our [ICCV 2017 spotlight paper](https://arxiv.org/abs/1709.00138). [A third-party implementation of SSTD + Focal Loss](https://github.com/HotaekHan/SSTDNet). Thanks, Ho taek Han <img src='examples/main.png' width='800'> If you find it useful in your research, please consider citing: ``` @inproceedings{panhe17singleshot, | 152 |
BigRedT/no_frills_hoi_det | ['human object interaction detection'] | ['No-Frills Human-Object Interaction Detection: Factorization, Layout Encodings, and Training Techniques'] | utils/io.py exp/hoi_classifier/data/box_features.py data/hico/split_ids.py exp/hoi_classifier/data/assign_pose_to_human_candidates.py exp/hico_eval/sample_complexity_analysis.py data/hico/hoi_cls_count.py exp/hoi_classifier/vis/top_boxes_per_hoi.py exp/hoi_classifier/vis/faster_rcnn_aps.py exp/hoi_classifier/data/hoi_candidates.py exp/hoi_classifier/data/label_hoi_candidates.py exp/hoi_classifier/models/verb_given_object_appearance.py utils/losses.py utils/constants.py exp/hoi_classifier/data/cache_box_features.py exp/run_template.py exp/detect_coco_objects/select_confident_boxes.py exp/hoi_classifier/data/write_faster_rcnn_feats_to_hdf5.py data/hico/hico_constants.py exp/detect_coco_objects/prepare_data_for_faster_rcnn.py exp/hoi_classifier/vis/vis_object_aps_per_interaction.py exp/hoi_classifier/models/verb_given_human_appearance.py exp/hoi_classifier/data/features_dataset.py exp/detect_coco_objects/evaluate_boxes.py exp/hoi_classifier/eval.py exp/hoi_classifier/train.py utils/model.py data/coco_classes.py exp/hico_eval/compute_map.py utils/bbox_utils.py exp/hoi_classifier/vis/vis_human_pose.py utils/pytorch_layers.py exp/hoi_classifier/models/verb_given_human_pose.py exp/detect_coco_objects/run.py exp/hoi_classifier/models/scatter_verbs_to_hois.py exp/hoi_classifier/vis/vis_interaction_aps_per_object.py exp/hoi_classifier/models/verb_given_boxes_and_object_label.py utils/argparse_utils.py exp/experimenter.py exp/hoi_classifier/data/pose_features.py utils/html_writer.py exp/hoi_classifier/data/cache_pose_features.py exp/hoi_classifier/run.py exp/hoi_classifier/models/hoi_classifier_model.py data/hico/mat_to_json.py HicoConstants bin_hoi_ids main ConvertMat2Json main split list_exps exp_do_something evaluate_boxes box_recall evaluate_boxes_and_labels box_label_recall prepare_hico 
exp_select_and_evaluate_confident_boxes_in_hico exp_detect_coco_objects_in_hico select_det_ids select_dets select compute_ap eval_hoi compute_pr compute_normalized_pr load_gt_dets match_hoi main main compute_mAP main eval_model exp_top_boxes_per_hoi exp_eval exp_cache_pose_feats exp_assign_pose_to_human_cand exp_gen_and_label_hoi_cand exp_train exp_cache_box_feats train_model main eval_model assign_pose count_keypoints_in_box main get_pose_box BoxFeatures main compute_box_feats main Features FeatureConstants HoiCandidatesGenerator generate load_gt_dets assign match_hoi PoseFeatures main HoiClassifier HoiClassifierConstants ScatterVerbsToHois ScatterVerbsToHoisConstants VerbGivenBoxesAndObjectLabel VerbGivenBoxesAndObjectLabelConstants VerbGivenHumanAppearance VerbGivenHumanAppearanceConstants VerbGivenHumanPose VerbGivenHumanPoseConstants VerbGivenObjectAppearance VerbGivenObjectAppearanceConstants create_html get_gt_boxes get_gt_hois select_best_boxes_across_dataset main vis_keypts main main main str_to_bool manage_required_args vis_sub_obj_bboxes compute_area_batch join_bboxes_by_line add_bbox compute_iou compute_area compute_iou_batch vis_human_keypts vis_bboxes vis_bbox ExpConstants Constants save_constants HtmlWriter load_pickle_object load_json_object read dump_json_object load_mat_object dump_pickle_object deserialize_object write load_yaml_object mkdir_if_not_exists dumps_json_object JsonSerializableClass serialize_object WritableToFile NumpyAwareJSONEncoder FocalLoss Model MLP adjust_learning_rate get_activation Identity create_mlp append items list load_json_object join anno_list_json dump_json_object proc_dir HicoConstants tqdm bin_hoi_ids int sample set append len list items print len split print parse_args exp print enumerate compute_iou len zip enumerate compute_iou len join load_json_object anno_list_json list dump_json_object concatenate print File tolist iou_thresh tqdm exp_dir append box_recall keys enumerate hoi_list_json join load_json_object 
anno_list_json list dump_json_object concatenate print File tolist iou_thresh box_label_recall tqdm exp_dir append keys enumerate join load_json_object anno_list_json dump_json_object proc_dir print mkdir_if_not_exists dict exp_dir images_dir to_json enumerate len ExpConstants prepare_hico HicoConstants evaluate_boxes evaluate_boxes_and_labels select HicoConstants ExpConstants arange min compute_area append array range background_score_thresh select_det_ids max_humans max_objects_per_class object_score_thresh concatenate max_background human_score_thresh append zeros array enumerate load join load_json_object anno_list_json print faster_rcnn_boxes File mkdir_if_not_exists select_dets close tqdm exp_dir create_dataset to_json create_group compute_iou enumerate isnan any max arange cumsum array nan cumsum sum array nan compute_ap join compute_pr print File match_hoi save append print join load_json_object append starmap load_gt_dets num_processes mkdir_if_not_exists close set out_dir append parse_args Pool sorted compute_mAP bin_to_hoi_ids_json keys join concatenate print File close tqdm exp_dir eval hoi_classifier create_dataset SequentialSampler numpy create_group load model_pth Features eval_model Model load_state_dict cuda join print getcwd subset HicoConstants manage_required_args gen_hoi_cand exp_dir assign generate parse_args ExpConstants label_hoi_cand join subset HicoConstants manage_required_args exp_dir main parse_args ExpConstants join proc_dir subset HicoConstants manage_required_args exp_dir main parse_args ExpConstants join subset HicoConstants manage_required_args exp_dir main parse_args ExpConstants join verb_given_boxes_and_object_label verb_given_object_appearance getcwd Constants rcnn_det_prob manage_required_args exp_dir HoiClassifierConstants verb_given_human_pose imgs_per_batch FeatureConstants verb_given_appearance main parse_args ExpConstants verb_given_human_appearance join verb_given_boxes_and_object_label verb_given_object_appearance 
getcwd Constants model_num rcnn_det_prob manage_required_args exp_dir model_dir HoiClassifierConstants verb_given_human_pose FeatureConstants verb_given_appearance main parse_args ExpConstants verb_given_human_appearance join verb_given_boxes_and_object_label verb_given_object_appearance getcwd Constants model_num rcnn_det_prob manage_required_args exp_dir model_dir HoiClassifierConstants verb_given_human_pose FeatureConstants verb_given_appearance main parse_args ExpConstants verb_given_human_appearance zero_grad model_dir save exp_name FloatTensor Adam chain sum range state_dict format eval_model log_value BCELoss num_epochs enumerate join criterion backward Variable print RandomSampler parameters hoi_classifier train step criterion FloatTensor Variable size RandomSampler manual_seed float BCELoss train_model configure log_dir exp_dir model_dir save_constants to_txt amin array amax zeros enumerate compute_iou compute_area str assign_pose hoi_cand_hdf5 range create_group split_ids_json num_keypoints int File human_pose_dir create_dataset compute_features BoxFeatures array tile compute_box_feats PoseFeatures rpn_id_to_pose_h5py_to_npy compute_pose_feats array tile human_cands_pose_hdf5 load_json_object join value create_group split_ids_json print File mkdir_if_not_exists close tqdm exp_dir save_constants create_dataset selected_dets_hdf5 HoiCandidatesGenerator predict enumerate set hoi_list_json join load_json_object anno_list_json split_ids_json print load_gt_dets File mkdir_if_not_exists hoi_cand_hdf5 close match_hoi tqdm exp_dir save_constants create_dataset zeros range faster_rcnn_boxes zeros concatenate enumerate len vis_human_keypts tile list sorted reshape get_gt_boxes zfill tqdm append zeros num_to_vis keys range vis_sub_obj_bboxes join deepcopy list add_element mkdir_if_not_exists close zfill get_gt_hois HtmlWriter tqdm tile imread keys imsave enumerate vis_keypts hoi_list_json create_html human_pose_feats_hdf5 select_best_boxes_across_dataset 
pred_hoi_dets_h5py images_dir vis_human_keypts human_pose_feats_h5py add hoi_cand_h5py imread imsave enumerate reshape Box plot getcwd Scatter Layout join sorted print exit set getattr choices append help polygon polygon_perimeter set_color min max compute_area zeros logical_and minimum compute_area_batch stack maximum min copy set_color polygon polygon_perimeter max range copy vis_bbox min copy line_aa max range circle vis_bboxes join_bboxes_by_line zip line_aa circle range copy join items list print to_json decompress read loads compress write dumps encode compress write dumps dumps mkdir exists makedirs get_activation MLP param_groups | # No-Frills Human-Object Interaction Detection: Factorization, Layout Encodings, and Training Techniques By [Tanmay Gupta](http://tanmaygupta.info), [Alexander Schwing](http://alexander-schwing.de), and [Derek Hoiem](http://dhoiem.cs.illinois.edu) <p align="center"> <img src="imgs/teaser_wide.png"> </p> # Content - [Overview](#overview) - [Requirements](#requirements) - [Setup](#setup) - [Download the HICO-Det dataset](#download-the-hico-det-dataset) | 153 |
BigRedT/vico | ['word embeddings'] | ['ViCo: Word Embeddings from Visual Co-occurrences'] | exp/glove/save_ae_embeddings.py exp/multi_sense_cooccur/vis/categories.py exp/multi_sense_cooccur/models/embeddings.py exp/multi_sense_cooccur/explore_merged_cooccur.py data/semeval_2018_10/compute_overlap_with_visual_words.py exp/cifar100/run.py data/semeval_2018_10/constants.py exp/glove/save_ae_visual_features.py exp/semeval_2018_10/eval_concat_svm.py exp/cifar100/test_labels.py exp/genome_attributes/create_gt_attr_attr_cooccur.py exp/glove/models/decoder.py exp/cifar100/models/resnet.py exp/vis_w2v/convert_to_npy.py utils/pytorch_layers.py exp/cifar100/agg_results.py exp/cifar100/models/conse.py data/visualgenome/merge_object_attribute_freqs.py exp/glove/models/encoder.py data/semeval_2018_10/compute_feature_freq.py utils/io.py data/imagenet/wordnet.py exp/genome_attributes/dataset.py exp/wordnet/syn_cooccur.py exp/glove/train_ae.py exp/imagenet/dataset.py exp/run_template.py data/semeval_2018_10/compute_overlap_with_visualgenome.py exp/multi_sense_cooccur/merge_cooccur.py exp/genome_attributes/create_gt_obj_attr_cooccur.py exp/cifar100/eval.py exp/cifar100/train.py exp/imagenet/run.py exp/semeval_2018_10/f1_computer.py exp/glove/combine_glove_with_visual_features.py utils/model.py exp/glove/visual_features_dataset.py exp/multi_sense_cooccur/concat_with_glove.py data/glove/create_random_embeddings.py data/imagenet/create_wnid_to_words.py exp/wordnet/run.py exp/imagenet/create_gt_obj_hyp_cooccur.py exp/multi_sense_cooccur/concat_random_with_glove.py exp/multi_sense_cooccur/extract_embeddings.py exp/cifar100/vis/conf_vs_visual_sim.py exp/glove/select_subset_embeddings.py exp/semeval_2018_10/dataset.py data/imagenet/save_imagenet_labels.py data/visualgenome/create_object_files.py data/glove/save_as_hdf5.py data/imagenet/constants.py exp/run_template_simple.py exp/multi_sense_cooccur/vis/pca_tsne.py exp/cifar100/vis/conf_as_fun_of_sims.py exp/vis_w2v/readEmbed.py 
exp/multi_sense_cooccur/vis/supervised_partitioning.py data/imagenet/create_wordnet_edges.py exp/semeval_2018_10/run.py exp/semeval_2018_10/models/concat_svm_simple.py exp/cifar100/labels.py exp/multi_sense_cooccur/models/logbilinear.py exp/cifar100/dataset.py exp/cifar100/vis/acc_vs_num_classes.py exp/semeval_2018_10/model_selection.py exp/multi_sense_cooccur/vis/fine_categories.py exp/cifar100/vis/vis_conf_mat.py exp/cifar100/models/embed_to_classifier.py exp/glove/concat_embed_dataset.py exp/genome_attributes/run.py utils/argparse_utils.py data/visualgenome/constants.py exp/experimenter.py utils/html_writer.py exp/train_template.py exp/multi_sense_cooccur/synset_to_word_cooccur.py utils/lemmatizer.py exp/glove/train_ae_visual.py data/semeval_2018_10/compute_word_freq.py data/semeval_2018_10/merge_word_feature_freq.py exp/cifar100/vis/class_vs_sim.py exp/cifar100/vis/acc_with_std_vs_num_classes.py exp/semeval_2018_10/train_concat_svm.py exp/glove/run.py data/semeval_2018_10/convert_to_json.py utils/constants.py exp/semeval_2018_10/models/concat_svm.py data/imagenet/download_images.py exp/multi_sense_cooccur/vis/count_categories.py data/visualgenome/compute_object_freqs.py exp/multi_sense_cooccur/dataset.py exp/multi_sense_cooccur/train.py exp/multi_sense_cooccur/run.py exp/genome_attributes/create_gt_context_cooccur.py data/imagenet/create_wnid_to_urls.py exp/multi_sense_cooccur/vis/unsupervised_clustering.py exp/multi_sense_cooccur/neg_dataset.py exp/multi_sense_cooccur/extract_embeddings_xformed.py data/visualgenome/compute_attribute_freqs.py data/glove/constants.py GloveConstantsFactory Glove6B300dConstants Glove6B100dConstants main main ImagenetConstants main main main main construct_wget_cmd scale downloader main WordNetConstants WordNet WordNetNode main compute_feature_freq main compute_overlap main compute_word_freq SemEval201810Constants main read_txt main main compute_attribute_synset_freqs compute_attribute_freqs main compute_object_freqs 
compute_object_synset_freqs VisualGenomeConstants main get_object_annos get_image_id_to_object_id main list_exps exp_train exp_eval exp_do_something train_model main eval_model print_row main print_header Cifar100DatasetConstants Cifar100Dataset main eval_model exp_train exp_agg_results train_model main eval_model Conse Embed2ClassConstants Embed2Class ResNetCifar ResnetConstants ResNet resnet50 Bottleneck resnet152 conv3x3 resnetcifar32 resnet34 resnet18 ResnetModel BasicBlock resnet101 main plot_acc_vs_classes main plot_acc_vs_classes main create_scatter_plot main create_scatter_plot get_pair_counts main create_scatter_plot main create_confmat_heatmap main create_gt_synset_cooccur main create_synset_list main create_gt_synset_cooccur GenomeAttributesDatasetConstants GenomeAttributesNoImgsDataset GenomeAttributesDataset exp_create_gt_obj_attr_cooccur exp_create_gt_attr_attr_cooccur exp_create_gt_context_cooccur main compute_norm normalize ConcatEmbedDatasetConstants ConcatEmbedDataset exp_combine_glove_with_visual_features exp_save_ae_combined_glove_and_visual_features exp_save_ae_visual_features exp_combine_glove_and_visual_features_with_ae exp_ae_visual_features main get_embeddings main get_visual_features main train_model compute_norm normalize main train_model compute_norm normalize main VisualFeaturesDataset VisualFeaturesDatasetConstants DecoderConstants Decoder Encoder EncoderConstants main create_gt_synset_cooccur wnid_offset_to_synset ImagenetDatasetConstants ImagenetNoImgsDataset ImagenetDataset exp_create_gt_obj_hyp_cooccur main main compute_norm normalize Lemmatizer MultiSenseCooccurDatasetConstants MultiSenseCooccurDataset show_labels cooccur usage main main main NegMultiSenseCooccurDatasetConstants NegMultiSenseCooccurDataset EmbedInfo exp_extract_embeddings exp_supervised_partitioning exp_merge_cooccur exp_concat_with_glove exp_unsupervised_clustering exp_concat_random_with_glove exp_synset_to_word_cooccur exp_train exp_vis_pca_tsne SimpleEmbedInfo 
main synset_to_words train_model main EmbeddingsConstants Embeddings LogBilinearConstants LogBilinear Transform TransformConstants main get_tsne_embeddings scatter_plot main get_tsne_embeddings plot_metric_vs_depth get_word_feats plot_metric_vs_clusters get_tsne_embeddings main get_word_feats SemEval201810Dataset SemEval201810DatasetConstants main eval_model compute_f1 select_best_concat_mlp select_best_concat_svm exp_train_concat_svm exp_eval_concat_svm train_model main eval_model ConcatSVM ConcatSVMConstants ConcatSVM ConcatSVMConstants main main exp_syn_cooccur main synset_to_words str_to_bool manage_required_args ExpConstants Constants save_constants HtmlWriter load_pickle_object load_json_object read dump_json_object load_mat_object dump_pickle_object deserialize_object load_h5py_object load_yaml_object write mkdir_if_not_exists dumps_json_object JsonSerializableClass serialize_object WritableToFile NumpyAwareJSONEncoder Lemmatizer Model set_learning_rate LossProgressMonitor MLP adjust_learning_rate get_activation Identity create_mlp minimum max create join normal embeddings_h5py proc_dir File min maximum close mean shape uniform create_dataset std dump_json_object out_dir mkdir_if_not_exists open array append parse_args stack enumerate glove_txt print tqdm split items list rstrip ImagenetConstants len set_trace urls_txt wnid_to_urls_json wnid_to_words_json words_txt keys wnid_to_parent_json resize int size float join list items print mkdir_if_not_exists scale save open construct_wget_cmd exists run load_json_object img_dir load getcwd urlopen range SemEval201810Constants compute_feature_freq attribute_freqs_json VisualGenomeConstants object_freqs_json word_freqs feature_freqs round len round intersection compute_word_freq read_txt deepcopy all_word_freqs list values tqdm list values tqdm compute_attribute_synset_freqs compute_attribute_freqs object_annos_json list values tqdm list values tqdm compute_object_freqs compute_object_synset_freqs append str append 
str update tqdm objects_json attribute_synsets_json object_synsets_json attributes_json get_object_annos get_image_id_to_object_id all_word_freqs_json print parse_args exp join getcwd Constants DATASET_CONSTANTS exp_dir NET_CONSTANTS model_dir main ExpConstants join getcwd Constants DATASET_CONSTANTS exp_dir NET_CONSTANTS model_dir main ExpConstants print zero_grad SGD model_dir save cuda list model_num Adam CrossEntropyLoss range state_dict eval_model log_value num_epochs net enumerate items join criterion backward Variable print parameters train step criterion Variable size tqdm eval cuda net enumerate DATASET model_dir DataLoader NET cuda vis_dir Model save_constants load_state_dict configure net log_dir net_path train_model to_file exp_dir print mean round std print runs print_row print_header max FloatTensor permute normalize classify img_std CrossEntropyLoss range float embed2class img_mean labels zeros numpy len Cifar100Dataset save eval_model embed2class_path embed2class labels ResnetModel numpy Embed2Class str ResnetConstants glove_dim vico_dim embed_type manage_required_args Cifar100DatasetConstants Embed2ClassConstants held_classes parse_args main join ExpConstants getcwd dump_json_object reverse_loss load_embeddings numpy max FloatTensor permute append chain img_std normalize classify set_learning_rate Adagrad sim_loss mean lr float embed2class img_mean labels named_parameters exp_dir MultiMarginLoss Softmax argmax round copy softmax get_collate_fn ResNetCifar update ResNet size load_url load_state_dict state_dict update ResNet size load_url load_state_dict state_dict update ResNet size load_url load_state_dict state_dict update ResNet size load_url load_state_dict state_dict update ResNet size load_url load_state_dict state_dict list sorted plot Layout Bar append round keys str held_out_classes plot_acc_vs_classes out_base_dir std mean plot Layout append Scatter enumerate class_confmat_npy labels_npy transpose matmul create_scatter_plot 
visual_embed_npy pearsoncorr enumerate str get_pair_counts log visual_confmat_npy norm glove_confmat_npy mean round std Heatmap plot maximum Layout log create_confmat_heatmap join list items dump_json_object print set tqdm exp_dir range len GenomeAttributesNoImgsDataset create_collate_fn create_gt_synset_cooccur update set image_id_to_object_id_json create_synset_list join getcwd GenomeAttributesDatasetConstants main ExpConstants join getcwd GenomeAttributesDatasetConstants main ExpConstants join VisualGenomeConstants getcwd main ExpConstants concatenate glove_h5py visual_features_idx glove_idx visual_features_h5py zeros join create embeddings_h5py getcwd Constants main word_to_idx_json ExpConstants join getcwd Constants ConcatEmbedDatasetConstants DecoderConstants concat_dir exp_dir main ExpConstants EncoderConstants join getcwd Constants ConcatEmbedDatasetConstants DecoderConstants exp_dir main ExpConstants EncoderConstants join VisualFeaturesDatasetConstants getcwd Constants DecoderConstants exp_dir main ExpConstants EncoderConstants join VisualFeaturesDatasetConstants getcwd Constants DecoderConstants exp_dir main ExpConstants EncoderConstants print encoder tqdm eval cpu zeros cuda enumerate get_embeddings model_num word_to_idx ConcatEmbedDataset print encoder tqdm eval cpu zeros cuda enumerate get_visual_features VisualFeaturesDataset compute_loss exp_name transpose mm encoder detach format decoder min glove_dim state_dict size repeat name wnid_offset_to_synset keys ImagenetNoImgsDataset join getcwd ImagenetDatasetConstants main ExpConstants rand random_dim lemmatize add visual_word_to_idx visual_embeddings_npy set download Lemmatizer print tolist print print show_labels sort_values MultiSenseCooccurDataset LogBilinear weight cooccur_types getattr merged_cooccur_csv Categorical DataFrame cooccur_paths Series to_csv join getcwd Constants exp_dir main ExpConstants join getcwd Constants exp_dir main ExpConstants MultiSenseCooccurDatasetConstants model_num 
embed_dim LogBilinearConstants xform join deepcopy MultiSenseCooccurDatasetConstants getcwd Constants model_num cooccur_types manage_required_args exp_dir model_dir main embed_dim LogBilinearConstants parse_args ExpConstants xform join create embeddings_h5py getcwd Constants out_base_dir manage_required_args main parse_args ExpConstants word_to_idx_json join create embeddings_h5py getcwd Constants main word_to_idx_json ExpConstants join attribute_freqs_json VisualGenomeConstants getcwd Constants manage_required_args exp_dir object_freqs_json main parse_args ExpConstants join getcwd Constants main ExpConstants join getcwd Constants main ExpConstants list endswith add set lower split synset_cooccur_json synset_to_words word_cooccur_json log iter next use_neg keys loss NegMultiSenseCooccurDataset print range fit_transform kl_divergence_ plot set Layout Scatter enumerate len TSNE scatter_plot fit_transform get_tsne_embeddings word_to_idx_json PCA TSNE plot keys Layout append Scatter adjusted_rand_score sorted plot_metric_vs_depth get_word_feats predict get_embedding update predict_proba zip homogeneity_completeness_v_measure DecisionTreeClassifier fit plot keys Layout append Scatter AgglomerativeClustering fine fit_predict plot_metric_vs_clusters compute_f1 load_json_object str concat_svm concatenate print tuple visual_vocab_json array compute_hinge_loss append select_best_concat_svm SemEval201810Dataset dumps_json_object append f1_score accuracy_score range items list items list batch_size visual_only ConcatSVMConstants exp_name embed_quadratic_feat create getcwd parse_args distance_quadratic_feat out_base_dir embed_linear_feat embeddings_h5py Constants manage_required_args lr distance_linear_feat word_to_idx_json main ExpConstants join l2_weight glove_dim exp_dir SemEval201810DatasetConstants visual_only ConcatSVMConstants exp_name embed_quadratic_feat create getcwd parse_args distance_quadratic_feat out_base_dir embed_linear_feat embeddings_h5py visual_vocab_json 
Constants manage_required_args distance_linear_feat word_to_idx_json main ExpConstants join glove_dim exp_dir SemEval201810DatasetConstants data arange Parameter concat_svm compute_f1 arange to_txt int pop time main join ExpConstants getcwd words all_synsets join sorted print exit set getattr choices append help join items list print to_json decompress read loads compress write dumps encode compress write dumps dumps mkdir exists makedirs get_activation MLP param_groups param_groups | # ViCo: Word Embeddings from Visual Co-occurrences By [Tanmay Gupta](http://tanmaygupta.info), [Alexander Schwing](http://alexander-schwing.de), and [Derek Hoiem](http://dhoiem.cs.illinois.edu) <p align="center"> <img src="imgs/teaser.png"> </p> # Contents - [Overview](#overview) - [Just give me pretrained ViCo](#just-give-me-pretrained-vico) - [Install Dependencies](#install-dependencies) - [Setup](#setup) | 154 |
BingCS/AtLoc | ['camera localization'] | ['AtLoc: Attention Guided Camera Localization'] | train.py network/atloc.py data/process_robotcar.py data/dataset_mean.py eval.py data/dataloaders.py tools/saliency_map.py tools/utils.py tools/options.py network/att.py RobotCar SevenScenes MF AtLoc FourDirectionalLSTM AtLocPlus AttentionBlock Options AverageMeter qlog AtLocCriterion mkdirs qexp Logger mkdir calc_vos_simple quaternion_angular_error load_image process_poses load_state_dict AtLocPlusCriterion mkdir makedirs loader zeros norm arccos all norm hstack append stack cat arccos min pi dot abs max squeeze len dot qlog zeros range mat2quat items list replace OrderedDict | [![License CC BY-NC-SA 4.0](https://img.shields.io/badge/license-CC4.0-blue.svg)](https://creativecommons.org/licenses/by-nc-sa/4.0/legalcode) ![Python 2.7](https://img.shields.io/badge/python-2.7-green.svg) # [AtLoc: Attention Guided Camera Localization](https://arxiv.org/abs/1909.03557) - AAAI 2020 (Oral). [Bing Wang](https://www.cs.ox.ac.uk/people/bing.wang/), [Changhao Chen](http://www.cs.ox.ac.uk/people/changhao.chen/website/), [Chris Xiaoxuan Lu](https://christopherlu.github.io/), [Peijun Zhao](https://www.cs.ox.ac.uk/people/peijun.zhao/), [Niki Trigoni](https://www.cs.ox.ac.uk/people/niki.trigoni/), and [Andrew Markham](https://www.cs.ox.ac.uk/people/andrew.markham/) ## License Licensed under the CC BY-NC-SA 4.0 license, see [LICENSE](LICENSE.md). ## Introduction This is the PyTorch implementation of **AtLoc**, a simple and efficient neural architecture for robust visual localization. #### Demos and Qualitative Results (click below for the video) <p align="center"> <a href="https://youtu.be/_8NQXBadklU"><img src="./figures/real.gif" width="100%"></a> </p> | 155 |
BingyaoHuang/Neural-STE | ['image dehazing', 'denoising'] | ['Modeling Deep Learning Based Privacy Attacks on Physical Mail'] | src/python/pytorch_ssim/__init__.py src/python/Models.py src/python/utils.py src/python/train_Neural-STE.py src/python/trainNetwork.py WarpingNet Interpolate DehazingRefineNet NeuralSTE appendDataPoint trainNeuralSTE loadData evalNeuralSTE plotMontageMultirow computeLoss SimpleDataset fs optionToString make_grid_transposed readImgsMT psnr saveImgs countParameters vfs resetRNGseed vhm rmse ssim printConfig create_window gaussian _ssim SSIM ssim print readImgsMT format fullfile model zero_grad MultiStepLR save plotMontageMultirow list Adam strftime range state_dict format sqrt gmtime item sample fullfile optionToString time line appendDataPoint backward print evalNeuralSTE makedirs empty_cache train step computeLoss l1_fun l2_fun ssim_fun line device seed manual_seed_all manual_seed SimpleDataset div DataLoader enumerate show requires_grad set_major_locator NullLocator ToPILImage squeeze axis tight_layout subplots_adjust margins imshow title F_tensor_to_image figure cpu numpy detach new_full int isinstance size min clone copy_ stack unsqueeze ceil float range cat norm_range requires_grad uint8 format imwrite transpose makedirs fullfile range detach device device device images image squeeze heatmap format print device_count __version__ cuda range get_device_name Tensor Tensor contiguous unsqueeze pow conv2d pad expand_as create_window size type_as get_device cuda is_cuda | [AAAI'21] Modeling Deep Learning Based Privacy Attacks on Physical Mail <br><br> === <p align="center"> <img src='doc/net.png'> </p> ## Introduction PyTorch implementation of [Neural-STE (Neural-See-Through-Envelope)][1]. Please refer to [supplementary material (~68M)][2] for more results. ---- | 156 |
BjornarVass/Recsys | ['session based recommendations'] | ['Time is of the Essence: a Joint Hierarchical RNN and Point Process Model for Time and Item Predictions'] | tester_dynamic.py tester.py modules.py logger.py preprocess_trimmed.py datahandler_temporal.py dynamic_model.py preprocess_general.py hawkes.py hawkes_baseline.py hawkes_datahandler.py datahandler.py intra.py model.py PlainRNNDataHandler RNNDataHandler MHP DataHandler train_on_batch process_batch predict_on_batch masked_cross_entropy_loss Intra_RNN Logger RecommenderModel Inter_RNN Time_Loss Embed Intra_RNN file_exists load_pickle split_single_session map_user_and_artist_id_to_labels split_to_training_and_testing remove_noisy_users pad_sequences collapse_session create_bpr_mf_sets convert_timestamps_reddit create_padded_sequence collapse_repeating_items sort_and_split_usersessions perform_session_splits split_long_sessions convert_timestamps_lastfm save_pickle get_session_lengths file_exists load_pickle split_single_session map_user_and_artist_id_to_labels split_to_training_and_testing pad_sequences collapse_session create_bpr_mf_sets convert_timestamps_reddit create_padded_sequence collapse_repeating_items sort_and_split_usersessions remove_infrequent_artists perform_session_splits split_long_sessions convert_timestamps_lastfm save_pickle get_session_lengths Tester Tester Variable gather view Variable embedding_matrix cuda LongTensor process_batch view FloatTensor Variable backward size step zero_grad masked_cross_entropy_loss mean sum cuda init_hidden rnn topk process_batch size init_hidden rnn dump open list save_pickle reversed list save_pickle reversed load_pickle range save_pickle len items list perform_session_splits append range len items list collapse_session range len pop load_pickle list items print collapse_repeating_items append split_long_sessions keys range save_pickle len append items list len len items list create_padded_sequence range len load_pickle list items int get_session_lengths 
pad_sequences append range save_pickle len pop load_pickle list items print keys array append std range save_pickle len load_pickle list extend keys save_pickle print open append items list | # Hierarchical RNN recommender with temporal modeling The code for my master's thesis # Requirements Python 3 PyTorch, with CUDA support Numpy Scipy # Data ## Datasets LastFM: http://www.dtic.upf.edu/~ocelma/MusicRecommendationDataset/lastfm-1K.html | 157 |
Bkmz21/CompactCNNCascade | ['face detection'] | ['Compact Convolutional Neural Network Cascade for Face Detection'] | cntk/prepare_data.py | ## Compact Convolutional Neural Network Cascade ## This is a binary library for very fast detection of simple objects in images using CPU or GPU.<br> An implementation of the algorithm described in the following paper: I.A. Kalinovskiy, V.G. Spitsyn, Compact Convolutional Neural Network Cascade for Face Detection, http://arxiv.org/abs/1508.01292 If you use the provided binaries for your work, please cite this paper. examples/main.cpp shows how to use the library.<br> You need a processor with AVX or AVX2 instruction set support (1.6x speed-up due to the use of INT16).<br> Nvidia GPUs with compute capability 3.0 and higher are supported (library built with CUDA 8.0).<br> | 158 |
BlackHC/BatchBALD | ['active learning'] | ['BatchBALD: Efficient and Diverse Batch Acquisition for Deep Bayesian Active Learning'] | laaos_results/paper/cinic10_nispc20_pretrained_multibald_bald_k50_b10_885898.py laaos_results/paper/repeated_mnist_w_noise5_multibald_bald_k10_b10_531246.py src/acquisition_batch.py laaos_results/paper/cinic10_nispc20_pretrained_independent_bald_k50_b10_953083.py src/joint_entropy/unoptimized/test_exact_joint_probs.py laaos_results/paper/bald_20_534918_2018-12-14-004816.py laaos_results/paper/emnist_multibald_bald_k10_779382.py src/joint_entropy/sampling.py laaos_results/paper/repeated_mnist_w_noise5_independent_bald_k10_b10_420620.py laaos_results/paper/cinic10_nispc20_pretrained_multibald_bald_k50_b10_953083.py laaos_results/paper/repeated_mnist_w_noise2_multibald_bald_k10_b10_133926.py laaos_results/paper/rmnist_w_noise_independent_random_k10_b5_432746.py laaos_results/paper/rmnist_w_noise_independent_bald_k10_b10_211889.py src/test_acquisition_functions.py laaos_results/paper/bald_20_796110.py laaos_results/paper/repeated_mnist_w_noise2_independent_bald_k10_b10_661452.py laaos_results/paper/mnist_w_noise_independent_bald_k10_b10_89548.py laaos_results/paper/mnist_independent_bald_k100_b1_734490.py laaos_results/paper/repeated_mnist_w_noise5_independent_bald_k10_b10_717988.py src/mc_dropout.py laaos_results/paper/mnist_independent_bald_k100_b1_572400.py laaos_results/paper/repeated_mnist_w_noise5_independent_random_k10_b10_531246.py laaos_results/paper/bald_40_646208.py laaos_results/paper/repeated_mnist_w_noise5_multibald_bald_k10_b10_717988.py src/sampler_model.py laaos_results/paper/repeated_mnist_w_noise2_independent_bald_k10_b10_607771.py laaos_results/paper/emnist_independent_bald_k10_218487.py laaos_results/paper/bald_20_59448.py laaos_results/paper/emnist_independent_bald_k10_335690.py laaos_results/paper/repeated_mnist_w_noise5_multibald_bald_k10_b10_420620.py 
laaos_results/paper/rmnist_w_noise_multibald_bald_k10_b10_211889.py src/transformed_dataset.py src/ignite_progress_bar.py src/joint_entropy/unoptimized/test_sampling_batch.py laaos_results/paper/emnist_balanced_independent_random_k10_b5_267494.py laaos_results/paper/rmnist_w_noise_multibald_bald_k10_b10_424817.py laaos_results/paper/bald_20_990370.py laaos_results/paper/repeated_mnist_w_noise2_independent_bald_k10_b10_717988.py laaos_results/paper/repeated_mnist_w_noise5_independent_random_k10_b10_133926.py laaos_results/paper/mnist_independent_bald_k100_b10_572400.py laaos_results/paper/repeated_mnist_w_noise5_independent_bald_k10_b10_133926.py laaos_results/paper/rmnist_w_noise_independent_random_k10_b5_470211.py laaos_results/paper/cinic10_nispc20_pretrained_multibald_bald_k50_b10_371526.py laaos_results/paper/mnist_independent_bald_k100_b10_734490.py laaos_results/paper/mnist_multibald_bald_k100_b10_1029338.py src/al_notebook/show_batch.py laaos_results/paper/emnist_multibald_bald_k10_218487.py laaos_results/paper/mnist_multibald_bald_k100_b10_1038804.py src/ignite_utils.py laaos_results/paper/rmnist_w_noise_independent_random_k10_b5_131194.py laaos_results/paper/bald_20_247293.py src/joint_entropy/unoptimized/sampling.py src/random_fixed_length_sampler.py laaos_results/paper/cinic10_nispc20_pretrained_independent_bald_k50_b10_376026.py laaos_results/paper/rmnist_w_noise_multibald_bald_k10_b10_887341.py laaos_results/paper/cinic10_nispc20_pretrained_multibald_bald_k50_b10_376026.py laaos_results/paper/mnist_w_noise_independent_bald_k10_b10_841886.py src/torch_utils.py laaos_results/paper/mnist_multibald_bald_k100_b40_113780.py laaos_results/paper/emnist_balanced_independent_random_k10_b5_107646.py laaos_results/paper/mnist_independent_bald_k100_b1_1038804.py laaos_results/paper/bald_40_192067.py laaos_results/paper/rmnist_w_noise_independent_bald_k10_b10_741209.py laaos_results/paper/mnist_multibald_bald_k100_b5_572400.py 
laaos_results/paper/emnist_balanced_independent_random_k10_b5_67927.py laaos_results/paper/mnist_multibald_bald_k100_b5_1038804.py laaos_results/paper/bald_20_196837.py laaos_results/paper/rmnist_w_noise_multibald_bald_k10_b10_332929.py laaos_results/paper/repeated_mnist_w_noise5_independent_random_k10_b10_661452.py laaos_results/paper/rmnist_w_noise_independent_mean_stddev_k10_b10_415068.py src/al_notebook/results_loader.py laaos_results/paper/mnist_multibald_bald_k100_b10_734490.py laaos_results/paper/repeated_mnist_w_noise2_multibald_bald_k10_b10_531246.py laaos_results/paper/rmnist_w_noise_independent_random_k10_b5_796743.py laaos_results/paper/mnist_independent_bald_k100_b10_1029338.py laaos_results/paper/emnist_multibald_bald_k10_706460.py laaos_results/paper/repeated_mnist_w_noise5_multibald_bald_k10_b10_607771.py laaos_results/paper/emnist_independent_bald_k10_728719.py laaos_results/paper/repeated_mnist_w_noise5_independent_random_k10_b10_420620.py laaos_results/paper/repeated_mnist_w_noise2_independent_bald_k10_b10_420620.py laaos_results/paper/mnist_multibald_bald_k100_b40_841886.py laaos_results/paper/repeated_mnist_w_noise5_independent_bald_k10_b10_531246.py laaos_results/paper/bald_40_354594.py laaos_results/paper/rmnist_w_noise_independent_random_k10_b5_929495.py laaos_results/paper/rmnist_w_noise_independent_variation_ratios_k10_b10_355046.py laaos_results/paper/rmnist_w_noise_multibald_bald_k10_b10_741209.py laaos_results/paper/repeated_mnist_w_noise2_independent_random_k10_b10_717988.py laaos_results/paper/mnist_multibald_bald_k100_b10_572400.py src/dataset_enum.py laaos_results/paper/mnist_multibald_bald_k100_b5_926965.py src/context_stopwatch.py src/train_model.py laaos_results/paper/bald_20_132344_2018-12-13-091723.py laaos_results/paper/bald_mnist_107856.py laaos_results/paper/mnist_independent_bald_k100_b1_661442.py laaos_results/paper/emnist_independent_bald_k10_629535.py laaos_results/paper/mnist_multibald_bald_k100_b40_1003654.py 
src/acquisition_functions.py src/joint_entropy/test_matmuls.py laaos_results/paper/repeated_mnist_w_noise2_independent_bald_k10_b10_133926.py src/al_notebook/plots.py laaos_results/paper/cinic10_nispc20_pretrained_independent_bald_k50_b10_885898.py laaos_results/paper/mnist_multibald_bald_k100_b40_234797.py laaos_results/paper/rmnist_w_noise_independent_variation_ratios_k10_b10_289629.py src/recover_model.py laaos_results/paper/rmnist_w_noise_multibald_bald_k10_b10_920641.py laaos_results/paper/bald_40_527608.py laaos_results/paper/cinic10_nispc20_pretrained_independent_bald_k50_b10_941078.py laaos_results/paper/bald_40_757192.py laaos_results/paper/emnist_balanced_independent_random_k10_b5_507556.py src/joint_entropy/exact.py laaos_results/paper/mnist_w_noise_independent_bald_k10_b10_113780.py laaos_results/paper/rmnist_w_noise_independent_mean_stddev_k10_b10_822934.py laaos_results/paper/emnist_balanced_independent_random_k10_b5_482739.py laaos_results/paper/mnist_multibald_bald_k100_b40_920188.py src/al_notebook/torch_utils.py src/joint_entropy/unoptimized/exact.py laaos_results/paper/emnist_independent_bald_k10_779382.py laaos_results/paper/rmnist_w_noise_independent_mean_stddev_k10_b10_1017036.py laaos_results/paper/repeated_mnist_w_noise5_multibald_bald_k10_b10_133926.py src/mnist_model.py laaos_results/paper/mnist_independent_bald_k100_b1_926965.py laaos_results/paper/bald_40_332152.py laaos_results/paper/rmnist_w_noise_independent_bald_k10_b10_887341.py laaos_results/paper/cinic10_nispc20_pretrained_multibald_bald_k50_b10_395815.py laaos_results/paper/repeated_mnist_w_noise5_independent_random_k10_b10_717988.py laaos_results/paper/bald_20_1039002.py src/test_torch_utils.py laaos_results/paper/bald_20_440307.py laaos_results/paper/bald_mnist_703266.py laaos_results/paper/repeated_mnist_w_noise2_multibald_bald_k10_b10_717988.py laaos_results/paper/bald_40_903179.py laaos_results/paper/bald_20_281782.py src/independent_batch_acquisition.py 
laaos_results/paper/repeated_mnist_w_noise2_independent_random_k10_b10_607771.py laaos_results/paper/repeated_mnist_w_noise5_independent_bald_k10_b10_607771.py laaos_results/paper/mnist_independent_bald_k100_b10_661442.py laaos_results/paper/repeated_mnist_w_noise5_independent_random_k10_b10_607771.py laaos_results/paper/rmnist_w_noise_independent_bald_k10_b10_920641.py laaos_results/paper/rmnist_w_noise_independent_bald_k10_b10_332929.py laaos_results/paper/bald_40_445055.py src/active_learning_data.py src/joint_entropy/test_joint_entropy.py laaos_results/paper/repeated_mnist_w_noise2_independent_random_k10_b10_420620.py src/vgg_model.py laaos_results/paper/mnist_w_noise_independent_bald_k10_b10_920188.py laaos_results/paper/rmnist_w_noise_independent_mean_stddev_k10_b10_355046.py src/acquisition_method.py laaos_results/paper/mnist_independent_bald_k100_b1_1029338.py laaos_results/paper/cinic10_nispc20_pretrained_independent_bald_k50_b10_837979.py laaos_results/paper/mnist_multibald_bald_k100_b40_89548.py src/subrange_dataset.py src/emnist_model.py laaos_results/paper/bald_40_177989.py src/reduced_consistent_mc_sampler.py laaos_results/paper/mnist_multibald_bald_k100_b5_734490.py laaos_results/paper/cinic10_nispc20_pretrained_independent_bald_k50_b10_395815.py laaos_results/paper/repeated_mnist_w_noise5_independent_bald_k10_b10_661452.py laaos_results/paper/rmnist_w_noise_independent_random_k10_b5_54650.py laaos_results/paper/rmnist_w_noise_independent_bald_k10_b10_424817.py laaos_results/paper/repeated_mnist_w_noise5_multibald_bald_k10_b10_661452.py laaos_results/paper/bald_mnist_865341.py laaos_results/paper/rmnist_w_noise_independent_mean_stddev_k10_b10_289629.py laaos_results/paper/emnist_multibald_bald_k10_335690.py laaos_results/paper/mnist_multibald_bald_k100_b5_1029338.py laaos_results/paper/repeated_mnist_w_noise2_multibald_bald_k10_b10_661452.py laaos_results/paper/emnist_multibald_bald_k10_728719.py src/run_experiment.py 
laaos_results/paper/emnist_multibald_bald_k10_629535.py laaos_results/paper/repeated_mnist_w_noise2_independent_bald_k10_b10_531246.py src/ignite_restoring_score_guard.py laaos_results/paper/bald_40_825296.py laaos_results/paper/bald_mnist_1030548.py laaos_results/paper/mnist_multibald_bald_k100_b10_661442.py laaos_results/paper/bald_20_817488.py laaos_results/paper/repeated_mnist_w_noise2_multibald_bald_k10_b10_607771.py laaos_results/paper/repeated_mnist_w_noise2_independent_random_k10_b10_531246.py laaos_results/paper/mnist_independent_bald_k100_b10_1038804.py src/multi_bald.py laaos_results/paper/emnist_independent_bald_k10_706460.py laaos_results/paper/emnist_balanced_independent_random_k10_b5_129113.py laaos_results/paper/cinic10_nispc20_pretrained_multibald_bald_k50_b10_837979.py laaos_results/paper/repeated_mnist_w_noise2_independent_random_k10_b10_661452.py src/joint_entropy/unoptimized/test_sampling_sample.py laaos_results/paper/mnist_multibald_bald_k100_b10_926965.py laaos_results/paper/bald_20_1023109.py laaos_results/paper/cinic10_nispc20_pretrained_multibald_bald_k50_b10_941078.py src/joint_entropy/unoptimized/test_exact_batch.py laaos_results/paper/cinic10_nispc20_pretrained_independent_bald_k50_b10_371526.py laaos_results/paper/repeated_mnist_w_noise2_multibald_bald_k10_b10_420620.py src/test_torch_mnist.py laaos_results/paper/repeated_mnist_w_noise2_independent_random_k10_b10_133926.py laaos_results/paper/mnist_multibald_bald_k100_b5_661442.py src/run_experiment_no_al.py laaos_results/paper/bald_40_74940.py laaos_results/paper/bald_40_118596.py laaos_results/paper/bald_mnist_804264.py laaos_results/paper/mnist_w_noise_independent_bald_k10_b10_1003654.py laaos_results/paper/bald_mnist_755767.py laaos_results/paper/mnist_w_noise_independent_bald_k10_b10_234797.py laaos_results/paper/mnist_independent_bald_k100_b10_926965.py AcquisitionBatch max_entropy_acquisition_function random_acquisition_function variation_ratios bald_acquisition_function 
mean_stddev_acquisition_function AcquisitionFunction AcquisitionMethod ActiveLearningData ContextStopwatch get_RepeatedMNIST ExperimentData balance_dataset_by_repeating DatasetEnum compose_transformers get_targets get_MNIST get_CINIC10 get_experiment_data DataSource get_target_bins BayesianNet ignite_progress_bar IgniteProgressBar RestoringScoreGuard epoch_chain store_iteration_results log_results store_epoch_results chain log_epoch_results get_top_n compute_acquisition_bag MCDropout2d MCDropout BayesianModule set_dropout_p _MCDropout BayesianNet RandomFixedLengthSampler Loaders recover_model get_samples_from_laaos_store RecoveredModel recover_args parse_enum_str reduced_eval_consistent_bayesian_model SubsetEvalResults main create_experiment_config_argparser main SamplerModel NoDropoutModel eval_bayesian_model_consistent SubrangeDataset dataset_subset_split test_check_input_permutation test_acquisition_functions test_random_acquistion_function test_find_additional_labels test_partition_dataset test_get_balanced_samples get_subset_base_indices is_cuda_out_of_memory is_cudnn_snafu cuda_meminfo partition_dataset get_base_indices split_tensors should_reduce_batch_size logit_mean batch_jsd get_cuda_total_memory get_cuda_blocked_memory get_balanced_sample_indices gather_expand batch_multi_choices mean_stddev entropy get_cuda_available_memory _get_cuda_assumed_available_memory gc_cuda mutual_information train_model build_metrics TrainModelResult TransformedDataset vgg19 vgg16_cinic10_bn VGG vgg16_bn _vgg vgg19_bn vgg11_bn vgg13 vgg11 make_layers vgg13_bn vgg16 plot_aggregated_groups plot_save plot_aggregated_groups_sample_points plot_aggregated_values map_dict diff_args merge_args gather_accuracies get_accuracy get_marks load_experiment_results index_of_first fix_chosen_samples fill_values_sample_points_T aggregate_accuracies discard_eng_args to_namedtuple get_samples_accuracy_I aggregate_values get_diff_args_key2text aggregate_values_sample_points_T get_samples_values_I 
get_any gather_samples_I parse_enum_str get_vip_args VIPArgs handle_unary_funcs load_laaos_files AggregateAccuracies get_stores_info gather_accuracy recover_args get_merge_args_field pandas_accuracies expand_samples_I_values_I merge_sample_points_T groupby_dict handle_map_funcs get_laaos_files filter_dict gather_values camel_case_name get_threshold_quantiles_key sort_dict mnist_train show_batch mnist_test show_indices print_global_torch_tensors print_cuda_info gc_cuda entropy_from_M_K entropy_from_probs_b_M_C joint_probs_M_K joint_probs_M_K_impl batch_conditional_entropy_B batch conditional_entropy_from_logits_B_K_C entropy_joint_probs_B_M_C from_M_K sample_M_K importance_weighted_entropy_p_b_M_C sample_M_K_unified batch test_sampling_joint_entropy basic_exact_joint_entropy load_logits test_exact_joint_entropy test_unified_sampling_joint_entropy B samples_M_K test_looped_matmul C result_B_M_C test_batch_matmul K probs_B_K_C torch_device M batch_conditional_entropy_B joint_probs_M_K entropy_from_M_K batch from_M_K sample_M_K batch S probs_N_K_C B samples_M_K exact_module sampling_module C result_B_M_C K probs_B_K_C N torch_device test_batch M S probs_N_K_C B samples_M_K exact_module sampling_module C result_B_M_C test_joint_probs K probs_B_K_C N torch_device M S probs_N_K_C B samples_M_K exact_module sampling_module C result_B_M_C K probs_B_K_C N torch_device test_batch M S probs_N_K_C B samples_M_K test_sample_M_K exact_module sampling_module C result_B_M_C K probs_B_K_C N torch_device M ImageFolder Compose ConcatDataset MNIST Compose MNIST Compose ConcatDataset list balance_dataset_by_repeating print ActiveLearningData extract_dataset_from_indices tolist SubrangeDataset Subset from_iterable available_dataset active_dataset extract_dataset acquire TransformedDataset max get_target_bins values len list Counter list print min from_iterable max values isinstance ConcatDataset Subset SVHN dataset attach IgniteProgressBar topk reduced_eval_consistent_bayesian_model 
print get_dataset_indices scores_B subset_split numpy apply extend mnist AcquisitionMethod DatasetEnum dict independent parse_enum_str AcquisitionFunction validation_dataset balanced_test_set test_dataset DataLoader get_experiment_data device seed num_classes validation_set_size num_inference_samples get_samples_from_laaos_store get_data_source manual_seed recover_args train_model log_interval print train_dataset early_stopping_patience balanced_validation_set epochs print ActiveLearningData available_dataset DataLoader acquire dataset len add_argument validation_dataset balanced_test_set test_dataset indices DataLoader initial_samples_per_class ArgumentParser device get_experiment_data count seed experiments_laaos argv get_base_indices validation_set_size experiment_task_id len epochs active_dataset safe_load acquire append parse_args create_file_store quickquick __dict__ get_targets available_dataset manual_seed initial_samples join print add_argument train_dataset balanced_validation_set dict create_experiment_config_argparser makedirs values list num_classes name from_iterable get_random_available_indices train_dataset_limit get_data_source RandomFixedLengthSampler train_model RandomSampler epoch_samples len append SubrangeDataset isinstance len MNIST islice estimator eval DataLoader RandomAcquisitionFunction tensor cat MNIST create islice estimator eval DataLoader cat tensor BayesianNet create Forwarder rand estimator eval cat assert_allclose MNIST islice estimator eval DataLoader cat tensor BALDEstimator BayesianNet randint list get_balanced_sample_indices items ones partition_dataset array empty_cache collect gc_cuda empty _get_cuda_assumed_available_memory print mean logit_mean entropy squeeze int defaultdict randperm append range len isinstance tensor sum log multinomial reshape list list expand epoch_chain metrics store_iteration_results print nll_loss ignite_progress_bar create_supervised_trainer RestoringScoreGuard store_epoch_results run chain to desc 
log_epoch_results create_supervised_evaluator Conv2d load_state_dict_from_url make_layers VGG load_state_dict axhline list from_iterable scatter gca sample_points plot set quantile_sample_points zip thresholds enumerate pop items vlines print threshold_quantiles accuracies dict fill_between len plot_aggregated_values plot_aggregated_values savefig use namedtuple isinstance tuple camel_case_name keys handle_map_funcs handle_unary_funcs handle_unary_funcs items list inner_key handle_unary_funcs join walk abspath items list get_laaos_files safe_load recover_args append iterations accuracy index_of_first append list gather_samples_I values_getter zip append extend len list max zip percentile fill_values_sample_points_T asarray T merge_sample_points_T empty fill_values_sample_points_T asarray merge_sample_points_T initial_samples items list defaultdict available_sample_k isinstance tuple tag add set iterations keys values len discard_eng_args merge_args update list discard_eng_args merge_args dict keys initial_samples available_sample_k acquisition_method tag iterations num_inference_samples dataset type len update load_laaos_files append keys diff_args set imshow transpose numpy make_grid show_batch stack print items p tuple element_size shape device prod t shape range reshape ones shape mean sum shape empty range matmul ones entropy_from_probs_b_M_C split_tensors copy_ shape device zeros double entropy_joint_probs_B_M_C shape split_tensors copy_ shape double empty conditional_entropy_from_logits_B_K_C gather_expand t long double prod exp reshape gather_expand t double sum long log mean matmul importance_weighted_entropy_p_b_M_C empty range list exp product reshape gather_expand range mean shape device zeros double prod log load joint_probs_M_K exp item exp sample_M_K joint_probs_M_K item numpy exp sample_M_K joint_probs_M_K item numpy param gc_cuda benchmark benchmark reshape t double range sum log sum exp benchmark benchmark | # BatchBALD **Note:** A more modular 
re-implementation can be found at https://github.com/BlackHC/batchbald_redux. --- This is the code drop for our paper [BatchBALD: Efficient and Diverse Batch Acquisition for Deep Bayesian Active Learning](https://arxiv.org/abs/1906.08158). The code comes as is. See https://github.com/BlackHC/batchbald_redux and https://blackhc.github.io/batchbald_redux/ for a reimplementation. ElementAI's Baal framework also supports BatchBALD: https://github.com/ElementAI/baal/. Please cite us: ``` | 159 |
BlackHC/batchbald_redux | ['active learning'] | ['BatchBALD: Efficient and Diverse Batch Acquisition for Deep Bayesian Active Learning'] | batchbald_redux/joint_entropy.py batchbald_redux/__init__.py batchbald_redux/_nbdev.py batchbald_redux/consistent_mc_dropout.py batchbald_redux/repeated_mnist.py setup.py batchbald_redux/active_learning.py batchbald_redux/batchbald.py get_subset_base_indices get_base_indices ActiveLearningData get_balanced_sample_indices RandomFixedLengthSampler CandidateBatch compute_conditional_entropy compute_entropy get_bald_batch get_batchbald_batch ConsistentMCDropout BayesianModule _ConsistentMCDropout ConsistentMCDropout2d JointEntropy ExactJointEntropy gather_expand DynamicJointEntropy batch_multi_choices SampledJointEntropy create_MNIST_dataset TransformedDataset get_targets create_repeated_MNIST_dataset custom_doc_links int defaultdict randperm append range len Subset isinstance shape empty close tqdm shape empty close tqdm add_variables sum list min compute_conditional_entropy DynamicJointEntropy tqdm shape item append compute_batch empty max range shape topk min multinomial reshape list list DEBUG_CHECKS expand MNIST ConcatDataset Compose normal_ TransformedDataset isinstance ConcatDataset Subset dataset | # BatchBALD Redux > Clean reimplementation of \"BatchBALD: Efficient and Diverse Batch Acquisition for Deep Bayesian Active Learning\" For an introduction & more information, see https://blackhc.github.io/BatchBALD/. The paper can be found at http://arxiv.org/abs/1906.08158. The documentation for this version can be found under https://blackhc.github.io/batchbald_redux/. The original implementation used in the paper is available at https://github.com/BlackHC/BatchBALD. We are grateful for fastai's [nbdev](https://nbdev.fast.ai/) which is powering this package. For more information, explore the sections and notebooks in the left-hand menu. 
The code is available on https://github.com/BlackHC/batchbald_redux, and the website on https://blackhc.github.io/batchbald_redux. ## Install `pip install batchbald_redux` | 160 |
BogiHsu/Tacotron2-PyTorch | ['speech synthesis'] | ['Natural TTS Synthesis by Conditioning WaveNet on Mel Spectrogram Predictions'] | train.py model/layers.py text/numbers.py model/model.py utils/logger.py mkgta.py text/symbols.py text/__init__.py utils/dataset.py utils/util.py utils/audio.py inference.py hparams.py text/cleaners.py text/cmudict.py utils/plot.py hparams load_model plot audio infer save_mel plot_data infer load_model save_mel files_to_list load_checkpoint train save_checkpoint prepare_dataloaders LinearNorm ConvNorm Tacotron2 Decoder Postnet Prenet Encoder Tacotron2Loss is_end_of_frames LocationLayer Attention lowercase english_cleaners expand_abbreviations collapse_whitespace basic_cleaners convert_to_ascii transliteration_cleaners expand_numbers _parse_cmudict _get_pronunciation CMUDict normalize_numbers _expand_dollars _expand_ordinal _expand_decimal_point _expand_number _remove_commas text_to_sequence _clean_text _symbols_to_sequence _should_keep_symbol sequence_to_text _arpabet_to_sequence _mel_to_linear load_wav _build_mel_basis inv_melspectrogram _linear_to_mel melspectrogram _amp_to_db inv_spectrogram _istft preemphasis save_wav _denormalize spectrogram _db_to_amp _normalize _griffin_lim _stft inv_preemphasis _stft_parameters find_endpoint get_mel_text_pair ljdataset get_mel get_text files_to_list ljcollate Tacotron2Logger plot_spectrogram_to_numpy save_figure_to_numpy plot_alignment_to_numpy get_mask_from_lengths to_arr mode len load eval Tacotron2 load_state_dict inference long text_cleaners text_to_sequence imshow subplots range len savefig plot_data inv_melspectrogram save_wav to_arr T save train load_wav to_var teacher_infer to_arr n_frames_per_step melspectrogram Tensor cat join n_frames_per_step ljdataset ljcollate DataLoader load load_state_dict save chmod model LambdaLR clip_grad_norm_ zero_grad save_checkpoint prepare_dataloaders ckpt_pth Tacotron2 eg_text data_dir Adam grad_clip_thresh parse_batch Tacotron2Logger format close 
perf_counter eval sample_training log_training join log_dir criterion backward print load_checkpoint infer Tacotron2Loss parameters ckpt_dir mode sch step makedirs sub lowercase collapse_whitespace lowercase convert_to_ascii collapse_whitespace lowercase expand_abbreviations collapse_whitespace convert_to_ascii expand_numbers append _get_pronunciation sub split split group split int group sub match group len cleaner getattr read astype float32 abs max int16 sample_rate write astype preemphasis _stft _amp_to_db ref_level_db abs ref_level_db _db_to_amp _denormalize preemphasis _stft _linear_to_mel _amp_to_db ref_level_db abs ref_level_db _mel_to_linear _db_to_amp _denormalize int sample_rate _db_to_amp range len complex exp angle _stft rand astype pi gl_iters _istft range _stft_parameters _stft_parameters _build_mel_basis pinv _build_mel_basis dot maximum num_freq get_mel get_text load_wav reshape tostring_rgb fromstring subplots xlabel draw close ylabel colorbar tight_layout imshow save_figure_to_numpy subplots xlabel draw close ylabel colorbar tight_layout imshow save_figure_to_numpy cuda is_cuda mode arange item | # Tacotron2-PyTorch Yet another PyTorch implementation of [Natural TTS Synthesis by Conditioning WaveNet on Mel Spectrogram Predictions](https://arxiv.org/pdf/1712.05884.pdf). The project is largely based on [these](#References). I made some modifications to improve the speed and performance of both training and inference. ## TODO - [x] Add Colab demo. - [x] Update README. - [x] Upload pretrained models. - [x] Compatible with [WaveGlow](https://github.com/NVIDIA/waveglow) and [Hifi-GAN](https://github.com/jik876/hifi-gan). ## Requirements - Python >= 3.5.2 - torch >= 1.0.0
Borda/pyImSegm | ['superpixels', 'semantic segmentation'] | ['Supervised and unsupervised segmentation using superpixels, model estimation, and Graph Cut.', 'Detection and Localization of Drosophila Egg Chambers in Microscopy Images.', 'Region growing using superpixels with learned shape prior.'] | experiments_ovary_detect/run_ellipse_annot_match.py tests/test_pipelines.py experiments_ovary_detect/run_cut_segmented_objects.py tests/test_descriptors.py tests/test_region-growing.py experiments_ovary_centres/run_create_annotation.py experiments_segmentation/run_compute_stat_annot_segm.py imsegm/annotation.py imsegm/utilities/data_io.py imsegm/region_growing.py experiments_ovary_centres/gui_annot_center_correction.py experiments_segmentation/run_segm_slic_model_graphcut.py imsegm/classification.py imsegm/labeling.py imsegm/utilities/__init__.py tests/__init__.py experiments_ovary_detect/run_egg_swap_orientation.py tests/test_labeling.py streamlit-app.py experiments_ovary_detect/run_RG2Sp_estim_shape-models.py imsegm/utilities/data_samples.py imsegm/__init__.py experiments_ovary_detect/run_ovary_segm_evaluation.py imsegm/utilities/experiments.py tests/test_utilities.py tests/test_classification.py imsegm/utilities/read_zvi.py handling_annotations/run_segm_annot_inpaint.py imsegm/utilities/drawing.py experiments_ovary_detect/run_ovary_egg-segmentation.py imsegm/descriptors.py handling_annotations/run_image_color_quantization.py docs/source/conf.py imsegm/pipelines.py tests/test_graph-cut.py imsegm/graph_cuts.py handling_annotations/run_image_convert_label_color.py experiments_ovary_centres/run_center_evaluation.py experiments_ovary_detect/run_export_user-annot-segm.py handling_annotations/run_segm_annot_relabel.py experiments_ovary_centres/run_center_clustering.py tests/test_ellipse-fitting.py experiments_segmentation/run_segm_slic_classif_graphcut.py imsegm/ellipse_fitting.py experiments_segmentation/run_eval_superpixels.py 
experiments_ovary_detect/run_ellipse_cut_scale.py setup.py tests/test_superpixels.py imsegm/superpixels.py experiments_ovary_centres/run_center_prediction.py handling_annotations/run_overlap_images_segms.py experiments_ovary_centres/run_center_candidate_training.py BuildExt _parse_requirements process_image linkcode_resolve setup run_apidoc estimate_eggs_from_info export_corrections onclick onkey_release set_false_negative set_false_positive canvas_load_image_centers remove_point arg_parse_params canvas_update_image_centers main load_paths_image_csv add_point_correction load_csv_center_label compute_statistic_centers label_close_points compute_points_features export_dataset_visual export_visual_input_image_segm arg_parse_params load_image_segm_center prepare_experiment_folder wrapper_draw_export_slic_centers save_dump_data load_dump_data estim_points_compute_features wrapper_detect_center_candidates compute_min_dist_2_centers load_df_paths wrapper_estim_points_compute_features find_match_images_segms_centers is_drawing detect_center_candidates export_show_image_points_labels main_train experiment_loo dataset_load_images_segms_compute_features get_idx_name cluster_center_candidates main cluster_points_draw_export export_draw_image_centers_clusters load_center_evaluate evaluate_detection_stage estimate_eggs_from_info main compute_statistic_eggs_centres main get_csv_triplets load_compute_detect_centers segm_set_center_levels load_correct_segm draw_circle create_annot_centers main compute_eggs_center export_cut_objects main arg_parse_params condition_swap_correl perform_orientation_swap correlation_coefficient condition_swap_density compute_mean_image main main filter_table select_optimal_ellipse arg_parse_params main perform_stage extract_ellipse_object figure_draw_img_centre_segm export_figure arg_parse_params main figure_draw_annot_csv segment_graphcut_pixels segment_fit_ellipse_ransac_segm arg_parse_params segment_fit_ellipse_ransac segment_morphsnakes 
image_segmentation simplify_segm_3cls load_image segment_rg2sp_graphcut segment_graphcut_slic segment_active_contour create_dict_segmentation segment_rg2sp_greedy create_circle_center segment_watershed path_out_img main export_partial export_draw_image_segm segment_fit_ellipse evaluate_folder expert_visual compute_metrics arg_parse_params main main arg_parse_params fill_lut aparse_params export_visual main stat_single_set main compute_boundary_distance arg_parse_params segment_image wrapper_filter_labels load_train_classifier save_dump_data try_segment_image load_dump_data export_draw_image_segm_contour perform_train_predictions get_summary dataset_load_images_annot_compute_features prepare_output_dir visu_histogram_labels experiment_lpo use_rgb_image main_predict load_image_annot_compute_features_labels main_train filter_train_with_purity eval_segment_with_annot retrain_lpo_segment_image load_path_images compare_segms_metric_ars save_model load_model experiment_group_gmm segment_image_model experiment_single_gmm export_visual arg_parse_params main load_image get_idx_name segment_image_independent write_skip_file parse_imgs_idx_path parse_arg_params quantize_folder_images perform_quantize_image see_images_color_info main parse_arg_params perform_img_convert convert_labels_2_colors convert_folder_images convert_colors_2_labels load_dict_colours main main parse_arg_params perform_visu_overlap visualise_overlap main parse_arg_params perform_img_inpaint quantize_folder_images main parse_arg_params perform_image_relabel relabel_folder_images image_inpaint_pixels group_images_frequent_colors load_info_group_by_slices image_frequent_colors image_color_2_labels convert_img_colors_to_labels_reverted quantize_image_nearest_color unique_image_colors convert_img_colors_to_labels quantize_image_nearest_pixel convert_img_labels_to_colors create_clf_param_search_grid relabel_sequential create_clf_param_search_distrib compute_metric_fpfn_tpfn down_sample_dict_features_unique 
CrossValidate compute_stat_per_image unique_rows CrossValidateGroups export_results_clf_search search_params_cut_down_max_nb_iter balance_dataset_by_ convert_dict_label_features_2_vectors load_classifier create_classifiers create_classif_search create_clf_pipeline eval_classif_cross_val_roc shuffle_features_labels down_sample_dict_features_kmean compute_classif_stat_segm_annot compute_classif_metrics save_classifier feature_scoring_selection compute_tp_tn_fp_fn HoldOut compose_dict_label_features create_classif_search_train_export down_sample_dict_features_random convert_set_features_labels_2_dataset create_pipeline_neuron_net compute_metric_tpfp_tpfn eval_classif_cross_val_scores create_filter_bank_lm_2d cython_label_hist_seg2d norm_features _check_unrecognised_feature_names compute_ray_features_segm_2d_vectors _check_color_image compute_img_filter_response2d cython_img2d_color_mean numpy_img3d_gray_median numpy_img2d_color_mean image_subtract_gauss_smooth numpy_img2d_color_median _check_unrecognised_feature_group compute_selected_features_gray3d compute_selected_features_gray2d numpy_img3d_gray_mean make_edge_filter2d numpy_img2d_color_std shift_ray_features cython_img2d_color_std compute_texture_desc_lm_img3d_val cython_img3d_gray_mean compute_ray_features_segm_2d compute_image3d_gray_statistic compute_label_histograms_positions numpy_ray_features_seg2d compute_selected_features_img2d cython_img3d_gray_energy make_gaussian_filter1d numpy_img2d_color_energy compute_label_hist_proba reconstruct_ray_features_2d numpy_img3d_gray_energy reduce_close_points compute_texture_desc_lm_img2d_clr adjust_bounding_box_crop interpolate_ray_dist compute_label_hist_segm numpy_img3d_gray_std cython_img2d_color_energy cython_img3d_gray_std compute_ray_features_positions compute_image2d_color_statistic _check_gray_image_segm _check_color_image_segm compute_img_filter_response3d cython_ray_features_seg2d compute_selected_features_color2d prepare_boundary_points_ray_dist 
prepare_boundary_points_ray_join prepare_boundary_points_ray_mean filter_boundary_points split_segm_background_foreground get_slic_points_labels prepare_boundary_points_ray_edge prepare_boundary_points_close EllipseModelSegm ransac_segm add_overlap_ellipse estim_gmm_params compute_pairwise_cost_from_transitions insert_gc_debug_images estim_class_model compute_multivarian_otsu count_label_transitions_connected_segments compute_edge_model create_pairwise_matrix_specif segment_graph_cut_general create_pairwise_matrix estim_class_model_gmm compute_edge_weights get_vertexes_edges create_pairwise_matrix_uniform estim_class_model_kmeans compute_unary_cost compute_spatial_dist compute_pairwise_cost sequence_labels_merge binary_image_from_coords mask_segm_labels compute_boundary_distances relabel_by_dict contour_coords assign_label_by_max convert_segms_2_list compute_distance_map contour_binary_map merge_probab_labeling_2d histogram_regions_labels_counts compute_labels_overlap_matrix segm_labels_assignment relabel_max_overlap_merge relabel_max_overlap_unique assign_label_by_threshold histogram_regions_labels_norm assume_bg_on_boundary neighbour_connect4 train_classif_color2d_slic_features wrapper_compute_color2d_slic_features_labels estim_model_classes_group segment_color2d_slic_features_model_graphcut pipe_color2d_slic_features_model_graphcut pipe_gray3d_slic_features_model_graphcut compute_color2d_superpixels_features compute_object_shapes transform_rays_model_sets_mean_cdf_kmeans transform_rays_model_cdf_histograms region_growing_shape_slic_greedy transform_rays_model_cdf_kmeans object_segmentation_graphcut_slic prepare_graphcut_variables update_shape_costs_points get_neighboring_candidates transform_rays_model_sets_mean_cdf_mixture compute_data_costs_points compute_update_shape_costs_points_table_cdf transform_rays_model_cdf_mixture compute_cumulative_distrib region_growing_shape_slic_graphcut compute_pairwise_penalty compute_rg_crit compute_shape_prior_table_cdf 
transform_rays_model_cdf_spectral object_segmentation_graphcut_pixels compute_centre_moment_points compute_segm_object_shape enforce_center_labels compute_update_shape_costs_points_close_mean_cdf compute_segm_prob_fg get_segment_diffs_3d_conn6 segment_slic_img3d_gray get_segment_diffs_2d_conn4 get_neighboring_segments superpixel_centers segment_slic_img2d make_graph_segm_connect_grid3d_conn6 make_graph_segm_connect_grid2d_conn4 make_graph_segment_connect_edges load_images_list convert_nifti_2_img cut_object scale_image_size swap_coord_x_y get_image2d_boundary_color save_landmarks_txt load_complete_image_folder io_imsave add_padding save_landmarks_csv convert_img_color_to_rgb scale_image_vals_in_range load_landmarks_txt load_image io_imread export_image find_files_match_names_across_dirs update_path convert_img_color_from_rgb convert_img_2_nifti_rgb load_image_2d convert_img_2_nifti_gray load_zvi_volume_double_band_split load_params_from_txt image_open load_tiff_volume_split_double_band merge_image_channels scale_image_intensity load_landmarks_csv io_image_decorate load_image_tiff_volume load_img_double_band_split sample_segment_vertical_3d sample_segment_vertical_2d load_sample_image sample_color_image_rand_segment get_image_path make_overlap_images_optical figure_used_samples figure_image_segm_results figure_rg2sp_debug_complete draw_graphcut_weighted_edges ellipse_perimeter norm_aplha merge_object_masks figure_annot_slic_histogram_labels figure_ray_feature figure_overlap_annot_segm_image closest_point_on_line draw_rg2sp_results draw_eggs_ellipse _draw_disk figure_image_segm_centres figure_ellipse_fitting draw_image_segm_points draw_image_clusters_centers figure_segm_graphcut_debug figure_segm_boundary_dist make_overlap_images_chess create_figure_by_image ellipse figure_image_adjustment draw_eggs_rectangle draw_color_labeling _ellipse parse_annot_rectangles draw_graphcut_unary_cost_segments get_nb_workers WrapExecuteSequence load_config_yaml Experiment 
create_subfolders save_config_yaml create_experiment_folder set_experiment_logger extend_list_params try_decorator append_final_stat is_iterable string_dict zvi_read get_hex i32 parse_image get_dir get_layer_count read_image_container_content load_image read_item_storage_content read_struct ImageDimensionError generate_data TestClassification TestFeatures _export_ray_results TestEllipseFitting TestGraphCut TestLabels show_segm_debugs_2d TestPipelinesGMM show_segm_results_2d TestPipelinesClassif run_segm2d_gmm_gc expert_segm compute_prior_map load_inputs TestRegionGrowing TestSuperpixels TestDataSamples TestUtilities list subplots set_title estim_model_classes_group print segment_color2d_slic_features_model_graphcut mark_boundaries write dict imshow pyplot imread keys main connect get join find_source expanduser info add_argument abspath ArgumentParser vars parse_args sorted list glob debug zip iterrows values append center_of_mass astype replace ones set_false_negative isfile zeros read_csv to_csv replace list replace draw_eggs_ellipse zip array estimate_eggs_from_info len info canvas_update_image_centers imread load_csv_center_label plot tolist set_xlim draw imshow clf set_ticklabels contour set_ylim int canvas_update_image_centers cdist argmin append array values reset_index cdist argmin canvas_update_image_centers array values drop button xdata debug warning ydata remove_point add_point_correction export_corrections debug keyval_name len canvas_load_image_centers info keyval set_title show_all mpl_connect connect Window subplots_adjust add canvas_load_image_centers read_csv info set_default_size FigureCanvasGTKAgg load_paths_image_csv Figure len update join basename load_config_yaml update_path dirname list find_files_match_names_across_dirs info range len get_idx_name update_path merge_image_channels merge_probab_labeling_2d is_drawing swap_coord_x_y load_landmarks_csv warning relabel_by_dict export_visual_input_image_segm float max io_imread 
load_img_double_band_split join figure_image_segm_centres savefig close cdist argmin min array join subplots close tight_layout draw_image_segm_points savefig array segment_slic_img2d superpixel_centers compute_points_features compute_label_histograms_positions all hstack compute_ray_features_positions append ndarray isinstance compute_min_dist_2_centers warning type array len WrapExecuteSequence join iterrows update partial save_landmarks_csv label_close_points debug close tqdm info WrapExecuteSequence join list info update export_show_image_points_labels isdir label_close_points tolist astype logical_and compute_classif_metrics copy DICT_LABEL_MARKER_FN_FP sum len export_show_image_points_labels join save_landmarks_csv compute_statistic_centers swap_coord_x_y predict load tolist dict info savez_compressed info WrapExecuteSequence join partial set_index transpose to_csv info append DataFrame mkdir join debug dirname join list info find_match_images_segms_centers to_csv isfile range read_csv len export_dataset_visual prepare_experiment_folder string_dict save_dump_data CrossValidateGroups load_dump_data load_df_paths ceil save_config_yaml eval_classif_cross_val_roc create_subfolders set_experiment_logger info join int experiment_loo dataset_load_images_segms_compute_features convert_set_features_labels_2_dataset create_classif_search_train_export eval_classif_cross_val_scores len max DBSCAN copy mean append array range fit join subplots draw_image_clusters_centers tight_layout close array savefig max update join export_draw_image_centers_clusters save_landmarks_csv cluster_center_candidates debug swap_coord_x_y load_landmarks_csv io_imread WrapExecuteSequence join list find_files_match_names_across_dirs partial set_index create_subfolders to_csv append save_config_yaml DataFrame range NAME_CSV_TRIPLES merge_object_masks parse_annot_rectangles draw_eggs_rectangle export_show_image_points_labels astype vstack append center_of_mass array len join uint8 io_imsave 
estimate_eggs_from_info export_draw_image_centers_clusters debug astype close dict savefig load_image_segm_center figure_image_segm_results compute_statistic_eggs_centres array len WrapExecuteSequence join iterrows partial DataFrame debug create_subfolders map load_info_group_by_slices to_csv info append empty read_csv sleep evaluate_detection_stage collect transpose FOLDER_EXPERIMENT copy set_experiment_logger get_csv_triplets string_dict update join time collect estim_points_compute_features load_classifier dict cluster_points_draw_export detect_center_candidates sleep load_image_segm_center isfile set_index update_path find_match_images_segms_centers to_csv apply info read_csv update iterrows describe load_classifier prepare_experiment_folder uname append center_of_mass DataFrame max range debug label remove_small_objects binary_opening io_imread zeros _draw_disk zeros_like pi draw_circle max io_imsave logical_and shape sum range distance_transform_edt disk astype sqrt center_of_mass binary_opening enumerate int join uint8 join basename segm_set_center_levels replace to_csv load_correct_segm compute_eggs_center glob dirname mkdir join io_imsave debug cut_object load_image_2d unique debug export_image join condition_swap_correl basename condition_swap_density shape load_image_2d mean int float sum correlation_coefficient mean std WrapExecuteSequence median load_image_2d min sorted compute_mean_image argmax join iterrows update min tolist logical_and dict shape logical_or append zeros float sum max read_csv add_overlap_ellipse len info filter_table enumerate export_image join basename glob tolist cut_object load_image_2d resize zeros add_overlap_ellipse WrapExecuteSequence join str format partial iterrows info list mkdir NORM_FUNC len groupby perform_stage dropna subplots shape imshow scatter array gca float contour max subplots draw_eggs_rectangle shape imshow array gca float parse_annot_rectangles max contour join figure_draw_img_centre_segm 
figure_image_adjustment debug close shape savefig read_csv figure_draw_annot_csv io_imread values load_info_group_by_slices splitext error debug min shape abspath expanduser max io_imread load_img_double_band_split subplots figure_image_adjustment plot close imshow shape savefig contour array max zeros_like debug distance_transform_edt binary_closing watershed binary_fill_holes binary_opening range enumerate int _draw_disk transpose append zeros circle_perimeter enumerate int debug tolist astype create_circle_center binary_dilation min remove_small_holes shape active_contour zeros max gaussian_filter enumerate int zeros_like debug levelset MultiMorphSnakes create_circle_center dict sqrt run MorphACWE fn_preproc_points zeros_like debug EllipseModel estimate params any append add_overlap_ellipse enumerate fn_preproc_points int ransac zeros_like debug params any append enumerate add_overlap_ellipse len fn_preproc_points append zeros_like get_slic_points_labels debug params any bincount ravel ransac_segm add_overlap_ellipse enumerate object_segmentation_graphcut_pixels object_segmentation_graphcut_slic load join close region_growing_shape_slic_greedy savefig open range figure_rg2sp_debug_complete compute_segm_prob_fg len load join figure_rg2sp_debug_complete close savefig open range region_growing_shape_slic_graphcut compute_segm_prob_fg len gaussian_filter binary_fill_holes copy rollaxis swap_coord_x_y warning list io_imsave save_landmarks_csv simplify_segm_3cls load_image update_path create_dict_segmentation debug astype mkdir tile info export_partial __name__ join time items uint8 export_draw_image_segm dict fn load_landmarks_csv segment_slic_img2d join endswith to_csv mkdir splitext create_dict_segmentation tolist create_experiment_folder DEBUG setLevel debug logical_and relabel_max_overlap_unique logical_or load_image_2d append float sum max join subplots figure_image_adjustment plot close swap_coord_x_y load_landmarks_csv load_image_2d shape imshow savefig float 
contour array describe DataFrame list basename iterrows tolist compute_metrics append range update find_files_match_names_across_dirs expert_visual zip info empty join set_index to_csv len sort_index compute_object_shapes transform_rays_model_sets_mean_cdf_mixture linspace transform_rays_model_cdf_mixture array io_imread add_argument dirname ArgumentParser info vars parse_args tolist max enumerate fill_lut join relabel_sequential norm_aplha debug close savefig figure_overlap_annot_segm_image isdir compute_classif_stat_segm_annot astype nan export_visual load_image array basename empty Series median join isdir debug figure_segm_boundary_dist close compute_boundary_distances savefig segment_slic_img2d load_image isdir join figure_annot_slic_histogram_labels close savefig convert_img_color_to_rgb gray2rgb copy get _path_out_img clip convert_img_color_from_rgb debug mark_boundaries compute_selected_features_img2d histogram_regions_labels_norm use_rgb_image segment_slic_img2d load_image get_idx_name argmax max imsave WrapExecuteSequence iterrows reset_index partial read_csv join debug close savefig figure_image_segm_results save argmax savez_compressed fromarray str export_draw_image_segm_contour savefig sleep load_image debug astype close figure_segm_graphcut_debug parse_imgs_idx_path join uint8 collect compute_pairwise_cost_from_transitions segment_color2d_slic_features_model_graphcut get_idx_name join list convert_segms_2_list debug classification_report to_csv visu_histogram_labels info compute_stat_per_image pop collect load_dump_data convert_set_features_labels_2_dataset segment_image load_classifier sleep get_idx_name fit update list describe tolist zip WrapExecuteSequence join list partial info zip range len WrapExecuteSequence join update get partial collect to_csv eval_segment_with_annot CrossValidate get_summary info append sleep len update join CrossValidateGroups format debug create_classif_search_train_export copy load_classifier info 
feature_scoring_selection eval_classif_cross_val_roc eval_classif_cross_val_scores join close figure_used_samples savefig zeros max len WrapExecuteSequence partial describe count_label_transitions_connected_segments DEBUG DataFrame setLevel max round load_train_classifier list tolist perform_train_predictions _path_expt get_summary sleep append dataset_load_images_annot_compute_features write_skip_file get reset_index experiment_lpo zip collect min to_csv filter_train_with_purity eval_segment_with_annot nan_to_num create_experiment_folder read_csv join mkdir WrapExecuteSequence setLevel string_dict sorted list load_classifier sleep update prepare_output_dir get partial glob debug set_experiment_logger zip info INFO collect len get isfile relabel_sequential update_path load_image_2d info dict info error isinstance fromarray str uint8 astype figure_segm_graphcut_debug save info figure_image_segm_results join uint8 io_imsave int debug astype pipe_color2d_slic_features_model_graphcut export_visual load_image get_idx_name savez_compressed assume_bg_on_boundary parse_imgs_idx_path join uint8 io_imsave int debug segment_color2d_slic_features_model_graphcut astype sqrt shape export_visual load_image get_idx_name savez_compressed prod assume_bg_on_boundary parse_imgs_idx_path set_index ravel warning append DataFrame WrapExecuteSequence list partial collect zip dict info sleep len WrapExecuteSequence join list save_model load_model info estim_model_classes_group partial collect dict zip sleep len get isfile glob tolist warning info read_csv get load_path_images compare_segms_metric_ars experiment_group_gmm _path_expt experiment_single_gmm write_skip_file join basename update_path add_argument string_dict dirname ArgumentParser info vars parse_args sorted group_images_frequent_colors glob error debug len uint8 io_imsave error debug astype shape warning quantize_image_nearest_color quantize_image_nearest_pixel zeros io_imread WrapExecuteSequence sorted list partial glob 
see_images_color_info info len quantize_folder_images tuple randint unique convert_img_labels_to_colors unique_image_colors convert_img_colors_to_labels join DICT_COLOURS uint8 basename io_imsave debug convert_labels_2_colors astype shape warning convert_colors_2_labels imread WrapExecuteSequence sorted list partial glob debug dirname mkdir info load_dict_colours len convert_folder_images list dict zip percentile rescale_intensity relabel_sequential rollaxis norm_aplha mark_boundaries close copy load_image_2d savefig tile figure_image_segm_results imsave join debug visualise_overlap isfile sum uint8 io_imsave image_inpaint_pixels debug astype nan imread array int max list join basename io_imsave debug tolist zip imread array range WrapExecuteSequence sorted list partial glob info len relabel_folder_images fromarray int asarray getcolors warning prod int all astype flatten bincount zeros min max asarray enumerate fromarray int product debug getcolors float sum values debug image_frequent_colors info io_imread len list asarray reshape argmin keys reshape argmin asarray shape NearestNDInterpolator T reshape shape sum abs astype shape nan tile fill empty enumerate update groupby iterrows hasattr set_index DataFrame debug sort abs close tqdm info append sort_values array read_csv values len Pipeline warning BernoulliRBM Pipeline LogisticRegression update list format relabel_sequential debug hstack dict unique zip precision_recall_fscore_support array enumerate ones compute_metric_fpfn_tpfn compute_classif_metrics relabel_max_overlap_unique shape compute_metric_tpfp_tpfn WrapExecuteSequence list partial set_index map zip DataFrame range len join list set_index exists ExtraTreesClassifier f_regression debug map to_csv shape f_classif info append SelectKBest DataFrame range enumerate fit join format debug info debug list keys info list asarray debug tolist unique zeros max range enumerate len relabel_sequential export_results_clf_search debug fit Counter best_estimator_ 
nan_to_num shape save_classifier best_score_ get_params unique info create_classif_search create_clf_pipeline steps len join format relabel_sequential describe to_csv mean warning unique info DataFrame cross_val_score join T format debug fit clone roc_curve copy tolist to_csv predict_proba linspace unique zeros auc DataFrame enumerate split append hasattr product len create_clf_param_search_grid RandomizedSearchCV create_clf_param_search_distrib GridSearchCV search_params_cut_down_max_nb_iter info make_scorer list debug shuffle range len array unique list shuffle copy range len KMeans argmin fit_transform copy descr view unique copy unique_rows round array down_sample_dict_features_kmean debug compose_dict_label_features min down_sample_dict_features_random convert_dict_label_features_2_vectors Counter down_sample_dict_features_unique warning values sorted debug balance_dataset_by_ astype keys append array len sum remove debug tolist logical_and ravel compute_tp_tn_fp_fn float compute_tp_tn_fp_fn float warning warning debug computeColorImage2dMean shape array _check_color_image_segm max debug shape computeColorImage2dEnergy array _check_color_image_segm max computeColorImage2dVariance debug shape sqrt array _check_color_image_segm cython_img2d_color_mean max debug astype shape _check_color_image_segm zeros max range debug astype shape sqrt _check_color_image_segm zeros range max numpy_img2d_color_mean debug astype shape _check_color_image_segm zeros max range median debug shape _check_color_image_segm append zeros max range computeGrayImage3dMean debug shape array _check_gray_image_segm max debug shape computeGrayImage3dEnergy array _check_gray_image_segm max debug shape sqrt array _check_gray_image_segm computeGrayImage3dVariance cython_img3d_gray_mean max debug astype _check_gray_image_segm zeros max range numpy_img3d_gray_mean debug astype sqrt _check_gray_image_segm zeros max range debug astype _check_gray_image_segm zeros max range median debug 
_check_gray_image_segm append zeros max range T zeros_like _check_unrecognised_feature_names concatenate tuple _fn_mean gradient _fn_energy nan_to_num _check_gray_image_segm _fn_std numpy_img3d_gray_median append sum range sum list zeros_like _check_unrecognised_feature_names _check_color_image hstack _fn_mean gradient from_iterable nan_to_num _check_color_image_segm empty range transform StandardScaler fit exp make_gaussian_filter1d reshape asarray debug make_edge_filter2d pi dot vstack append zeros array range max array debug array astype shape zeros range gaussian_filter create_filter_bank_lm_2d image_subtract_gauss_smooth compute_image3d_gray_statistic concatenate debug zeros tuple power nan_to_num sqrt shape _check_gray_image_segm zip compute_img_filter_response3d sum log create_filter_bank_lm_2d rollaxis concatenate debug zeros _check_color_image astype tuple nan_to_num sqrt shape compute_image2d_color_statistic zip compute_img_filter_response3d sum log gaussian_filter compute_image3d_gray_statistic _check_unrecognised_feature_group concatenate error tuple nan_to_num any _check_gray_image_segm unique compute_texture_desc_lm_img3d_val append compute_selected_features_gray3d _check_gray_image_segm _check_unrecognised_feature_group compute_texture_desc_lm_img2d_clr concatenate tuple error _check_color_image nan_to_num compute_image2d_color_statistic append error shape error debug compute_label_hist_segm array append zeros compute_label_hist_proba max asarray astype enumerate cython_label_hist_seg2d adjust_bounding_box_crop logical_and shape zeros sum range computeLabelHistogram2d array int list rollaxis adjust_bounding_box_crop map shape tile sum int asarray arange gaussian_filter1d shift astype tolist index rotate shape zeros array enumerate int float computeRayFeaturesBinary2d array int asarray arange deg2rad sqrt sum array range enumerate len fn_compute gaussian_filter1d tuple astype map fft int hstack tolist rad2deg mean round float argmax len isinstance 
shift_ray_features opening debug disk shape append zeros float argmax array compute_ray_features_segm_2d _fn_cos arange isinstance poly1d hstack tolist fn_interp polyfit least_squares uinterp_us InterpolatedUnivariateSpline array x len T logical_and cos deg2rad pi linspace tile sin len cdist argmin delete shape unravel_index max range Inf len int sum model_class inf isinstance criterion residuals choice estimate abs array range len segment_slic_img2d float max astype int ellipse min logical_and shape zeros float sum max range split_segm_background_foreground vstack append reconstruct_ray_features_2d reduce_close_points compute_ray_features_segm_2d opening binary_fill_holes disk inf split_segm_background_foreground min append reconstruct_ray_features_2d array reduce_close_points compute_ray_features_segm_2d inf split_segm_background_foreground min mean append reconstruct_ray_features_2d array reduce_close_points compute_ray_features_segm_2d max split_segm_background_foreground cdist argmin range append reconstruct_ray_features_2d array reduce_close_points compute_ray_features_segm_2d T astype logical_and zeros make_graph_segm_connect_grid2d_conn4 max int filter_boundary_points cdist argmin segment_slic_img2d float max range append sum mean shape cov append float argmax array range int set_params BayesianGaussianMixture KMeans compute_multivarian_otsu Pipeline estim_class_model_kmeans sqrt info GaussianMixture max fit_predict fit threshold_otsu mean shape zeros abs range debug KMeans shape info GaussianMixture fit_predict fit percentile int debug KMeans tolist sqrt shape array GaussianMixture max fit_predict fit make_graph_segm_connect_grid3d_conn6 make_graph_segm_connect_grid2d_conn4 nan_to_num mean max paired_euclidean_distances enumerate exp std ones error paired_manhattan_distances max paired_euclidean_distances len ones eye ones max eye ndarray isinstance min create_pairwise_matrix_specif create_pairwise_matrix_uniform abs array copy create_pairwise_matrix array 
get draw_color_labeling superpixel_centers draw_graphcut_unary_cost_segments draw_graphcut_weighted_edges paired_euclidean_distances exp std ones debug astype compute_edge_model shape compute_selected_features_img2d superpixel_centers compute_spatial_dist startswith get_vertexes_edges paired_manhattan_distances array fit_transform len insert_gc_debug_images debug astype shape compute_edge_weights int32 compute_unary_cost cut_general_graph compute_pairwise_cost tuple hstack tolist unique get_vertexes_edges zeros max range len log tile sum max range len any zeros range append range zeros contour_coords binary_image_from_coords distance_transform_edt shape append ravel unique enumerate zeros ravel max enumerate nan_to_num T histogram_regions_labels_counts index keys bincount zeros float max len keys bincount zeros float argmax max len tolist shape logical_or full copy all logical_and mask_segm_labels any array full zeros_like zeros sum max keys debug shape zip zeros ravel len astype shape compute_labels_overlap_matrix max range enumerate argmax tolist astype compute_labels_overlap_matrix sum array list T find_boundaries distance_transform_edt meshgrid ravel range list unique get_image2d_boundary_color max range label2rgb rollaxis debug estim_class_model shape segment_graph_cut_general predict_proba tile info compute_color2d_superpixels_features WrapExecuteSequence partial concatenate tuple estim_class_model nan_to_num append label2rgb hasattr rollaxis debug shape segment_graph_cut_general predict_proba tile info compute_color2d_superpixels_features segment_slic_img2d debug compute_selected_features_img2d shape astype compute_color2d_superpixels_features histogram_regions_labels_norm argmax max WrapExecuteSequence list CrossValidateGroups partial info debug convert_set_features_labels_2_dataset create_classif_search_train_export dict nan_to_num zip append range len compute_selected_features_gray3d norm_features debug estim_class_model shape segment_graph_cut_general 
predict_proba info segment_slic_img3d_gray tuple argmax max log abs compute_spatial_dist cut_general_graph list exp std ones superpixel_centers append sum range debug sqrt tile get_vertexes_edges cdf item enumerate int histogram_regions_labels_norm logical_or eye ravel array len max log list ones logical_and shape meshgrid append range cut_grid_graph disk sqrt cdf enumerate int reshape eye zeros array len gaussian_filter1d shift_ray_features interpolate_ray_dist center_of_mass compute_ray_features_segm_2d append label compute_segm_object_shape unique int list min tolist array append zeros max range enumerate int max BayesianGaussianMixture debug len compute_cumulative_distrib MeanShift labels_ means_ unique weights_ bincount array fit max gaussian_filter1d BayesianGaussianMixture zip debug fit compute_cumulative_distrib sqrt means_ covariances_ weights_ append array len max std gaussian_filter1d KMeans cluster_centers_ compute_cumulative_distrib append array enumerate fit max gaussian_filter1d std debug fit compute_cumulative_distrib mean labels_ unique SpectralClustering bincount zeros float array enumerate len max std KMeans debug len cluster_centers_ compute_cumulative_distrib MeanShift labels_ unique bincount zeros float array enumerate fit debug tolist astype array histogram append zeros sum max range enumerate int T arctan2 rad2deg sqrt vstack floor interp2d array T arctan2 eig rad2deg mean cov tile round array list asarray compute_centre_moment_points tolist astype sqrt compute_shape_prior_table_cdf zeros sum array range enumerate len max list asarray compute_centre_moment_points tolist astype ravel sqrt compute_shape_prior_table_cdf compute_segm_object_shape zip zeros sum array enumerate len empty enumerate logical_or logical_and unique sum arange compute_pairwise_penalty len histogram_regions_labels_norm argmax update_shape_costs_points sorted get_neighboring_candidates compute_data_costs_points ones tolist get_neighboring_segments superpixel_centers shape 
append bincount range Inf update astype compute_rg_crit copy zip empty make_graph_segm_connect_grid2d_conn4 any enforce_center_labels zeros ravel len ones len index vstack compute_spatial_dist append zeros empty max range enumerate enumerate cut_general_graph update_shape_costs_points get_neighboring_candidates compute_data_costs_points ones tolist get_neighboring_segments superpixel_centers shape array_equal append bincount range Inf update astype compute_rg_crit copy empty make_graph_segm_connect_grid2d_conn4 any enforce_center_labels zeros ravel array prepare_graphcut_variables len max int rollaxis debug min shape prod tile slic float array int asarray debug min shape array slic label prod sort unique len list arange debug reshape get_segment_diffs_2d_conn4 dict shape unique zip len list arange get_segment_diffs_3d_conn6 debug reshape dict shape unique zip len error debug tolist shape append ravel max regionprops enumerate append tolist join exists startswith abspath expanduser range array debug groups match abspath append expanduser len debug tolist abspath expanduser read_csv len info debug to_csv warning zeros DataFrame array uint8 min astype float max percentile rescale_intensity astype uint8 imsave asarray basename convert splitext io_imread image_open uint8 io_imsave debug astype shape warning unique float max format replace debug groups match join debug rgb2gray Nifti1Pair dirname swapaxes eye save io_imread join debug reshape Nifti1Pair shape dirname swapaxes eye save io_imread load io_imsave reshape get_data shape swapaxes update_path rollaxis debug min scale_image_intensity shape max io_imread array load_zvi load_tiff_volume_split_double_band scale_image_intensity io_imread load_zvi_volume_double_band_split ANTIALIAS debug save resize image_open glob join debug len append load_image format load_image_tiff_volume zeros array rollaxis shape list basename zip _get_paths_names glob debug DataFrame index info append dropna _get_name len int dtype error 
hstack astype ravel shape vstack bincount median argmax array min max add_padding zeros hstack shift centroid rad2deg ndim rotate shape copy astype bincount orientation argmax array get_image2d_boundary_color append ones tuple hstack append array range int sample_segment_vertical_2d copy append array range seed int range random_integers join update_path debug exists get_image_path io_imread minimum astype maximum nonzero abs array axis tight_layout subplots_adjust set set_ticklabels gca gray2rgb set_title axis tight_layout subplots_adjust imshow set_ticklabels contour create_figure_by_image max subplots gray2rgb set_title ones astype axis colorbar subplots_adjust tight_layout shape imshow set_ticklabels float contour array max gray2rgb subplots set_title mark_boundaries axis subplots_adjust shape imshow set_ticklabels float array range enumerate len max float subplots array int ellipse_perimeter plot len axis subplots_adjust set imshow legend range create_figure_by_image enumerate set_title concatenate semilogy tuple grid set figure histogram legend gca sum values enumerate subplots set_title plot tolist grid set imshow legend set_yticklabels axis tight_layout colorbar imshow contour create_figure_by_image max get_cmap arange get_cmap max range Line array project int list closest_point_on_line norm ellipse arctan2 debug logical_and map sqrt zip zeros float sum array enumerate list all zip list closest_point_on_line map zip polygon zeros array append int max debug logical_and logical_or any append float sum array range len imshow contour plot set argmax subplots ndarray isinstance plot tight_layout set imshow contour disk hasattr circle list T rollaxis _draw_disk min astype map shape get_cmap tile line_aa zeros float max enumerate arrow plot zip cos deg2rad set imshow sin contour subplots draw_rg2sp_results plot set_title grid axis colorbar set subplots_adjust shape imshow array float contour max range debug tuple len vstack info append zeros max range enumerate int 
debug tuple len vstack info append zeros max range enumerate asarray plot set imshow scatter array set_ticklabels float contour max range subplots set_title find_boundaries distance_transform_edt colorbar shape imshow array float contour max get join update format load_config_yaml isdir debug gmtime warning mkdir info save_config_yaml string_dict join getLogger addHandler info DEBUG setLevel FileHandler str join format debug classification_report shape info get update copy append enumerate join mkdir unpack i32 unpack ZviImageTuple read read_struct parse_image read ZviItemTuple read_struct i32 reshape fromstring ImageTuple openstream OleFileIO read_image_container_content append listdir openstream OleFileIO openstream OleFileIO zvi_read Array OleFileIO get_layer_count append array range int list T arange rand tile range len show join close savefig figure_ray_feature load_sample_image sample_segment_vertical_2d show join close savefig figure_image_segm_results show join close figure_segm_graphcut_debug savefig pop join show_segm_debugs_2d estim_model_classes_group segment_color2d_slic_features_model_graphcut mkdir show_segm_results_2d load_sample_image load_sample_image zeros compute_shape_prior_table_cdf array arange segment_slic_img2d join load_image_2d values join subplots set_title close shape imshow array savefig contour max sample_segment_vertical_3d sample_segment_vertical_2d load_sample_image | # Image segmentation toolbox [![CI testing](https://github.com/Borda/pyImSegm/workflows/CI%20testing/badge.svg?branch=master&event=push)](https://github.com/Borda/pyImSegm/actions?query=workflow%3A%22CI+testing%22) [![codecov](https://codecov.io/gh/Borda/pyImSegm/branch/master/graph/badge.svg?token=BCvf6F5sFP)](https://codecov.io/gh/Borda/pyImSegm) [![Codacy 
Badge](https://api.codacy.com/project/badge/Grade/48b7976bbe9d42bc8452f6f9e573ee70)](https://www.codacy.com/app/Borda/pyImSegm?utm_source=github.com&utm_medium=referral&utm_content=Borda/pyImSegm&utm_campaign=Badge_Grade) [![CircleCI](https://circleci.com/gh/Borda/pyImSegm.svg?style=svg&circle-token=a30180a28ae7e490c0c0829d1549fcec9a5c59d0)](https://circleci.com/gh/Borda/pyImSegm) [![CodeFactor](https://www.codefactor.io/repository/github/borda/pyimsegm/badge)](https://www.codefactor.io/repository/github/borda/pyimsegm) [![Language grade: Python](https://img.shields.io/lgtm/grade/python/g/Borda/pyImSegm.svg?logo=lgtm&logoWidth=18)](https://lgtm.com/projects/g/Borda/pyImSegm/context:python) [![Documentation Status](https://readthedocs.org/projects/pyimsegm/badge/?version=latest)](https://pyimsegm.readthedocs.io/en/latest/?badge=latest) [![Gitter](https://badges.gitter.im/pyImSegm/community.svg)](https://gitter.im/pyImSegm/community?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge) ![CI experiments](https://github.com/Borda/pyImSegm/workflows/CI%20experiments/badge.svg?branch=master&event=push) | 162 |
BorealisAI/cross_domain_coherence | ['domain generalization'] | ['A Cross-Domain Transferable Neural Coherence Model'] | utils/lm_utils.py models/language_models.py run_lm_coherence.py utils/logging_utils.py eval.py utils/np_utils.py models/gan_models.py models/infersent_models.py prepare_data.py preprocess.py train_lm.py config.py models/coherence_models.py add_args.py run_bigram_coherence.py utils/data_utils.py add_bigram_args experiments download_file_from_google_drive save_response_content get_confirm_token get_infersent save_eval_perm get_lm_hidden permute_articles_with_replacement add_args get_average_glove get_s2s_hidden load_wsj_file_list prep_wsj_lm_data permute_articles load_wiki_file_list load_file_list run_bigram_coherence run_lm_coherence add_args train_lm add_args MarginRankingLoss BigramCoherence MLP MLP_Discriminator _sequence_mask compute_loss RNN_LM NLINet InferSent BGRUlastEncoder LSTMEncoder GRUEncoder InnerAttentionMILAEncoder InnerAttentionNAACLEncoder ClassificationNet BLSTMprojEncoder ConvNetEncoder InnerAttentionYANGEncoder repackage_hidden get_batch LanguageModel LMCoherence batchify WIKI_Bigram_Dataset WSJ_Bigram_Dataset DataSet SentCorpus Corpus Vocabulary _set_basic_logging _get_logger generate_random_pmatrices random_permutation_matrix add_argument _get_logger collect run_bigram_coherence bidirectional info append isoformat LOG_PATH range get get_confirm_token save_response_content Session items list startswith deepcopy list permutations shuffle append append deepcopy list shuffle append join listdir append join listdir WIKI_IN_DOMAIN WIKI_EASY_TRAIN_LIST WIKI_EASY_TEST_LIST InferSent WORD_EMBEDDING set_w2v_path cuda seed list INFERSENT_MODEL len load_state_dict encode astype zip info build_vocab_k_words load MAX_SENT_LENGTH float32 dict load_file_list seed info astype float32 load_file_list true_divide split zeros len load join seed hstack astype lm float32 LanguageModel tqdm eval load_file_list unsqueeze info encode CHECKPOINT_PATH 
to init_hidden len load join seed model hstack astype float32 tqdm eval load_file_list unsqueeze info encode CHECKPOINT_PATH to Seq2SeqModel len seed load_valid load_valid_sample load_test DataSet load_test_sample article_df map save_valid_perm save_test_perm info sum add_argument evaluate_dis evaluate_ins load_valid_perm DataLoader save data_name get_infersent load_valid save_eval_perm file_list load_test_perm Corpus CHECKPOINT_PATH SentCorpus format load_train DataSet get_lm_hidden BigramCoherence portion init info join load_test print load_best_state get_s2s_hidden get_average_glove fit load join load_test format load_train info DataSet file_list evaluate_dis evaluate_ins lm LanguageModel DataLoader load_test_perm lm_name LMCoherence Corpus CHECKPOINT_PATH load_test load_train file_list DataSet print LanguageModel reverse Corpus fit max Variable size expand cuda expand_as long is_cuda view size _sequence_mask float sum cross_entropy isinstance size cuda narrow contiguous Variable min len RotatingFileHandler setFormatter getLogger addHandler Formatter setLevel basicConfig permutation identity append random_permutation_matrix range | # Cross-Domain Coherence Modeling A Cross-Domain Transferable Neural Coherence Model Paper published in ACL 2019: [arxiv.org/abs/1905.11912](https://arxiv.org/abs/1905.11912) This implementation is based on PyTorch 0.4.1. ### Dataset To download the dataset: ``` python prepare_data.py ``` which includes the WikiCoherence dataset we constructed, 300-dim GloVe embeddings, and a pre-trained InferSent model. | 163 |
BorjaBalle/analytic-gaussian-mechanism | ['denoising'] | ['Improving the Gaussian Mechanism for Differential Privacy: Analytical Calibration and Optimal Denoising'] | agm-example.py calibrateAnalyticGaussianMechanism caseA doubling_trick function_s_to_alpha sqrt binary_search | # Analytic Gaussian Mechanism Example Python implementation of the analytic Gaussian mechanism proposed in the [paper](https://arxiv.org/abs/1805.06530): > B. Balle and Y.-X. Wang. Improving the Gaussian Mechanism for Differential Privacy: Analytical Calibration and Optimal Denoising. International Conference on Machine Learning (ICML), 2018. Please include a citation to the paper if you use this code. | 164 |
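The row above only exposes this repository's function names (`calibrateAnalyticGaussianMechanism`, `caseA`, `doubling_trick`, `binary_search`), so as a rough illustration of what such a calibration does — not the repository's actual implementation, and with helper names of my own — here is a minimal sketch that bisects on the noise scale σ until the Gaussian mechanism's exact privacy curve δ(ε, σ) drops to the target δ. The closed-form δ(ε, σ) used below is the standard tight expression for the Gaussian mechanism with L2 sensitivity Δ.

```python
from math import erf, exp, sqrt

def gauss_cdf(t):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + erf(t / sqrt(2.0)))

def delta_gaussian(eps, sigma, sens=1.0):
    # Tight delta(eps) of the Gaussian mechanism with L2 sensitivity `sens`:
    # Phi(Δ/(2σ) − εσ/Δ) − e^ε · Phi(−Δ/(2σ) − εσ/Δ)
    a = sens / (2.0 * sigma)
    b = eps * sigma / sens
    return gauss_cdf(a - b) - exp(eps) * gauss_cdf(-a - b)

def calibrate_sigma(eps, delta, sens=1.0, lo=1e-4, hi=1e4, iters=200):
    # Smallest sigma with delta_gaussian(eps, sigma) <= delta.
    # delta is monotone decreasing in sigma on this range, so bisect.
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if delta_gaussian(eps, mid, sens) > delta:
            lo = mid
        else:
            hi = mid
    return hi
```

For (ε, δ) = (1, 10⁻⁵) this yields a σ noticeably below the classical bound σ = Δ·√(2 ln(1.25/δ))/ε ≈ 4.84, which is the point of the analytic calibration.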
BreastGAN/experiment1 | ['adversarial attack'] | ['Injecting and removing malignant features in mammography with CycleGAN: Investigation of an automated adversarial attack using neural networks'] | resources/model_utils.py models/base.py docker/run_config.py resources/data/mnist.py resources/data/loader.py resources/yapf_nbformat/yapf_nbformat.py resources/data/utils.py resources/image_utils.py resources/synthetic_data.py flags/flags_parser.py models/breast_cycle_gan_graph.py docker/jupyter_notebook_config.py main parse BaseModel KerasModel set_seeds lsgan_loss_discriminator build_model cross_entropy_loss lsgan_loss_generator conv2d deconv2d patch_discriminator build_generator_resnet_9blocks instance_norm build_resnet_block discriminator cycle_consistency_loss CycleGan read_small run load_dicom normalize load_tif load_image downsample standardize normalize_gaussian to_numpy tile_images noise generate_synth gen_element read_synth read_inbreast_csv load_bcdr read_bcdr_outlines_csv init read_bcdr_img_csv load_inbreast print_format_info read_mnist read_mnist_label read_data_sets next_batch shuffle main format_nb flags_file parse model_file add_argument import_module ArgumentParser parse_args run join loads sub DotMap seed str set_random_seed load reshape expand_dims size pixel_array dcmread endswith flip mean std min max rescale shape max zeros append range linspace print reshape concat append zeros range int zeros_like min uniform is_inside randint max range generate_synth zeros next range join join read_inbreast_csv int DotMap listdir split list values patient_id join list format DotMap values image_filename lower read_bcdr_outlines_csv exists read_bcdr_img_csv print print_format_info read_data_sets print_format_info read_data_sets_label int range len permutation validate format print FormatCode cells split format_nb | BreastGAN/experiment1 | 165 |
BreastGAN/experiment2 | ['adversarial attack'] | ['Injecting and removing malignant features in mammography with CycleGAN: Investigation of an automated adversarial attack using neural networks'] | resources/data/loader.py models/breast_cycle_gan/custom/conv/spectral_norm.py models/breast_cycle_gan/custom/gan.py models/utils/icnr.py notebooks/inference_tfrecord_to_png.py models/breast_cycle_gan/inference.py docker/jupyter_notebook_config.py resources/yapf_nbformat/yapf_nbformat.py models/breast_cycle_gan/custom/conv/contrib.py models/breast_cycle_gan/custom/conv/layers.py resources/data/transformer.py resources/data/features.py resources/data/mnist.py models/breast_cycle_gan/train.py resources/synthetic_data.py models/breast_cycle_gan/discriminator.py notebooks/treval_split.py resources/model_utils.py resources/data/utils.py resources/image_utils.py models/breast_cycle_gan/generator.py notebooks/image_conversion.py resources/data/create_rcnn_dataset.py models/breast_cycle_gan/data_provider.py models/breast_cycle_gan/custom/conv/keras.py notebooks/inspect2.py undo_normalize_image normalize_image normalize_synth_image provide_cbis_dataset parse_example provide_custom_datasets provide_synth_dataset _provide_custom_dataset pix2pix_discriminator pix2pix_arg_scope discriminator generator cyclegan_arg_scope _dynamic_or_static_shape cyclegan_upsample cyclegan_generator_resnet _make_dir_if_not_exists to_example_extended make_inference_graph export main _get_lr _define_train_ops _get_optimizer main _define_model cyclegan_model _convert_tensor_or_l_or_d cycle_consistency_loss_impl gan_model cycle_consistency_loss add_cyclegan_image_summaries convolution2d convolution MyKerasConv2D MyKerasConv MyConv2D shape_list apply_spectral_norm ICNR str2bool get_image get_images to_png normalize_image to_examples show_images treval_split read_unparsed write_unparsed load_dicom show_img normalize downsample standardize normalize_gaussian load_image save_image load_with_pil to_numpy 
tile_images noise generate_synth gen_element read_synth get_image convert mask_to_boxes save_examples to_png feature_dict to_example get_examples to_feature_dict example_to_str numpy_to_feature int_to_feature img_to_feature show_records example_to_int str_to_feature to_example example_to_numpy read_inbreast_csv load_bcdr get_cbis_image_path load_all_datasets read_cbis_csv init read_bcdr_outlines_csv read_zrh_folder read_bcdr_img_csv load_inbreast print_format_info read_mnist read_mnist_label read_data_sets transform_img merge_shards f generate_otsu_mask transform_single transform_parallel load_inbreast_mask load_bcdr_mask merge_shards_in_folder transform transform_sequential load_masks next_batch shuffle main format_nb to_float reshape reduce_max crop_to_bounding_box expand_dims reduce_min assert_is_compatible_with squeeze read string_input_producer normalize_image WholeFileReader decode_image match_filenames_once append _provide_custom_dataset to_float reshape reduce_max reduce_min assert_is_compatible_with list reshape generate_synth batch py_func parse_single_example concat decode_img print len flatten l2_regularizer constant_value shape as_list array assert_has_rank MakeDirs include_masks normalize_image concat float32 placeholder unstack expand_dims update to_feature_dict join split generated_dir _make_dir_if_not_exists make_inference_graph format Saver model generator partial print add_cyclegan_image_summaries add_gan_model_summaries unstack discriminator cyclegan_model get_or_create_global_step max_number_of_steps AdamOptimizer gan_train_ops _get_lr generator_lr _get_optimizer discriminator_lr scalar train_log_dir MakeDirs strftime get_trainable_variables expand_dims stack isinstance _build_variable_getter l2_normalize reshape transpose squeeze divide matmul assign shape_list convert_to_tensor as_list shape append range len reshape frombuffer Example ParseFromString Glob tf_record_iterator fromarray convert save resize_image_with_crop_or_pad 
parse_single_example decode_img TFRecordOptions GZIP append train_test_split zip len append tf_record_iterator show add_subplot imshow hist figure ravel size pixel_array copy dcmread endswith reshape imsave print flip mean std min max min rescale shape zeros max append range linspace print reshape concat append zeros range int min randint uniform is_inside zeros max range poisson generate_synth zeros next range example_to_int Example ParseFromString Glob tf_record_iterator threshold_otsu closing square clear_border bbox append label float clip regionprops get_image join example_to_str replace to_png feature_dict mask_to_boxes makedirs as_bytes frombuffer show str print parse_example imshow tf_record_iterator join seed read_zrh read_cbis read_all read_inbreast shuffle read_bcdr02 read_bcdr01 join endswith strip listdir split join DotMap get_cbis_image_path range append DataFrame read_csv len join read_zrh_files join read_inbreast_csv str int DotMap print listdir split list values patient_id join list format DotMap values image_filename lower read_bcdr_outlines_csv exists read_bcdr_img_csv print print_format_info read_data_sets print_format_info read_data_sets_label format zeros_like gen_otsu gen_mask print logical_and array_equal generate_synth range downsample standardize normalize_gaussian normalize augment_image transform_single to_deterministic seed int64 format print join format f join list format partial print len map ThreadPool enumerate transform_fn range makedirs join merge_shards startswith append listdir zeros transpose polygon zeros array str list print load_inbreast_mask load_bcdr_mask load_image int range len permutation validate format print FormatCode cells split format_nb | BreastGAN/experiment2 | 166 |
Brendan-Reid1991/CFD-Algorithms | ['combinatorial optimization', 'experimental design'] | ['Quadratic Unconstrained Binary Optimization Problem Preprocessing: Theory and Empirical Analysis'] | DummyFiles/SA/PositionDefinitions.py Multiplier_Dim3/SA/PropagateCircuit.py Multiplier_Dim3/PT/PositionDefinitions.py DummyFiles/PT/PT_DataCollection.py DummyFiles/SA/SA_DataCollection.py DummyFiles/PT/ParallelTempering_DebuggingFile.py DummyFiles/PT/PropagateCircuit.py DummyFiles/SA/SimulatedAnnealing_DebuggingFile.py CreateFiles.py DummyFiles/PT/PositionDefinitions.py Multiplier_Dim3/SA/SA_DataCollection.py Multiplier_Dim3/SA/SA_JustForFun.py Multiplier_Dim3/SA/SimulatedAnnealing_DebuggingFile.py Multiplier_Dim3/PT/ParallelTempering.py Multiplier_Dim3/SA/SimulatedAnnealing.py DummyFiles/SA/SA_JustForFun.py DummyFiles/PT/PT_JustForFun.py DummyFiles/SA/SimulatedAnnealing.py Multiplier_Dim3/PT/PropagateCircuit.py DummyFiles/PT/ParallelTempering.py Multiplier_Dim3/SA/PositionDefinitions.py DummyFiles/SA/PropagateCircuit.py Multiplier_Dim3/PT/PT_JustForFun.py Multiplier_Dim3/PT/PT_DataCollection.py Multiplier_Dim3/PT/ParallelTempering_DebuggingFile.py ParTemp Many_Reps ParTemp Many_Reps Positions Exclusion flip output_pos faultpos OR AND output_pos flip propagate XOR Positions Exclusion flip output_pos faultpos OR AND output_pos flip propagate XOR SA SA ParTemp Many_Reps ParTemp Many_Reps Positions Exclusion flip output_pos faultpos OR AND output_pos flip propagate XOR Positions Exclusion flip output_pos faultpos OR AND output_pos flip propagate XOR SA SA Positions list exp permutation choice append range len starmap cpu_count randint linspace Pool list exp append range asarray setdiff1d close output_pos choice sample Positions min zeros len print sort slice asarray ndim flip array arange concatenate zeros arange output_pos arange setdiff1d concatenate zeros empty flip len faultpos int max arange isinstance OR AND reshape XOR output_pos ceil zeros array len Positions pop list 
exp randint output_pos choice index shuffle linspace sample zeros range append len print sort | # CFD-Algorithms Algorithms for solving circuit-fault-diagnosis problems These files aim to solve a circuit fault diagnosis (CFD) problem via annealing techniques. CFD problems are conceptually simple: given a circuit C, some inputs X, Y and an output Z, if Z != C(X, Y) then a gate within the circuit must be faulty. The question is: how do we determine which gate is faulty? Naturally, for small circuits it's fairly simple; however, in general this problem is NP-hard. Using the example of binary multiplier circuits of dimension n (that is, circuits that multiply two binary strings each of length n) we have created a quadratic unconstrained binary optimisation (QUBO) problem that can be tackled by our algorithm. The goal is to find a circuit-gate 'fault configuration' that explains all of the relevant input/output data. A binary multiplier of dimension n contains 6n^2 - 8n gates, and for a QUBO problem this requires 24n^2 - 30n individual spins. 4n of these are inputs and outputs. The algorithms contained within search this solution space for a 'fault configuration' that explains all of the input/output data. | 167 |
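The size formulas in the README above are easy to sanity-check. The sketch below (with helper names of my own, not this repository's) evaluates the stated counts — 6n² − 8n gates and 24n² − 30n QUBO spins, of which 4n are inputs/outputs — for a dimension-n binary multiplier. For n = 2 the gate count matches the usual 2×2 array multiplier: 4 AND gates plus two half-adders (each an XOR and an AND), i.e. 8 gates.

```python
def multiplier_counts(n):
    # Gate/spin counts for a dimension-n binary multiplier, per the README.
    gates = 6 * n**2 - 8 * n
    spins = 24 * n**2 - 30 * n
    io_spins = 4 * n               # spins fixed by the input/output data
    internal = spins - io_spins    # spins the annealer actually searches over
    return gates, spins, io_spins, internal
```

This also makes the scaling concrete: the searchable spin count grows quadratically in n, which is why exhaustive fault enumeration quickly becomes infeasible.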
BrixIA/Brixia-score-COVID-19 | ['data augmentation'] | ['Chest X-Ray Analysis of Tuberculosis by Deep Learning with Segmentation and Augmentation'] | src/BSNet/backbones/classification_models/classification_models/resnet/models.py src/BSNet/backbones/inception_v3.py src/BSNet/backbones/__init__.py src/BSNet/backbones/classification_models/classification_models/resnext/preprocessing.py src/BSNet/backbones/classification_models/classification_models/resnet/params.py src/datasets/utils.py src/BSNet/backbones/classification_models/classification_models/resnet/__init__.py src/datasets/synthetic_alignment.py src/BSNet/backbones/backbones.py src/datasets/ImageDataAugmentor/utils.py src/datasets/brixiascore_cohen.py src/BSNet/backbones/classification_models/classification_models/resnet/preprocessing.py src/datasets/lung_segmentation.py src/BSNet/utils.py src/BSNet/backbones/classification_models/classification_models/resnet/blocks.py src/BSNet/backbones/classification_models/classification_models/resnext/blocks.py src/BSNet/backbones/classification_models/classification_models/resnext/models.py src/BSNet/backbones/inception_resnet_v2.py src/BSNet/backbones/classification_models/classification_models/resnext/builder.py src/BSNet/backbones/preprocessing.py src/BSNet/backbones/classification_models/classification_models/weights.py src/BSNet/backbones/classification_models/classification_models/resnext/__init__.py src/datasets/ImageDataAugmentor/directory_iterator.py src/BSNet/backbones/classification_models/classification_models/utils.py src/datasets/ImageDataAugmentor/iterator.py src/BSNet/backbones/classification_models/classification_models/resnext/params.py src/BSNet/backbones/classification_models/classification_models/resnet/builder.py src/datasets/ImageDataAugmentor/image_data_augmentator.py src/BSNet/builder.py src/BSNet/blocks.py src/BSNet/model.py src/BSNet/backbones/classification_models/classification_models/__init__.py handle_block_names Threshold 
create_pyramid_features ConvRelu get_initial_weights load_attributes_from_hdf5_group RetinaNetClassifier pool_rois Resize UpsampleLike Transpose2D_block K_linspace get_weights_from_hdf5_group K_meshgrid Upsample2D_block ConvBlock BilinearInterpolation STN build_BScore build_xnet BSNet set_trainable extract_outputs call_cascade to_tuple get_layer_number freeze_model reverse add_docstring recompile get_backbone InceptionResNetV2 preprocess_input inception_resnet_block conv2d_bn InceptionV3 conv2d_bn preprocess_input get_preprocessing _obtain_input_shape load_model_weights find_weights handle_block_names conv_block basic_identity_block basic_conv_block identity_block build_resnet ResNet18 ResNet34 ResNet50 ResNet101 ResNet152 get_bn_params get_conv_params preprocess_input handle_block_names identity_block GroupConv2D conv_block build_resnext ResNeXt50 ResNeXt101 get_bn_params get_conv_params preprocess_input get_data prepare_shenzhen train_generator preapre_dataset get_data prepare_montgomery val_generator get_arguments prepare_jsrt get_data train_generator val_generator equalize from_4D_image _dynamic_to_4D_image _dynamic_from_4D_image add_suffix image_preprocess preprocess to_4D_image load_image equalize_image_gray DirectoryIterator ImageDataAugmentor BatchFromFilesMixin Iterator array_to_img img_to_array load_img _list_valid_filenames_in_directory save_img list_pictures validate_filename _iter_valid_files format zeros extend append enumerate load_attributes_from_hdf5_group append reshape expand_dims crop_and_resize append range len output to_tuple Model input range len create_pyramid_features input multiply reshape output stack Model resize append range format print get_backbone output build_xnet freeze_model Model load_weights build_BScore input Input layers enumerate insert output loss metrics optimizer compile layers layers recompile isscalar isinstance l str conv2d_bn _obtain_input_shape get_file Input get_source_inputs warn Model conv2d_bn load_weights 
inception_resnet_block range _obtain_input_shape get_file concatenate get_source_inputs warn Model conv2d_bn load_weights Input range list name load_weights get_file find_weights str warn _obtain_input_shape get_conv_params get_source_inputs Model get_bn_params Input range enumerate load_model_weights build_resnet load_model_weights build_resnet load_model_weights build_resnet load_model_weights build_resnet load_model_weights build_resnet update update resize _obtain_input_shape get_conv_params get_source_inputs Model get_bn_params Input range enumerate load_model_weights build_resnext load_model_weights build_resnext T itertuples as_posix print tqdm resolve load_image train_test_split numpy read_csv append Compose ImageDataAugmentor flow_from_directory as_posix add_suffix preprocess append load_image list glob train_generator preapre_dataset val_generator len uint8 imwrite glob reshape name stem astype as_posix maximum tqdm resize imread equalize_hist list imwrite replace glob as_posix tqdm IMREAD_GRAYSCALE resize imread list imwrite glob as_posix name maximum tqdm IMREAD_GRAYSCALE resize imread prepare_shenzhen list print glob makedirs prepare_montgomery prepare_jsrt rmdir len add_argument ArgumentParser Resize flow_from_directory concat less_equal shape rank cast equal shape less_equal cast equal scale_channel expand_dims reduce_max rank partial map_fn to_4D_image print reshape shape IMREAD_GRAYSCALE resize imread preprocess percentile equalize_adapthist astype float32 flatten max clip append splitext isinstance convert array_to_img warn save COLOR_BGR2RGBA COLOR_BGR2RGB resize imread cvtColor tuple endswith _recursive_list sorted warn join list basename relpath append _iter_valid_files len transpose asarray max reshape transpose asarray | # BrixIA COVID-19 Project ## What do you find here Info, code (BS-Net), link to data (BrixIA COVID-19 Dataset annotated with Brixia-score), and additional material related to the [BrixIA COVID-19 
Project](https://brixia.github.io/) ## Defs BrixIA COVID-19 Project: [go to the webpage](https://brixia.github.io/) Brixia score: a multi-regional score for Chest X-ray (CXR) conveying the degree of lung compromise in COVID-19 patients BS-Net: an end-to-end multi-network learning architecture for semiquantitative rating of COVID-19 severity on Chest X-rays BrixIA COVID-19 Dataset: 4703 CXRs of COVID-19 patients (anonymized) in DICOM format with manually annotated Brixia score ## Project paper Preprint available [here](https://arxiv.org/abs/2006.04603) | 168 |
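For context on the "multi-regional score" mentioned above — this detail comes from the published Brixia-score scheme rather than from this README excerpt — each lung is divided into three zones, each of the six zones is graded 0–3 by severity, and the global score is the sum, ranging 0–18. A trivial sketch of that aggregation (function name is mine):

```python
def brixia_global_score(zone_scores):
    # Sum six per-zone severity grades (each 0-3) into the global
    # Brixia score (0-18). Zones: upper/middle/lower for each lung.
    assert len(zone_scores) == 6
    assert all(0 <= s <= 3 for s in zone_scores)
    return sum(zone_scores)
```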
BruceBinBoxing/Deep_Learning_Weather_Forecasting | ['weather forecasting'] | ['Deep Uncertainty Quantification: A Machine Learning Approach for Weather Forecasting'] | src/models/competition_model_class.py src/data/make_TrainAndVal_Data_from_nc.py src/weather_forecasting2018_eval/ensemble_2018102803_2/ensemble.py src/run/Load_model_and_predict.py src/models/weather_model.py src/data/utils.py src/models/seq2seq_class.py src/weather_forecasting2018_eval/ensemble_2018101503/ensemble.py src/weather_forecasting2018_eval/ensemble_2018110303/ensemble.py src/data/data_load.py src/data/make_dataset_from_nc2df.py test_environment.py docs/conf.py src/weather_forecasting2018_eval/__init__.py src/weather_forecasting2018_eval/eval_details_score.py src/weather_forecasting2018_eval/eval_details_rmse.py src/weather_forecasting2018_eval/weather_forecasting2018_eval_my.py src/weather_forecasting2018_eval/ensemble_2018102803/ensemble.py src/data/make_dataset_missing_fill.py src/weather_forecasting2018_eval/ensemble_2018103103/ensemble.py src/weather_forecasting2018_eval/ensemble_2018110203/ensemble.py src/weather_forecasting2018_eval/ensemble_2018103003/ensemble.py src/data/make_TestOnlineData_from_nc.py src/weather_forecasting2018_eval/ensemble_2018102903/ensemble.py src/models/seq2seq.py src/data/helper.py src/weather_forecasting2018_eval/ensemble_2018110103/ensemble.py src/run/Train_from_scratch.py src/weather_forecasting2018_eval/weather_forecasting2018_eval.py src/weather_forecasting2018_eval/ensemble_2018092403/ensemble.py setup.py src/models/parameter_config_class.py src/data/make_ValData_from_TestData_from_nc.py main transform_from_df2ndarray reset_value_range load_pipeline batch_iter score mae cal_miss nan_helper get_random_batch pred_batch_iter load_pkl intplt_nan_1d get_ndarray_by_sliding_window predict mse renorm evl_fn renorm_for_submit get_train_test min_max_norm cal_loss_dataset bias save_pkl split_data rmse save_pkl main Transform main netCDF2TheLastDay main 
process_outlier_and_stack process_outlier_and_normalize main process_outlier_and_stack process_outlier_and_normalize netCDF_filter_nan main process_outlier_and_stack process_outlier_and_normalize netCDF_filter_nan plot_prediction random_sine Seq2Seq_Class Enc_Dec_Embd renorm WeatherConv1D Enc_Dec RNN_Class crop FNN CausalCNN_Class parameter_config seq2seq_ae seq2seq_pred crop Seq2Seq_Class weather_mve weather_l2 Seq2Seq weather_ae weather_mse CausalCNN mve_loss weather_fusion Seq2Seq_MVE_subnets Seq2Seq_MVE_subnets_swish weather_conv1D Seq2Seq_MVE RNN_builder crop weather_fnn main swish Load_and_predict main train score score_bias bias rmse _eval_result delete_non_value score score_bias bias rmse _eval_result delete_non_value score score_bias bias rmse _eval_result delete_non_value score score_bias bias rmse _eval_result delete_non_value print major columns format print min max columns reset_index set_index format print min_max_norm copy reset_value_range get_ndarray_by_sliding_window append array columns reset_index set_index format print strptime min_max_norm copy reset_value_range load_pkl get_train_test int permutation arange min __exit array range len permutation arange len int permutation arange min array range len print join dump open load join print open permutation arange len append array values append split_data array values reshape mean pred_batch_iter append array run reshape reshape reshape print print reshape sum len print nan_helper delete copy y_temp any NaN interp append array range x_temp str Series concat renorm append DataFrame range enumerate sqrt stack append range Transform getLogger echo transform_and_save_data read_nc_data info format replace set_index transform_from_df2ndarray save_pkl load_pkl NaN fillna data ffill format replace fromkeys print size save_pkl any NaN interpolate bfill append DataFrame Dataset values enumerate min_max_norm format print process_outlier_and_normalize save_pkl shape stack load_pkl netCDF2TheLastDay 
process_outlier_and_stack data place format all fromkeys print size nanmean isnan save_pkl nan append Dataset range enumerate netCDF_filter_nan seed arange rand pi zeros expand_dims range show list plot title figure legend range len clear_session RNN decoder Adam decoder_dense Dense Model append encoder GRUCell Input compile clear_session RNN decoder Adam decoder_dense Dense Model append encoder GRUCell Input compile reduce_mean exp log Model Input compile concatenate Model Input compile concatenate Model Input compile RNN output_dense decoder print Adam Dense Model append GRUCell Input RNN output_dense decoder concatenate Adam Dense Model append encoder GRUCell Input RNN output_dense decoder concatenate Adam Dense Model variance_dense append encoder GRUCell Input RNN decoder concatenate Adam Model append encoder GRUCell Input update RNN decoder concatenate Adam Model append encoder GRUCell Input Model Input Model Input enumerate Model Input enumerate Model Input enumerate Model Input compile rename open list ones shape load_pkl append expand_dims range predict model_from_json close stack load_weights renorm_for_submit tile read print to_csv summary array Load_and_predict list build_graph print ones Seq2Seq_Class shape stack load_pkl array tile append keys range fit train index len bias list index drop from_dict list columns format print delete_non_value score index score_bias rmse append tabulate read_csv drop | Deep Uncertainty Quantification (DUQ) ============================== DUQ: A Machine Learning Approach for Weather Forecasting > 1. Sequential deep uncertainty quantification (DUQ) produces more accurate weather forecasting based on the observation and NWP prediction. Our online rank-2 (CCIT007) in *Global AI Challenger-Weather Forecasting* (https://challenger.ai/competition/wf2018) indicates deep learning is very considerable for large-scale meteorological data modeling and forecasting! > 2. 
A pragmatic loss function for sequence-to-sequence uncertainty quantification is proposed. > 3. **An important experimental phenomenon was reported and analyzed experimentally**, which may be noteworthy in future deep learning research for spatio-temporal data and time series forecasting. ### License Apache ### Paper Paper: http://urban-computing.com/pdf/kdd19-BinWang.pdf | 169
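The DUQ README above highlights a loss function for sequence-to-sequence uncertainty quantification, and the repo's function list includes `mve_loss`. Below is a minimal NumPy sketch of a mean-variance estimation (Gaussian negative log-likelihood) loss of that kind; the function name, signature, and exact form here are illustrative assumptions, not the repository's implementation.

```python
import numpy as np

def mve_nll_loss(y_true, mean, variance, eps=1e-6):
    """Gaussian negative log-likelihood averaged over a sequence (illustrative).

    The model predicts a mean and a variance per step; the loss penalizes
    both the squared error and over-confident (too small) variance estimates.
    """
    variance = np.maximum(variance, eps)  # guard against non-positive variance
    nll = 0.5 * (np.log(variance) + (y_true - mean) ** 2 / variance)
    return float(np.mean(nll))

# A perfect mean with unit variance gives 0.5 * (log(1) + 0) = 0.
y = np.array([1.0, 2.0, 3.0])
loss = mve_nll_loss(y, mean=y, variance=np.ones_like(y))
```

With a nonzero error, shrinking the predicted variance drives the loss up, which is what discourages over-confident forecasts in this family of losses.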
BruceBinBoxing/WF | ['weather forecasting'] | ['Deep Uncertainty Quantification: A Machine Learning Approach for Weather Forecasting'] | src/models/competition_model_class.py src/data/make_TrainAndVal_Data_from_nc.py src/weather_forecasting2018_eval/ensemble_2018102803_2/ensemble.py src/run/Load_model_and_predict.py src/models/weather_model.py src/data/utils.py src/models/seq2seq_class.py src/weather_forecasting2018_eval/ensemble_2018101503/ensemble.py src/weather_forecasting2018_eval/ensemble_2018110303/ensemble.py src/data/data_load.py src/data/make_dataset_from_nc2df.py test_environment.py docs/conf.py src/weather_forecasting2018_eval/__init__.py src/weather_forecasting2018_eval/eval_details_score.py src/weather_forecasting2018_eval/eval_details_rmse.py src/weather_forecasting2018_eval/weather_forecasting2018_eval_my.py src/weather_forecasting2018_eval/ensemble_2018102803/ensemble.py src/data/make_dataset_missing_fill.py src/weather_forecasting2018_eval/ensemble_2018103103/ensemble.py src/weather_forecasting2018_eval/ensemble_2018110203/ensemble.py src/weather_forecasting2018_eval/ensemble_2018103003/ensemble.py src/data/make_TestOnlineData_from_nc.py src/weather_forecasting2018_eval/ensemble_2018102903/ensemble.py src/models/seq2seq.py src/data/helper.py src/weather_forecasting2018_eval/ensemble_2018110103/ensemble.py src/run/Train_from_scratch.py src/weather_forecasting2018_eval/weather_forecasting2018_eval.py src/weather_forecasting2018_eval/ensemble_2018092403/ensemble.py setup.py src/models/parameter_config_class.py src/data/make_ValData_from_TestData_from_nc.py main transform_from_df2ndarray reset_value_range load_pipeline batch_iter score mae cal_miss nan_helper get_random_batch pred_batch_iter load_pkl intplt_nan_1d get_ndarray_by_sliding_window predict mse renorm evl_fn renorm_for_submit get_train_test min_max_norm cal_loss_dataset bias save_pkl split_data rmse save_pkl main Transform main netCDF2TheLastDay main process_outlier_and_stack 
process_outlier_and_normalize main process_outlier_and_stack process_outlier_and_normalize netCDF_filter_nan main process_outlier_and_stack process_outlier_and_normalize netCDF_filter_nan plot_prediction random_sine Seq2Seq_Class Enc_Dec_Embd renorm WeatherConv1D Enc_Dec RNN_Class crop FNN CausalCNN_Class parameter_config seq2seq_ae seq2seq_pred crop Seq2Seq_Class weather_mve weather_l2 Seq2Seq weather_ae weather_mse CausalCNN mve_loss weather_fusion Seq2Seq_MVE_subnets Seq2Seq_MVE_subnets_swish weather_conv1D Seq2Seq_MVE RNN_builder crop weather_fnn main swish Load_and_predict main train score score_bias bias rmse _eval_result delete_non_value score score_bias bias rmse _eval_result delete_non_value score score_bias bias rmse _eval_result delete_non_value score score_bias bias rmse _eval_result delete_non_value print major columns format print min max columns reset_index set_index format print min_max_norm copy reset_value_range get_ndarray_by_sliding_window append array columns reset_index set_index format print strptime min_max_norm copy reset_value_range load_pkl get_train_test int permutation arange min __exit array range len permutation arange len int permutation arange min array range len print join dump open load join print open permutation arange len append array values append split_data array values reshape mean pred_batch_iter append array run reshape reshape reshape print print reshape sum len print nan_helper delete copy y_temp any NaN interp append array range x_temp str Series concat renorm append DataFrame range enumerate sqrt stack append range Transform getLogger echo transform_and_save_data read_nc_data info format replace set_index transform_from_df2ndarray save_pkl load_pkl NaN fillna data ffill format replace fromkeys print size save_pkl any NaN interpolate bfill append DataFrame Dataset values enumerate min_max_norm format print process_outlier_and_normalize save_pkl shape stack load_pkl netCDF2TheLastDay process_outlier_and_stack data place 
format all fromkeys print size nanmean isnan save_pkl nan append Dataset range enumerate netCDF_filter_nan seed arange rand pi zeros expand_dims range show list plot title figure legend range len clear_session RNN decoder Adam decoder_dense Dense Model append encoder GRUCell Input compile clear_session RNN decoder Adam decoder_dense Dense Model append encoder GRUCell Input compile reduce_mean exp log Model Input compile concatenate Model Input compile concatenate Model Input compile RNN output_dense decoder print Adam Dense Model append GRUCell Input RNN output_dense decoder concatenate Adam Dense Model append encoder GRUCell Input RNN output_dense decoder concatenate Adam Dense Model variance_dense append encoder GRUCell Input RNN decoder concatenate Adam Model append encoder GRUCell Input update RNN decoder concatenate Adam Model append encoder GRUCell Input Model Input Model Input enumerate Model Input enumerate Model Input enumerate Model Input compile rename open list ones shape load_pkl append expand_dims range predict model_from_json close stack load_weights renorm_for_submit tile read print to_csv summary array Load_and_predict list build_graph print ones Seq2Seq_Class shape stack load_pkl array tile append keys range fit train index len bias list index drop from_dict list columns format print delete_non_value score index score_bias rmse append tabulate read_csv drop | Deep Uncertainty Quantification (DUQ) ============================== DUQ: A Machine Learning Approach for Weather Forecasting > 1. Sequential deep uncertainty quantification (DUQ) produces more accurate weather forecasting based on the observation and NWP prediction. Our online rank-2 (CCIT007) in *Global AI Challenger-Weather Forecasting* (https://challenger.ai/competition/wf2018) indicates deep learning is very considerable for large-scale meteorological data modeling and forecasting! > 2. Pragmatical loss function for sequence-to-sequence uncertainty quantification is proposed. > 3. 
**An important experimental phenomenon was reported and analyzed experimentally**, which may be noteworthy in future deep learning research for spatio-temporal data and time series forecasting. ### License Apache ### Paper Paper: http://urban-computing.com/pdf/kdd19-BinWang.pdf | 170
BruceBinBoxing/Weather_Forecasting | ['weather forecasting'] | ['Deep Uncertainty Quantification: A Machine Learning Approach for Weather Forecasting'] | src/models/competition_model_class.py src/data/make_TrainAndVal_Data_from_nc.py src/weather_forecasting2018_eval/ensemble_2018102803_2/ensemble.py src/run/Load_model_and_predict.py src/models/weather_model.py src/data/utils.py src/models/seq2seq_class.py src/weather_forecasting2018_eval/ensemble_2018101503/ensemble.py src/weather_forecasting2018_eval/ensemble_2018110303/ensemble.py src/data/data_load.py src/data/make_dataset_from_nc2df.py test_environment.py docs/conf.py src/weather_forecasting2018_eval/__init__.py src/weather_forecasting2018_eval/eval_details_score.py src/weather_forecasting2018_eval/eval_details_rmse.py src/weather_forecasting2018_eval/weather_forecasting2018_eval_my.py src/weather_forecasting2018_eval/ensemble_2018102803/ensemble.py src/data/make_dataset_missing_fill.py src/weather_forecasting2018_eval/ensemble_2018103103/ensemble.py src/weather_forecasting2018_eval/ensemble_2018110203/ensemble.py src/weather_forecasting2018_eval/ensemble_2018103003/ensemble.py src/data/make_TestOnlineData_from_nc.py src/weather_forecasting2018_eval/ensemble_2018102903/ensemble.py src/models/seq2seq.py src/data/helper.py src/weather_forecasting2018_eval/ensemble_2018110103/ensemble.py src/run/Train_from_scratch.py src/weather_forecasting2018_eval/weather_forecasting2018_eval.py src/weather_forecasting2018_eval/ensemble_2018092403/ensemble.py setup.py src/models/parameter_config_class.py src/data/make_ValData_from_TestData_from_nc.py main transform_from_df2ndarray reset_value_range load_pipeline batch_iter score mae cal_miss nan_helper get_random_batch pred_batch_iter load_pkl intplt_nan_1d get_ndarray_by_sliding_window predict mse renorm evl_fn renorm_for_submit get_train_test min_max_norm cal_loss_dataset bias save_pkl split_data rmse save_pkl main Transform main netCDF2TheLastDay main 
process_outlier_and_stack process_outlier_and_normalize main process_outlier_and_stack process_outlier_and_normalize netCDF_filter_nan main process_outlier_and_stack process_outlier_and_normalize netCDF_filter_nan plot_prediction random_sine Seq2Seq_Class Enc_Dec_Embd renorm WeatherConv1D Enc_Dec RNN_Class crop FNN CausalCNN_Class parameter_config seq2seq_ae seq2seq_pred crop Seq2Seq_Class weather_mve weather_l2 Seq2Seq weather_ae weather_mse CausalCNN mve_loss weather_fusion Seq2Seq_MVE_subnets Seq2Seq_MVE_subnets_swish weather_conv1D Seq2Seq_MVE RNN_builder crop weather_fnn main swish Load_and_predict main train score score_bias bias rmse _eval_result delete_non_value score score_bias bias rmse _eval_result delete_non_value score score_bias bias rmse _eval_result delete_non_value score score_bias bias rmse _eval_result delete_non_value print major columns format print min max columns reset_index set_index format print min_max_norm copy reset_value_range get_ndarray_by_sliding_window append array columns reset_index set_index format print strptime min_max_norm copy reset_value_range load_pkl get_train_test int permutation arange min __exit array range len permutation arange len int permutation arange min array range len print join dump open load join print open permutation arange len append array values append split_data array values reshape mean pred_batch_iter append array run reshape reshape reshape print print reshape sum len print nan_helper delete copy y_temp any NaN interp append array range x_temp str Series concat renorm append DataFrame range enumerate sqrt stack append range Transform getLogger echo transform_and_save_data read_nc_data info format replace set_index transform_from_df2ndarray save_pkl load_pkl NaN fillna data ffill format replace fromkeys print size save_pkl any NaN interpolate bfill append DataFrame Dataset values enumerate min_max_norm format print process_outlier_and_normalize save_pkl shape stack load_pkl netCDF2TheLastDay 
process_outlier_and_stack data place format all fromkeys print size nanmean isnan save_pkl nan append Dataset range enumerate netCDF_filter_nan seed arange rand pi zeros expand_dims range show list plot title figure legend range len clear_session RNN decoder Adam decoder_dense Dense Model append encoder GRUCell Input compile clear_session RNN decoder Adam decoder_dense Dense Model append encoder GRUCell Input compile reduce_mean exp log Model Input compile concatenate Model Input compile concatenate Model Input compile RNN output_dense decoder print Adam Dense Model append GRUCell Input RNN output_dense decoder concatenate Adam Dense Model append encoder GRUCell Input RNN output_dense decoder concatenate Adam Dense Model variance_dense append encoder GRUCell Input RNN decoder concatenate Adam Model append encoder GRUCell Input update RNN decoder concatenate Adam Model append encoder GRUCell Input Model Input Model Input enumerate Model Input enumerate Model Input enumerate Model Input compile rename open list ones shape load_pkl append expand_dims range predict model_from_json close stack load_weights renorm_for_submit tile read print to_csv summary array Load_and_predict list build_graph print ones Seq2Seq_Class shape stack load_pkl array tile append keys range fit train index len bias list index drop from_dict list columns format print delete_non_value score index score_bias rmse append tabulate read_csv drop | Deep Uncertainty Quantification (DUQ) ============================== DUQ: A Machine Learning Approach for Weather Forecasting > 1. Sequential deep uncertainty quantification (DUQ) produces more accurate weather forecasting based on the observation and NWP prediction. Our online rank-2 (CCIT007) in *Global AI Challenger-Weather Forecasting* (https://challenger.ai/competition/wf2018) indicates deep learning is very considerable for large-scale meteorological data modeling and forecasting! > 2. 
A pragmatic loss function for sequence-to-sequence uncertainty quantification is proposed. > 3. **An important experimental phenomenon was reported and analyzed experimentally**, which may be noteworthy in future deep learning research for spatio-temporal data and time series forecasting. ### License Apache ### Paper Paper: http://urban-computing.com/pdf/kdd19-BinWang.pdf | 171
BruceChanJianLe/Image-Text-Recognition | ['optical character recognition', 'scene text detection', 'curved text detection'] | ['EAST: An Efficient and Accurate Scene Text Detector'] | Phase 2/Text Recognition Algorithm.py Phase 1/Text detection Algorithm.py decode decode cos sin append float range | # README ## Description of algorithm and how to use it ### Phase 1 In Phase 1, I develop an algorithm to detect the text present in images. The detected regions are used in Phase 2, which is text recognition. I use 'EAST' as the text detector, whose full name is 'Efficient and Accurate Scene Text Detector'. For reference: https://arxiv.org/abs/1704.03155 You will need at least OpenCV 3.4.5 or OpenCV 4 to implement the algorithm. ### Phase 2 In Phase 2, I use Tesseract v4 to obtain the text in the image and then map the recognized text back onto the image. Note that Tesseract must be installed first. For reference: https://stackoverflow.com/questions/51677283/tesseractnotfounderror-tesseract-is-not-installed-or-its-not-in-your-path https://github.com/argman/EAST | 172
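The Phase 1 README above relies on a `decode` step that converts the EAST network's score and geometry maps into text boxes (the row's function list shows `decode` using `cos`/`sin`). Below is a self-contained NumPy sketch of that decoding in the common OpenCV-tutorial layout — 4-pixel output stride, geometry channels ordered top/right/bottom/left/angle; that layout and the confidence threshold are assumptions, not read from this repository.

```python
import numpy as np

def decode_east(scores, geometry, conf_threshold=0.5):
    """Turn EAST score/geometry maps into (startX, startY, endX, endY) boxes.

    `scores` has shape (rows, cols); `geometry` has shape (5, rows, cols):
    distances to the top/right/bottom/left box edges plus a rotation angle.
    Each output cell corresponds to a 4x4-pixel region of the input image.
    """
    boxes, confidences = [], []
    rows, cols = scores.shape
    for y in range(rows):
        for x in range(cols):
            score = scores[y, x]
            if score < conf_threshold:
                continue
            offset_x, offset_y = x * 4.0, y * 4.0
            d_top, d_right, d_bottom, d_left, angle = geometry[:, y, x]
            cos_a, sin_a = np.cos(angle), np.sin(angle)
            h = d_top + d_bottom
            w = d_right + d_left
            # Rotate the (right, bottom) edge distances to find the far corner.
            end_x = int(offset_x + cos_a * d_right + sin_a * d_bottom)
            end_y = int(offset_y - sin_a * d_right + cos_a * d_bottom)
            boxes.append((int(end_x - w), int(end_y - h), end_x, end_y))
            confidences.append(float(score))
    return boxes, confidences

# One confident cell at (x=2, y=3), axis-aligned (angle 0).
scores = np.zeros((4, 4)); scores[3, 2] = 0.9
geometry = np.zeros((5, 4, 4))
geometry[:4, 3, 2] = [5.0, 10.0, 5.0, 10.0]  # top, right, bottom, left
boxes, confs = decode_east(scores, geometry)
```

With angle 0 the rotation terms vanish and each box reduces to the plain axis-aligned span around the cell; in a full pipeline the resulting boxes would then go through non-maximum suppression.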
CIA-Oceanix/GeoTrackNet | ['anomaly detection'] | ['GeoTrackNet-A Maritime Anomaly Detector using Probabilistic Neural Network Representation of AIS Tracks and A Contrario Detection'] | runners.py bounds.py nested_utils.py data/calculate_AIS_mean.py data/csv2pkl.py models/vrnn.py distribution_utils.py contrario_utils.py geotracknet.py data/dataset_preprocessing.py data/datasets.py utils.py flags_config.py elbo always_resample_criterion never_resample_criterion fivo ess_criterion contrario_detection nCr zero_segments NFA nonzero_segments sample_from_logits sample_from_max_logits sample_one_hot sample_from_probs tas_for_tensors tile_tensors read_tas gather_tensors map_nested create_eval_graph restore_checkpoint_if_exists create_dataset_and_model run_train wait_for_checkpoint createShapefile gaussian_filter_with_nan show_logprob_map remove_gaussian_outlier trackOutlier interpolate plot_abnormal_tracks detectOutlier sparse_AIS_to_dense sublist create_AIS_dataset getConfig create_vrnn VRNNCell NormalApproximatePosterior ConditionalNormalDistribution ConditionalBernoulliDistribution tas_for_tensors to_float zero_state constant reduce_logsumexp sequence_mask while_loop tile_tensors transpose reduce_max read_tas float32 reshape reduce_mean int32 zeros log tas_for_tensors zero_state constant sequence_mask while_loop tile_tensors transpose reduce_max reshape float32 reduce_mean int32 zeros list mul min reduce range append range len append range len range count_nonzero int min zeros range len cumsum less_equal logical_and greater reduce_sum float32 cast tile random_uniform concat argmax one_hot split sample concat Bernoulli split squash_prob concat sample_one_hot split flatten list map tas_for_tensors to_float zero_state constant reduce_logsumexp sequence_mask while_loop tile_tensors transpose reduce_max num_samples float32 reshape reduce_mean int32 zeros ess_criterion log join create_vrnn batch_size testset_path onehot_sog_bins onehot_cog_bins data_dim dirname 
onehot_lon_bins trainingset_path latent_size onehot_lat_bins create_AIS_dataset ConditionalBernoulliDistribution join restore basename get_checkpoint_state model_checkpoint_path sleep restore_checkpoint_if_exists info replica_device_setter argmax zeros sum range all inv astype logical_and trackOutlier zeros float range list POINT point record field strftime gmtime Writer save keys float inv fwd mean std gaussian_filter copy gaussian_filter_with_nan join append_axes make_axes_locatable close tight_layout colorbar isnan shape imshow flipud set_visible nan figure savefig gca zeros range list cmap plot xlabel float xlim ylabel tight_layout cmap_anomaly ylim array figure savefig get_cmap keys range len append array create_dense_vect padded_batch shuffle map make_one_shot_iterator get_next from_generator repeat dirname prefetch len parse_args add_argument ArgumentParser MLP NormalApproximatePosterior LSTMCell ConditionalNormalDistribution ConditionalBernoulliDistribution | # GeoTrackNet TensorFlow implementation of the model proposed in "A Multi-Task Deep Learning Architecture for Maritime Surveillance Using AIS Data Streams" (https://ieeexplore.ieee.org/abstract/document/8631498) and "GeoTrackNet—A Maritime Anomaly Detector using Probabilistic Neural Network Representation of AIS Tracks and A Contrario Detection" (https://arxiv.org/abs/1912.00682). (GeoTrackNet is the anomaly detection module of MultitaskAIS). All the codes related to the Embedding block are adapted from the source code of Filtering Variational Objectives: https://github.com/tensorflow/models/tree/master/research/fivo #### Directory Structure The elements of the code are organized as follows: ``` geotracknet.py # script to run the model (except the A contrario detection). runners.py # graph construction code for training and evaluation. | 173 |
CLUEbenchmark/CLGE | ['text summarization'] | ['LCSTS: A Large Scale Chinese Short Text Summarization Dataset'] | tasks/autotitle_baseline.py AutoTitle Evaluator CrossEntropy data_generator load_data | CLUEbenchmark/CLGE | 174 |
CODAIT/deep-histopath | ['whole slide images'] | ['A Unified Framework for Tumor Proliferation Score Prediction in Breast Histopathology'] | deephistopath/wsi/filter.py v2/mrcnn/config.py v2/nucleus/nucleus_mitosis.py predict_mitoses_smooth.py preprocess_mitoses.py deephistopath/wsi/tiles.py deephistopath/visualization.py dist/mitosis_dist.py train_mitoses.py v2/mrcnn/parallel_model.py v2/mrcnn/visualize.py dist/utils.py v2/mrcnn/model.py v2/nucleus/mitosis_inference_pipeline.py deephistopath/inference.py deephistopath/gpu_info.py deephistopath/evaluation.py deephistopath/wsi/slide.py deephistopath/predict.py deephistopath/preprocessing.py preprocess.py resnet50.py deephistopath/wsi/util.py deephistopath/detection.py eval_mitoses.py v2/mrcnn/utils.py predict_mitoses.py dist/mitosis_spark.py resnet.py v2/nucleus/mitosis_train_pipeline.py hyperparam_tune_mitoses.py main evaluate main main flat_result_2_row test_gen_random_translation gen_fp_coords gen_dense_coords check_float_range test_gen_patches gen_random_translation test_pil_image_saving test_gen_dense_coords test_create_mask gen_batches test_gen_normal_coords extract_patch test_gen_patches_extract_patches preprocess gen_patches save_patch gen_normal_coords create_mask test_extract_patch ResNet res_block identity_block ResNet50 conv_block test_model_updates create_augmented_batch multi_gpu_model compute_data_loss create_resettable_metric test_create_model test_get_image test_marginalize process_dataset test_create_augmented_batch test_batchnorm test_get_label get_label compute_metrics test_normalize_unnormalize initialize_variables normalize test_normalize_dtype test_num_parallel_calls get_image create_model test_resettable_metric compute_l2_reg_loss preprocess test_preprocess test_initialize_variables main augment test_random_seed test_image_random_op_seeds marginalize test_compute_l2_reg_loss test_dataset_reinit_iter_augment_seeds test_augment test_compute_data_loss reset create_dataset train unnormalize 
ijv_2_arr smooth_prediction_results tuple_2_csv identify_mitoses detect_prediction_results disk_kernel dbscan_clustering conv_smooth arr_2_ijv cluster_prediction_result test_array_csv csv_2_arr arr_2_csv compute_f1 test_compute_f1 prepare_f1_inputs test_evaluate_global_f1 search_prob_threshold_for_f1 add_ground_truth_mark_help export_single_F1_input add_ground_truth_mark list_files get_file_id get_data_from_csv evaluate_f1 evaluate_global_f1 test_img_quality test_add_ground_truth_mark export_F1_inputs_TP_FP_FN get_locations_from_csv get_gpu get_free_gpu get_gpus predict_mitoses_cpu predict_mitoses save_mitosis_locations_2_csv predict_mitoses_gpu test_predict_mitoses_num_locations check_subsetting predict_mitoses_num_locations gen_batches pad_tile_on_edge save_array_2_image predict_mitoses_help get_scoring search_hyper_parameter gen_feature_table search_hyper_parameter_help report load_data_Y generate_model_input_data compute_kappa_score get_descriptive_statistics load_data_X main load_data_X_in_batch join_df visualize normalize_staining save_df save_jpeg_help add_row_indices keep_tile optical_density process_tile process_tile_index rdd_2_df get_labels_df save_labelled_sample_2_jpeg save_nonlabelled_sample_2_jpeg save_rdd_2_jpeg process_slide preprocess sample create_tile_generator save_2_jpeg open_slide flatten_sample_tuple get_20x_zoom_level flatten_sample visualize_sample add_mark Shape draw_circle visualize_tile filter_adaptive_equalization apply_image_filters filter_canny filter_binary_dilation filter_remove_small_objects filter_rgb_to_hed filter_binary_closing filter_complement apply_filters_to_image save_display filter_rgb_to_grayscale filter_green_channel filter_hysteresis_threshold image_cell filter_binary_opening filter_binary_erosion apply_filters_to_image_range filter_hsv_to_v filter_rag_threshold apply_filters_to_image_list filter_local_equalization filter_hed_to_hematoxylin generate_filter_html_result filter_remove_small_holes tissue_percent 
filter_histogram_equalization filter_blue save_filtered_image filter_kmeans_segmentation mask_percent filter_binary_fill_holes singleprocess_apply_filters_to_images filter_contrast_stretch filter_entropy html_footer filter_blue_pen uint8_to_bool multiprocess_apply_filters_to_images filter_otsu_threshold html_header filter_red filter_hed_to_eosin filter_grays filter_hsv_to_h filter_green_pen filter_red_pen filter_green mask_percentage_text filter_rgb_to_hsv filter_hsv_to_s filter_local_otsu_threshold filter_threshold get_training_thumbnail_path get_top_tiles_thumbnail_path get_filter_thumbnail_result open_image_np get_tile_summary_image_path get_tile_data_path parse_dimensions_from_image_filename get_num_training_slides get_training_image_path slide_stats get_training_slide_path get_top_tiles_on_original_thumbnail_path get_tile_summary_image_filename training_slide_to_image slide_to_scaled_np_image get_tile_image_path get_filter_image_path training_slide_range_to_images get_filter_image_result small_to_large_mapping get_top_tiles_image_filename singleprocess_training_slides_to_images show_slide slide_info open_image get_tile_summary_on_original_image_path multiprocess_training_slides_to_images slide_to_scaled_pil_image save_thumbnail get_top_tiles_on_original_image_path get_filter_image_filename get_tile_summary_on_original_thumbnail_path get_filter_thumbnail_path get_tile_summary_thumbnail_path get_tile_data_filename get_top_tiles_image_path open_slide get_tile_image_path_by_slide_row_col np_hsv_value_histogram display_image image_row np_hsv_saturation_histogram np_text np_rgb_r_histogram pil_text TissueQuantity get_num_tiles faded_tile_border_color display_image_with_hsv_hue_histogram np_rgb_b_histogram get_tile_indices dynamic_tiles save_tile_data pil_hue_histogram TileSummary save_tile_summary_image tissue_quantity_factor hsv_purple_deviation tile_to_pil_tile score_tiles generate_tiled_html_result multiprocess_filtered_images_to_tiles hsv_purple_pink_factor 
np_histogram hsv_saturation_and_value_factor hsv_purple_vs_pink_average_factor np_tile_stat_img display_image_with_rgb_and_hsv_histograms Tile tile_border_color save_tile_summary_on_original_image display_image_with_rgb_histograms summary_stats tile_border score_tile save_top_tiles_on_original_image create_summary_pil_img np_hsv_hue_histogram np_rgb_g_histogram generate_tile_summaries summary_title image_list_to_tiles singleprocess_filtered_images_to_tiles np_rgb_channel_histogram add_tile_stats_to_top_tile_summary hsv_pink_deviation display_image_with_hsv_histograms save_top_tiles_image image_range_to_tiles dynamic_tile display_tile generate_top_tile_summaries summary_and_tiles save_display_tile tissue_quantity rgb_to_hues tile_to_np_tile mask_rgb np_info display_img np_to_pil pil_to_np_rgb Time print_log map_fun ExportHook main get_hdfs read_images image_decoder genBinaryFileRDD toNpArray read_image Config fpn_classifier_graph MaskRCNN compose_image_meta rpn_bbox_loss_graph norm_boxes_graph compute_backbone_shapes rpn_class_loss_graph log DetectionTargetLayer trim_zeros_graph log2_graph parse_image_meta parse_image_meta_graph data_generator rpn_graph identity_block BatchNorm build_fpn_mask_graph load_image_gt build_rpn_targets resnet_graph unmold_image PyramidROIAlign apply_box_deltas_graph denorm_boxes_graph generate_random_rois detection_targets_graph build_detection_targets overlaps_graph mrcnn_bbox_loss_graph conv_block batch_pack_graph ProposalLayer smooth_l1_loss clip_boxes_graph mrcnn_class_loss_graph mrcnn_mask_loss_graph mold_image build_rpn_model DetectionLayer refine_detections_graph ParallelModel build_model compute_ap norm_boxes compute_recall apply_box_deltas compute_overlaps compute_iou resize resize_image box_refinement_graph generate_pyramid_anchors mold_mask generate_anchors compute_ap_range compute_overlaps_masks denorm_boxes unmold_mask download_trained_weights non_max_suppression minimize_mask resize_mask extract_bboxes trim_zeros 
compute_matches batch_slice expand_mask box_refinement Dataset display_differences draw_box display_images draw_rois draw_boxes visualize_instances apply_mask random_colors display_instances display_table display_weight_stats plot_overlaps plot_precision_recall display_top_masks load_model combine_csvs extract_patches add_mark combine_images add_groundtruth_mark run_mitosis_classification reorganize_mitosis_images is_inside MitosisInferenceConfig main get_location_from_file_name run_mitosis_classification_in_batch get_image_tf check_nucleus_inference crop_image split_train_val_datasets label_detected_nucleus load_model combine_csvs gen_mitosis_normal_tiles add_mark combine_images MitosisTrainConfig add_groundtruth_mark run_mitosis_classification reorganize_mitosis_images is_inside main get_location_from_file_name run_mitosis_classification_in_batch get_image_tf check_nucleus_inference crop_image detect_v2 rle_encode NucleusConfig detect main inference rle_decode NucleusInferenceConfig train mask_to_rle NucleusDataset run set_session print ConfigProto Session run parse_args evaluate add_argument ArgumentParser clear_session num_experiments models exp_parent_path finetune_layers finetune_momentum_range reset_default_graph train_batch_sizes uniform append set_defaults model_weights oversample range l2_range format finetune_lr_range choice reg_biases augment marginalize add_mutually_exclusive_group clf_lr_range makedirs append gpu_per_node createDataFrame count show dirname cache pred_save_path addPyFile make_archive isGPU node_number sparkContext to_csv cpu_per_node getOrCreate append zeros logical_or min ndim pad round max range shape gen_dense_coords squeeze astype float32 predict_on_batch zip normalize gen_batches randint min max fromarray dtype uint8 asarray astype cos pi rotate BILINEAR shape binomial linspace sin ceil round extract_patch join save gen_fp_coords open seed shape int64 split train_test_split format glob astype keys gen_patches enumerate join 
loadtxt save_patch gen_normal_coords create_mask isfile array makedirs float format int zeros_like rand create_mask array range int rand extract_patch next list rand gen_dense_coords list ones gen_normal_coords append zeros next array range gen_random_translation rand range uint8 astype gen_patches str join asarray save randint open randint extract_patch VarianceScaling add Model str add str add _obtain_input_shape conv_block get_file get_source_inputs warn Model load_weights convert_all_kernels_in_model get_layer convert_dense_weights_data_format identity_block Input decode_png read_file convert_image_dtype logical_or string_split equal Assert get_image get_label astype float32 astype float32 random_saturation to_int32 cos pi random_brightness floor clip_by_value random_uniform to_float random_hue resize_image_with_crop_or_pad rotate shape pad sin ceil random_flip_up_down random_crop random_flip_left_right sqrt stack random_contrast minimum int reshape maximum resize_image_with_crop_or_pad concat augment rots_batch expand_dims round range learning_phase reduce_mean cond logical_not map join process_dataset int prefetch list_files flat_map shuffle zip batch VGG16 ResNet inputs output Model ResNet50 VGG19 Input isinstance outputs append range enumerate len reduce_mean sigmoid_cross_entropy_with_logits hasattr bias kernel add_n beta append gamma precision_recall_at_equal_thresholds group accuracy create_resettable_metric mean precision recall variables_initializer global_variables append local_variables_initializer run layers set_random_seed Saver save Session run seed restore hasattr set_session name merge_all get_default_graph initialize_variables range format FileWriter ConfigProto flush join print weights output histogram add_summary model copy2 exp_name exp_full_path strftime exp_name_suffix patches_path today realpath join print randint train clear_session reset_default_graph get_image join str get_session reset save randint run get_label reset get_session run 
join str constant get_session preprocess reset save run randint makedirs get_session astype float32 test placeholder reset normalize unnormalize test_batch run get_image join str get_session reset save randint augment run get_session astype float32 test reset test2 get_session randn marginalize reset full run layers get_session compute_l2_reg_loss kernel bias Model reset beta gamma Input l2_loss run reset get_session test get_session astype float32 reset compute_data_loss run get_session reshape placeholder reset int32 global_variables_initializer local_variables_initializer run VGG16 get_session global_variables weights output Model reset initialize_variables Input Dense reset build set_random_seed reset test seed random_hue astype float32 set_random_seed reset seed from_tensor_slices output_types astype float32 map set_random_seed from_structure make_initializer get_next reset output_shapes Session batch run VGG16 normalize get_session normalize_incorrect astype float32 placeholder reset randint predict run Model reset Input BatchNormalization bn Session set_session updates randn output Model reset initialize_variables ConfigProto Input BatchNormalization bn run zeros T nonzero asarray get_locations_from_csv tuple_2_csv arr_2_ijv to_csv DataFrame dirname makedirs meshgrid arange array count_nonzero reshape conv2d pad disk_kernel expand_dims list_files get_file_id conv_smooth placeholder arange shape unravel_index meshgrid argmax max append items list replace size list_files identify_mitoses get_file_id tuple_2_csv dirname csv_2_arr len set labels_ append range fit items list replace print tuple_2_csv list_files get_file_id rmtree dbscan_clustering dirname exists get_locations_from_csv join ranf csv_2_arr arr_2_csv print sqrt remove append Path read_csv read_csv compute_f1 prepare_f1_inputs list_files get_file_id union set append keys get_locations_from_csv compute_f1 prepare_f1_inputs list_files get_file_id union set append keys get_locations_from_csv append 
DataFrame evaluate_global_f1 to_csv dirname append DataFrame makedirs groupby join to_csv dirname DataFrame makedirs print add_mark save get_locations_from_csv open list foreach list_files get_file_id append keys parallelize compute_f1 prepare_f1_inputs add_ground_truth_mark getOrCreate sparkContext SQUARE int read_png_tf remove read_jpeg_tf_fast DataFrame read_img_openslide len astype close read_img_PIL read_jpeg_tf_accurate range save randint empty compute_diff open evaluate_global_f1 value cudaGetDeviceCount c_int LoadLibrary system byref range decode format info debug shuffle warn sleep append len sort get_gpu_info min sleep append range len fromarray asarray shape save len DataFrame to_csv zeros range concatenate shape create_augmented_batch model gen_dense_coords run list placeholder shape append normalize gen_batches range predict concatenate astype stack empty get_session print marginalize float32 len Path save max Session fromarray str list set_session load_model strftime Model dirname append asarray replace get_tile gmtime save_mitosis_locations_2_csv zip create_tile_generator ConfigProto save_array_2_image open_slide print output dimensions create_mask get_20x_zoom_level predict_mitoses_num_locations makedirs predict_mitoses_cpu predict_mitoses_gpu collect print cache flatMap Counter dict Path append parallelize Path mapPartitionsWithIndex asarray load_model print astype output Model normalize expand_dims predict_mitoses_num_locations predict open to_numeric read_csv apply to_numeric list insert apply range read_csv len to_csv gen_feature_table load_data_Y load_data_X_in_batch train_test_split join_df values show plot apply reset_index join_df get_descriptive_statistics reset_index apply merge cohen_kappa_score print format range flatnonzero RandomizedSearchCV GridSearchCV get_scoring SVC report generate_model_input_data cv_results_ compute_kappa_score fit collect map parallelize len search_hyper_parameter join format zfill DeepZoomGenerator level_count 
int floor create_tile_generator get_20x_zoom_level open_slide create_tile_generator get_tile asarray open_slide float64 astype optical_density rgb2gray disk mean binary_closing canny binary_fill_holes binary_dilation shape astype reshape percentile svd T asarray uint8 arctan2 reshape float64 astype dot shape array lstsq reshape reshape set_index join read_csv index cache flatMap map repartition filter get_labels_df round save molecular_score astype map select toDF sampleBy molecular_score astype map select toDF foreach save_nonlabelled_sample_2_jpeg save_labelled_sample_2_jpeg join format save_jpeg_help join format save_jpeg_help str fromarray uint8 astype dirname save makedirs show imshow show int reshape transpose imshow int ellipse range draw_circle line Draw elapsed np_info astype dot Time elapsed np_info Time apply_hysteresis_threshold elapsed np_info astype Time threshold_otsu elapsed np_info astype Time elapsed otsu astype disk np_info Time elapsed np_info Time astype elapsed np_info astype canny Time count_nonzero size mask_percent print elapsed np_info astype remove_small_objects Time elapsed np_info astype remove_small_holes Time percentile rescale_intensity elapsed np_info Time elapsed np_info astype Time equalize_hist elapsed equalize_adapthist np_info astype Time elapsed equalize np_info Time rescale_intensity elapsed np_info astype rgb2hed Time elapsed np_info Time rgb2hsv elapsed np_info astype flatten Time flatten flatten rescale_intensity elapsed np_info astype Time rescale_intensity elapsed np_info astype Time elapsed np_info astype binary_fill_holes Time elapsed disk astype np_info Time binary_erosion elapsed disk astype np_info Time binary_dilation elapsed np_info disk astype Time binary_opening elapsed disk astype np_info binary_closing Time label2rgb elapsed np_info slic Time rag_mean_color label2rgb cut_threshold elapsed np_info slic Time elapsed np_info Time astype mask_percent print elapsed np_info astype ceil Time elapsed np_info Time 
astype elapsed np_info astype Time filter_red elapsed np_info Time astype elapsed np_info astype filter_green Time elapsed np_info Time astype elapsed np_info astype filter_blue Time int elapsed np_info astype shape Time astype filter_grays mask_rgb filter_remove_small_objects filter_green_pen filter_red_pen save_display filter_blue_pen filter_green_channel get_filter_image_result apply_image_filters print dict get_filter_thumbnail_result open_image_np np_to_pil save save_thumbnail THUMBNAIL_SIZE Time get_training_image_path FILTER_DIR makedirs mask_percent display_img save_filtered_image mask_percentage_text get_filter_image_path get_filter_thumbnail_path get_filter_image_filename print get_filter_image_path get_filter_thumbnail_path np_to_pil save save_thumbnail THUMBNAIL_SIZE Time join list sorted FILTER_PAGINATION_SIZE write close add set open floor ceil FILTER_HTML_DIR range len dict apply_filters_to_image update dict apply_filters_to_image update range str print elapsed apply_filters_to_image_list generate_filter_html_result get_num_training_slides Time apply_filters_to_image_range get str int update print elapsed cpu_count len apply_async dict generate_filter_html_result append get_num_training_slides Time Pool range FILTER_DIR makedirs open open_image pil_to_np_rgb join zfill join zfill join zfill join zfill str join zfill str join get_filter_image_filename makedirs join get_filter_image_filename makedirs zfill join get_tile_summary_image_filename makedirs join get_tile_summary_image_filename makedirs join get_tile_summary_image_filename makedirs join get_tile_summary_image_filename makedirs join get_top_tiles_image_filename makedirs join get_top_tiles_image_filename makedirs parse_dimensions_from_image_filename get_training_image_path zfill str parse_dimensions_from_image_filename get_training_image_path zfill str join get_top_tiles_image_filename makedirs join get_top_tiles_image_filename makedirs parse_dimensions_from_image_filename 
get_training_image_path zfill str join get_tile_data_filename makedirs join str zfill parse_dimensions_from_image_filename get_training_image_path join str zfill parse_dimensions_from_image_filename get_training_image_path int match group round floor get_training_thumbnail_path print slide_to_scaled_pil_image save save_thumbnail get_training_image_path makedirs get_training_slide_path open_slide print convert read_region BILINEAR floor dimensions resize get_best_level_for_downsample slide_to_scaled_pil_image pil_to_np_rgb show print tuple BILINEAR dirname save resize makedirs glob1 len range training_slide_to_image elapsed_display get_num_training_slides Time training_slide_range_to_images get str int print cpu_count apply_async elapsed_display append get_num_training_slides Time Pool range set_cmap rand clf get_num_training_slides Time open get_training_slide_path show list str ylabel elapsed_display title scatter savefig append range format close tight_layout zip annotate maxsize join open_slide print xlabel makedirs write hist dimensions len get_training_slide_path str int get list open_slide print level_count level_downsamples detect_format elapsed_display dimensions append get_num_training_slides Time keys range level_dimensions ceil append list range get_num_tiles fill zeros np_to_pil c_e tiles get_num_tiles scaled_w scaled_tile_h open_image_np get_training_image_path show save_tile_summary_image scaled_h truetype tile_border_color save_tile_summary_on_original_image scaled_tile_w summary_stats tile_border slide_num create_summary_pil_img summary_title Draw tissue_percentage r_e text r_s c_s c_e tiles get_num_tiles faded_tile_border_color scaled_w scaled_tile_h open_image_np get_training_image_path show top_tiles scaled_h truetype tile_border_color scaled_tile_w summary_stats tile_border save_top_tiles_on_original_image slide_num create_summary_pil_img Draw tissue_percentage r_e text add_tile_stats_to_top_tile_summary r_s save_top_tiles_image c_s shape 
np_to_pil pil_to_np_rgb fill zeros max np_tile_stat_img np_text sorted rectangle range print get_tile_summary_image_path save save_thumbnail THUMBNAIL_SIZE Time get_tile_summary_thumbnail_path get_top_tiles_image_path print get_top_tiles_thumbnail_path save save_thumbnail THUMBNAIL_SIZE Time print get_tile_summary_on_original_image_path save save_thumbnail THUMBNAIL_SIZE Time get_tile_summary_on_original_thumbnail_path get_top_tiles_on_original_thumbnail_path print save save_thumbnail get_top_tiles_on_original_image_path THUMBNAIL_SIZE Time get_filter_image_result generate_tile_summaries top_tiles score_tiles open_image_np generate_top_tile_summaries save_tile save_tile_data summary_title print tiles summary_stats write close get_tile_data_path slide_num Time open get_training_slide_path open_slide convert slide_num read_region tile_to_pil_tile pil_to_np_rgb show print tile_to_pil_tile get_tile_image_path dirname save Time makedirs TileSummary Tile tiles_by_score SCALE_FACTOR get_num_tiles score_tile open_image_np round tissue_percent get_tile_indices append parse_dimensions_from_image_filename tissue_quantity get_filter_image_result small_to_large_mapping tissue_quantity_factor hsv_purple_pink_factor hsv_saturation_and_value_factor tissue_quantity log dict summary_and_tiles list dict summary_and_tiles range append str image_list_to_tiles print elapsed generate_tiled_html_result image_range_to_tiles get_num_training_slides Time elapsed cpu_count get_num_training_slides Time Pool str list apply_async generate_tiled_html_result append range get update TILE_SUMMARY_DIR int print makedirs extend dict len get_top_tiles_on_original_thumbnail_path sorted get_training_thumbnail_path get_top_tiles_image_path top_tiles get_tile_image_path get_top_tiles_thumbnail_path get_tile_summary_on_original_image_path ceil get_filter_thumbnail_result get_tile_summary_image_path get_tile_summary_on_original_thumbnail_path get_top_tiles_on_original_image_path get_filter_image_result 
get_training_image_path range get_tile_summary_thumbnail_path len join sorted TILE_SUMMARY_HTML_DIR TILE_SUMMARY_PAGINATION_SIZE len write close ceil range open get_width_height canvas reshape hsv_to_rgb draw close np_info title hist set_facecolor figure get_width_height canvas reshape draw close np_info title hist figure flatten np_rgb_channel_histogram np_rgb_channel_histogram np_rgb_channel_histogram np_hsv_hue_histogram np_to_pil show SCALE_FACTOR filter_hsv_to_h np_text shape filter_rgb_to_hsv repeat np_to_pil fill np_hsv_hue_histogram max zeros show SCALE_FACTOR np_text shape repeat np_to_pil fill zeros max show np_hsv_value_histogram SCALE_FACTOR filter_hsv_to_h np_hsv_saturation_histogram np_text zeros filter_hsv_to_v shape filter_rgb_to_hsv repeat np_to_pil fill np_hsv_hue_histogram max filter_hsv_to_s np_rgb_g_histogram show SCALE_FACTOR np_text np_rgb_r_histogram shape repeat np_to_pil np_rgb_b_histogram fill zeros max truetype Draw text new textsize pil_text pil_to_np_rgb display_image_with_rgb_and_hsv_histograms get_np_tile display_image display_image_with_rgb_histograms display_image_with_hsv_histograms get_np_scaled_tile np_hsv_value_histogram SCALE_FACTOR np_hsv_saturation_histogram np_text np_rgb_r_histogram np_rgb_b_histogram max show filter_hsv_to_v shape np_to_pil fill np_hsv_hue_histogram filter_hsv_to_s np_rgb_g_histogram filter_hsv_to_h filter_rgb_to_hsv repeat zeros filter_rgb_to_hsv filter_hsv_to_h filter_hsv_to_s filter_rgb_to_hsv filter_hsv_to_v std mean sqrt abs mean sqrt abs hsv_purple_deviation rgb_to_hues average hsv_pink_deviation average rgb_to_hues SCALE_FACTOR slide_to_scaled_np_image apply_image_filters score_tiles dynamic_tiles get_tile elapsed np_info Time asarray astype mean min max print show truetype Draw text convert rectangle np_to_pil textsize elapsed np_info Time dstack format info absolute_path join format batch_size model gethostname close job_name rdma FileWriter task_index info sleep isoformat worker_num 
start_cluster_server mapPartitions hdfs_port mitosis_img_dir run cluster_size tensorboard map epochs normal_img_dir inference shutdown get get_hdfs map_fun info ls isoformat error extend output saveAsTextFile hdfs_host SPARK len height nChannels width BytesIO asarray open repartition connect asarray close open format gethostname info append read_image len array ljust size print BACKBONE callable conv_block identity_block range stack minimum concat maximum set_shape split minimum reshape maximum tile expand_dims split concat reduce_max boolean_mask MASK_SHAPE crop_and_resize gather box_refinement_graph round trim_zeros_graph ROI_POSITIVE_RATIO transpose squeeze pad cast expand_dims range USE_MINI_MASK overlaps_graph cond int TRAIN_ROIS_PER_IMAGE float32 greater maximum int32 split minimum apply_box_deltas_graph reshape clip_boxes_graph concat gather map_fn DETECTION_MAX_INSTANCES stack gather_nd DETECTION_MIN_CONFIDENCE pad set_intersection expand_dims argmax BBOX_STD_DEV Input rpn_graph int_shape less abs cast switch constant not_equal squeeze where mean sparse_categorical_crossentropy gather_nd cast int32 equal IMAGES_PER_GPU batch_pack_graph switch constant smooth_l1_loss squeeze where mean gather_nd cast int32 sum equal reduce_sum sparse_softmax_cross_entropy_with_logits cast gather argmax switch constant reshape smooth_l1_loss mean int64 stack cast gather_nd gather switch constant reshape transpose mean shape int64 stack cast gather_nd gather binary_crossentropy uint8 minimize_mask compose_image_meta extract_bboxes load_mask zeros astype randint resize_image shape warning resize_mask MINI_MASK_SHAPE load_image bool fliplr augment_image to_deterministic int ROI_POSITIVE_RATIO concatenate resize astype TRAIN_ROIS_PER_IMAGE compute_iou choice MASK_SHAPE int32 box_refinement USE_MINI_MASK zeros argmax range sum zip ones compute_overlaps choice RPN_TRAIN_ANCHORS_PER_IMAGE zeros argmax amax len int sort min hstack randint zeros max range split image_ids arange 
IMAGE_SHAPE compute_backbone_shapes RPN_ANCHOR_RATIOS generate_pyramid_anchors BACKBONE_STRIDES MAX_GT_INSTANCES shape expand_dims load_image_gt build_rpn_targets astype shuffle copy choice generate_random_rois build_detection_targets RPN_ANCHOR_SCALES mold_image RPN_ANCHOR_STRIDE float32 extend zeros len list array boolean_mask reduce_sum cast bool abs append range constant concat float32 cast split constant concat float32 cast split reset_default_graph Input zeros array range minimum maximum zeros range compute_iou T astype float32 dot sum astype delete float32 compute_iou append astype float32 stack cast float32 log astype float32 log dtype min pad resize randint max pad astype resize zeros bool range astype resize zeros bool range zeros bool astype resize arange concatenate reshape flatten sqrt meshgrid array append generate_anchors range len ones trim_zeros compute_overlaps_masks range len arange concatenate cumsum compute_matches astype float32 maximum sum range len compute_ap format print mean append compute_overlaps set argmax max len list graph_fn zip append range len print array array show subplot uint8 axis astype imshow title figure zip len list shuffle range where subplots axis show set_title apply_mask imshow find_contours range set_xlim astype copy zeros uint8 Polygon print text add_patch Rectangle randint fliplr set_ylim int print polylines astype copy append zeros find_contours fliplr range compute_matches display_instances concatenate len subplots arange rand axis Line2D unmold_mask shape title apply_mask imshow format set_xlim astype copy enumerate add_line print text add_patch Rectangle int32 set_ylim len format arange display_images unique append sum range format subplots set_title plot set_xlim set_ylim list format arange product yticks text xlabel tight_layout ylabel imshow figure xticks max range len subplots axis Line2D random_colors set_title apply_mask imshow find_contours range set_xlim astype copy zeros add_line uint8 Polygon text 
add_patch Rectangle int32 randint fliplr set_ylim HTML display get_trainable_layers name weights display_table append get_weights enumerate join basename format print copyfile rmtree dirname exists makedirs join format imwrite print rmtree shape imread exists makedirs int join imwrite makedirs rmtree shape zeros imread split int join tuple_2_csv len makedirs extend rmtree range get_locations_from_csv split print join format add_ground_truth_mark_help add_ground_truth_mark_help sqrt join format print is_inside get_locations_from_csv split join format replace extract_patch print save_patch imread get_locations_from_csv makedirs decode_png read_file convert_image_dtype str basename int split tuple_2_csv get_next dbscan_clustering get_location_from_file_name map shape prefetch append range predict asarray from_tensor_slices format size join print reshape make_one_shot_iterator average len set_session output Model ConfigProto Session join format load_model print name run_mitosis_classification Path extracted_nucleus_dir combine_small_inference_results mitosis_classification_result_dir extract_patches evaluate_global_f1 cluster_nucleus_detection_results reorganize_folder_structure maskrcnn_inference_result_dir mitosis_reorganized_dir combine_images inference_input_dir run_mitosis_classification_in_batch ground_truth_dir mitosis_input_dir run_nucleus_detection maskrcnn_inference_combined_result maskrcnn_inference_combined_clusterd_result add_groundtruth_mark extract_nucleus_patch reorganize_mitosis_images MitosisInferenceConfig cluster_prediction_result evaluate_nucleus_inference crop_image compute_f1 visualize_the_ground_truth weights combine_csvs add_mark run_mitosis_classification split_big_images_to_small_ones check_nucleus_inference clear join format print tuple_2_csv is_inside append get_locations_from_csv split join format replace extract_patch print get_data_from_csv makedirs save_patch imread split join copy makedirs num_gpus label_detected_nucleus 
train_batch_size gen_mitosis_normal_tiles val_cases threads val_batch_size finetune_epochs clf_epochs clf_lr gen_mitosis_normal_tile finetune_lr prefetch_batches split_train_val_datasets finetune_momentum train_model log_interval l2 MitosisTrainConfig extracted_normal_mitosis_patch_dir mitosis_classification_model_file array max len load_nucleus prepare NucleusDataset SomeOf flatten reshape concatenate diff list T reshape map zeros bool split format reshape rle_encode where append max join format imwrite class_names image_ids load_nucleus print tuple_2_csv visualize_instances prepare load_image NucleusDataset makedirs format imwrite class_names image_ids print tuple_2_csv visualize_instances prepare list_images load_image NucleusDataset makedirs format MaskRCNN display print download_trained_weights NucleusConfig load_weights get_imagenet_weights detect find_last NucleusInferenceConfig train MaskRCNN display detect_v2 print load_weights NucleusInferenceConfig MaskRCNN display subset download_trained_weights NucleusConfig load_weights get_imagenet_weights command detect find_last NucleusInferenceConfig dataset | <!-- {% comment %} Licensed to the Apache Software Foundation (ASF) under one or more contributor license agreements. See the NOTICE file distributed with this work for additional information regarding copyright ownership. The ASF licenses this file to you under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software | 175 |
CODEJIN/RHRNet | ['speech enhancement'] | ['RHR-Net: A Residual Hourglass Recurrent Neural Network for Speech Enhancement'] | Datasets.py STFTLoss.py Logger.py Radam.py Train.py Trace.py Modules.py Noam_Scheduler.py Arg_Parser.py Audio.py Recursive_Parse Spectrogram_Generate Audio_Prep Preemphasis Mel_Generate Inference_Collater Collater Calc_RMS Inference_Dataset Dataset Logger RHRNet GRU Log_Cosh_Loss Modified_Noam_Scheduler Noam_Scheduler RAdam MultiResolutionSTFTLoss STFTLoss Trainer items list isinstance Namespace normalize T Preemphasis stft log10 mel abs T Preemphasis stft log10 abs | # RHRNet
* This repository is an unofficial implementation of RHR-Net.
* An STFT loss is added in addition.
* The paper referred to is:
* [Abdulbaqi, J., Gu, Y., & Marsic, I. (2019). RHR-Net: A Residual Hourglass Recurrent Neural Network for Speech Enhancement. arXiv preprint arXiv:1904.07294.](https://arxiv.org/abs/1904.07294)
# Requirements
* torch >= 1.6.0
* tensorboardX >= 2.0
* librosa >= 0.7.2
* matplotlib >= 3.1.3
 | 176
COINtoolbox/RESSPECT | ['active learning'] | ['Active learning with RESSPECT: Resource allocation for extragalactic astronomical transients'] | resspect/__init__.py resspect/tests/test_example.py resspect/batch_functions.py resspect/lightcurves_utils.py resspect/time_domain_loop.py resspect/tests/test_fixtures.py resspect/testing.py resspect/scripts/fit_dataset.py resspect/scripts/__init__.py resspect/tests/test_fit_lightcurves.py resspect/bazin.py resspect/tests/conftest.py resspect/tests/test_bazin.py resspect/tests/test_database.py resspect/plot_results.py docs/conf.py resspect/database.py resspect/scripts/build_time_domain_SNPCC.py resspect/snana_fits_to_pd.py resspect/tests/test_learn_loop.py resspect/classifiers.py resspect/tests/test_cosmo_metric_utils.py resspect/exposure_time_calculator.py resspect/query_budget_strategies.py resspect/tests/test_metrics.py resspect/snanapipe/snana_hook.py resspect/snanapipe/samplerun.py example_scripts/build_canonical_plasticc.py resspect/scripts/build_canonical.py resspect/scripts/run_time_domain.py example_scripts/build_canonical_snpcc.py resspect/query_strategies.py resspect/scripts/make_metrics_plots.py resspect/time_domain_PLAsTiCC.py resspect/salt3_utils.py resspect/cosmo_metric_utils.py resspect/fit_lightcurves.py resspect/metrics.py resspect/build_plasticc_metadata.py resspect/build_plasticc_canonical.py resspect/scripts/build_time_domain.py resspect/scripts/calculate_cosmology_metric.py resspect/scripts/run_loop.py resspect/tests/test_testing.py setup.py resspect/tests/test_batch_functions.py resspect/build_snpcc_canonical.py resspect/time_domain_SNPCC.py resspect/learn_loop.py from_M_K entropy_from_M_K sample_M_K batch_sample entropy_from_probs_b_M_C joint_probs_M_K fast_multi_choices take_expand importance_weighted_entropy_p_b_M_C compute_conditional_entropies_B joint_probs_M_K_impl exact_batch split_arrays entropy_joint_probs_B_M_C errfunc bazinr fit_scipy bazin main build_plasticc_canonical main 
CanonicalPLAsTiCC main get_SNR_headers build_plasticc_metadata calculate_SNR _get_files_list build_snpcc_canonical get_light_curve_meta_info get_meta_data_from_features plot_snpcc_train_canonical main Canonical mlp random_forest knn main nbg gradient_boosted_trees PreFitVotingClassifier svm bootstrap_clf column_deriv_m update_matrix fisher_results assign_cosmo compare_two_fishers find_most_useful main fish_deriv_m main DataBase main ExpTimeCalc _get_features_to_write fit_plasticc_bazin write_features_to_output_file fit_resspect_bazin fit_snpcc_bazin LightCurve main run_classification save_photo_ids run_evaluation run_make_query _save_metrics_and_queried_samples load_features main learn_loop update_alternative_label get_photometry_with_id_name_and_snid get_snpcc_sntype _update_resspect_filter_values insert_band_column_to_resspect_df load_snpcc_photometry_df find_available_key_name_in_header _update_plasticc_filter_values get_resspect_header_data maybe_create_directory load_plasticc_photometry_df read_plasticc_full_photometry_data read_file read_tar_file read_resspect_full_photometry_data load_resspect_photometry_df get_query_flags purity cosmo_metric accuracy fom efficiency get_cosmo_metric main get_snpcc_metric main Canvas batch_queries_mi_entropy batch_queries_uncertainty qbd_mi qbd_entropy uncertainty_sampling compute_entropy random_sampling uncertainty_sampling_margin uncertainty_sampling_least_confident main uncertainty_sampling_entropy compute_qbd_mi_entropy parse_snid_file parse_salt2mu_output get_distances combine_fitres rewrite_master_colnames replace_zHD_with_simZCMB main parse_wfit_output read_fits download_data main time_domain_loop load_dataset PLAsTiCCPhotometry main _get_files_list SNPCCPhotometry main str2bool main main main main main main main str2bool SNANAHook pytest_report_header change_working_dir base_temp path_to_output_data path_to_input_data test_entropy_from_probs_b_M_C test_fit_scipy test_errfunc test_bazin test_assign_cosmo 
test_fish_deriv_m make_fake_data test_load_bazin_features test_random_smaller_than_one test_one_is_an_integer test_load_snpcc_lc test_check_queryable test_calc_exp_time test_fit_bazin_all test_load_plasticc_lc test_evaluate_bazin input_lc test_fit_bazin test_conv_flux_mag test_load_resspect_lc test_change_working_dir test_path_to_inputs extract_feature test_can_run_learn_loop test_purity test_efficiency test_fom labels test_get_snpcc_metric test_accuracy test_download_data shape shape transpose range reshape ones shape mean sum ones entropy_from_probs_b_M_C shape zeros range entropy_joint_probs_B_M_C shape empty range matmul append range exp fast_multi_choices reshape transpose take_expand sum log mean list repeat reshape repeat sum matmul importance_weighted_entropy_p_b_M_C shape zeros empty range bazin asarray least_squares isnan array argmax std clip x subplot list CanonicalPLAsTiCC read_metadata print to_csv clean_samples savefig build_canonical_sample figure find_subsamples legend distplot keys range find_neighbors append list f logical_and choice append range values int list get_SNR_headers str calculate_SNR print write close zfill keys sub range array read_csv read_fits open tolist extend join DataBase load_features snpcc_identify_samples data DataFrame concat to_csv get_meta_data_from_features metadata features snpcc_get_canonical_info Canonical find_neighbors show subplot exp plot xlabel reshape close ylabel tight_layout subplots_adjust score_samples savefig figure legend xlim values fit argmax list resample size mean append zeros clf_function PreFitVotingClassifier range RandomForestClassifier predict_proba predict fit XGBClassifier predict_proba predict fit fit predict_proba predict KNeighborsClassifier MLPClassifier predict_proba predict fit predict_proba SVC predict fit predict_proba predict GaussianNB fit clone list value w0waCDM print assign_cosmo zeros to distmod enumerate sum inv dot sqrt any array diagonal zeros range fish_deriv_m diag len list 
value w0waCDM print assign_cosmo zeros to distmod enumerate len column_deriv_m T transpose matmul dot array T sorted print len zip zeros enumerate print fisher_results bazin_features extend _get_features_to_write join write listdir get_resspect_header_data tolist LightCurve find_available_key_name_in_header read_plasticc_full_photometry_data tolist LightCurve find_available_key_name_in_header build_samples warning isinstance classify classify_bootstrap evaluate_classification output_photo_Ia str save_metrics save_queried_sample replace save_photo_ids run_classification update_samples run_evaluation deepcopy list run_classification info progressbar save_photo_ids DataBase run_evaluation run_make_query _save_metrics_and_queried_samples load_features range update_samples update_alternative_label endswith DataFrame read_tar_file read_fits zeros_like insert _update_resspect_filter_values values values endswith read_tar_file read_csv items list zeros_like endswith read_resspect_full_photometry_data makedirs append sum sum sum sum efficiency purity accuracy fom compare_two_fishers read_csv isinstance values get_distances str cosmo_metric to_csv abs sort rand copy argsort array zip append sum max log values batch_sample inf sample_M_K joint_probs_M_K shape compute_conditional_entropies_B exact_batch append sum array values mean compute_entropy int print argsort append abs array len seed int arange print index choice append array len int print append sum array log len int print argsort array append max len int print sort argsort append array len int print append array compute_qbd_mi_entropy len int print append array compute_qbd_mi_entropy len parse_snid_file strip DataFrame exists run str parse_salt2mu_output strftime combine_fitres append format zip enumerate join print SNANAHook now system parse_wfit_output makedirs list format all append min to_csv read_csv unique sample exists values makedirs join read_csv sleep append StringIO to_csv read_csv replace_zHD_with_simZCMB 
append DataFrame fillna print warn read_csv read replace len astype hstack to_pandas int32 zeros range drop join chmod basename move getenv download_file expanduser makedirs identify_keywords DataBase build_samples load_features queried_sample update_samples concat delete make_query drop train_labels validation_features test_features validation_labels DataFrame values test_labels str list validation_metadata evaluate_classification pool_labels load_dataset append classify range train_features insert pool_features queryable_ids astype features_names save_queried_sample load_features save_metrics make_query_budget int pool_metadata identify_keywords print DataBase classify_bootstrap index build_samples metadata_names test_metadata array len listdir compute print build_snpcc_canonical plot_snpcc_train_canonical output_plot_file isinstance SNPCCPhotometry raw_data_dir build_one_epoch output create_daily_file day_of_survey tel_sizes days_since_obs feature_method spec_SNR queryable_criteria tel_names get_cost data fisher_results comparison_data loadtxt compare_two_fishers update_data to_csv head sort_update save_file total_update read_csv sort_values array screen fit_plasticc_bazin fit_resspect_bazin fit_snpcc_bazin input load_metrics plot_metrics set_plot_dimensions Canvas training int learn_loop isinstance metrics feat_method features_dir time_domain_loop classifier batch full_features queried days strategy print join makedirs join format print strip makedirs skip getenv fail split join makedirs split seed entropy_from_probs_b_M_C rand bazin errfunc arange bazin download_data fit_scipy read_csv values assign_cosmo w0waCDM fish_deriv_m arange array download_data DataBase load_bazin_features read_csv assert_array_less rand download_data load_snpcc_lc LightCurve download_data load_snpcc_lc array LightCurve download_data load_resspect_lc array LightCurve download_data array LightCurve load_plasticc_lc conv_flux_mag values min check_queryable append filters fit_bazin_all 
range values calc_exp_time min check_queryable values fit_bazin bazin_features filters fit_bazin_all len max arange evaluate_bazin min array filters fit_bazin_all values getenv learn_loop fit_snpcc_bazin efficiency purity fom accuracy get_snpcc_metric array download_data | [![resspect](https://img.shields.io/badge/COIN--Focus-RESSPECT-red)](http://cosmostatistics-initiative.org/resspect/) # <img align="right" src="docs/images/logo_small.png" width="350"> RESSPECT ## Recommendation System for Spectroscopic follow-up This repository holds the pipeline of the RESSPECT project, built as part of the inter-collaboration activities developed by the Cosmostatistics Initiative ([COIN](cosmostatistics-initiative.org)) and the LSST Dark Energy Science Collaboration ([DESC](https://lsstdesc.org/)). This work grew from activities developed within the [COIN Residence Program #4](http://iaacoin.wix.com/crp2017), using as a starting point their [ActSNClass](https://github.com/COINtoolbox/ActSNClass) software. The active learning and telescope resources pipeline is described in [Kennamer et al, 2020](https://cosmostatistics-initiative.org/portfolio-item/resspect1/). The pre-processed data set used to obtain the results shown in the paper is available through zenodo at [de Souza et al., 2020](https://zenodo.org/record/4399109#.X-sL21lKhNg). We kindly ask you to include the full citation for the above mentioned work if you use this material in your research. Full documentation can be found at [readthedocs](https://resspect.readthedocs.io/en/latest/). # Dependencies ### For code: | 177 |
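The RESSPECT pipeline above is an active-learning loop: train on the labelled sample, score the unlabelled pool, and query the objects the classifier is least certain about for spectroscopic follow-up. A minimal, dependency-free sketch of an uncertainty-sampling query step (illustrative only; the function name and list-based interface are assumptions, not the package's actual `make_query` API):

```python
def uncertainty_query(probs, batch=2):
    """Indices of the `batch` pool objects whose P(Ia) is closest to 0.5.

    probs : list of classifier probabilities for the unlabelled pool.
    """
    # rank objects by how close their probability is to the decision boundary
    ranked = sorted(range(len(probs)), key=lambda i: abs(probs[i] - 0.5))
    return ranked[:batch]

probs = [0.95, 0.51, 0.48, 0.05]   # toy unlabelled pool
print(uncertainty_query(probs))    # -> [1, 2]
```

The queried objects would then be "observed" (labelled), moved into the training sample, and the loop repeated.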
COMP6248-Reproducability-Challenge/REPRODUCIBILITY-REPORT-THE-LOTTERY-TICKET-HYPOTHESIS | ['network pruning'] | ['The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks'] | archs/cifar100/resnet.py archs/cifar100/vgg.py archs/mnist/AlexNet.py archs/cifar10/AlexNet.py archs/cifar10/LeNet5.py archs/cifar10/resnet.py archs/mnist/resnet.py archs/cifar100/AlexNet.py archs/cifar10/vgg.py utils.py archs/mnist/vgg.py combine_plots.py archs/cifar100/fc1.py archs/cifar10/fc1.py archs/cifar10/densenet.py main.py archs/mnist/LeNet5.py archs/mnist/fc1.py time_plot.py archs/cifar100/LeNet5.py make_mask test prune_by_percentile weight_init main train original_initialization checkdir original_initialization plot_train_test_stats print_nonzeros AlexNet _bn_function_factory densenet161 _load_state_dict DenseNet densenet169 densenet201 _DenseLayer _DenseBlock _densenet _Transition densenet121 fc1 LeNet5 ResNet ResNet34 Bottleneck ResNet101 test ResNet50 resnet18 BasicBlock ResNet152 vgg19 VGG vgg16_bn _vgg vgg19_bn vgg11_bn vgg13 vgg11 make_layers vgg13_bn vgg16 AlexNet fc1 LeNet5 conv1x1 resnext50_32x4d wide_resnet50_2 ResNet resnet50 resnext101_32x8d Bottleneck resnet152 wide_resnet101_2 conv3x3 _resnet resnet34 resnet18 BasicBlock resnet101 vgg19 VGG vgg16_bn _vgg vgg19_bn vgg11_bn vgg13 vgg11 make_layers vgg13_bn vgg16 AlexNet fc1 LeNet5 conv1x1 resnext50_32x4d wide_resnet50_2 ResNet resnet50 resnext101_32x8d Bottleneck resnet152 wide_resnet101_2 conv3x3 _resnet resnet34 resnet18 BasicBlock resnet101 vgg_block vgg16 prune_iterations arange make_mask grid DataLoader set_description save device xticks original_initialization list prune_percent FashionMNIST exit Adam ylabel apply checkdir title ylim savefig legend to CIFAR100 CrossEntropyLoss range state_dict dump plot print_nonzeros Compose astype close test CIFAR10 start_iter MNIST deepcopy print end_iter xlabel min named_parameters prune_by_percentile parameters tqdm zeros train add_scalar criterion model 
backward to step zero_grad where named_parameters device numpy enumerate eval device percentile where named_parameters numpy device to abs ones_like numpy named_parameters to named_parameters device data constant_ ConvTranspose3d BatchNorm3d Conv3d normal_ BatchNorm1d xavier_normal_ GRUCell GRU BatchNorm2d ConvTranspose1d LSTMCell Conv1d ConvTranspose2d Linear isinstance orthogonal_ Conv2d parameters LSTM count_nonzero print named_parameters shape numpy prod makedirs yscale show arange plot xlabel ylabel title clf set_style ylim legend savefig list group match load_state_dict load_state_dict_from_url keys compile _load_state_dict DenseNet randn print ResNet18 size net Conv2d make_layers VGG ResNet range | # REPRODUCIBILITY-REPORT-THE-LOTTERY-TICKET-HYPOTHESIS This is a reproduction of the paper 'THE LOTTERY TICKET HYPOTHESIS: FINDING SPARSE, TRAINABLE NEURAL NETWORKS' \ The paper: https://arxiv.org/abs/1803.03635 \ The source code: https://github.com/rahulvigneswaran/Lottery-Ticket-Hypothesis-in-Pytorch \ Group name: SQL Group members: Fengxia Shu, Xuan Qi, Liang Liang | 178
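The key step the report reproduces is iterative magnitude pruning: after training, the smallest-magnitude surviving weights are zeroed and recorded in a binary mask (the repo's `prune_by_percentile`/`make_mask` pair does this on PyTorch tensors). A plain-Python sketch of that step, with the list-based interface being an illustrative assumption:

```python
def prune_by_percentile(weights, percent):
    """Zero the `percent` smallest-magnitude alive weights; return (mask, pruned).

    Weights that are already zero (pruned in an earlier round) are ignored
    when computing the threshold, as in iterative pruning.
    """
    alive = sorted(abs(w) for w in weights if w != 0)
    k = int(len(alive) * percent / 100)
    threshold = alive[k] if k < len(alive) else float("inf")
    mask = [1 if w != 0 and abs(w) >= threshold else 0 for w in weights]
    pruned = [w * m for w, m in zip(weights, mask)]
    return mask, pruned

w = [0.5, -0.01, 0.2, -0.3, 0.0]
mask, pruned = prune_by_percentile(w, 25)  # prune 25% of the 4 alive weights
print(mask)                                # -> [1, 0, 1, 1, 0]
```

In the lottery-ticket procedure, the surviving weights are then reset to their original initialization and the network is retrained under the same mask.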
CQFIO/FastImageProcessing | ['style transfer'] | ['Fast Image Processing with Fully-Convolutional Networks'] | Single_Network/combined.py CAN24_AN/demo.py Parameterized_Network/parameterized.py lrelu identity_initializer nm prepare_data build lrelu identity_initializer nm prepare_data build lrelu identity_initializer nm one_hot_map prepare_data build Variable conv2d append range zeros | CQFIO/FastImageProcessing | 179 |
CR-Gjx/LeakGAN | ['text generation'] | ['Long Text Generation via Adversarial Training with Leaked Information'] | Synthetic Data/LeakGANModel.py Image COCO/convert.py Synthetic Data/Main.py Image COCO/LeakGANModel.py Image COCO/Main.py No Temperature/Synthetic Data/Main.py No Temperature/Synthetic Data/Discriminator.py Synthetic Data/target_lstm.py No Temperature/Synthetic Data/dataloader.py No Temperature/Image COCO/eval_bleu.py No Temperature/Image COCO/LeakGANModel.py Synthetic Data/Discriminator.py No Temperature/Synthetic Data/LeakGANModel.py No Temperature/Image COCO/Main.py Synthetic Data/dataloader.py No Temperature/Image COCO/Discriminator.py No Temperature/Synthetic Data/target_lstm.py No Temperature/Image COCO/convert.py No Temperature/Image COCO/dataloader.py Image COCO/Discriminator.py Synthetic Data/target_lstm20.py Image COCO/dataloader.py Image COCO/eval_bleu.py No Temperature/Synthetic Data/target_lstm20.py Gen_Data_loader Dis_dataloader cosine_similarity linear highway Discriminator LeakGAN generate_samples rescale get_reward main target_loss redistribution pre_train_epoch Gen_Data_loader Dis_dataloader cosine_similarity linear highway Discriminator LeakGAN generate_samples rescale get_reward main target_loss redistribution pre_train_epoch Gen_Data_loader Dis_dataloader cosine_similarity linear highway Discriminator LeakGAN generate_samples rescale get_reward main target_loss redistribution pre_train_epoch TARGET_LSTM TARGET_LSTM20 Gen_Data_loader Dis_dataloader cosine_similarity linear highway Discriminator LeakGAN generate_samples rescale get_reward main target_loss redistribution pre_train_epoch TARGET_LSTM TARGET_LSTM20 multiply l2_normalize as_list int range extend generate num_batch append next_batch reset_pointer range run pretrain_step num_batch append next_batch reset_pointer range shape redistribution zeros array range len int gen_for_reward step_size concatenate transpose min ypred_for_auc rescale sequence_length append next_batch 
array range run model Dis_dataloader num_batch Gen_Data_loader create_batches Saver save update_feature_function Session open seed run str restore global_variables Discriminator generate pre_train_epoch range latest_checkpoint load_train_data close get_reward gen_x ConfigProto generate_samples LeakGAN print write global_variables_initializer next_batch reset_pointer pretrain_loss TARGET_LSTM target_loss length load TARGET_LSTM20 | # LeakGAN The code of research paper [Long Text Generation via Adversarial Training with Leaked Information](https://arxiv.org/abs/1709.08624). This paper has been accepted at the Thirty-Second AAAI Conference on Artificial Intelligence ([AAAI-18](https://aaai.org/Conferences/AAAI-18/)). ## Requirements * **Tensorflow r1.2.1** * Python 2.7 * CUDA 7.5+ (For GPU) ## Introduction Automatically generating coherent and semantically meaningful text has many applications in machine translation, dialogue systems, image captioning, etc. Recently, by combining with policy gradient, Generative Adversarial Nets (GAN) that use a discriminative model to guide the training of the generative model as a reinforcement learning policy has shown promising results in text generation. However, the scalar guiding signal is only available after the entire text has been generated and lacks intermediate information about text structure during the generative process. As such, it limits its success when the length of the generated text samples is long (more than 20 words). In this project, we propose a new framework, called LeakGAN, to address the problem for long text generation. We allow the discriminative net to leak its own high-level extracted features to the generative net to further help the guidance. The generator incorporates such informative signals into all generation steps through an additional Manager module, which takes the extracted features of current generated words and outputs a latent vector to guide the Worker module for next-word generation. 
Our extensive experiments on synthetic data and various real-world tasks with Turing test demonstrate that LeakGAN is highly effective in long text generation and also improves the performance in short text generation scenarios. More importantly, without any supervision, LeakGAN would be able to implicitly learn sentence structures only through the interaction between Manager and Worker. ![](https://github.com/CR-Gjx/LeakGAN/blob/master/figures/leakgan.png) | 180 |
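The guidance mechanism described above can be illustrated with a deliberately tiny toy: the Worker's word logits are biased by how well each candidate word's embedding aligns with the Manager's goal vector. This is only a conceptual sketch; LeakGAN's actual Manager and Worker are LSTM modules, and the leaked feature comes from the CNN discriminator:

```python
import math

def guided_next_word(worker_logits, goal, word_embs):
    """Toy Manager/Worker step: bias each word's logit by goal-embedding alignment."""
    scores = [l + sum(g * e for g, e in zip(goal, emb))
              for l, emb in zip(worker_logits, word_embs)]
    z = [math.exp(s) for s in scores]
    probs = [v / sum(z) for v in z]          # softmax over the vocabulary
    best = max(range(len(probs)), key=probs.__getitem__)
    return best, probs

# 3-word vocabulary with 2-d embeddings; the goal points toward word 2
logits = [0.1, 0.1, 0.0]
goal = [1.0, 0.0]
embs = [[0.0, 1.0], [0.1, 0.2], [0.9, 0.0]]
best, _ = guided_next_word(logits, goal, embs)
print(best)   # -> 2
```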
CRIPAC-DIG/A-PGNN | ['session based recommendations', 'machine translation'] | ['Personalized Graph Neural Networks with Attention Mechanism for Session-Aware Recommendation'] | model_last.py transformer.py train_last.py record.py normalize decoder feedforward multihead_attention pos_encoding encoder to_float exp concat log expand_dims float range | # A-PGNN The code and dataset for our TKDE paper: Personalized Graph Neural Networks with Attention Mechanism for Session-Aware Recommendation (https://ieeexplore.ieee.org/abstract/document/9226110). We have implemented our methods in Tensorflow. Here are two datasets we used in our paper. * Xing http://2016.recsyschallenge.com/ * Reddit https://www.kaggle.com/colemaclean/subreddit-interactions The processed data can be downloaded: https://www.dropbox.com/sh/hwx2347ir1worag/AABJK6IBXHNBlbvrvKqw94YKa?dl=0 ## Usage ### Generate data You need to run the file ```record.py``` first to preprocess the data to generate the tf.record format data for training and test. For example: | 181
CRIPAC-DIG/SR-GNN | ['session based recommendations'] | ['Session-based Recommendation with Graph Neural Networks'] | tensorflow_code/model.py tensorflow_code/main.py tensorflow_code/utils.py datasets/preprocess.py pytorch_code/utils.py pytorch_code/model.py pytorch_code/main.py process_seqs obtian_tra obtian_tes main GNN trans_to_cuda train_test trans_to_cpu SessionGraph forward data_masks build_graph split_validation Data GGNN Model data_masks build_graph split_validation Data print list range zip len load validation Data trans_to_cuda time epoch print train_test valid_portion SessionGraph split_validation dataset range open is_available is_available trans_to_cuda get_slice model stack float long arange batch_size zero_grad forward step mask append isin generate_batch mean eval zip long trans_to_cuda backward print loss_function train numpy len add_edge in_edges DiGraph nodes range len max int arange shuffle round len | # SR-GNN ## Paper data and code This is the code for the AAAI 2019 Paper: [Session-based Recommendation with Graph Neural Networks](https://arxiv.org/abs/1811.00855). We have implemented our methods in both **Tensorflow** and **Pytorch**. Here are two datasets we used in our paper. After downloaded the datasets, you can put them in the folder `datasets/`: - YOOCHOOSE: <http://2015.recsyschallenge.com/challenge.html> or <https://www.kaggle.com/chadgostopp/recsys-challenge-2015> - DIGINETICA: <http://cikm2016.cs.iupui.edu/cikm-cup> or <https://competitions.codalab.org/competitions/11161> There is a small dataset `sample` included in the folder `datasets/`, which can be used to test the correctness of the code. We have also written a [blog](https://sxkdz.github.io/research/SR-GNN) explaining the paper. ## Usage You need to run the file `datasets/preprocess.py` first to preprocess the data. | 182 |
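SR-GNN's preprocessing turns each click session into a small directed graph whose edges link consecutive items (the repo's `build_graph` does this with `networkx.DiGraph`). A dependency-free sketch of that construction, assuming a session is simply a list of item IDs:

```python
def session_to_graph(session):
    """Directed edges between consecutive clicks, with repeat counts."""
    edges = {}
    for u, v in zip(session, session[1:]):
        edges[(u, v)] = edges.get((u, v), 0) + 1   # count repeated transitions
    nodes = sorted(set(session))
    return nodes, edges

nodes, edges = session_to_graph([5, 3, 5, 7])
print(nodes)   # -> [3, 5, 7]
print(edges)   # -> {(5, 3): 1, (3, 5): 1, (5, 7): 1}
```

The gated GNN then propagates information along these edges to produce item embeddings, from which the session representation is built.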
CRIPAC-DIG/TAGNN | ['session based recommendations'] | ['TAGNN: Target Attentive Graph Neural Networks for Session-based Recommendation'] | utils.py main.py model.py main GNN trans_to_cuda train_test trans_to_cpu SessionGraph forward data_masks build_graph split_validation Data load validation Data trans_to_cuda time epoch print train_test valid_portion SessionGraph split_validation dataset range open is_available is_available trans_to_cuda get_slice model stack float long arange batch_size zero_grad forward step mask append isin generate_batch mean eval zip long trans_to_cuda backward print loss_function train numpy len add_edge in_edges DiGraph nodes range len max int arange shuffle round len | # TAGNN Implementation for the paper entitled "TAGNN: Target Attentive Graph Neural Networks for Session-based Recommendation" | 183 |
CSAILVision/GazeCapture | ['gaze estimation'] | ['Eye Tracking for Everyone'] | pytorch/main.py pytorch/ITrackerData.py pytorch/ITrackerModel.py pytorch/prepareDataset.py SubtractMean loadMetadata ITrackerData ITrackerModel FaceImageModel FaceGridModel ItrackerImageModel validate load_checkpoint AverageMeter save_checkpoint adjust_learning_rate main train str2bool cropImage readJson preparePath main logError print loadmat ITrackerModel validate print load_checkpoint min SGD parameters DataParallel DataLoader adjust_learning_rate load_state_dict save_checkpoint ITrackerData train cuda range update time format criterion model Variable backward size AverageMeter zero_grad print item step cuda enumerate len update time mul format criterion Variable print size AverageMeter mean sqrt eval item sum cuda enumerate len print join load copyfile join save makedirs dataset_path object flatten save open cropImage ones logical_and input sum bboxFromJson int16 astype stack readJson preparePath eval logError listdir enumerate join uint8 sort convert savemat int32 zeros bool loadmat array output_path len logError join remove isdir rmtree listdir makedirs print exit minimum dtype maximum zeros array | # Eye Tracking for Everyone Code, Dataset and Models ## Introduction This is the README file for the official code, dataset and model release associated with the 2016 CVPR paper, "Eye Tracking for Everyone". The dataset release is broken up into three parts: * **Data** (image files and associated metadata) * **Models** (Caffe model definitions) * **Code** (some essential scripts to make use of the data) Continue reading for more information on each part. ## History Any necessary changes to the dataset will be documented here. | 184 |
CSAILVision/sceneparsing | ['scene parsing', 'semantic segmentation'] | ['Semantic Understanding of Scenes through the ADE20K Dataset'] | evaluationCode/utils_eval.py trainingCode/caffe/ade_layers.py pixelAccuracy intersectionAndUnion AdeSegDataLayer asarray histogram sum asarray | # Development Kit for MIT Scene Parsing Benchmark [NEW!] Our PyTorch implementation is released in the following repository: https://github.com/hangzhaomit/semantic-segmentation-pytorch ## Introduction Table of contents: - Overview of scene parsing benchmark - Benchmark details 1. Image list and annotations 2. Submission format 3. Evaluation routines | 185 |
CUVL/Neural-Manifold-Ordinary-Differential-Equations | ['density estimation'] | ['Neural Manifold Ordinary Differential Equations'] | flows/utils.py test_densities/density_sphere.py flows/manifold.py test_densities/density_hyp.py distributions/vmf.py flows/hyperbolic.py flows/mcnf.py distributions/wnormal.py main_density.py flows/sphere.py main compute_loss _kl_vmf_uniform VonMisesFisher IveFunction Ive ive_fraction_approx ive_fraction_approx2 HypersphericalUniform WrappedNormal Lorentz Manifold _flip TimeNetwork ODEfunc SphereProj HCNF AmbientProjNN create_network divergence_bf SCNF FirstJacobianScalar Sphere tanh logsinh MultiInputSequential Acosh logsumexp_signs Arsinh check_mkdir clamp Divsinh sqrt LeakyClamp cosh sinh Artanh Sindiv Sinhdiv Divsin true_5gaussians_probs plot_poincare_density true_1wrapped_probs true_bigcheckerboard_probs make_grid_hyp true_mult_wrapped_probs data_gen_hyp plot_distr model_probs true_4wrapped_probs model_probs make_grid_sphere plot_sphere_density data_gen_sphere true_1wrapped_probs true_bigcheckerboard_probs xyz_to_spherical plot_distr spherical_to_xyz log_prob model batch_size zero_grad MultiStepLR dev compute_loss plot_distr Adam epochs to range glob close contsave item vars remove check_mkdir backward print parameters num_drops step real real clamp delta_a real range range size dim arange clamp_ clamp_ unsqueeze cat squeeze ones_like max join isdigit basename format isdir print makedirs dirname split T exp0 rsample randn ones squeeze rand unsqueeze_tangent stack unsqueeze cat append tensor range WrappedNormal len reshape axis xlim pcolormesh ylim Normalize figure get_cmap true_5gaussians_probs plot_poincare_density print true_1wrapped_probs true_bigcheckerboard_probs make_grid_hyp savefig true_mult_wrapped_probs model_probs T exp0 exp ones unsqueeze_tangent unsqueeze log_prob WrappedNormal tensor ones_like eye MultivariateNormal zeros range in_board repeat append zeros range WrappedNormal len to exp log_prob model exp0 
unsqueeze_tangent to_poincare stack linspace meshgrid sqrt empty pi arctan2 stack cos sin reshape set_yticks grid add_subplot set_global pcolormesh pi Mollweide set_xticks figure Normalize get_cmap rsample ones rand pi stack repeat cat Sphere projx append tensor spherical_to_xyz range WrappedNormal len true_4wrapped_probs make_grid_sphere plot_sphere_density repeat Sphere tensor spherical_to_xyz projx exp ones log_prob repeat Sphere append tensor spherical_to_xyz range WrappedNormal len pi abs pi stack linspace meshgrid spherical_to_xyz log | # Neural Manifold Ordinary Differential Equations (ODEs) We provide the code for [Neural Manifold ODEs](https://arxiv.org/abs/2006.10254) in this repository. Summary: We introduce Neural Manifold Ordinary Differential Equations, a manifold generalization of Neural ODEs, and construct Manifold Continuous Normalizing Flows (MCNFs). MCNFs require only local geometry (therefore generalizing to arbitrary manifolds) and compute probabilities with continuous change of variables (allowing for a simple and expressive flow construction). We find that leveraging continuous manifold dynamics produces a marked improvement for both density estimation and downstream tasks. The multi-chart method from our paper (allowing generality) is showcased in the below figure. ![Multi-chart approach](https://i.imgur.com/TuTFi2n.png) Example learned densities, together with baselines, are given below. Hyperboloid | Sphere :-------------------------:|:-------------------------: ![H^2](https://i.imgur.com/xcbMjnK.png)| ![S^2](https://i.imgur.com/JyQYdiL.png) Below we have visualized how our Neural Manifold ODEs learn the `5gaussians` and `bigcheckerboard` densities on the hyperboloid (second and third rows in the hyperboloid figure above), as well as the `4wrapped` and `bigcheckerboard` densities on the sphere (second and third rows in the sphere figure above). | 186 |
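The "continuous change of variables" that MCNFs use to compute probabilities is the instantaneous change-of-variables formula of Neural ODEs, applied within each chart. For chart dynamics $\frac{dz}{dt} = f(z(t), t)$, the log-density evolves as

```latex
\frac{\partial \log p(z(t))}{\partial t}
  = -\,\mathrm{tr}\!\left(\frac{\partial f}{\partial z(t)}\right),
\qquad
\log p(z(t_1)) = \log p(z(t_0))
  - \int_{t_0}^{t_1} \mathrm{tr}\!\left(\frac{\partial f}{\partial z(t)}\right)\, dt .
```

(The repo's `divergence_bf` computes the trace term by brute force.) Roughly speaking, the multi-chart method in the figure additionally picks up log-determinant corrections from the chart maps whenever the dynamics are handed off between charts.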
CVBase-Bupt/EndtoEndCroppingSystem | ['image cropping'] | ['An End-to-End Neural Network for Image Cropping by Learning Composition from Aesthetic Photos'] | demo.py models/config.py models/RoiPoolingConv.py models/model.py utils.py main get_shape runn normalization recover_from_normalization recover_from_normalization_with_order add_offset Config EndToEndModel RoiPoolingConv_Boundary RoiPoolingConv save saliency_box_color log aesthetics_box_color open crop_out_path get_shape list copyMakeBorder log_path normalization append expand_dims predict asarray size astype copy mkdir scale ratio listdir crop join box_out_path isdir Draw convert draw rectangle recover_from_normalization_with_order array add_offset makedirs runn model image_path load_weights BuildModel int float min max | # End-to-End Cropping System This is an official implementation of **An End-to-End Neural Network for Image Cropping by Learning Composition from Aesthetic Photos**. Given a source image, our algorithm can take actions step by step to find almost the best cropping window on the source image. ## Get Started Install the python libraries. (See Requirements). Download the code from GitHub: ``` git clone https://github.com/CVBase-Bupt/EndtoEndCroppingSystem.git cd EndtoEndCroppingSystem | 187
CVLAB-Unibo/Semantic-Mono-Depth | ['depth estimation', 'monocular depth estimation', 'semantic segmentation'] | ['Geometry meets semantics for semi-supervised monocular depth estimation'] | monodepth_dataloader.py average_gradients.py utils/evaluation_utils.py utils/evaluate_kitti.py monodepth_main.py utils/shuffler.py utils.py bilinear_sampler.py monodepth_model.py utils/visualize_semantic.py average_gradients bilinear_sampler_1d_h string_length_tf MonodepthDataloader count_text_lines test post_process_disparity main train MonodepthModel get_num_classes get_var_to_restore_list colormap_semantic colormap_depth sub2ind lin_interp convert_disps_to_depths_kitti read_calib_file generate_depth_map compute_errors read_file_data read_text_lines load_gt_disp_kitti load_velodyne_points get_focal_length_baseline concat reduce_mean zip append expand_dims shape linspace meshgrid fliplr clip readlines close open trainable_variables checkpoint_path semantic_image_batch get_var_to_restore_list filenames_file Saver MonodepthDataloader save dataset argmax Session run str restore squeeze log_directory MonodepthModel dirname range valid_image_batch start_queue_runners format latest_checkpoint right_image_batch post_process_disparity ConfigProto zeros local_variables_initializer task print count_text_lines data_path Coordinator mode output_directory model_name global_variables_initializer left_image_batch len test template train monodepth_parameters uint8 ones_like zeros_like multiply squeeze where stack cast range equal len constant to_int32 gather get_cmap round NewCheckpointReader replace get_collection GLOBAL_VARIABLES get_variable_to_shape_map maximum mean sqrt abs log astype float32 zfill append imread range shape resize append range len readlines close open format print int32 isfile append split reshape T arange LinearNDInterpolator reshape meshgrid set reshape read_calib_file int T sub2ind lin_interp read_calib_file reshape hstack min dot shape vstack round eye zeros 
load_velodyne_points | # Semantic-Mono-Depth ![image](images/SemanticMonoDepth.PNG) This repository contains the source code of Semantic-Mono-Depth, proposed in the paper "Geometry meets semantics for semi-supervised monocular depth estimation", ACCV 2018. If you use this code in your projects, please cite our paper: ``` @inproceedings{ramirez2018, title = {Geometry meets semantic for semi-supervised monocular depth estimation}, author = {Zama Ramirez, Pierluigi and Poggi, Matteo and Tosi, Fabio and | 188 |
CVRL/OpenSourceIrisPAD | ['iris recognition', 'semantic segmentation'] | ['Open Source Presentation Attack Detection Baseline for Iris Recognition'] | python/BSIF_C/setup.py python/manager.py python/filter.py extract s2i generateHistogram manager load int copyMakeBorder ones float64 filter2D int64 histogram BORDER_WRAP empty range str asarray File generateHistogram close imread pyrDown | # OpenSourceIrisPAD (v2 - 13 April 2019) This repo contains the open-source implementation of iris PAD based on BSIF and a fusion of multiple classifiers, and is based on Jay Doyle's paper: ["Robust Detection of Textured Contact Lenses in Iris Recognition Using BSIF", IEEE Access, 2015](https://ieeexplore.ieee.org/document/7264974/). The paper presenting this implementation is available in [arXiv](https://arxiv.org/abs/1809.10172). ## Linux installation ### Environment ```bash conda env create -f environment.yaml ``` ### BSIF ```bash | 189 |
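BSIF describes an iris image by convolving it with a bank of learned filters, binarizing each response at zero, packing the bits into a per-pixel code, and histogramming the codes. A toy, dependency-free sketch of the bit-code idea (real BSIF uses filters learned from natural image patches and wrap-around borders, as in the repo's extractor which relies on `filter2D` with `BORDER_WRAP`; the tiny image and hand-made filters below are illustrative assumptions):

```python
from collections import Counter

def bsif_code(image, filters):
    """Toy BSIF: each filter response > 0 contributes one bit to the code."""
    h, w = len(image), len(image[0])
    fh, fw = len(filters[0]), len(filters[0][0])
    codes = []
    for y in range(h - fh + 1):                 # valid-mode sliding window
        for x in range(w - fw + 1):
            code = 0
            for b, f in enumerate(filters):
                resp = sum(f[i][j] * image[y + i][x + j]
                           for i in range(fh) for j in range(fw))
                if resp > 0:
                    code |= 1 << b              # binarize at zero
            codes.append(code)
    return codes

image = [[1, 2, 0],
         [0, 1, 3]]
filters = [[[1, 0], [0, -1]],                   # two toy 2x2 filters
           [[-1, 0], [0, 1]]]
codes = bsif_code(image, filters)
print(codes)                  # -> [0, 2]
print(dict(Counter(codes)))   # the histogram is the PAD feature vector
```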
CVRL/RaspberryPiOpenSourceIris | ['iris recognition', 'iris segmentation', 'semantic segmentation'] | ['Open Source Iris Recognition Hardware and Software with Presentation Attack Detection', 'Open Source Presentation Attack Detection Baseline for Iris Recognition', 'Iris Presentation Attack Detection Based on Photometric Stereo Features'] | utils/utils.py CCNet/modules/network.py CCNet/modules/transform.py PAD/OSPAD_3D.py CCNet/modules/criterion.py segmentation/SegNet.py OSIRIS_SEGM/OSIRIS_SEGM.py PAD/OSPAD_2D.py CCNet/modules/dataset.py PAD/BSIF_C/test.py utils/test_camera.py segmentation/UNet.py segmentation/custom_layers.py CCNet/torch2keras.py recognition/IrisRecognition.py PAD/filter.py PAD/BSIF_C/setup.py CCNet/main.py code/iris_system.py main train evaluate CrossEntropyLoss2d jaccard_loss image_basename load_image IrisSegmDataset image_path UNetDown UNetUp UNet ToLabel Relabel colormap OsIris s2i generateHistogram OSPAD_2D OSPAD_3D IrisRecognition MaxPoolingWithIndices UpSamplingWithIndices SegNet UNet show show xyr_from_txt get_cfg save pi_camera dataset_metadata mask_dir model zero_grad DataLoader numpy save argmax cuda ones image_dir len logical_and Adam LogSoftmax dirname append sum range state_dict format LongTensor item num_epochs long enumerate criterion backward print CrossEntropyLoss2d Variable parameters logical_or IrisSegmDataset step makedirs argmax fromarray list uint8 model Variable print convert BILINEAR astype LogSoftmax shape eval unsqueeze save resize numpy cuda load evaluate print set_device state UNet load_state_dict train cuda gpu sum tuple sigmoid mean softmax float type range cat ndimension format uint8 arange astype zeros array load int asarray std copyMakeBorder ones print float64 filter2D mean int64 histogram BORDER_WRAP empty pyrDown range namedWindow waitKey resizeWindow imshow WINDOW_NORMAL destroyAllWindows load open int astype reshape sqrt fromarray | # RaspberryPiOpenSourceIris Official Repo for IJCB 2020 Paper: Open Source 
Iris Recognition Hardware and Software with Presentation Attack Detection ([https://arxiv.org/abs/2008.08220](https://arxiv.org/abs/2008.08220))<br/> *Zhaoyuan Fang, Adam Czajka<br/>* <img src="Teaser.png" width="800" > ## Cite If you find this repository useful for your research, please consider citing our work: ``` @article{fang2020osirishardsoft, title={Open Source Iris Recognition Hardware and Software with Presentation Attack Detection}, | 190 |
CVRL/iris-recognition-OTS-DNN | ['iris recognition', 'iris segmentation', 'semantic segmentation'] | ['Iris Recognition with Image Segmentation Employing Retrained Off-the-Shelf Deep Neural Networks'] | code/augment.py | # iris-recognition-OTS-DNN Code and models for the ICB 2019 paper: Iris Recognition with Image Segmentation Employing Retrained Off-the-Shelf Deep Neural Networks. Pre-print available at: https://arxiv.org/abs/1901.01028 ## Contents ### code You will need to edit the scripts and programs below to point to the correct paths for your data. See the linked pages for more details, such as how to prepare the data. - dilated-cnn: scripts used to train and test the frontend and context modules, proposed by Yu and Koltun. See their [training page](https://github.com/fyu/dilation/blob/master/docs/training.md) for more info. - DRN: script used to train the dilated residual network, proposed by Yu, Koltun, and Funkhouser. See their [README](https://github.com/fyu/drn) for more info. - SegNet: MATLAB programs used to train and test SegNet. See MATLAB's old guide on [Semantic Segmentation Using Deep Learning](https://web.archive.org/web/20180527004009/https://www.mathworks.com/help/vision/examples/semantic-segmentation-using-deep-learning.html), which the training program was based on. - augment.py: script used to augment the training data ### models | 191 |
CVxTz/fingerprint_denoising | ['denoising'] | ['Deep End-to-end Fingerprint Denoising and Inpainting'] | code/aug_utils.py code/baseline_aug.py code/baseline_aug_predict.py random_crop random_channel_shift random_zoom random_saturation shift random_rotate random_flip random_brightness random_gray random_shear random_augmentation random_contrast random_shift custom_activation get_unet read_gt read_input gen batch flip_axis apply_affine_transform uniform pi apply_affine_transform uniform shift apply_affine_transform uniform apply_affine_transform uniform sum array dstack size uniform sum array clip uniform clip uniform sum array clip stack rollaxis randint random_channel_shift random_zoom random_saturation random_rotate random_flip random_brightness random_gray random_shear random_contrast random_shift concatenate Model summary Input compile imread resize imread resize list map choice random_augmentation zip append range len range len | CVxTz/fingerprint_denoising | 192 |
CW-Huang/NAF | ['density estimation', 'speech synthesis'] | ['Neural Autoregressive Flows'] | external_maf/power.py external_maf/util.py external_maf/cifar10.py external_maf/datasets/__init__.py download_datasets.py external_maf/__init__.py sf_sinewave.py external_maf/mnist.py external_maf/miniboone.py maf_experiments.py external_maf/bsds300.py steps_plot.py external_maf/hepmass.py vae_experiments.py ops.py external_maf/gas.py get_file load_mnist_images_np load_cifar10 ParanoidURLopener Progbar load_batch args2fn model main parse_args MAF check_args load_omniglot_image load_mnist_image load_cifar10_image load_bmnist_image InputOnly load_maf_data DatasetWrapper model_1d energy1 Sinewave step sigmoids model VAE main parse_args check_args BSDS300 CIFAR10 GAS get_correlation_numbers load_data load_data_and_clean load_data_and_clean_and_split load_data_no_discrete_normalised_as_array HEPMASS load_data_no_discrete load_data_no_discrete_normalised load_data load_data_normalised load_data MINIBOONE MNIST POWER load_data_split_with_noise load_data load_data_normalised discrete_sample load logit ess_importance probs2contours ess_mcmc calc_whitening_transform copy_model_parms one_hot_encode isdistribution plot_pdf_marginals disp_imdata isposint plot_hist_marginals logistic save whiten make_folder join print extractall retrieve close open expanduser makedirs load items list reshape close open get_file join str print reshape zeros range load_batch add_argument ArgumentParser log_dir result_dir dataset save_dir makedirs without_keys model loads valid_loader save_dir seed args2fn exit parse_args to_train update epoch test_loader resume manual_seed read evaluate print fn isfile train reshape format astype load astype OneHotEncoder open arange reshape_data astype delete choice loadmat mul pi permute sin sum anneal replace load read_pickle drop corr sum get_correlation_numbers mean any load_data std drop int as_matrix read_csv load_data drop mean std load_data_no_discrete int T 
Counter load_data_no_discrete_normalised append load int mean vstack load_data std int RandomState rand hstack shuffle delete load_data zeros load_data_split_with_noise subplots ndarray isinstance mpl_connect flatten set_visible plot_page prod rand sum zeros_like mean shape sum range ones_like asarray cumsum reshape flatten shape argsort show list asarray subplots probs2contours plot concatenate vlines reshape set_xlim ndim shape eval linspace meshgrid contour range set_ylim show int asarray subplots vlines plot set_xlim sqrt hist range set_ylim dump close open close open T eig mean sqrt dot dot copy parms get_value set_value zip zeros makedirs | # NAF Experiments for the Neural Autoregressive Flows paper This repo depends on another library for pytorch modules: https://github.com/CW-Huang/torchkit To download datasets, please modify L21-24 of `download_datasets.py`. | 193 |
CZHQuality/AAA-Pix2pix | ['adversarial attack'] | ['A New Ensemble Adversarial Attack Powered by Long-term Gradient Memories'] | code/Main_Ensemble_Attack_2.py code/data/aligned_dataset_My_2.py code/Options.py code/Main_Single_Image_Space_Attack.py code/Main_Single_Feature_Space_Attack.py code/Main_Ensemble_Attack_IF.py code/data/aligned_dataset.py code/data/base_data_loader.py code/Loss_functions.py code/Attack_methods_library.py code/config_global.py code/data/custom_dataset_data_loader.py code/Main_Ensemble_Attack.py code/data/aligned_dataset_My.py code/data/data_loader.py code/data/image_folder.py code/Pretrained_models.py code/data/base_dataset.py attack_method_8 attack_method_13 attack_method_4 attack_method_16 attack_method_3 attack_method_10 attack_method_1 attack_method_7 attack_method_12 attack_method_5 attack_method_11 attack_method_2 attack_method_9 attack_method_15 attack_method_6 attack_method_14 ssim MS_SSIM KLLoss _ssim ms_ssim SSIM NSSLoss CCLoss _fspecial_gauss_1d gaussian_filter test test test test test tensor2saveimg My_dataload GazeGAN_1 SALICON_2 Localpix2pix MyLSTM MyLSTMCell SAM_VGG_1 Globalpix2pix ResnetBlock decoderconv_2 SalGAN_BCE dimredconv SELayer ResnetBlock_dilated DCN_2 DCN_SAM_VGG DCN_Inception GazeGAN_CSC SAM_VGG_2 SAM_ResNet encoderconv_2 GazeGAN_2 DeepGaze_only_VGG decoderconv_3 Spatial_Channel_Gate_Layer DCN_LSTM_1 AlignedDataset AlignedDataset AlignedDataset BaseDataset __flip get_transform __crop __make_power_2 __scale_width normalize get_params BaseDataLoader CustomDatasetDataLoader CreateDataset CreateDataLoader is_image_file ImageFolder default_loader make_dataset to exp shape transpose conv2d mean pow shape device to gaussian_filter _ssim repeat _fspecial_gauss_1d mean _ssim avg_pool2d mean stack repeat prod unsqueeze device append to _fspecial_gauss_1d range L1Loss data KLLoss criterion_percp zero_grad model_1 criterion_NSS resize CCLoss criterion_KL max str transpose SSIM MSELoss from_numpy NSSLoss to expand_dims 
criterion_CC range size criterion_L1 close attack_used saliency_attack_start item float saliency_attack tensor2saveimg ANTIALIAS print backward min convert write model_2 pow zeros numpy array My_dataload sorted ANTIALIAS convert make_dataset from_numpy resize open array len fromarray squeeze min transpose astype save cpu numpy max clip loadSize fineSize maximum randint Lambda Scale n_downsample_global append float int size round int size size print name AlignedDataset initialize CustomDatasetDataLoader name print initialize is_image_file join sorted append walk | # SMGEA: Serial-Mini-Group-Ensemble-Attack
**SMGEA** is a new **Black-Box** Adversarial Attack against various **Pixel-to-Pixel** Tasks, such as
**Saliency Detection, Depth Estimation, Image Translation, etc.**
This code repository is an open-source toolbox based on the PyTorch platform.
A **preliminary version** of this repository has been accepted by **AAAI 2020**:
‘‘***A New Ensemble Adversarial Attack Powered by Long-term Gradient Memories***’’
We provide 3 visualizations (GIF format) for your reference. Each GIF contains two parts:
Part-I: In the beginning still frames, the upper-left region is the original clean image, the bottom-left region is the ground-truth output of the clean image, the upper-right region is the guide image, and the bottom-right region is the ground-truth output of the guide image.
Part-II: In the following dynamic frames, the upper-left region is the crafted adversarial example, and the upper-right region is the normalized perturbation (obtained by element-wise subtraction of the clean image and the adversarial example, then min-max normalized for better observation). | 194 |
CZWin32768/seqmnist | ['machine translation'] | ['Sequence Level Training with Recurrent Neural Networks'] | seqmnist/main.py seqmnist/field.py seqmnist/trainer.py seqmnist/seqmnist_dataset.py setup.py seqmnist/img_encoder.py TgtField SrcField EncoderCNN2D conf build_dataset train build_model SeqMnistDataset SeqMnistExample SupervisedTrainer print parameters EncoderCNN2D uniform_ DecoderRNN Seq2seq train_path dev_path print SeqMnistDataset TgtField SrcField build_vocab build_dataset SupervisedTrainer build_model add_argument ArgumentParser | # Sequence MNIST ## Dataset https://drive.google.com/open?id=1I8NbuUc0vF3igpCihhryVuiNd31nlwkh ## Requirements For `seq2seq` package, refer to https://github.com/CZWin32768/pytorch-seq2seq. ``` imageio torchtext torch tqdm | 195 |
Caiyq2019/DNF | ['speaker recognition'] | ['Deep Normalization for Speaker Vectors'] | train.py tsne.py data_load.py utils.py score.py model.py | data_load dataset_prepare load_data_normalised get_data get_mask FlowSequential MADE MaskedLinear compute_eer supervise_mean_var compute_idr data_load Cosine_score load_trails cosine_scoring_by_trails initial data_normlize supervise_mean_var train get_data main plot_embedding AverageMeter save_images | # DNF
A PyTorch implementation of DNF for the authors' article ["Deep normalization for speaker vectors"](https://arxiv.org/abs/2004.04095).
The neural network structure is based on "Masked Autoregressive Flow", with source code adapted from [ikostrikov](https://github.com/ikostrikov/pytorch-flows/blob/master/README.md).
## Datasets
```bash
trainingset:Voxceleb
testset: SITW, CNCeleb
```
Follow this [link](https://pan.baidu.com/s/1NZXZhKbrJUk75FDD4_p6PQ) to download the dataset (extraction code: 8xwe). | 196 |
Caiyq2019/MG | ['speaker recognition'] | ['Deep Speaker Vector Normalization with Maximum Gaussianality Training'] | train.py tsne.py data_load.py utils.py score.py model.py | data_load dataset_prepare load_data_normalised get_data get_mask FlowSequential MADE MaskedLinear compute_eer supervise_mean_var compute_idr data_load Cosine_score load_trails cosine_scoring_by_trails initial supervise_mean_var angle_Gaussian_log_likelihood L2_Gaussian_log_likelihood data_normlize train get_data main plot_embedding AverageMeter save_images | # MG Training (Maximum Gaussianality Training)
A PyTorch implementation of MG training for the authors' article ["Deep Speaker Vector Normalization with Maximum Gaussianality Training"](https://arxiv.……)
This is a general Gaussian-distribution training method and can be used in any task that requires a Gaussian distribution in the latent space.
## Datasets
```bash
trainingset:Voxceleb
testset: SITW, CNCeleb
```
Follow this [link](https://pan.baidu.com/s/1NZXZhKbrJUk75FDD4_p6PQ) to download the dataset (extraction code: 8xwe). | 197 |
CalayZhou/MBNet | ['pedestrian detection'] | ['Improving Multispectral Pedestrian Detection by Addressing Modality Imbalance Problems'] | keras_MBNet/model/model_AP_IAFA.py keras_MBNet/parallel_model.py keras_MBNet/utils/timer.py keras_MBNet/model/__init__.py keras_MBNet/model/deform_conv.py keras_MBNet/bbox_process.py keras_MBNet/model/base_model.py train.py demo_video.py keras_MBNet/nms/py_cpu_nms.py keras_MBNet/utils/__init__.py keras_MBNet/model/FixedBatchNormalization.py keras_MBNet/model/scale_bias.py test.py keras_MBNet/model/MBNetModel.py keras_MBNet/losses.py keras_MBNet/utils/blob.py keras_MBNet/bbox_transform.py keras_MBNet/model/keras_layer_L2Normalization.py keras_MBNet/data_generators.py demo.py keras_MBNet/model/MBNetBackbone.py keras_MBNet/data_augment.py keras_MBNet/config.py keras_MBNet/nms_wrapper.py keras_MBNet/model/deform_layers.py filter_negboxes compute_targets pred_det get_target_1st_posfirst pred_pp_1st format_img filter_boxes compute_targets bbox_transform bbox_transform_inv clip_boxes Config augment_lwir _saturation_kaist _hue_kaist _brightness_kaist _scale_enum _whctrs get_target_kaist _ratio_enum calc_target_multilayer_posfirst get_anchors _mkanchors _ratio_enum2 cls_loss illumination_loss regr_loss nms ParallelModel build_model Base_model tf_map_coordinates tf_batch_map_offsets tf_flatten sp_batch_map_offsets tf_repeat sp_batch_map_coordinates tf_repeat_2d tf_batch_map_coordinates ConvOffset2D FixedBatchNormalization L2Normalization conv_block DM_aware_fusion MBNetBackbone ResNet_DMAF_Block Illumination_Gate identity_block illumination_mechanism MBNetModel AP IAFA_stage prior_probability AP_stage create_AP_IAFA IAFA Scale_bias py_cpu_nms im_list_to_blob prep_im_for_blob Timer expand_dims astype float32 bbox_transform array ones_like concatenate clip_boxes ones astype bbox_transform_inv copy ascontiguousarray compute_targets classifier_regr_std array append zeros expand_dims argmax bbox_overlaps range len concatenate clip_boxes reshape 
astype bbox_transform_inv copy nms overlap_thresh clip_boxes reshape hstack astype bbox_transform_inv copy roi_stride filter_boxes transpose log dtype exp astype shape zeros minimum maximum uniform COLOR_RGB2HSV cvtColor where uniform COLOR_RGB2HSV cvtColor where uniform COLOR_RGB2HSV cvtColor where deepcopy int asarray max _saturation_kaist _hue_kaist len astype copy uniform resize randint imread flip _brightness_kaist hstack sqrt _whctrs round _mkanchors ones _whctrs round _mkanchors len _whctrs _mkanchors asarray arange append concatenate reshape transpose vstack meshgrid zeros expand_dims array range _ratio_enum2 len concatenate ones ascontiguousarray copy classifier_regr_std compute_targets zeros expand_dims argmax bbox_overlaps range len int augment_lwir concatenate astype shuffle float32 calc_target_multilayer_posfirst append expand_dims array split reshape reduce_mean sparse_softmax_cross_entropy_with_logits to_int32 to_float maximum reduce_sum where less abs constant float32 reduce_sum maximum sigmoid cast clip_by_value log reset_default_graph Input expand_dims tile tf_flatten expand_dims tile stack gather_nd floor cast ceil array clip _get_vals_by_coords shape stack cast clip_by_value floor tf_repeat ceil range reshape repeat clip sp_batch_map_coordinates reshape shape stack cast meshgrid tf_repeat_2d tf_batch_map_coordinates range str str multiply subtract_feature tf_original3 tf_original2 tf_div multiply Lambda tf_original1 tf_resize_images tf_half_add tf_sub tf_expand_dims multiply conv_block identity_block DM_aware_fusion range conv_block print ResNet_DMAF_Block Illumination_Gate identity_block illumination_mechanism array multiply Lambda tf_score_onesub AP IAFA IAFA_stage AP_stage append maximum minimum transpose zeros max range len min astype float32 shape resize float max | # MBNet Improving Multispectral Pedestrian Detection by Addressing Modality Imbalance Problems (ECCV 2020) - paper download: https://arxiv.org/pdf/2008.03043.pdf - the 
introduction PPT: https://github.com/CalayZhou/MBNet/blob/master/MBNet-3128.pdf
- video demo: https://www.bilibili.com/video/BV1Hi4y137aS
# Usage
## 1. Dependencies
This code is tested on [Ubuntu18.04, tensorflow1.14, keras2.1.6, python3.6, cuda10.0, cudnn7.6]. | 198 |
Canjie-Luo/MORAN_v2 | ['scene text recognition'] | ['A Multi-Object Rectified Attention Network for Scene Text Recognition'] | tools/dataset.py demo.py models/moran.py models/morn.py test.py models/asrn_res.py tools/utils.py main.py models/fracPickup.py | Recognizer val trainBatch ResNet BidirectionalLSTM ASRN AttentionCell Attention Residual_block fracPickup MORAN MORN lmdbDataset randomSequentialSampler resizeNormalize averager loadData strLabelConverterForAttention get_torch_version data decode DataLoader max view add iter append encode next range cat BidirDecoder averager loadData size MORAN zip float criterion print min len criterion backward loadData MORAN zero_grad step encode next cat BidirDecoder copy_ get_torch_version split | # MORAN: A Multi-Object Rectified Attention Network for Scene Text Recognition
![](https://img.shields.io/badge/version-v2-brightgreen.svg)
| <center>Python 2.7</center> | <center>Python 3.6</center> |
| :---: | :---: |
| <center>[![Build Status](https://travis-ci.org/Canjie-Luo/MORAN_v2.svg?branch=master)](https://travis-ci.org/Canjie-Luo/MORAN_v2)</center> | <center>[![Build Status](https://travis-ci.org/Canjie-Luo/MORAN_v2.svg?branch=master)](https://travis-ci.org/Canjie-Luo/MORAN_v2)</center> |
MORAN is a network with a rectification mechanism for general scene text recognition. The paper (accepted to appear in Pattern Recognition, 2019) is available on [arXiv](https://arxiv.org/abs/1901.03003); the [final](https://www.sciencedirect.com/science/article/pii/S0031320319300263) version is available now. [Here is a brief introduction in Chinese.](https://mp.weixin.qq.com/s/XbT_t_9C__KdyCCw8CGDVA)
![](demo/MORAN_v2.gif)
## Recent Update
- 2019.03.21 Fix a bug about Fractional Pickup. | 199 |