The full dataset viewer is not available (click to read why). Only showing a preview of the rows.
The dataset generation failed because of a cast error
Error code:   DatasetGenerationCastError
Exception:    DatasetGenerationCastError
Message:      An error occurred while generating the dataset

All the data files must have the same columns, but at some point there are 1 new columns ({'decision'})

This happened while the csv dataset builder was generating data using

hf://datasets/CIKM-23/MetaRev/conference_papers.csv (at revision e57202aa741f41e39de67134e3d186861f4247d7)

Please either edit the data files to have matching columns, or separate them into different configurations (see docs at https://hf.co/docs/hub/datasets-manual-configuration#multiple-configurations)
Traceback:    Traceback (most recent call last):
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 2011, in _prepare_split_single
                  writer.write_table(table)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 585, in write_table
                  pa_table = table_cast(pa_table, self._schema)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2302, in table_cast
                  return cast_table_to_schema(table, schema)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2256, in cast_table_to_schema
                  raise CastError(
              datasets.table.CastError: Couldn't cast
              paper_id: int64
              abstract: string
              title: string
              conference: string
              forum_id: string
              peer_reviews: string
              meta_review: string
              author_reply: string
              level_4_1_replies: string
              level_4_2_replies: string
              level_4_3_replies: string
              decision: string
              keywords: string
              -- schema metadata --
              pandas: '{"index_columns": [{"kind": "range", "name": null, "start": 0, "' + 1842
              to
              {'paper_id': Value(dtype='int64', id=None), 'abstract': Value(dtype='string', id=None), 'title': Value(dtype='string', id=None), 'conference': Value(dtype='string', id=None), 'forum_id': Value(dtype='string', id=None), 'peer_reviews': Value(dtype='string', id=None), 'meta_review': Value(dtype='string', id=None), 'keywords': Value(dtype='string', id=None), 'author_reply': Value(dtype='string', id=None), 'level_4_1_replies': Value(dtype='string', id=None), 'level_4_2_replies': Value(dtype='string', id=None), 'level_4_3_replies': Value(dtype='string', id=None)}
              because column names don't match
              
              During handling of the above exception, another exception occurred:
              
              Traceback (most recent call last):
                File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1321, in compute_config_parquet_and_info_response
                  parquet_operations = convert_to_parquet(builder)
                File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 935, in convert_to_parquet
                  builder.download_and_prepare(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1027, in download_and_prepare
                  self._download_and_prepare(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1122, in _download_and_prepare
                  self._prepare_split(split_generator, **prepare_split_kwargs)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1882, in _prepare_split
                  for job_id, done, content in self._prepare_split_single(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 2013, in _prepare_split_single
                  raise DatasetGenerationCastError.from_cast_error(
              datasets.exceptions.DatasetGenerationCastError: An error occurred while generating the dataset
              
              All the data files must have the same columns, but at some point there are 1 new columns ({'decision'})
              
              This happened while the csv dataset builder was generating data using
              
              hf://datasets/CIKM-23/MetaRev/conference_papers.csv (at revision e57202aa741f41e39de67134e3d186861f4247d7)
              
              Please either edit the data files to have matching columns, or separate them into different configurations (see docs at https://hf.co/docs/hub/datasets-manual-configuration#multiple-configurations)

Need help to make the dataset viewer work? Make sure to review how to configure the dataset viewer, and open a discussion for direct support.

paper_id
int64
abstract
string
title
string
conference
string
forum_id
string
peer_reviews
string
meta_review
string
keywords
string
author_reply
string
level_4_1_replies
string
level_4_2_replies
string
level_4_3_replies
string
682
In cities with tall buildings, emergency responders need an accurate floor level location to find 911 callers quickly. We introduce a system to estimate a victim's floor level via their mobile device's sensor data in a two-step process. First, we train a neural network to determine when a smartphone enters or exits a building via GPS signal changes. Second, we use a barometer equipped smartphone to measure the change in barometric pressure from the entrance of the building to the victim's indoor location. Unlike impractical previous approaches, our system is the first that does not require the use of beacons, prior knowledge of the building infrastructure, or knowledge of user behavior. We demonstrate real-world feasibility through 63 experiments across five different tall buildings throughout New York City where our system predicted the correct floor level with 100% accuracy.
Predicting Floor-Level for 911 Calls with Neural Networks and Smartphone Sensor Data
ICLR.cc/2018/Conference
ryBnUWb0b
[{'ddate': None, 'original': None, 'tddate': 1511728326898, 'tmdate': 1515642491552, 'tcdate': 1511728310814, 'number': 1, 'cdate': 1511728310814, 'id': 'B11TNj_gM', 'invitation': 'ICLR.cc/2018/Conference/-/Paper682/Official_Review', 'forum': 'ryBnUWb0b', 'replyto': 'ryBnUWb0b', 'signatures': ['ICLR.cc/2018/Conference/Paper682/AnonReviewer2'], 'readers': ['everyone'], 'content': {'title': 'A simple but useful method that serves a practical purpose well; improvements needed in writing and experimental comparisons.', 'rating': '7: Good paper, accept', 'review': "The paper proposes a two-step method to determine which floor a mobile phone is on inside a tall building. \\nAn LSTM RNN classifier analyzes the changes/fading in GPS signals to determine whether a user has entered a building. Using the entrance point's barometer reading as a reference, the method calculates the relative floor the user has moved to using a well known relationship between heights and barometric readings.\\n\\nThe paper builds on a simple but useful idea and is able to develop it into a basic method for the goal. The method has minimal dependence on prior knowledge and is thus expected to have wide applicability, and is found to be sufficiently successful on data collected from a real world context. The authors present some additional explorations on the cases when the method may run into complications.\\n\\nThe paper could use some reorganization. The ideas are presented often out of order and are repeated in cycles, with some critical details that are needed to understand the method revealed only in the later cycles. Most importantly, it should be stated upfront that the outdoor-indoor transition is determined using the loss of GPS signals. Instead, the paper elaborates much on the neural net model but delays until the middle of p.4 to state this critical fact. However once this fact is stated, it is obvious that the neural net model is not the only solution.\\n\\nThe RNN model for Indoor/Outdoor determination is compared to several baseline classifiers. However these are not the right methods to compare to -- at least, it is not clear how you set up the vector input to these non-auto-regressive classifiers. You need to compare your model to a time series method that includes auto-regressive terms, or other state space methods like Markov models or HMMs.\\n\\nOther questions:\\n\\np.2, Which channel's RSSI is the one included in the data sample per second?\\n\\np.4, k=3, what is k?\\n\\nDo you assume that the entrance is always at the lowest floor? What about basements or higher floor entrances? Also, you may continue to see good GPS signals in elevators that are mounted outside a building, and by the time they fade out, you can be on any floor reached by those elevators.\\n\\nHow does each choice of your training parameters affect the performance? e.g. number of epoches, batch size, learning rate. What are the other architectures considered? What did you learn about which architecture works and which does not? Why?\\n\\nAs soon as you start to use clustering to help in floor estimation, you are exploiting prior knowledge about previous visits to the building. This goes somewhat against the starting assumption and claim.", 'confidence': '5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature'}, 'writers': [], 'nonreaders': []}, {'tddate': None, 'ddate': None, 'original': None, 'tmdate': 1515642491514, 'tcdate': 1511800514188, 'number': 2, 'cdate': 1511800514188, 'id': 'ryca0nYef', 'invitation': 'ICLR.cc/2018/Conference/-/Paper682/Official_Review', 'forum': 'ryBnUWb0b', 'replyto': 'ryBnUWb0b', 'signatures': ['ICLR.cc/2018/Conference/Paper682/AnonReviewer1'], 'readers': ['everyone'], 'content': {'title': 'The paper combines existing methods to outperform baseline methods on floor level estimation. Limitations of their approach are not explored.', 'rating': '6: Marginally above acceptance threshold', 'review': 'The authors motivate the problem of floor level estimation and tackle it with a RNN. The results are good. The models the authors compare to are well chosen. As the paper foremost provides application (and combination) of existing methods it would be benefitial to know something about the limitations of their approach and about the observed prequesits. ', 'confidence': '3: The reviewer is fairly confident that the evaluation is correct'}, 'writers': [], 'nonreaders': []}, {'tddate': None, 'ddate': None, 'original': None, 'tmdate': 1516123364372, 'tcdate': 1511825702741, 'number': 3, 'cdate': 1511825702741, 'id': 'ry1E-75eG', 'invitation': 'ICLR.cc/2018/Conference/-/Paper682/Official_Review', 'forum': 'ryBnUWb0b', 'replyto': 'ryBnUWb0b', 'signatures': ['ICLR.cc/2018/Conference/Paper682/AnonReviewer3'], 'readers': ['everyone'], 'content': {'title': 'A fairly simple application of existing methods to a problem, and there remain some methodological issues', 'rating': '6: Marginally above acceptance threshold', 'review': "Update: Based on the discussions and the revisions, I have improved my rating. However I still feel like the novelty is somewhat limited, hence the recommendation.\\n\\n======================\\n\\nThe paper introduces a system to estimate a floor-level via their mobile device's sensor data using an LSTM to determine when a smartphone enters or exits a building, then using the change in barometric pressure from the entrance of the building to indoor location. Overall the methodology is a fairly simple application of existing methods to a problem, and there remain some methodological issues (see below).\\n\\nGeneral Comments\\n- The claim that the bmp280 device is in most smartphones today doesn’t seem to be backed up by the “comScore” reference (a simple ranking of manufacturers). Please provide the original source for this information.\\n- Almost all exciting results based on RNNs are achieved with LSTMs, so calling an RNN with LSTM hidden units a new name IOLSTM seems rather strange - this is simply an LSTM.\\n- There exist models for modelling multiple levels of abstraction, such as the contextual LSTM of [1]. This would be much more satisfying that the two level approach taken here, would likely perform better, would replace the need for the clustering method, and would solve issues such as the user being on the roof. The only caveat is that it may require an encoding of the building (through a one-hot encoding) to ensure that the relationship between the floor height and barometric pressure is learnt. For unseen buildings a background class could be used, the estimators as used before, or aggregation of the other buildings by turning the whole vector on.\\n- It’s not clear if a bias of 1 was added to the forget gate of the LSTM or not. This has been shown to improve results [2].\\n- Overall the whole pipeline feels very ad-hoc, with many hand-tuned parameters. Notwithstanding the network architecture, here I’m referring to the window for the barometric pressure, the Jaccard distance threshold, the binary mask lengths, and the time window for selecting p0.\\n- Are there plans to release the data and/or the code for the experiments? Currently the results would be impossible to reproduce.\\n- The typo of accuracy given by the authors is somewhat worrying, given that the result is repeated several times in the paper.\\n\\nTypographical Issues\\n- Page 1: ”floor-level accuracy” back ticks\\n- Page 4: Figure 4.1→Figure 1; Nawarathne et al Nawarathne et al.→Nawarathne et al.\\n- Page 6: ”carpet to carpet” back ticks\\n- Table 2: What does -4+ mean?\\n- References. The references should have capitalisation where appropriate.For example, Iodetector→IODetector, wi-fi→Wi-Fi, apple→Apple, iphone→iPhone, i→I etc.\\n\\n[1] Shalini Ghosh, Oriol Vinyals, Brian Strope, Scott Roy, Tom Dean, and LarryHeck. Contextual LSTM (CLSTM) models for large scale NLP tasks. arXivpreprint arXiv:1602.06291, 2016.\\n[2] Rafal Jozefowicz, Wojciech Zaremba, and Ilya Sutskever. An empirical exploration of recurrent network architectures. InProceedings of the 32nd International Conference on Machine Learning (ICML-15), pages 2342–2350,2015", 'confidence': '4: The reviewer is confident but not absolutely certain that the evaluation is correct'}, 'writers': [], 'nonreaders': []}]
Reviewers agree that the paper is well done and addresses an interesting problem, but uses fairly standard ML techniques. The authors have responded to rebuttals with careful revisions, and improved results.
['Recurrent Neural Networks', 'RNN', 'LSTM', 'Mobile Device', 'Sensors']
[{'tddate': None, 'ddate': None, 'tmdate': 1515129408153, 'tcdate': 1515128426667, 'number': 8, 'cdate': 1515128426667, 'id': 'S1mOIY2mf', 'invitation': 'ICLR.cc/2018/Conference/-/Paper682/Official_Comment', 'forum': 'ryBnUWb0b', 'replyto': 'ryBnUWb0b', 'signatures': ['ICLR.cc/2018/Conference/Paper682/Authors'], 'readers': ['everyone'], 'writers': ['ICLR.cc/2018/Conference/Paper682/Authors'], 'content': {'title': 'HMM added, code released, models verified again', 'comment': "Thank you once again for your feedback during these last few weeks. We've gone ahead and completed the following: \\n1. Added the HMM baseline\\n2. Reran all the models and updated the results. The LSTM and feedforward model performed the same on the test set. We've reworded the results and method page to reflect this.\\n3. By increasing the classifier accuracy we improved the floor-prediction task to 100% with no margin of error on the floor predictions.\\n4. We tried the hierarchical LSTM approach as suggested but did not get a model to work in the few weeks we experimented with it. It looks promising, but it'll need more experimentation. We included this approach in future works section.\\n5. We released all the code at this repository: https://github.com/blindpaper01/paper_FMhXSlwRYpUtuchTv/ \\n\\nAlthough the code is mostly organized, works and is commented it'll be polished up once it needs to be released to the broader community. The Sensory app was not released yet to preserve anonymity. \\n\\n6. Fixed typos (some of the numbering typos are actually from the ICLR auto-formatting file).\\n\\nPlease let us know if there's anything else you'd like us to clarify.\\nThank you so much for your feedback once again!\\n"}, 'nonreaders': []}, {'tddate': None, 'ddate': None, 'tmdate': 1514987162488, 'tcdate': 1514987162488, 'number': 6, 'cdate': 1514987162488, 'id': 'H1zsR8cmz', 'invitation': 'ICLR.cc/2018/Conference/-/Paper682/Official_Comment', 'forum': 'ryBnUWb0b', 'replyto': 'ryBnUWb0b', 'signatures': ['ICLR.cc/2018/Conference/Paper682/AnonReviewer3'], 'readers': ['everyone'], 'writers': ['ICLR.cc/2018/Conference/Paper682/AnonReviewer3'], 'content': {'title': 'Drop in accuracy (?) and baselines', 'comment': 'Can you explain why in the table 1 in the revision from 29th October the validation and test accuracy of the LSTM are 0.949 and 0.911 and in the most recent version they have dropped to 0.935 and 0.898 (worse than the baselines)?\\n\\nAlso I agree with the statement by reviewer 2:\\n\\n"The RNN model for Indoor/Outdoor determination is compared to several baseline classifiers. However these are not the right methods to compare to -- at least, it is not clear how you set up the vector input to these non-auto-regressive classifiers. You need to compare your model to a time series method that includes auto-regressive terms, or other state space methods like Markov models or HMMs."\\n\\nIt seems like no changes have been made to address this.'}, 'nonreaders': []}]
[{'tddate': None, 'ddate': None, 'tmdate': 1515128678906, 'tcdate': 1515128678906, 'number': 10, 'cdate': 1515128678906, 'id': 'B1kuDK27M', 'invitation': 'ICLR.cc/2018/Conference/-/Paper682/Official_Comment', 'forum': 'ryBnUWb0b', 'replyto': 'B11TNj_gM', 'signatures': ['ICLR.cc/2018/Conference/Paper682/Authors'], 'readers': ['everyone'], 'writers': ['ICLR.cc/2018/Conference/Paper682/Authors'], 'content': {'title': 'HMM added', 'comment': "Hello. We've added the HMM baseline. We apologize for the delay, we wanted to make sure we set the HMM baseline as rigorous as possible.\\n\\nThe code is also available for your review. \\n\\nThank you once again for your feedback!\\n"}, 'nonreaders': []}, {'tddate': None, 'ddate': None, 'tmdate': 1515128757246, 'tcdate': 1515128757246, 'number': 11, 'cdate': 1515128757246, 'id': 'rJp2Dt3XG', 'invitation': 'ICLR.cc/2018/Conference/-/Paper682/Official_Comment', 'forum': 'ryBnUWb0b', 'replyto': 'ryca0nYef', 'signatures': ['ICLR.cc/2018/Conference/Paper682/Authors'], 'readers': ['everyone'], 'writers': ['ICLR.cc/2018/Conference/Paper682/Authors'], 'content': {'title': 'Code released, results updated', 'comment': "Dear reviewer,\\n\\nWe've released a main update listed above. Please let us know if there's anything we can help clarify! \\n\\nThank you once again for your feedback!"}, 'nonreaders': []}, {'tddate': None, 'ddate': None, 'tmdate': 1512575431926, 'tcdate': 1512575431926, 'number': 5, 'cdate': 1512575431926, 'id': 'r1gRb9SWG', 'invitation': 'ICLR.cc/2018/Conference/-/Paper682/Official_Comment', 'forum': 'ryBnUWb0b', 'replyto': 'B11TNj_gM', 'signatures': ['ICLR.cc/2018/Conference/Paper682/Authors'], 'readers': ['everyone'], 'writers': ['ICLR.cc/2018/Conference/Paper682/Authors'], 'content': {'title': 'New results. We turned the problem into classification by fixing a window in the series of size (k=3). Updating paper with structural suggestions.', 'comment': 'Thank you for your feedback! We\\'re working on adding your suggestions and will post an update in the next few weeks.\\n\\nWanted to let you know we\\'ve improved the results from 91% to 100% by adjusting our regularization mechanism in the LSTM. We\\'ll make the appropriate changes to the paper.\\n\\n"The paper could use some reorganization"\\n1. Agreed and the updated draft will have:\\n - Cleaner organization\\n - Upfront clarification about the GPS signal\\n - Shortened discussion about the neural net model\\n\\n"The RNN model for Indoor/Outdoor determination is compared to several baseline classifiers."\\n2. The problem is reduced to classification by creating a fixed window of width k (in our case, k=3) where the middle point is what we\\'re trying to classify as indoors/outdoors. \\n - Happy to add the HMM comparison.\\n - Happy to add a time series comparison.\\n\\n"p.2, Which channel\\'s RSSI is the one included in the data sample per second?\\n"\\n3. We get the RSSI strength as proxied by the iPhone status bar. Unfortunately, the API to access the details of that signal is private. Therefore, we don\\'t have that detailed information. However, happy to add clarification about how exactly we\\'re getting that signal (also available in the sensory app code).\\n\\n\\n4. k is the window size. Will clarify this. \\n\\n"Do you assume that the entrance is always at the lowest floor? What about basements or higher floor entrances? "\\n\\n5. We actually don\\'t assume the entrance is on the lower floors. In fact, one of the buildings that we test in has entrances 4 stories appart. This is where the clustering method shines. As soon as the user enters the building through one of those lower entrances, the floor-level indexes will update because it will detect another cluster.\\n\\n\\n"Also, you may continue to see good GPS signals in elevators that are mounted outside a building, and by the time they fade out, you can be on any floor reached by those elevators."\\n6. Yup, this is true. Unfortunately this method does heavily rely on the indoor/outdoor classifier. \\n - We\\'ll add a brief discussion to highlight this issue.\\n\\n\\n"How does each choice of your training parameters affect the performance? e.g. number of epoches, batch size, learning rate. What are the other architectures considered? What did you learn about which architecture works and which does not? Why?\\n"\\n7. We can add a more thorough description about this and provide training logs in the code that give visibility into the parameters for each experiment and the results.\\n - The window choice (k) actually might be the most critical hyperparameter (next to learning rate). The general pattern is that a longer window did not help much. \\n - The fully connected network actually does surprisingly well but the RNN generalizes slightly better. A 1-layer RNN did not provide much modeling power. It was the multi-layer model that added the needed complexity to capture these relationships. We also tried bi-directional but it failed to perform well. \\n\\n"As soon as you start to use clustering to help in floor estimation, you are exploiting prior knowledge about previous visits to the building. This goes somewhat against the starting assumption and claim.\\n"\\n8. Fair point. We provide a prior for each situation will will get you pretty close to the correct floor-level. However, it\\'s impossible to get more accurate without building plans, beacons or some sort of learning. We consider the clustering method more of the learning approach: It updates the estimated floor heights as either the same user or other users walk in that building. In the case where the implementer of the system (ie. A company), only wants to use a single-user\\'s information and keep it 100% on their device, the clustering system will still work using that user\\'s repeated visits. In the case where a central database might aggregate this data, the clusters for each building will develop a lot faster and converge on the true distribution of floor heights in a buillding.'}, 'nonreaders': []}, {'tddate': None, 'ddate': None, 'tmdate': 1512627818079, 'tcdate': 1512488846582, 'number': 3, 'cdate': 1512488846582, 'id': 'SJw9Jr4-G', 'invitation': 'ICLR.cc/2018/Conference/-/Paper682/Official_Comment', 'forum': 'ryBnUWb0b', 'replyto': 'ry1E-75eG', 'signatures': ['ICLR.cc/2018/Conference/Paper682/Authors'], 'readers': ['everyone'], 'writers': ['ICLR.cc/2018/Conference/Paper682/Authors'], 'content': {'title': 'New model accuracy is 100% with no margin of error. Added device references, discussion about new model, and code + data can be public if requested beforehand', 'comment': '\\nThank you so much for your valuable feedback! I want to preface the breakdown below by letting you know that we added time-distributed dropout which helped our model\\'s accuracy. The new accuracy is 100% with no margin of error in the floor number.\\n\\n1. As of June 2017 the market share of phones in the US is 44.9% Apple and 29.1% Samsung [1]. 74% are iPhone 6 or newer [2]. The iPhone 6 has a barometer [3]. Models after the 6 still continue to have a barometer. \\nFor the Samsung phones, the Galaxy s5 is the most popular [4], and has a barometer [5].\\n\\n\\n[1] https://www.prnewswire.com/news-releases/comscore-reports-june-2017-us-smartphone-subscriber-market-share-300498296.html\\n[2] https://s3.amazonaws.com/open-source-william-falcon/911/2017_US_Cross_Platform_Future_in_Focus.pdf\\n[3] https://support.apple.com/kb/sp705?locale=en_US\\n[4] https://deviceatlas.com/blog/most-popular-smartphones-2016\\n[5] https://news.samsung.com/global/10-sensors-of-galaxy-s5-heart-rate-finger-scanner-and-more\\n\\n2. Makes sense, we separated it for the non deep learning audience trying to understand it. However, happy to update everything to say LSTM.\\n3. Thanks for this great suggestion. We had experimented with end-to-end models but decided against it. We did have a seq2seq model that attempted to turn the sequence of readings into a sequence of meter offsets. It did not fully work, but we\\'re still experimenting with it. This model does not however get rid of the clustering step. \\n\\nAn additional benefit of separating this step from the rest of the model is that it can be used as a stand-alone indoor/outdoor classifier. \\n\\nI\\'ll address your concerns one at a time:\\n a. In which task would it perform better? The indoor-outdoor classification task or the floor prediction task?\\n c. What about this model would solve the issue of the user being on the roof?\\n d. Just to make sure I understand, the one-hot encoding suggestion aims to learn a mapping between the floor height and the barometric pressure which in turn removes the need for clustering?\\n e. This sounds like an interesting approach, but seems to fall outside of the constraint of having a self-contained model which did not need prior knowledge. Generating a one-hot encoding for every building in the world without a central repository of building plans makes this intractable.\\n\\n4. We used the bias (tensorflow LSTM cell). https://www.tensorflow.org/api_docs/python/tf/contrib/rnn/LSTMCell\\n5. Happy to add explanations for why the "ad-hoc" parameters were chosen:\\n a. Jaccard window, binary mask lengths, and window length were chosen via grid search.\\n b. Will add those details to the paper.\\n\\n6. Yes! All the data + code will be made public after reviews. However, if you feel strongly about having it before, we can make it available sooner through an anonymous repository. In addition, we\\'re planning on releasing a basic iOS app which you\\'ll be able to download from the app store to run the model on your phone and see how it works on any arbitrary building for you.\\n\\n7. Yes, many typos. Apologize for that. We did a last minute typo review too close to the deadline and missed those issues. This is in fact going to change now that we\\'ve increased the model accuracy to 100% with no floor margin of error.\\n\\nWe\\'re updating the paper now and will submit a revised version in the coming weeks'}, 'nonreaders': []}, {'tddate': None, 'ddate': None, 'tmdate': 1512489112297, 'tcdate': 1512489112297, 'number': 4, 'cdate': 1512489112297, 'id': 'Hkesgr4Zf', 'invitation': 'ICLR.cc/2018/Conference/-/Paper682/Official_Comment', 'forum': 'ryBnUWb0b', 'replyto': 'ryca0nYef', 'signatures': ['ICLR.cc/2018/Conference/Paper682/Authors'], 'readers': ['everyone'], 'writers': ['ICLR.cc/2018/Conference/Paper682/Authors'], 'content': {'title': 'Appendix A, section B has potential pitfalls', 'comment': "Thank you for your valuable feedback!\\nIn Appendix A, section B we provide a lengthy discussion about potential pitfalls of our system in a real-world scenario and offer potential solutions.\\n\\nWas there something in addition to this that you'd like to see?"}, 'nonreaders': []}]
[]
[]
41
We consider the problem of exploration in meta reinforcement learning. Two new meta reinforcement learning algorithms are suggested: E-MAML and ERL2. Results are presented on a novel environment we call 'Krazy World' and a set of maze environments. We show E-MAML and ERL2 deliver better performance on tasks where exploration is important.
Some Considerations on Learning to Explore via Meta-Reinforcement Learning
ICLR.cc/2018/Conference
Skk3Jm96W
[{'tddate': None, 'ddate': None, 'original': None, 'tmdate': 1515642445131, 'tcdate': 1511540774475, 'number': 1, 'cdate': 1511540774475, 'id': 'SJ0Q_6Hlf', 'invitation': 'ICLR.cc/2018/Conference/-/Paper41/Official_Review', 'forum': 'Skk3Jm96W', 'replyto': 'Skk3Jm96W', 'signatures': ['ICLR.cc/2018/Conference/Paper41/AnonReviewer3'], 'readers': ['everyone'], 'content': {'title': 'review', 'rating': '7: Good paper, accept', 'review': 'This is an interesting paper about correcting some of the myopic bias in meta RL. For two existing algorithms (MAML, RL2) it proposes a modification of the metaloss that encourages more exploration in the first (couple of) test episodes. The approach is a reasonable one, the proposed methods seem to work, the (toy) domains are appropriate, and the paper is well-rounded with background, motivation and a lot of auxiliary results.\\n\\nNevertheless, it could be substantially improved:\\n\\nSection 4 is of mixed rigor: some aspects are formally defined and clear, others are not defined at all, and in the current state many things are either incomplete or redundant. Please be more rigorous throughout, define all the terms you use (e.g. \\\\tau, R, \\\\bar{\\\\tau}, ...). Actually, the text never makes it clear how \\\\tau and \\\\ber{\\\\tau} relate to each other: make this connection in a formal way, please.\\n\\nIn your (Elman) formulation, “L” is not an RNN, but just a feed-forward mapping?\\n\\nEquation 3 is over-complicated: it is actually just a product of two integrals, because all the terms are separable. \\n\\nThe integral notation is not meaningful: you can’t sample something in the subscript the way you would in an expectation. Please make this rigorous.\\n\\nThe variability across seems extremely large, so it might be worth averaging over mores seeds for the learning curves, so that differences are more likely to be significant.\\n\\nFigure fontsizes are too small to read, and the figures in the appendix are impossible to read. Also, I’d recommend always plotting std instead of variance, so that the units or reward remain comparable.\\n\\nI understand that you built a rich, flexible domain. But please describe the variant you actually use, cleanly, without all the other variants. Or, alternatively, run experiments on multiple variants.', 'confidence': '4: The reviewer is confident but not absolutely certain that the evaluation is correct'}, 'writers': [], 'nonreaders': []}, {'tddate': None, 'ddate': None, 'original': None, 'tmdate': 1515642445049, 'tcdate': 1511811059316, 'number': 3, 'cdate': 1511811059316, 'id': 'ryse_yclM', 'invitation': 'ICLR.cc/2018/Conference/-/Paper41/Official_Review', 'forum': 'Skk3Jm96W', 'replyto': 'Skk3Jm96W', 'signatures': ['ICLR.cc/2018/Conference/Paper41/AnonReviewer1'], 'readers': ['everyone'], 'content': {'title': 'A new exploration algorithm for reinforcement learning', 'rating': '4: Ok but not good enough - rejection', 'review': 'Summary: this paper proposes algorithmic extensions to two existing RL algorithms to improve exploration in meta-reinforcement learning. The new approach is compared to the baselines on which they are built on a new domain, and a grid-world.\\n\\nThis paper needs substantial revision. The first and primary issue is that authors claim their exists not prior work on "exploration in Meta-RL". This appears to be the case because the authors did not use the usual names for this: life-long learning, learning-to-learn, continual learning, multi-task learning, etc. If you use these terms you see that much of the work in these settings is about how to utilize and adapt exploration. Either given a "free learning phases", exploration based in internal drives (curiosity, intrinsic motivation). These are subfields with too much literature to list here. The paper under-review must survey such literature and discuss why these new approaches are a unique contribution.\\n\\nThe empirical results do not currently support the claimed contributions of the paper. The first batch of results in on a new task introduced by this paper. Why was a new domain introduced? How are existing domains not suitable. This is problematic because domains can easily exhibit designer bias, which is difficult to detect. Designing domains are very difficult and why benchmark domains that have been well vetted by the community are such an important standard. In the experiment, the parameters were randomly sampled---is a very non-conventional choice. Usually one performance a search for the best setting and then compares the results. This would introduce substantial variance in the results, requiring many more runs to make statistically significant conclusions.\\n\\nThe results on the first task are not clear. In fig4 one could argue that e-maml is perhaps performing the best, but the variance of the individual lines makes it difficult to conclude much. In fig5 rl2 gets the best final performance---do you have a hypothesis as to why? Much more analysis of the results is needed.\\n\\nThere are well-known measures used in transfer learning to access performance, such as jump-start. Why did you define new ones here?\\n \\nFigure 6 is difficult to read. Why not define the Gap and then plot the gap. These are very unclear plots especially bottom right. It\\'s your job to sub-select and highlight results to clearly support the contribution of the paper---that is not the case here. Same thing with figure 7. I am not sure what to conclude from this graph.\\n\\nThe paper, overall is very informal and unpolished. The text is littered with colloquial language, which though fun, is not as precise as required for technical documents. Meta-RL is never formally and precisely defined. There are many strong statements e.g., : "which indicates that at the very least the meta learning is able to do system identification correctly.">> none of the results support such a claim. Expectations and policies are defined with U which is never formally defined. The background states the problem of study is a finite horizon MDP, but I think they mean episodic tasks. The word heuristic is used, when really should be metric or measure. ', 'confidence': '4: The reviewer is confident but not absolutely certain that the evaluation is correct'}, 'writers': [], 'nonreaders': []}, {'tddate': None, 'ddate': None, 'original': None, 'tmdate': 1515642445086, 'tcdate': 1511629771783, 'number': 2, 'cdate': 1511629771783, 'id': 'SkE07mveG', 'invitation': 'ICLR.cc/2018/Conference/-/Paper41/Official_Review', 'forum': 'Skk3Jm96W', 'replyto': 'Skk3Jm96W', 'signatures': ['ICLR.cc/2018/Conference/Paper41/AnonReviewer2'], 'readers': ['everyone'], 'content': {'title': "Interesting direction for exploration in meta-RL. Many relations to prior work missing though. Let's wait for rebuttal.", 'rating': '6: Marginally above acceptance threshold', 'review': 'The paper proposes a trick of extending objective functions to drive exploration in meta-RL on top of two recent so-called meta-RL algorithms, Model-Agnostic Meta-Learning (MAML) and RL^2. \\n\\nPros:\\n\\n+ Quite simple but promising idea to augment exploration in MAML and RL^2 by taking initial sampling distribution into account. \\n\\n+ Excellent analysis of learning curves with variances across two different environments. Charts across different random seeds and hyperparameters indicate reproducibility. \\n\\n\\nCons/Typos/Suggestions:\\n\\n- The brief introduction to meta-RL is missing lots of related work - see below.\\n\\n- Equation (3) and equations on the top of page 4: Mathematically, it looks better to swap \\\\mathrm{d}\\\\tau and \\\\mathrm{d}\\\\bar{\\\\tau}, to obtain a consistent ordering with the double integrals. \\n\\n- In page 4, last paragraph before Section 5, “However, during backward pass, the future discounted returns for the policy gradient computation will zero out the contributions from exploratory episodes”: I did not fully understand this - please explain better. \\n\\n- It is not very clear if the authors use REINFORCE or more advanced approaches like TRPO/PPO/DDPG to perform policy gradient updates?\\n\\n- I\\'d like to see more detailed hyperparameter settings. \\n\\n- Figures 10, 11, 12, 13, 14: Too small to see clearly. I would propose to re-arrange the figures in either [2, 2]-layout, or a single column layout, particularly for Figure 14. \\n\\n- Figures 5, 6, 9: Wouldn\\'t it be better to also use log-scale on the x-axis for consistent comparison with curves in Krazy World experiments ?\\n\\n3. It could be very interesting to benchmark also in Mujoco environments, such as modified Ant Maze. \\n\\nOverall, the idea proposed in this paper is interesting. I agree with the authors that a good learner should be able to generalize to new tasks with very few trials compared with learning each task from scratch. This, however, is usually called transfer learning, not metalearning. As mentioned above, experiments in more complex, continuous control tasks with Mujoco simulators might be illuminating. \\n\\nRelation to prior work:\\n\\np 2: Authors write: "Recently, a flurry of new work in Deep Reinforcement Learning has provided the foundations for tackling RL problems that were previously thought intractable. This work includes: 1) Mnih et al. (2015; 2016), which allow for discrete control in complex environments directly from raw images. 2) Schulman et al. (2015); Mnih et al. (2016); Schulman et al. (2017); Lillicrap et al. (2015), which have allowed for high-dimensional continuous control in complex environments from raw state information."\\n\\nHere it should be mentioned that the first RL for high-dimensional continuous control in complex environments from raw state information was actually published in mid 2013:\\n\\n(1) Koutnik, J., Cuccu, G., Schmidhuber, J., and Gomez, F. (July 2013). Evolving large-scale neural networks for vision-based reinforcement learning. GECCO 2013, pages 1061-1068, Amsterdam. ACM.\\n\\np2: Authors write: "In practice, these methods are often not used due to difficulties with high-dimensional observations, difficulty in implementation on arbitrary domains, and lack of promising results."\\n\\nNot quite true - RL robots with high-dimensional video inputs and intrinsic motivation learned to explore in 2015: \\n\\n(2) Kompella, Stollenga, Luciw, Schmidhuber. Continual curiosity-driven skill acquisition from high-dimensional video inputs for humanoid robots. Artificial Intelligence, 2015.\\n\\np2: Authors write: "Although this line of work does not explicitly deal with exploration in meta learning, it remains a large source of inspiration for this work."\\n\\np2: Authors write: "To the best of our knowledge, there does not exist any literature addressing the topic of exploration in meta RL."\\n\\nBut there is such literature - see the following meta-RL work where exploration is the central issue:\\n\\n(3) J. Schmidhuber. Exploring the Predictable. In Ghosh, S. Tsutsui, eds., Advances in Evolutionary Computing, p. 579-612, Springer, 2002.\\n\\nThe RL method of this paper is the one from the original meta-RL work:\\n\\n(4) J. Schmidhuber. On learning how to learn learning strategies. Technical Report FKI-198-94, Fakultät für Informatik, Technische Universität München, November 1994.\\n\\nWhich then led to:\\n\\n(5) J. Schmidhuber, J. Zhao, N. Schraudolph. Reinforcement learning with self-modifying policies. In S. Thrun and L. Pratt, eds., Learning to learn, Kluwer, pages 293-309, 1997.\\n\\np2: "In hierarchical RL, a major focus is on learning primitives that can be reused and strung together. These primitives will frequently enable better exploration, since they’ll often relate to better coverage over state visitation frequencies. Recent work in this direction includes (Vezhnevets et al., 2017; Bacon & Precup, 2015; Tessler et al., 2016; Rusu et al., 2016)."\\n\\nThese are very recent refs - one should cite original work on hierarchical RL including:\\n\\nJ. Schmidhuber. Learning to generate sub-goals for action sequences. In T. Kohonen, K. Mäkisara, O. Simula, and J. Kangas, editors, Artificial Neural Networks, pages 967-972. Elsevier Science Publishers B.V., North-Holland, 1991.\\n\\nM. B. Ring. Incremental Development of Complex Behaviors through Automatic Construction of Sensory-Motor Hierarchies. Machine Learning: Proceedings of the Eighth International Workshop, L. Birnbaum and G. Collins, 343-347, Morgan Kaufmann, 1991.\\n\\nM. Wiering and J. Schmidhuber. HQ-Learning. Adaptive Behavior 6(2):219-246, 1997\\n\\nReferences to original work on meta-RL are missing. How does the approach of the authors relate to the following approaches? \\n\\n(6) J. Schmidhuber. Gödel machines: Fully Self-Referential Optimal Universal Self-Improvers. In B. Goertzel and C. Pennachin, eds.: Artificial General Intelligence, p. 119-226, 2006. \\n\\n(7) J. Schmidhuber. Evolutionary principles in self-referential learning, or on learning how to learn: The meta-meta-... hook. Diploma thesis, TUM, 1987. \\n \\nPapers (4,5) above describe a universal self-referential, self-modifying RL machine. It can implement and run all kinds of learning algorithms on itself, but cannot learn them by gradient descent (because it\\'s RL). Instead it uses what was later called the success-story algorithm (5) to handle all the meta-learning and meta-meta-learning etc.\\n\\nRef (7) above also has a universal programming language such that the system can learn to implement and run all kinds of computable learning algorithms, and uses what\\'s now called Genetic Programming (GP), but applied to itself, to recursively evolve better GP methods through meta-GP and meta-meta-GP etc. \\n\\nRef (6) is about an optimal way of learning or the initial code of a learning machine through self-modifications, again with a universal programming language such that the system can learn to implement and run all kinds of computable learning algorithms.\\n\\nGeneral recommendation: Accept, provided the comments are taken into account, and the relation to previous work is established.\\n', 'confidence': '5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature'}, 'writers': [], 'nonreaders': []}]
Overall, the paper is missing a couple of ingredients that would put it over the bar for acceptance: - I am mystified by statements such as "RL2 no longer gets the best final performance." from one revision to another, as I have lower confidence in the results now. - More importantly, the paper is missing comparisons of the proposed methods on *already existing* benchmarks. I agree with Reviewer 1 that a paper that only compares on benchmarks introduced in the very same submission is not as strong as it could be. In general, the idea seems interesting and compelling enough (at least on the Krazy World & maze environments) that I can recommend inviting to the workshop track.
['reinforcement learning', 'rl', 'exploration', 'meta learning', 'meta reinforcement learning', 'curiosity']
[{'tddate': None, 'ddate': None, 'tmdate': 1514120561182, 'tcdate': 1514120561182, 'number': 2, 'cdate': 1514120561182, 'id': 'rkYuBQTfG', 'invitation': 'ICLR.cc/2018/Conference/-/Paper41/Official_Comment', 'forum': 'Skk3Jm96W', 'replyto': 'Skk3Jm96W', 'signatures': ['ICLR.cc/2018/Conference/Paper41/Authors'], 'readers': ['everyone'], 'writers': ['ICLR.cc/2018/Conference/Paper41/Authors'], 'content': {'title': 'We have made substantial changes based on reviewer comments', 'comment': 'The following concerns were listed across multiple reviewers: \\n\\n1) Our paper misses citations wherein the similar problems are considered under different names. This problem is quite a large one, and it is unfortunate that the literature is at times disjoint and difficult to search. You will notice that the first and second reviewer both told us that we missed many essential references, but the crucial missed references provided by both are entirely different. Further, the third reviewer did not indicate any issues with the literature we cited. We believe this indicates the difficulty in accurately capturing prior work in this area. \\n\\n2) The graphs suffered from a variety of deficiencies. These deficiencies were both major (not clearly and convincingly demonstrating the strengths of our proposed methods) and minor (the text or graphs themselves being at times too small). \\n\\n3) There were portions of the paper that appeared hastily written or wherein spelling and grammatical mistakes were present. Further, there were claims that the reviewers felt were not sufficiently substantiated and parts of the paper lacked rigor. \\n\\nWe have addressed these concerns in the following ways: \\n\\n1) We have made an effort to address relevant prior literature. In particular, we have better explained the work’s connection to prior work by Schmidhuber et al and better explained what distinguishes this work from prior work on lifelong learning. See responses to individual reviewers for a more thorough explanation of these changes. Further, we have included an additional appendix which highlights our algorithmic development as a novel process for investigating exploration in meta-RL. We feel this appendix should completely remove any doubts regarding the novelty of this work. \\n\\n2) As for the graphs, we have fixed the presentation and layout issues. We have averaged over more seeds, which decreased the overall reported standard deviation across all algorithms, thus making the graphs more legible. We have also separated the learning curves onto multiple plots so that we can directly plot the standard deviations onto the learning curves without the plots appearing too busy. \\n\\n3) We have carefully edited the paper and fixed any substandard writing. We have also taken care to properly define notation, and made several improvements to the notation. We improved the writing’s clarity, and better highlighted the strength of our contributions. We removed several claims that the reviewers felt were too strong, and replaced them with more agreeable claims that are better supported by the experimental results. We have added an interesting new appendix which considers some of our insights in a more formal and rigorous manner. Finally, we have completely rewritten the experiments section, better explaining the experimental procedure. \\n\\n\\nPlease see the responses to individual reviews below for further elaboration on specific changes we made to address reviewer comments. \\n'}, 'nonreaders': []}]
[{'tddate': None, 'ddate': None, 'tmdate': 1514120938026, 'tcdate': 1514120938026, 'number': 5, 'cdate': 1514120938026, 'id': 'rJfePXpMG', 'invitation': 'ICLR.cc/2018/Conference/-/Paper41/Official_Comment', 'forum': 'Skk3Jm96W', 'replyto': 'SJ0Q_6Hlf', 'signatures': ['ICLR.cc/2018/Conference/Paper41/Authors'], 'readers': ['everyone'], 'writers': ['ICLR.cc/2018/Conference/Paper41/Authors'], 'content': {'title': 'We have fixed the plots and made the notation more clear and rigorous. ', 'comment': 'This is an interesting paper about correcting some of the myopic bias in meta RL. For two existing algorithms (MAML, RL2) it proposes a modification of the metaloss that encourages more exploration in the first (couple of) test episodes. The approach is a reasonable one, the proposed methods seem to work, the (toy) domains are appropriate, and the paper is well-rounded with background, motivation and a lot of auxiliary results.\\n\\nThank you for this excellent summary and compliment of the work! \\n\\n=========================================================================\\n\\nSection 4 is of mixed rigor: some aspects are formally defined and clear, others are not defined at all, and in the current state many things are either incomplete or redundant. Please be more rigorous throughout, define all the terms you use (e.g. \\\\tau, R, \\\\bar{\\\\tau}, ...). Actually, the text never makes it clear how \\\\tau and \\\\ber{\\\\tau} relate to each other: make this connection in a formal way, please.\\n\\nWe have made the suggested improvements, clarifying notation and more explicitly defining tau and \\\\bar{tau}. R was defined in the MDP notation section and means the usual thing for MDPs. \\n\\n=========================================================================\\n\\nEquation 3 is over-complicated: it is actually just a product of two integrals, because all the terms are separable. \\n\\nYes, this is true. It was not our intention to show off or otherwise make this equation seem more complex than it is. In fact, we were trying to simplify things by not skipping steps and separating the integrals prematurely. We asked our colleagues about this, and the response was mixed with half of them preferring the current notation and the other half preferring earlier separation. If you have strong feelings about this, we are willing to change it for the final version. \\n=========================================================================\\n\\n\\nThe integral notation is not meaningful: you can’t sample something in the subscript the way you would in an expectation. Please make this rigorous.\\n\\nThis is a fair comment. We were simply trying to make explicit the dependence on the sampling distribution, since it is one of the key insights of our method. However, we agree with you and have changed the notation. We have added an appendix B which seeks to examine some of these choices in a more rigorous context. \\n\\n=========================================================================\\n\\n\\nThe variability across seems extremely large, so it might be worth averaging over mores seeds for the learning curves, so that differences are more likely to be significant.\\n\\nWe did this and it helped substantially with obtaining more smooth results with more significant differences. Thank you for the suggestion it was very helpful! \\n\\n=========================================================================\\n\\n\\nFigure fontsizes are too small to read, and the figures in the appendix are impossible to read. Also, I’d recommend always plotting std instead of variance, so that the units or reward remain comparable.\\n\\nFixed. Thanks! \\n=========================================================================\\n\\n\\nI understand that you built a rich, flexible domain. But please describe the variant you actually use, cleanly, without all the other variants. Or, alternatively, run experiments on multiple variants.\\n\\nWe plan to release the source for the domain we used. But the variant we used is the one pictured in the paper, with all options turned on. We can add the environment hyperparameters to an appendix of the paper with a brief description if you think this would be useful. \\n\\n=========================================================================\\n\\nRating: 6: Marginally above acceptance threshold\\n\\nIn light of the fact we have addressed your major concerns with this work, we would appreciate it if you would consider revising your score. \\n'}, 'nonreaders': []}, {'tddate': None, 'ddate': None, 'tmdate': 1514120839212, 'tcdate': 1514120783026, 'number': 4, 'cdate': 1514120783026, 'id': 'ByDI8XpMz', 'invitation': 'ICLR.cc/2018/Conference/-/Paper41/Official_Comment', 'forum': 'Skk3Jm96W', 'replyto': 'ryse_yclM', 'signatures': ['ICLR.cc/2018/Conference/Paper41/Authors'], 'readers': ['everyone'], 'writers': ['ICLR.cc/2018/Conference/Paper41/Authors'], 'content': {'title': 'We have added discussion of prior literature and better highlighted the novelty of our contributions Part 2', 'comment': '\\nFigure 6 is difficult to read. \\n\\nThe figures have been dramatically improved. We apologize for the poor initial pass. \\n\\n=========================================================================\\n\\n\\nWhy not define the Gap and then plot the gap. \\n\\nWe feel it is illustrative to see the initial policy and the post-update policy in the same place. Actually seeing the gap between the two algorithms can be easier to interpret than the gap itself, which is a scalar. \\n\\n=========================================================================\\n\\n\\nThese are very unclear plots especially bottom right. It\\'s your job to sub-select and highlight results to clearly support the contribution of the paper---that is not the case here. Same thing with figure 7. I am not sure what to conclude from this graph.\\n\\nWe took these comments to heart and exerted a lot of effort on improving the plots. We solicited feedback from our colleagues who suggest the new plots are much more clear, readable, and better convey our points. We also took better care to clarify this in our captions. \\n\\n=========================================================================\\n\\nThe paper, overall is very informal and unpolished. The text is littered with colloquial language, which though fun, is not as precise as required for technical documents. Meta-RL is never formally and precisely defined. There are many strong statements e.g., : "which indicates that at the very least the meta learning is able to do system identification correctly.">> none of the results support such a claim. Expectations and policies are defined with U which is never formally defined. The background states the problem of study is a finite horizon MDP, but I think they mean episodic tasks. The word heuristic is used, when really should be metric or measure. \\n\\nThank you for these comments. We have cleaned up the writing. \\n========================================================================='}, 'nonreaders': []}, {'tddate': None, 'ddate': None, 'tmdate': 1514121247271, 'tcdate': 1514121213529, 'number': 7, 'cdate': 1514121213529, 'id': 'S1Sbum6ff', 'invitation': 'ICLR.cc/2018/Conference/-/Paper41/Official_Comment', 'forum': 'Skk3Jm96W', 'replyto': 'SkE07mveG', 'signatures': ['ICLR.cc/2018/Conference/Paper41/Authors'], 'readers': ['everyone'], 'writers': ['ICLR.cc/2018/Conference/Paper41/Authors'], 'content': {'title': 'We have fixed issues with plots and exposition and addressed the prior literature Part 2', 'comment': '=========================================================================\\n\\n\\n\\np2: Authors write: "In practice, these methods are often not used due to difficulties with high-dimensional observations, difficulty in implementation on arbitrary domains, and lack of promising results."\\n\\n“Not quite true - RL robots with high-dimensional video inputs and intrinsic motivation learned to explore in 2015: \\n\\n(2) Kompella, Stollenga, Luciw, Schmidhuber. Continual curiosity-driven skill acquisition from high-dimensional video inputs for humanoid robots. Artificial Intelligence, 2015.”\\n\\n\\nWe have adjusted the discussion and added this reference. \\n\\n=========================================================================\\n\\np2: Authors write: "Although this line of work does not explicitly deal with exploration in meta learning, it remains a large source of inspiration for this work."\\n\\np2: Authors write: "To the best of our knowledge, there does not exist any literature addressing the topic of exploration in meta RL."\\n\\n“But there is such literature - see the following meta-RL work where exploration is the central issue:\\n\\n(3) J. Schmidhuber. Exploring the Predictable. In Ghosh, S. Tsutsui, eds., Advances in Evolutionary Computing, p. 579-612, Springer, 2002.”\\n\\n\\nWe have adjusted the discussion and added this reference. \\n\\n=========================================================================\\n\\n\\n“J. Schmidhuber, J. Zhao, N. Schraudolph. Reinforcement learning with self-modifying policies. In S. Thrun and L. Pratt, eds., Learning to learn, Kluwer, pages 293-309, 1997.” \\n\\n\\nWe have added this reference. \\n\\n=========================================================================\\n'}, 'nonreaders': []}, {'tddate': None, 'ddate': None, 'tmdate': 1514121126056, 'tcdate': 1514121126056, 'number': 6, 'cdate': 1514121126056, 'id': 'BJCjPQpMM', 'invitation': 'ICLR.cc/2018/Conference/-/Paper41/Official_Comment', 'forum': 'Skk3Jm96W', 'replyto': 'SkE07mveG', 'signatures': ['ICLR.cc/2018/Conference/Paper41/Authors'], 'readers': ['everyone'], 'writers': ['ICLR.cc/2018/Conference/Paper41/Authors'], 'content': {'title': 'We have fixed issues with plots and exposition and addressed the prior literature. ', 'comment': "\\nFirst and foremost, we would like to apologize for having missed the relevant prior work by Schmidhuber et al. We have taken care to better connect our work to this prior work, as detailed below. \\n\\n=========================================================================\\n\\n“Equation (3) and equations on the top of page 4: Mathematically, it looks better to swap \\\\mathrm{d}\\\\tau and \\\\mathrm{d}\\\\bar{\\\\tau}, to obtain a consistent ordering with the double integrals.” \\n\\nAgreed. This change has been made. \\n\\n=========================================================================\\n\\n\\n“In page 4, last paragraph before Section 5, “However, during backward pass, the future discounted returns for the policy gradient computation will zero out the contributions from exploratory episodes”: I did not fully understand this - please explain better.”\\n\\nPlease see equation 4 in the latest draft and the accompanying text. We have better explained the procedure. \\n\\n=========================================================================\\n\\n\\nIt is not very clear if the authors use REINFORCE or more advanced approaches like TRPO/PPO/DDPG to perform policy gradient updates? \\n\\nFor E-MAML/MAML, the inner update is VPG and the outer update is PPO. For E-RL2/RL2, PPO is used. We have noted this in the experiments section of the paper. \\n\\n=========================================================================\\n\\n\\n“I'd like to see more detailed hyperparameter settings.”\\nWe have included some further discussion on the training procedure in the experiments section. Further, it is our intention to release the code for this paper, which will include the hyper-parameters used in these algorithms. We can also put these hyper-parameters into a table in an appendix of this paper, to ensure redundancy in their availability. \\n\\n=========================================================================\\n\\n\\n“Figures 10, 11, 12, 13, 14: Too small to see clearly. I would propose to re-arrange the figures in either [2, 2]-layout, or a single column layout, particularly for Figure 14.”\\n\\nWe agree. We have switched to a [2, 2]-layout. The figures are still somewhat small, though when viewed on a computer one can easily zoom in and read them more easily. Of course, we would be willing to move to a single column layout in the final version if the present figures are still too difficult to read. \\n\\n=========================================================================\\n\\n\\n“Figures 5, 6, 9: Wouldn't it be better to also use log-scale on the x-axis for consistent comparison with curves in Krazy World experiments ?”\\n\\nWe have updated the figures and made the axes consistent. \\n\\n=========================================================================\\n\\n\\n“It could be very interesting to benchmark also in Mujoco environments, such as modified Ant Maze.” \\n\\nWe have been working on continuous control tasks and would hope to include them in the final version. The difficulties we have thus far encountered with these tasks are interesting, but perhaps outside the scope of this paper at the present time. \\n\\n=========================================================================\\n\\n\\n“Overall, the idea proposed in this paper is interesting. I agree with the authors that a good learner should be able to generalize to new tasks with very few trials compared with learning each task from scratch. This, however, is usually called transfer learning, not metalearning. As mentioned above, experiments in more complex, continuous control tasks with Mujoco simulators might be illuminating. “\\n\\nSee the above comment regarding continuous control. As for difficulties with terminology, some of this stems from following the leads set in the prior literature (the MAML and RL2 papers) which refer to the problem as meta learning. We have attempted to give a more thorough overview of lifelong learning/transfer learning in this revised draft. Please see our response to the first review for further details. \\n\\n=========================================================================\\n\\n\\n“(1) Koutnik, J., Cuccu, G., Schmidhuber, J., and Gomez, F. (July 2013). Evolving large-scale neural networks for vision-based reinforcement learning. GECCO 2013, pages 1061-1068, Amsterdam. ACM.” \\n\\n\\nWe have added this citation. Apologies for having missed it. This reference was actually in our bib file but for some reason did not make it into the paper proper. "}, 'nonreaders': []}, {'tddate': None, 'ddate': None, 'tmdate': 1514120743140, 'tcdate': 1514120743140, 'number': 3, 'cdate': 1514120743140, 'id': 'rJyE8QpzG', 'invitation': 'ICLR.cc/2018/Conference/-/Paper41/Official_Comment', 'forum': 'Skk3Jm96W', 'replyto': 'ryse_yclM', 'signatures': ['ICLR.cc/2018/Conference/Paper41/Authors'], 'readers': ['everyone'], 'writers': ['ICLR.cc/2018/Conference/Paper41/Authors'], 'content': {'title': 'We have added discussion of prior literature and better highlighted the novelty of our contributions. ', 'comment': 'The first and primary issue is that authors claim their exists not prior work on "exploration in Meta-RL"....The paper under-review must survey such literature and discuss why these new approaches are a unique contribution.\\n\\nWe have added numerous references to these fields in the related literature section of the paper and clarified our contribution in this context. We are interested in the problem of meta-learning for RL (which largely deals with finding initializations that are quick to adapt to new domains). This problem ends up having a different formulation from the areas mentioned above. Our specific contribution is the creation of two new algorithms that find good initializations for RL algorithms to quickly adapt to new domains, yet do not sacrifice exploratory power to obtain these initializations. We show further that one can consider a large number of interesting algorithms for finding initializations that are good at exploring. This is also a novel contribution. \\n=========================================================================\\n\\n\\nThe empirical results do not currently support the claimed contributions of the paper. \\n\\nThe results have been strengthened since the initial submission. It is now clear that our methods provide substantially better performance. Further, the heuristic metrics indicate they are superior at exploration. \\n\\n=========================================================================\\n\\nThe first batch of results in on a new task introduced by this paper. Why was a new domain introduced? How are existing domains not suitable. \\n\\nThe domains are gridworlds and mazes, neither of which should require this sort of justification prior to use. The gridworld does not use a standard reference implementation (we am not aware of any such implementation) and was designed so that its level of difficulty could be more easily controlled during experimentation. \\n\\n=========================================================================\\n\\nDesigning domains are very difficult and why benchmark domains that have been well vetted by the community are such an important standard\\nWe agree with this. And indeed, we ourselves have designed reference domains for RL problems that are extremely popular in the community. In these cases, the domains were usually derived from an initial paper such as this one and subsequently improved upon by the community over time. In our experience, releasing a new domain in the context of this paper aligns well with how our previous successful domains have been released. \\n=========================================================================\\n\\nIn the experiment, the parameters were randomly sampled---is a very non-conventional choice. Usually one performance a search for the best setting and then compares the results. This would introduce substantial variance in the results, requiring many more runs to make statistically significant conclusions.\\n\\nWe have averaged over many more trials and this has significantly smoothed the curves. We were trying to avoid overfitting, which is a systematic problem in the way RL results are typically reported. \\n\\n=========================================================================\\n\\n\\nThe results on the first task are not clear. In fig4 one could argue that e-maml is perhaps performing the best, but the variance of the individual lines makes it difficult to conclude much. In fig5 rl2 gets the best final performance---do you have a hypothesis as to why? Much more analysis of the results is needed.\\n\\nThe result are more clear now and RL2 no longer gets the best final performance. Also, an important thing to consider is how fast the algorithms approach their final performance. For instance, in the referenced graph, E-MAML converged within ~10 million timesteps whereas RL2 took nearly twice as long. We apologize for not making this important point more explicit in the paper. In any case, this particular comment has been outmoded. \\n\\n=========================================================================\\n\\n\\nThere are well-known measures used in transfer learning to access performance, such as jump-start. Why did you define new ones here?\\n\\nJump start is quite similar to the gap metric we consider in the paper. We have clarified this. \\n\\n=========================================================================\\n'}, 'nonreaders': []}, {'tddate': None, 'ddate': None, 'tmdate': 1514121291118, 'tcdate': 1514121291118, 'number': 8, 'cdate': 1514121291118, 'id': 'BkQIdmTMG', 'invitation': 'ICLR.cc/2018/Conference/-/Paper41/Official_Comment', 'forum': 'Skk3Jm96W', 'replyto': 'SkE07mveG', 'signatures': ['ICLR.cc/2018/Conference/Paper41/Authors'], 'readers': ['everyone'], 'writers': ['ICLR.cc/2018/Conference/Paper41/Authors'], 'content': {'title': 'We have fixed issues with plots and exposition and addressed the prior literature Part 3', 'comment': '\\n“p2: "In hierarchical RL, a major focus is on learning primitives that can be reused and strung together. These primitives will frequently enable better exploration, since they’ll often relate to better coverage over state visitation frequencies. Recent work in this direction includes (Vezhnevets et al., 2017; Bacon & Precup, 2015; Tessler et al., 2016; Rusu et al., 2016)."\\n\\n“These are very recent refs - one should cite original work on hierarchical RL including:\\n\\nJ. Schmidhuber. Learning to generate sub-goals for action sequences. In T. Kohonen, K. Mäkisara, O. Simula, and J. Kangas, editors, Artificial Neural Networks, pages 967-972. Elsevier Science Publishers B.V., North-Holland, 1991.\\n\\nM. B. Ring. Incremental Development of Complex Behaviors through Automatic Construction of Sensory-Motor Hierarchies. Machine Learning: Proceedings of the Eighth International Workshop, L. Birnbaum and G. Collins, 343-347, Morgan Kaufmann, 1991.”\\n\\nM. Wiering and J. Schmidhuber. HQ-Learning. Adaptive Behavior 6(2):219-246, 1997”\\n\\n\\nThese refs cite older work in the area, which in turn cites the work you mention. This is not a review paper and hence mentioning every prior work in a field as large as hierarchical RL is not practical nor desired. We have added a review article by Barto and your last reference on HQ learning to account for this. \\n\\n=========================================================================\\n\\n\\n\\n\\n“References to original work on meta-RL are missing. How does the approach of the authors relate to the following approaches? \\n\\n(6) J. Schmidhuber. Gödel machines: Fully Self-Referential Optimal Universal Self-Improvers. In B. Goertzel and C. Pennachin, eds.: Artificial General Intelligence, p. 119-226, 2006. \\n\\n(7) J. Schmidhuber. Evolutionary principles in self-referential learning, or on learning how to learn: The meta-meta-... hook. Diploma thesis, TUM, 1987. \\n \\nPapers (4,5) above describe a universal self-referential, self-modifying RL machine. It can implement and run all kinds of learning algorithms on itself, but cannot learn them by gradient descent (because it\\'s RL). Instead it uses what was later called the success-story algorithm (5) to handle all the meta-learning and meta-meta-learning etc.\\n\\nRef (7) above also has a universal programming language such that the system can learn to implement and run all kinds of computable learning algorithms, and uses what\\'s now called Genetic Programming (GP), but applied to itself, to recursively evolve better GP methods through meta-GP and meta-meta-GP etc. \\n\\nRef (6) is about an optimal way of learning or the initial code of a learning machine through self-modifications, again with a universal programming language such that the system can learn to implement and run all kinds of computable learning algorithms.”\\n\\nWe added several sentences regarding this to our paper. We have also connected this idea to a more broad interpretation of our work. Please see appendix B which cites this work in reference to our algorithm derivation. \\n=========================================================================\\n\\n\\nGeneral recommendation: Accept, provided the comments are taken into account, and the relation to previous work is established\\n\\nWe feel the paper now is substantially improved and we exerted significant energy addressing your concerns. Please see in particular the improved figures and heuristic metrics, as well as the improved works cited section, which address the majority of the major issues you had with this work. We would appreciate it if you could reconsider your score in light of these new revisions. \\n\\n\\n\\n========================================================================='}, 'nonreaders': []}]
[{'tddate': None, 'ddate': None, 'tmdate': 1514913787656, 'tcdate': 1514913787656, 'number': 10, 'cdate': 1514913787656, 'id': 'H1NWlrYmM', 'invitation': 'ICLR.cc/2018/Conference/-/Paper41/Official_Comment', 'forum': 'Skk3Jm96W', 'replyto': 'rJfePXpMG', 'signatures': ['ICLR.cc/2018/Conference/Paper41/AnonReviewer3'], 'readers': ['everyone'], 'writers': ['ICLR.cc/2018/Conference/Paper41/AnonReviewer3'], 'content': {'title': 'improved', 'comment': 'The revised paper is not perfect, but improved substantially, and addresses multiple issues. I raised my review score.'}, 'nonreaders': []}]
[]
455
We present Merged-Averaged Classifiers via Hashing (MACH) for $K$-classification with large $K$. Compared to traditional one-vs-all classifiers that require $O(Kd)$ memory and inference cost, MACH only need $O(d\\log{K})$ memory while only requiring $O(K\\log{K} + d\\log{K})$ operation for inference. MACH is the first generic $K$-classification algorithm, with provably theoretical guarantees, which requires $O(\\log{K})$ memory without any assumption on the relationship between classes. MACH uses universal hashing to reduce classification with a large number of classes to few independent classification task with very small (constant) number of classes. We provide theoretical quantification of accuracy-memory tradeoff by showing the first connection between extreme classification and heavy hitters. With MACH we can train ODP dataset with 100,000 classes and 400,000 features on a single Titan X GPU (12GB), with the classification accuracy of 19.28\\%, which is the best-reported accuracy on this dataset. Before this work, the best performing baseline is a one-vs-all classifier that requires 40 billion parameters (320 GB model size) and achieves 9\\% accuracy. In contrast, MACH can achieve 9\\% accuracy with 480x reduction in the model size (of mere 0.6GB). With MACH, we also demonstrate complete training of fine-grained imagenet dataset (compressed size 104GB), with 21,000 classes, on a single GPU.
MACH: Embarrassingly parallel $K$-class classification in $O(d\\log{K})$ memory and $O(K\\log{K} + d\\log{K})$ time, instead of $O(Kd)$
ICLR.cc/2018/Conference
r1RQdCg0W
[{'tddate': None, 'ddate': None, 'original': None, 'tmdate': 1515642451218, 'tcdate': 1511810907824, 'number': 3, 'cdate': 1511810907824, 'id': 'H1VwD15lG', 'invitation': 'ICLR.cc/2018/Conference/-/Paper455/Official_Review', 'forum': 'r1RQdCg0W', 'replyto': 'r1RQdCg0W', 'signatures': ['ICLR.cc/2018/Conference/Paper455/AnonReviewer2'], 'readers': ['everyone'], 'content': {'title': 'MACH: Embarrassingly parallel $K$-class classification', 'rating': '6: Marginally above acceptance threshold', 'review': 'The paper presents a hashing based scheme (MACH) for reducing memory and computation time for K-way classification when K is large. The main idea is to use R hash functions to generate R different datasets/classifiers where the K classes are mapped into a small number of buckets (B). During inference the probabilities from the R classifiers are summed up to obtain the best scoring class. The authors provide theoretical guarantees showing that both memory and computation time become functions of log(K) and thus providing significant speed-up for large scale classification problems. Results are provided on the Imagenet and ODP datasets with comparisons to regular one-vs-all classifiers and tree-based methods for speeding up classification.\\n\\nPositives\\n- The idea of using R hash functions to remap K-way classification into R B-way classification problems is fairly novel and the authors provide sound theoretical arguments showing how the K probabilities can be approximated using the R different problems.\\n- The theoritical savings in memory and computation time is fairly significant and results suggest the proposed approach provides a good trade-off between accuracy and resource costs.\\n\\nNegatives\\n- Hierarchical softmax is one of more standard techniques that has been very effective at large-scale classification. The paper does not provide comparisons with this baseline which also reduces computation time to log(K).\\n- The provided baselines LOMTree, Recall Tree are missing descriptions/citations. Without this it is hard to judge if these are good baselines to compare with.\\n- Figure 1 only shows how accuracy varies as the model parameters are varied. A better graph to include would be a time vs accuracy trade-off for all methods. \\n- On the Imagenet dataset the best result using the proposed approach is only 85% of the OAA baseline. Is there any setting where the proposed approach reaches 95% of the baseline accuracy?', 'confidence': '4: The reviewer is confident but not absolutely certain that the evaluation is correct'}, 'writers': [], 'nonreaders': []}, {'tddate': None, 'ddate': None, 'original': None, 'tmdate': 1515642451255, 'tcdate': 1511789792729, 'number': 2, 'cdate': 1511789792729, 'id': 'H1tJH9FxM', 'invitation': 'ICLR.cc/2018/Conference/-/Paper455/Official_Review', 'forum': 'r1RQdCg0W', 'replyto': 'r1RQdCg0W', 'signatures': ['ICLR.cc/2018/Conference/Paper455/AnonReviewer1'], 'readers': ['everyone'], 'content': {'title': 'Extreme multi-class classification with Hashing', 'rating': '6: Marginally above acceptance threshold', 'review': "Thanks to the authors for their feedback.\\n==============================\\nThe paper presents a method for classification scheme for problems involving large number of classes in multi-class setting. This is related to the theme of extreme classification but the setting is restricted to that of multi-class classification instead of multi-label classification. The training process involves data transformation using R hash functions, and then learning R classifiers. During prediction the probability of a test instance belonging to a class is given by the sum of the probabilities assigned by the R meta-classifiers to the meta-class in the which the given class label falls. The paper demonstrates better results on ODP and Imagenet-21K datasets compared to LOMTree, RecallTree and OAA.\\n\\nThere are following concerns regarding the paper which don't seem to be adequately addressed :\\n \\n - The paper seems to propose a method in which two-step trees are being constructed based on random binning of labels, such that the first level has B nodes. It is not intuitively clear why such a method could be better in terms of prediction accuracy than OAA. The authors mention algorithms for training and prediction, and go on to mention that the method performs better than OAA. Also, please refer to point 2 below.\\n\\n - The paper repeatedly mentions that OAA has O(Kd) storage and prediction complexity. This is however not entirely true due to sparsity of training data, and the model. These statements seem quite misleading especially in the context of text datasets such as ODP. The authors are requested to check the papers [1] and [2], in which it is shown that OAA can perform surprisingly well. Also, exploiting the sparsity in the data/models, actual model sizes for WikiLSHTC-325K from [3] can be reduced from around 900GB to less than 10GB with weight pruning, and sparsity inducing regularizers. It is not clear if the 160GB model size reported for ODP took the above suggestions into considerations, and which kind of regularization was used. Was the solver used from vowpal wabbit or packages such as Liblinear were used for reporting OAA results.\\n\\n - Lack of empirical comparison - The paper lacks empirical comparisons especially on large-scale multi-class LSHTC-1/2/3 datasets [4] on which many approaches have been proposed. For a fair comparison, the proposed method must be compared against these datasets. It would be important to clarify if the method can be used on multi-label datasets or not, if so, it needs to be evaluated on the XML datasets [3].\\n\\n[1] PPDSparse - http://www.kdd.org/kdd2017/papers/view/a-parallel-and-primal-dual-sparse-method-for-extreme-classification\\n[2] DiSMEC - https://arxiv.org/abs/1609.02521\\n[3] http://manikvarma.org/downloads/XC/XMLRepository.html\\n[4] http://lshtc.iit.demokritos.gr/LSHTC2_CFP", 'confidence': '4: The reviewer is confident but not absolutely certain that the evaluation is correct'}, 'writers': [], 'nonreaders': []}, {'tddate': None, 'ddate': None, 'original': None, 'tmdate': 1515642451295, 'tcdate': 1511759356697, 'number': 1, 'cdate': 1511759356697, 'id': 'SJB-0Mtlz', 'invitation': 'ICLR.cc/2018/Conference/-/Paper455/Official_Review', 'forum': 'r1RQdCg0W', 'replyto': 'r1RQdCg0W', 'signatures': ['ICLR.cc/2018/Conference/Paper455/AnonReviewer3'], 'readers': ['everyone'], 'content': {'title': 'Good ideas, but insufficient results', 'rating': '6: Marginally above acceptance threshold', 'review': 'The manuscript proposes an efficient hashing method, namely MACH, for softmax approximation in the context of large output space, which saves both memory and computation. In particular, the proposed MACH uses 2-universal hashing to randomly group classes, and trains a classifier to predict the group membership. It does this procedure multiple times to reduce the collision and trains a classifier for each run. The final prediction is the average of all classifiers up to some constant bias and multiplier as shown in Eq (2).\\n\\nThe manuscript is well written and easy to follow. The idea is novel as far as I know. And it saves both training time and prediction time. One unique advantage of the proposed method is that, during inference, the likelihood of a given class can be computed very efficiently without computing the expensive partition function as in traditional softmax and many other softmax variants. Another impressive advantage is that the training and prediction is embarrassingly parallel, and thus can be linearly sped up, which is very practical and rarely seen in other softmax approximation.\\n\\nThough the results on ODP dataset is very strong, the experiments still leave something to be desired.\\n(1) More baselines should be compared. There are lots of softmax variants for dealing with large output space, such as NCE, hierarchical softmax, adaptive softmax ("Efficient softmax approximation for GPUs" by Grave et. al), LSH hashing (as cited in the manuscript) and matrix factorization (adding one more hidden layer). The results of MACH would be more significant if comparison to these or some of these baselines can be available.\\n(2) More datasets should be evaluated. In this manuscript, only ODP and imagenet are evaluated. However, there are also lots of other datasets available, especially in the area of language modeling, such as one billion word dataset ("One billion\\nword benchmark for measuring progress in statistical language modeling" by Chelba et. al) and many others.\\n(3) Why the experiments only focus on simple logistic regression? With neural network, it could actually save computation and memory. For example, if one more hidden layer with M hidden units is added, then the memory consumption would be M(d+K) rather than Kd. And M could be a much smaller number, such as 512. I guess the accuracy might possibly be improved, though the memory is still linear in K.\\n\\nMinor issues:\\n(1) In Eq (3), it should be P^j_b rather than P^b_j?\\n(2) The proof of theorem 1 seems unfinished', 'confidence': '4: The reviewer is confident but not absolutely certain that the evaluation is correct'}, 'writers': [], 'nonreaders': []}]
There is a very nice discussion with one of the reviewers on the experiments, that I think would need to be battened down in an ideal setting. I'm also a bit surprised at the lack of discussion or comparison to two seemingly highly related papers: 1. T. G. Dietterich and G. Bakiri (1995) Solving Multiclass via Error Correcting Output Codes. 2. Hsu, Kakade, Langford and Zhang (2009) Multi-Label Prediction via Compressed Sensing.
['Extreme Classification', 'Large-scale learning', 'hashing', 'GPU', 'High Performance Computing']
[]
[{'tddate': None, 'ddate': None, 'tmdate': 1514112380547, 'tcdate': 1514112380547, 'number': 2, 'cdate': 1514112380547, 'id': 'HJVtSZaMz', 'invitation': 'ICLR.cc/2018/Conference/-/Paper455/Official_Comment', 'forum': 'r1RQdCg0W', 'replyto': 'H1VwD15lG', 'signatures': ['ICLR.cc/2018/Conference/Paper455/Authors'], 'readers': ['everyone'], 'writers': ['ICLR.cc/2018/Conference/Paper455/Authors'], 'content': {'title': 'Thanks for Positive Feedback', 'comment': 'Thanks for taking the time in improving our work. \\n\\n- We DID compare with log(K)running time methods (both LOMTree and RecallTree are log(K) running time not memory). Hierarchical softmax and any tree-like structure will lead to more (around twice) memory compared to the vanilla classifier. Every leaf (K leaves) requires a memory and hence the total memory is of the order 2k ( K + K/2 + ...) . Of course running time will be log(K). \\nHowever, as mentioned memory is prime bottleneck in scalability. We still have to update and store those many parameters. \\n- Although, we have provided citations. we appreciate you pointing it out. We will make it more explicit at various places.\\n- We avoided the time tradeoff because time depends on several factors like parallelism, implementation etc. For example, we can trivially parallelize across R processors. \\n- It seems there is a price for approximations on fine-grained imagenet. Even recalltree and LOMTree with twice the memory does worse than MACH. \\n\\nWe thank you again for the encouragement and hope that your opinion will be even more positive after these discussions. \\n'}, 'nonreaders': []}, {'tddate': None, 'ddate': None, 'tmdate': 1514141508014, 'tcdate': 1514109618544, 'number': 1, 'cdate': 1514109618544, 'id': 'Skch5gafG', 'invitation': 'ICLR.cc/2018/Conference/-/Paper455/Official_Comment', 'forum': 'r1RQdCg0W', 'replyto': 'H1tJH9FxM', 'signatures': ['ICLR.cc/2018/Conference/Paper455/Authors'], 'readers': ['everyone'], 'writers': ['ICLR.cc/2018/Conference/Paper455/Authors'], 'content': {'title': 'MACH seems superior (more experiments) ', 'comment': "Thanks for pointing our sparsity and also reference related. We tried compared with [1] and [2] (referred in your comment) as pointed out on ODP dataset, and we are delighted to share the results. We hope these results (below) will convince you that \\n1) we are indeed using challenging large-scale dataset. \\n2) sparsity is nice to reduce the model size, but training is prohibitively slow. We still have 40 billion parameters to think about, even if we are not storing all of them (See results of dismec) \\n3) And our proposal is blazing fast and accurate and above all simple. Afterall what will beat small (32 classes only instead of 100k) logistic regression? \\n4) Still, we stress, (to the best of our knowledge) no known method can train ODP dataset on a single Titan X.\\n\\nWe will add the new results in any future version of the paper. \\n\\nFirst of all, ODP is a large scale dataset, evident from the fact that both the methods [1] and [2] are either prohibitively slow or goes out or memory.\\n\\nIt is perfectly fine to have sparse models which will make the final model small in memory. The major hurdle is to train them. We have no idea which weights are sparse. So the only hope to always keep the memory small is some variant of iterative hard thresholding to get rid of small weights repetitively. That is what is done by \\nDismec, reference [2]. As expected, this should be very slow. \\n\\n****** Dismec Details on ODP dataset***********\\n\\nWe tried running dismec with the recommended control model set. \\nControl Model size: Set a ambiguity control hyper-parameter delta (0.01). if a value in weight matrix is between -delta and delta, prune the value because the value carries very little discriminative information of distinguishing one label against another.\\n\\nRunning time: approx. 3 models / 24h, requires 106 models for ODP dataset, approx. 35 days to finish training on Rush. We haven't finished it yet. \\nCompare this to our proposed MACH which takes 7.3 hrs on a single GPU. Afterall, we are training small logistic regression with 32 classes only, its blazing fast. No iterative thresholding, not slow training. \\n\\nFurthermore, Dismec does not come with probabilistic guarantees of log{K} memory. Sparsity is also a very specific assumption and not always the way to reduce model size. \\n\\nThe results are not surprising as in [2] sophisticated computers with 300-1000 cores were used. We use a simple machine with a single Titan X. \\n\\n********** PD-Sparse**************\\n\\nWe also ran PD-sparse a non-parallel version [1] (we couldn't find the code for [1]), but it should have same memory consumption as [1]. The difference seems regarding parallelization. We again used the ODP dataset with recommended settings. We couldn't run it. Below are details \\n\\nIt goes out of memory on our 64gb machine. So we tried using another 512GB RAM machine, it failed after consuming 70% of memory. \\n\\nTo do a cross sanity check, we ran PPD on LSHTC1 (one of the datasets used in the original paper [1]). It went out of memory on our machine (64 GB) but worked on 512 GB RAM Machine with accuracy as expected in [1]. Interestingly, the run consumed more than 343 GB of main memory. This is ten times more than the memory required for storing KD double this dataset with K =12294 and D=347255. \\n***********************************\\n\\nLet us know if you are still not convinced. We are excited about MACH, a really simple, theoretically sound algorithm, for extreme class classification. No bells and whistles, no assumption, not even sparsity."}, 'nonreaders': []}, {'tddate': None, 'ddate': None, 'tmdate': 1514115477777, 'tcdate': 1514115477777, 'number': 3, 'cdate': 1514115477777, 'id': 'HkC5WzafM', 'invitation': 'ICLR.cc/2018/Conference/-/Paper455/Official_Comment', 'forum': 'r1RQdCg0W', 'replyto': 'SJB-0Mtlz', 'signatures': ['ICLR.cc/2018/Conference/Paper455/Authors'], 'readers': ['everyone'], 'writers': ['ICLR.cc/2018/Conference/Paper455/Authors'], 'content': {'title': 'Thanks for nice comments. The methods you mentioned do not save memory', 'comment': "First of all, we appreciate your detail comments, spotting typos, and encouragement. \\n\\n(1) Hierarchical softmax and LSH does not save memory; they make memory worse compared to the vanilla classifier. \\nHierarchical softmax and any tree-like structure will lead to more (around twice) memory compared to the vanilla classifier. Every leaf (K leaves) requires memory (for a vector), and hence the total memory is of the order 2k ( K + K/2 + ...) . Of course, running time will be log(K). \\nIn theory, LSH requires K^{1 + \\\\rho} memory ( way more than K or 2K). We still need all the weights. \\nMemory is the prime bottleneck for scalability. Note prediction is parallelizable over K (then argmax) even for vanilla models. Thus prediction time is not a major barrier with parallelism.\\n\\nWe stress, (to the best of our knowledge) no known method can train ODP dataset on a single Titan X with 12GB memory. All other methods will need more than 160gb main memory. The comparison will be trivial, they all will go out of memory. \\n\\nAlso, see new comparisons with Dismec and PDsparse algorithms (similar) in comment to AnonReviewer1\\n\\nmatrix factorization (see 3) \\n\\n2) ODP is similar domain as word2vec. We are not sure, but direct classification accuracy in word2vec does not make sense (does it?), it is usually for word embeddings (or other language models) which need all the parameters as those are the required outputs, not the class label (which is argmax ). \\n\\n3) What you are mentioning (similar to matrix factorization) is a form of dimensionality reduction from D to M. As mentioned in the paper, this is orthogonal and complementary. We can treat the final layer as the candidate for MACH for more savings. As you said, just dimentionality reduction won't be logarithmic in K by itself. \\n\\n\\nWe thank you again for the encouragement and hope that your opinion will be even more favorable after the discussions mentioned above. \\n\\n\\n"}, 'nonreaders': []}]
[{'tddate': None, 'ddate': None, 'tmdate': 1514309066267, 'tcdate': 1514309066267, 'number': 4, 'cdate': 1514309066267, 'id': 'SJfRHbeQz', 'invitation': 'ICLR.cc/2018/Conference/-/Paper455/Official_Comment', 'forum': 'r1RQdCg0W', 'replyto': 'Skch5gafG', 'signatures': ['ICLR.cc/2018/Conference/Paper455/AnonReviewer1'], 'readers': ['everyone'], 'writers': ['ICLR.cc/2018/Conference/Paper455/AnonReviewer1'], 'content': {'title': 'DiSMEC on ODP dataset', 'comment': 'Thanks for the update on various points. \\n\\nI would disagree with some of the responses particularly on sparsity, on the merit of using a single Titan X and hence the projected training time mentioned for DiSMEC on ODP dataset. These are mentioned in detail below. Before that I would like to mention some of my empirical findings.\\n\\nTo verify my doubts on using DiSMEC on ODP as in the initial review, I was able to run it in a day or so, since I had access to a few hundreds cores. It turns out it gives an accuracy of 24.8% which is about 30% better than MACH, and much better than reported for the OAA performance in earlier papers such as Daume etal [1] which reported 9% on this dataset. \\n\\nFurthermore, after storing the model in sparse format, the model size was around 3.1GB, instead of 160 GB as mentioned in this and earlier papers. It would be great if the authors could verify these findings if they have access to a moderately sized cluster with a few hundred cores. If the authors then agree, it would be great to mention these in the new version of the paper for future references.\\n\\n - Sparsity : For text dataset with large number of labels such as in ODP, it is quite common for the model to be sparse. This is because, all the words/features are highly unlikely to be surely present or surely not present for each label/class. Therefore, there is bound to lots of zeros in the model. From an information theoretic view-point as well, it does not make much of a sense for ODP model to be 160GB when the training data is 4GB. Therefore, sparsity is not merely an assumption as an approximation but is a reasonable way to control model complexity and hence the model size.\\n\\n- Computational resources - The argument of the paper mainly hinges on the usage of a single Titan X. However, it is not clear what is the use-case/scenario in which one wants to train strictly on a single GPU. This needs to be appropriately emphasized and explained. On the other hand, a few hundred/thousands cores is something which typically is available in organizations/institutions which might care about problems of large sizes such as on ODP and Imagenet dataset.\\n\\nAlso, the authors can download the PPDSparse code from XMC respository or directly from the link http://www.cs.cmu.edu/~eyan/software/AsyncPDSparse.zip\\n\\n[1] Logarithmic Time One-Against-Some, ICML 2017'}, 'nonreaders': []}]
[{'tddate': None, 'ddate': None, 'tmdate': 1514315094725, 'tcdate': 1514314592603, 'number': 5, 'cdate': 1514314592603, 'id': 'S1YDsMlXf', 'invitation': 'ICLR.cc/2018/Conference/-/Paper455/Official_Comment', 'forum': 'r1RQdCg0W', 'replyto': 'SJfRHbeQz', 'signatures': ['ICLR.cc/2018/Conference/Paper455/Authors'], 'readers': ['everyone'], 'writers': ['ICLR.cc/2018/Conference/Paper455/Authors'], 'content': {'title': 'Memory usage, time ? (We want to add this comparison to the paper)', 'comment': "We are really grateful for your efforts and taking time to run dismec. \\nCould you send us details of (or link to your codes?). We would like to report this in the paper (and also the comparison with imagenet). \\nWe want to know memory usage, running time (approx a day?), how many cores. In our codes, dismec was run on a single 64GB machine, with 8 cores and one titan X. \\n\\nFurthermore, on imagenet, sparsity won't help. MACH does not need this assumption. So we need to think beyond sparsity. \\n\\nMACH has all these properties. \\n\\nThe main argument is that we can run on Titan X (< 12GB working memory) (sequentially run 25 logistic regression of size 32 classes each) in 7.2 hrs. If we run with 25 GPUs in parallel, then it can be done in 17 minutes! Compare this to about a day on a large machine. \\n\\nWe think the ability to train dataset on GPUs or even single GPU is very impactful. GPU clusters are everywhere and cheap now. If we can train in few hours on easily available single GPU or in few minutes on 25 GPUs (also cheap to have). Then why wait over a day on a high-memory, high-core machines (expensive). Furthermore, with data growing faster than our machines, any work which enhances our capability to train them is beneficial. \\n\\nWe hope you see the importance of simplicity of our method and how fast we can train with increased parallelism. 17 min on 25 Titan X. The parallelism is trivial. \\n\\nWe are happy to run any specific benchmark (head-to-head) you have in mind if that could convince you. "}, 'nonreaders': []}]
184
The goal of imitation learning (IL) is to enable a learner to imitate an expert’s behavior given the expert’s demonstrations. Recently, generative adversarial imitation learning (GAIL) has successfully achieved it even on complex continuous control tasks. However, GAIL requires a huge number of interactions with environment during training. We believe that IL algorithm could be more applicable to the real-world environments if the number of interactions could be reduced. To this end, we propose a model free, off-policy IL algorithm for continuous control. The keys of our algorithm are two folds: 1) adopting deterministic policy that allows us to derive a novel type of policy gradient which we call deterministic policy imitation gradient (DPIG), 2) introducing a function which we call state screening function (SSF) to avoid noisy policy updates with states that are not typical of those appeared on the expert’s demonstrations. Experimental results show that our algorithm can achieve the goal of IL with at least tens of times less interactions than GAIL on a variety of continuous control tasks.
Deterministic Policy Imitation Gradient Algorithm
ICLR.cc/2018/Conference
rJ3fy0k0Z
[{'tddate': None, 'ddate': None, 'original': None, 'tmdate': 1515642405594, 'tcdate': 1511726708162, 'number': 2, 'cdate': 1511726708162, 'id': 'B1nuCculG', 'invitation': 'ICLR.cc/2018/Conference/-/Paper184/Official_Review', 'forum': 'rJ3fy0k0Z', 'replyto': 'rJ3fy0k0Z', 'signatures': ['ICLR.cc/2018/Conference/Paper184/AnonReviewer1'], 'readers': ['everyone'], 'content': {'title': 'This paper proposes an extension of the generative adversarial imitation learning (GAIL) algorithm by replacing the stochastic policy of the learner with a deterministic one. Simulation results with MuJoCo physics simulator show that this simple trick reduces the amount of needed data by an order of magnitude.', 'rating': '5: Marginally below acceptance threshold', 'review': "This paper considers the problem of model-free imitation learning. The problem is formulated in the framework of generative adversarial imitation learning (GAIL), wherein we alternate between optimizing reward parameters and learner policy's parameters. The reward parameters are optimized so that the margin between the cost of the learner's policy and the expert's policy is maximized. The learner's policy is optimized (using any model-free RL method) so that the same cost margin is minimized. Previous formulation of GAIL uses a stochastic behavior policy and the RIENFORCE-like algorithms. The authors of this paper propose to use a deterministic policy instead, and apply the deterministic policy gradient DPG (Silver et al., 2014) for optimizing the behavior policy. \\nThe authors also briefly discuss the problem of the little overlap between the teacher's covered state space and the learner's. A state screening function (SSF) method is proposed to drive the learner to remain in areas of the state space that have been covered by the teacher. Although, a more detailed discussion and a clearer explanation is needed to clarify what SSF is actually doing, based on the provided formulation.\\nExcept from a few typos here and there, the paper is overall well-written. The proposed idea seems new. However, the reviewer finds the main contribution rather incremental in its nature. Replacing a stochastic policy with a deterministic one does not change much the original GAIL algorithm, since the adoption of stochastic policies is often used just to have differentiable parameterized policies, and if the action space is continuous, then there is not much need for it (except for exploration, which is done here through re-initializations anyway). My guess is that if someone would use the GAIL algorithm for real problems (e.g, robotic task), they would significantly reduce the stochasticity of the behavior policy, which would make it virtually similar in term of data efficiency to the proposed method.\\nPros:\\n- A new GAIL formulation for saving on interaction data. \\nCons:\\n- Incremental improvement over GAIL\\n- Experiments only on simulated toy problems \\n- No theoretical guarantees for the state screening function (SSF) method", 'confidence': '4: The reviewer is confident but not absolutely certain that the evaluation is correct'}, 'writers': [], 'nonreaders': []}, {'tddate': None, 'ddate': None, 'original': None, 'tmdate': 1515642405633, 'tcdate': 1511718320481, 'number': 1, 'cdate': 1511718320481, 'id': 'S1_na_OlG', 'invitation': 'ICLR.cc/2018/Conference/-/Paper184/Official_Review', 'forum': 'rJ3fy0k0Z', 'replyto': 'rJ3fy0k0Z', 'signatures': ['ICLR.cc/2018/Conference/Paper184/AnonReviewer3'], 'readers': ['everyone'], 'content': {'title': 'Hard to read', 'rating': '6: Marginally above acceptance threshold', 'review': "This paper proposes to extend the determinist policy gradient algorithm to learn from demonstrations. The method is combined with a type of density estimation of the expert to avoid noisy policy updates. It is tested on Mujoco tasks with expert demonstrations generated with a pre-trained network. \\n\\nI found the paper a bit hard to read. My interpretation is that the main original contribution of the paper (besides changing a stochastic policy for a deterministic one) is to integrate an automatic estimate of the density of the expert (probability of a state to be visited by the expert policy) so that the policy is not updated by gradient coming from transitions that are unlikely to be generated by the expert policy. \\n\\nI do think that this part is interesting and I would have liked this trick to be used with other imitation methods. Indeed, the deterministic policy is certainly helpful but it is tested in a deterministic continuous control task. So I'm not sure about how it generalizes to other tasks. Also, the expert demonstration are generated by the pre-trained network so the distribution of the expert is indeed the distribution of the optimal policy. So I'm not sure the experiments tell a lot. But if the density estimation could be combined with other methods and tested on other tasks, I think this could be a good paper. ", 'confidence': '4: The reviewer is confident but not absolutely certain that the evaluation is correct'}, 'writers': [], 'nonreaders': []}, {'tddate': None, 'ddate': None, 'original': None, 'tmdate': 1515642405555, 'tcdate': 1511789361151, 'number': 3, 'cdate': 1511789361151, 'id': 'S1tVQ5Kef', 'invitation': 'ICLR.cc/2018/Conference/-/Paper184/Official_Review', 'forum': 'rJ3fy0k0Z', 'replyto': 'rJ3fy0k0Z', 'signatures': ['ICLR.cc/2018/Conference/Paper184/AnonReviewer2'], 'readers': ['everyone'], 'content': {'title': 'Combines IRL, adversarial training, and ideas from deterministic policy gradients. Paper is hard to read. MuJoCo results are good.', 'rating': '5: Marginally below acceptance threshold', 'review': 'The paper lists 5 previous very recent papers that combine IRL, adversarial learning, and stochastic policies. The goal of this paper is to do the same thing but with deterministic policies as a way of decreasing the sample complexity. The approach is related to that used in the deterministic policy gradient work. Imitation learning results on the standard control problems appear very encouraging.\\n\\nDetailed comments:\\n\\n"s with environment" -> "s with the environment"?\\n\\n"that IL algorithm" -> "that IL algorithms".\\n\\n"e to the real-world environments" -> "e to real-world environments".\\n\\n" two folds" -> " two fold".\\n\\n"adopting deterministic policy" -> "adopting a deterministic policy".\\n\\n"those appeared on the expert’s demonstrations" -> "those appearing in the expert’s demonstrations".\\n\\n"t tens of times less interactions" -> "t tens of times fewer interactions".\\n\\nOk, I can\\'t flag all of the examples of disfluency. The examples above come from just the abstract. The text of the paper seems even less well edited. I\\'d highly recommend getting some help proof reading the work.\\n\\n"Thus, the noisy policy updates could frequently be performed in IL and make the learner’s policy poor. From this observation, we assume that preventing the noisy policy updates with states that are not typical of those appeared on the expert’s demonstrations benefits to the imitation.": The justification for filtering is pretty weak. What is the statistical basis for doing so? Is it a form of a standard variance reduction approach? Is it a novel variance reduction approach? If so, is it more generally applicable?\\n\\nUnfortunately, the text in Figure 1 is too small. The smallest font size you should use is that of a footnote in the text. As such, it is very difficult to assess the results.\\n\\nAs best I can tell, the empirical results seem impressive and interesting.\\n', 'confidence': '3: The reviewer is fairly confident that the evaluation is correct'}, 'writers': [], 'nonreaders': []}]
All of the reviewers found some aspects of the formulation and experiments interesting, but they found the paper hard to read and understand. Some of the components of the technique such as the state screening function (SSF) seem ad-hoc and heuristic without much justification. Please improve the exposition and remove the unnecessary component of the technique, or come up with better justifications.
['Imitation Learning']
[]
[{'tddate': None, 'ddate': None, 'tmdate': 1515179172495, 'tcdate': 1515179172495, 'number': 2, 'cdate': 1515179172495, 'id': 'SknsnHTQG', 'invitation': 'ICLR.cc/2018/Conference/-/Paper184/Official_Comment', 'forum': 'rJ3fy0k0Z', 'replyto': 'B1nuCculG', 'signatures': ['ICLR.cc/2018/Conference/Paper184/Authors'], 'readers': ['everyone'], 'writers': ['ICLR.cc/2018/Conference/Paper184/Authors'], 'content': {'title': 'Responses', 'comment': 'Thank you for your constructive comments on our paper. We will fix typos and clarify the role of SSF in the camera-ready version.\\n\\n> The authors also briefly discuss the problem of the little overlap between the teacher\\'s covered state space and the learner\\'s. A state screening function (SSF) method is proposed to drive the learner to remain in areas of the state space that have been covered by the teacher.\\n\\nThe main purpose of introducing a SSF is not what you mentioned. Since we use the Jacobian of reward function to derive PG as opposed to prior IL works, the Jacobian is supposed to have information about how to get close to the expert\\'s behavior for the learner. However, in the IRL objective (4), which is general in (max-margin) IRL literature, the reward function could know how the expert acts just only on the states appearing in the demonstration. In other words, the Jacobian could have information about how to get close to the expert\\'s behavior just only on states appearing in the demonstration. What we claimed in Sec.3.2 is that the Jacobian for states which does not appear in the demonstration is just garbage for the learner since it does not give any information about how to get close to the expert. The main purpose of introducing the SSF is to sweep the garbage as much as possible.\\n\\n> However, the reviewer finds the main contribution rather incremental in its nature. Replacing a stochastic policy with a deterministic one does not change much the original GAIL algorithm, since the adoption of stochastic policies is often used just to have differentiable parameterized policies, and if the action space is continuous, then there is not much need for it (except for exploration, which is done here through re-initializations anyway)\\n\\nFigure.1 shows worse performance of Ours \\\\setminus SSF which just replace a stochastic policy with a deterministic one. If Ours \\\\setminus SSF worked well, we agree with your opinion that the main contribution is just incremental. However, introducing the SSF besides replacing a stochastic policy with a deterministic one is required to imitate the expert\\'s behavior. Hence, we don\\'t agree that the proposed method is just incremental. \\n\\n> My guess is that if someone would use the GAIL algorithm for real problems (e.g, robotic task), they would reduce the stochasticity of the behavior policy, which would make it virtually similar in term of data efficiency to the proposed method.\\n\\nBecause the GAIL algorithm is an on-policy algorithm, it essentially requires much interactions for an update and never uses behavior policy. Hence, it would not make it virtually similar in term of data efficiency to the proposed method which is off-policy algorithm.\\n\\n> Cons:\\n> - Incremental improvement over GAIL\\n\\nAs mentioned above, we think that the proposed method is not just incremental improvement over GAIL. \\n\\n> - Experiments only on simulated toy problems \\n\\nWe wonder why you thought the Mujoco tasks are just "toy" problems. Even though those tasks are not real-world problems, they have not been solved until GAIL has been proposed. In addition, the variants of GAIL (Baram et al., 2017; Wang et al., 2017; Hausman et al.) also evaluated their performance using those tasks. Hence, we think that those tasks are enough difficult to solve and can be used as a well-suited benchmark to evaluate whether the proposed method is applicable to the real-world problems in comparison with other IL algorithms.\\n'}, 'nonreaders': []}, {'tddate': None, 'ddate': None, 'tmdate': 1515179316987, 'tcdate': 1515179316987, 'number': 3, 'cdate': 1515179316987, 'id': 'SypN6BT7M', 'invitation': 'ICLR.cc/2018/Conference/-/Paper184/Official_Comment', 'forum': 'rJ3fy0k0Z', 'replyto': 'S1_na_OlG', 'signatures': ['ICLR.cc/2018/Conference/Paper184/Authors'], 'readers': ['everyone'], 'writers': ['ICLR.cc/2018/Conference/Paper184/Authors'], 'content': {'title': 'Thank you for positive evaluations.', 'comment': "Thank you for your constructive comments and positive evaluations on our paper. We will clarify the role of SSF in the camera-ready version.\\n\\n> My interpretation is that the main original contribution of the paper (besides changing a stochastic policy for a deterministic one) is to integrate an automatic estimate of the density of the expert (probability of a state to be visited by the expert policy)\\n\\nThank you for clearly understanding the role of SSF.\\n\\n> Indeed, the deterministic policy is certainly helpful but it is tested in a deterministic continuous control task. So I'm not sure about how it generalizes to other tasks.\\n\\nThe expert's policy used in the experimetns is a stochastic one. Hence, the proposed method works not only on a deterministic continuous control tasks but also a stochastic one. We expect that it generalizes well to other tasks.\\n"}, 'nonreaders': []}, {'tddate': None, 'ddate': None, 'tmdate': 1515178969191, 'tcdate': 1515178969191, 'number': 1, 'cdate': 1515178969191, 'id': 'S1WJnrpmz', 'invitation': 'ICLR.cc/2018/Conference/-/Paper184/Official_Comment', 'forum': 'rJ3fy0k0Z', 'replyto': 'S1tVQ5Kef', 'signatures': ['ICLR.cc/2018/Conference/Paper184/Authors'], 'readers': ['everyone'], 'writers': ['ICLR.cc/2018/Conference/Paper184/Authors'], 'content': {'title': 'Responses', 'comment': "Thank you for your constructive comments on our paper. We will fix typos and Figure.1. in the camera-ready version. \\n\\n> The justification for filtering is pretty weak. \\n\\nSince Figure.1 shows worse performance of Ours \\\\setminus SSF which does not filter states appearing in the demonstration, we think that the justification is enough.\\n\\n> What is the statistical basis for doing so?\\n\\nIntroducing a SSF is a kind of heuristic method, but it works as mentioned above.\\n\\n> Is it a form of a standard variance reduction approach? Is it a novel variance reduction approach? If so, is it more generally applicable?\\n\\nIntroducing the SSF itself is not a variance reduction approach. We would say that direct use of the Joacobian of (single-step) reward function rather than that of Q-function to derive the PG (8) might reduce the variance because the range of outputs are bounded.\\nSince we use the Jacobian of reward function to derive PG as opposed to prior IL works, the Jacobian is supposed to have information about how to get close to the expert's behavior for the learner. However, in the IRL objective (4), which is general in (max-margin) IRL literature, the reward function could know how the expert acts just only on the states appearing in the demonstration. In other words, the Jacobian could have the information about how to get close to the expert's behavior just only on states appearing in the demonstration. What we claimed in Sec.3.2 is that the Jacobian for states which does not appear in the demonstration is just garbage for the learner since it does not give any information about how to get close to the expert. The main purpose of introducing the SSF is to sweep the garbage as much as possible. The prior IL works have never mentioned about the garbage."}, 'nonreaders': []}]
[]
[]
503
The choice of activation functions in deep networks has a significant effect on the training dynamics and task performance. Currently, the most successful and widely-used activation function is the Rectified Linear Unit (ReLU). Although various hand-designed alternatives to ReLU have been proposed, none have managed to replace it due to inconsistent gains. In this work, we propose to leverage automatic search techniques to discover new activation functions. Using a combination of exhaustive and reinforcement learning-based search, we discover multiple novel activation functions. We verify the effectiveness of the searches by conducting an empirical evaluation with the best discovered activation function. Our experiments show that the best discovered activation function, f(x) = x * sigmoid(beta * x), which we name Swish, tends to work better than ReLU on deeper models across a number of challenging datasets. For example, simply replacing ReLUs with Swish units improves top-1 classification accuracy on ImageNet by 0.9% for Mobile NASNet-A and 0.6% for Inception-ResNet-v2. The simplicity of Swish and its similarity to ReLU make it easy for practitioners to replace ReLUs with Swish units in any neural network.
Searching for Activation Functions
ICLR.cc/2018/Conference
SkBYYyZRZ
[{'tddate': None, 'ddate': None, 'original': None, 'tmdate': 1515642457935, 'tcdate': 1511810827232, 'number': 2, 'cdate': 1511810827232, 'id': 'Hy7GD19gM', 'invitation': 'ICLR.cc/2018/Conference/-/Paper503/Official_Review', 'forum': 'SkBYYyZRZ', 'replyto': 'SkBYYyZRZ', 'signatures': ['ICLR.cc/2018/Conference/Paper503/AnonReviewer1'], 'readers': ['everyone'], 'content': {'title': 'Review', 'rating': '5: Marginally below acceptance threshold', 'review': 'This paper is utilizing reinforcement learning to search new activation function. The search space is combination of a set of unary and binary functions. The search result is a new activation function named Swish function. The authors also run a number of ImageNet experiments, and one NTM experiment.\\n\\nComments:\\n\\n1. The search function set and method is not novel. \\n2. There is no theoretical depth in the searched activation about why it is better.\\n3. For leaky ReLU, use larger alpha will lead better result, eg, alpha = 0.3 or 0.5. I suggest to add experiment to leak ReLU with larger alpha. This result has been shown in previous work.\\n\\nOverall, I think this paper is not meeting ICLR novelty standard. I recommend to submit this paper to ICLR workshop track. \\n\\n', 'confidence': '5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature'}, 'writers': [], 'nonreaders': []}, {'tddate': None, 'ddate': None, 'original': None, 'tmdate': 1515642457971, 'tcdate': 1511500825553, 'number': 1, 'cdate': 1511500825553, 'id': 'Sy-QnQHef', 'invitation': 'ICLR.cc/2018/Conference/-/Paper503/Official_Review', 'forum': 'SkBYYyZRZ', 'replyto': 'SkBYYyZRZ', 'signatures': ['ICLR.cc/2018/Conference/Paper503/AnonReviewer3'], 'readers': ['everyone'], 'content': {'title': 'Another approach for arriving at proven concepts on activation functions', 'rating': '4: Ok but not good enough - rejection', 'review': 'Authors propose a reinforcement learning based approach for finding a non-linearity by searching through combinations from a set of unary and binary operators. The best one found is termed Swish unit; x * sigmoid(b*x). \\n\\nThe properties of Swish like allowing information flow on the negative side and linear nature on the positive have been proven to be important for better optimization in the past by other functions like LReLU, PLReLU etc. As pointed out by the authors themselves for b=1 Swish is equivalent to SiL proposed in Elfwing et. al. (2017).\\n\\nIn terms of experimental validation, in most cases the increase is performance when using Swish as compared to other models are very small fractions. Again, the authors do state that "our results may not be directly comparable to the results in the corresponding works due to differences in our training steps." \\n\\nBased on the Figure 6 authors claim that the non-monotonic bump of Swish on the negative side is very important aspect. More explanation is required on why is it important and how does it help optimization. Distribution of learned b in Swish for different layers of a network can interesting to observe.', 'confidence': '4: The reviewer is confident but not absolutely certain that the evaluation is correct'}, 'writers': [], 'nonreaders': []}, {'tddate': None, 'ddate': None, 'original': None, 'tmdate': 1515642457898, 'tcdate': 1512523383570, 'number': 3, 'cdate': 1512523383570, 'id': 'HylYITVZG', 'invitation': 'ICLR.cc/2018/Conference/-/Paper503/Official_Review', 'forum': 'SkBYYyZRZ', 'replyto': 'SkBYYyZRZ', 'signatures': ['ICLR.cc/2018/Conference/Paper503/AnonReviewer4'], 'readers': ['everyone'], 'content': {'title': 'Well written paper and well conducted experiments.', 'rating': '7: Good paper, accept', 'review': 'The author uses reinforcement learning to find new potential activation functions from a rich set of possible candidates. The search is performed by maximizing the validation performance on CIFAR-10 for a given network architecture. One candidate stood out and is thoroughly analyze in the reste of the paper. The analysis is conducted across images datasets and one translation dataset on different architectures and numerous baselines, including recent ones such as SELU. The improvement is marginal compared to some baselines but systematic. Signed test shows that the improvement is statistically significant.\\n\\nOverall the paper is well written and the lack of theoretical grounding is compensated by a reliable and thorough benchmark. While a new activation function is not exiting, improving basic building blocks is still important for the community. \\n\\nSince the paper is fairly experimental, providing code for reproducibility would be appreciated.', 'confidence': '5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature'}, 'writers': [], 'nonreaders': []}]
The author's propose to use swish and show that it performs significantly better than Relus on sota vision models. Reviewers and anonymous ones counter that PRelus should be doing quite well too. Unfortunately, the paper falls in the category where it is hard to prove the utility of the method through one paper alone, and broader consensus relies on reproduction by the community. As a results, I'm going to recommend publishing to a workshop for now.
['meta learning', 'activation functions']
[{'tddate': None, 'ddate': None, 'tmdate': 1515211572929, 'tcdate': 1515211572929, 'number': 5, 'cdate': 1515211572929, 'id': 'r1a4oTTmz', 'invitation': 'ICLR.cc/2018/Conference/-/Paper503/Official_Comment', 'forum': 'SkBYYyZRZ', 'replyto': 'SkBYYyZRZ', 'signatures': ['ICLR.cc/2018/Conference/Paper503/Authors'], 'readers': ['everyone'], 'writers': ['ICLR.cc/2018/Conference/Paper503/Authors'], 'content': {'title': 'Clearing up concerns and misunderstandings', 'comment': 'We thank the reviewers for their comments and feedback. We are extremely surprised by the low scores for the paper that proposes a novel method that finds better activation functions, one of which has a potential to be better than ReLUs. During the discussion with the reviewers, we have found a few major concerns and misunderstandings amongst the reviewers, and we want to bring it up to a general discussion:\\n\\nThe reviewers are concerned that our activation only beats other nonlinearities by “a small fraction”. First of all, we question the conventional wisdom that ReLU greatly outperforms tanh or sigmoid units in modern architectures. While AlexNet may benefit from the optimization properties of ReLU, modern architectures use BatchNorm, which eases optimization even for sigmoid and tanh units. The BatchNorm paper [1] reports around a 3% gap between sigmoid and ReLU (it’s unclear if the sigmoid experiment was with tuning and this experiment is done on the older Inception-v1). The PReLU paper [2], cited 1800 times, proposes PReLU and reports a gain of 1.2%, again on a much weaker baseline. We cannot find any evidence in recent work that suggests that gap between sigmoid / tanh units and ReLU is huge. The gains produced by Swish are around 1% on top of much harder baselines, such as Inception-ResNet-v2, is already a third of the gain produced by ReLU and on par with the gains produced by PReLU. \\n\\nThe reviewers are concerned that the small gains are simply due to hyperparameter tuning. We stress here that unlike many prior works, the models we tried (e.g., Inception-ResNet-v2) have been **heavily tuned** using ReLUs. The fact that Swish improves on these heavily tuned models with very minor additional tuning is impressive. This result suggests that models can simply replace the ReLUs with Swish units and enjoy performance gains. We believe the drop-in-replacement property of Swish is extremely powerful because one of the key impediments to the adoption of a new technique is the need to run many additional experiments (e,g,, a lot of hyperparameter tuning). This achievement is impactful because it enables the replacement of ReLUs that are widely used across research and industry.\\n\\nThe reviewers are also concerned that our activation function is too similar to the work by Elfwing et al. When we conducted our research, we were honestly not aware of the work by Elfwing et al (their paper was first posted fairly recently on arxiv in Feb, 2017 and to the best of our knowledge, not accepted to any mainstream conference). That said, we have happily cited their work and credited their contributions. We are also happy to reuse the name “SiL” proposed by Elfwing et al if the reviewers see fit. In that case, Elfwing et al should be thrilled to know that their proposal is validated through a thorough search procedure. We also want to emphasize a number of key differences between our work and Elfwing et al. First, the focus of our paper is to search for an activation functions. Any researcher can use our recipes to drop in new primitives to search for better activation functions. Furthermore, our work has much more comprehensive empirical validation. Elfwing et al. only conducted experiments on relatively shallow reinforcement learning tasks, whereas we evaluated on challenging supervised benchmarks such as ImageNet with extremely tough baselines and equal amounts of tuning for fairness. We believe that we have conducted the most thorough evaluation of activation functions among any published work.\\n\\nPlease reconsider your rejection decisions.\\n\\n[1] Sergey Ioffe, Christian Szegedy. Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift. In ICML, 2015. (See Figure 3: https://arxiv.org/pdf/1502.03167.pdf )\\n[2] Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun. Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification. In CVPR, 2015 (See Table 2: https://arxiv.org/pdf/1502.01852.pdf )\\n'}, 'nonreaders': []}]
[{'tddate': None, 'ddate': None, 'tmdate': 1514910923909, 'tcdate': 1514775194822, 'number': 2, 'cdate': 1514775194822, 'id': 'rkQoM7wmM', 'invitation': 'ICLR.cc/2018/Conference/-/Paper503/Official_Comment', 'forum': 'SkBYYyZRZ', 'replyto': 'Hy7GD19gM', 'signatures': ['ICLR.cc/2018/Conference/Paper503/Authors'], 'readers': ['everyone'], 'writers': ['ICLR.cc/2018/Conference/Paper503/Authors'], 'content': {'title': 'Re: Reviewer1', 'comment': '1. Can the reviewer explain further why our work is not novel? Our activation function and the method to find it have not been explored before, and our work holds the promise of improving representation learning across many models. Furthermore, no previous work has come close to our level of thorough empirical evaluation. This type of contribution is as important as novelty -- it can be argued that the resurgence of CNNs is primarily due to conceptually simple empirical studies demonstrating their effectiveness on new datasets.\\n\\n2. We respectfully disagree with the reviewer that theoretical depth is necessary to be accepted. Following this argument, we can also argue that many extremely useful techniques in representation / deep learning, such as word2vec, ReLU, BatchNorm, etc, should not be accepted to ICLR because the original papers did not supply theoretical results about why they worked. Our community has typically followed that paradigm of discovering techniques experimentally and further work analyzing the technique. We believe our thorough and fair empirical evaluation provides a solid foundation for further work analyzing the theoretical properties of Swish.\\n\\n3. We experimented with the leaky ReLU using alpha = 0.5 on Inception-ResNet-v2 using the same hyperparameter sweep, and and did not find any improvement over the alpha used in our work (which was suggested by the original paper that proposed leaky ReLUs).\\n'}, 'nonreaders': []}, {'tddate': None, 'ddate': None, 'tmdate': 1514775476480, 'tcdate': 1514775476480, 'number': 3, 'cdate': 1514775476480, 'id': 'rk32mXwXz', 'invitation': 'ICLR.cc/2018/Conference/-/Paper503/Official_Comment', 'forum': 'SkBYYyZRZ', 'replyto': 'Sy-QnQHef', 'signatures': ['ICLR.cc/2018/Conference/Paper503/Authors'], 'readers': ['everyone'], 'writers': ['ICLR.cc/2018/Conference/Paper503/Authors'], 'content': {'title': 'Re: Reviewer3', 'comment': 'We don’t completely understand the reviewer’s rationale for rejection. Is it because of the lack of novelty, the inconsistent gains, or the work being insignificant? \\n\\nFirst, in terms of the work being significant, we want to emphasize that ReLU is the cornerstone of deep learning models. Being able to replace ReLU is extremely impactful because it produces a gain across a large number of models. So in terms of impact, we believe that our work is significant.\\n\\nSecondly, in terms of inconsistent gains, the signed tests already confirm that the gains are statistically significant in our experiments. These results suggest that switching to Swish is an easy and consistent way of getting an improvement regardless of which baseline activation function is used. Unlike previous studies, the baselines in our work are extremely strong: they are state-of-the-art models where the models are built with ReLUs as the default activation. Furthermore, the same amount of tuning was used for every activation function, and in fact, many non-Swish activation functions actually got more tuning. Thus, it is unreasonable to expect a huge improvement. That said, in some cases, Swish on Imagenet makes a more than 1% top-1 improvement. For context, the gap between Inception-v3 and Inception-v4 (a year of work) is only 1.2%.\\n\\nFinally, in terms of novelty, our work differs from Elfwing et al. (2017) in a number of significant ways. They just propose a single activation function, whereas our work searches over a vast space of activation functions to find the best empirically performing activation function. The search component is important because we save researchers from the painful process of manually trying out a number of individual activation functions in order to find one that outperforms ReLU (i.e., graduate student descent). The activation function found by this search, Swish, is more general than the other proposed by Elfwing et al. (2017). Another key contribution is our thorough empirical study. Their activation function was tested only on relatively shallow reinforcement learning models. We performed a thorough experimental evaluation on many challenging, deep, large-scale supervised models with extremely strong baselines. We believe these differences are significant enough to differentiate us. \\n\\nThe non-monotonic bump, which is controlled by beta, has gradients for negative preactivations (unlike ReLU). We have plotted the beta distribution over the each layer Swish here: https://imgur.com/a/AIbS2 . Note this is on the Mobile NASNet-A model, which has many layers composed in parallel (similar to Inception and unlike ResNet). The plot suggests that the tuneable beta is flexibly used. Early layers use large values of beta, which corresponds to ReLU-like behavior, whereas later layers tend to stay around the [0, 1.5] range, corresponding to a more linear-like behavior. '}, 'nonreaders': []}, {'tddate': None, 'ddate': None, 'tmdate': 1514774988299, 'tcdate': 1514774988299, 'number': 1, 'cdate': 1514774988299, 'id': 'SkVAW7PXM', 'invitation': 'ICLR.cc/2018/Conference/-/Paper503/Official_Comment', 'forum': 'SkBYYyZRZ', 'replyto': 'HylYITVZG', 'signatures': ['ICLR.cc/2018/Conference/Paper503/Authors'], 'readers': ['everyone'], 'writers': ['ICLR.cc/2018/Conference/Paper503/Authors'], 'content': {'title': 'Re: Reviewer4', 'comment': 'The reviewer suggested “Since the paper is fairly experimental, providing code for reproducibility would be appreciated”. We agree, and we will open source some of the experiments around the time of acceptance.\\n'}, 'nonreaders': []}]
[{'tddate': None, 'ddate': None, 'tmdate': 1515791902283, 'tcdate': 1515791902283, 'number': 7, 'cdate': 1515791902283, 'id': 'BkIXIiLNG', 'invitation': 'ICLR.cc/2018/Conference/-/Paper503/Official_Comment', 'forum': 'SkBYYyZRZ', 'replyto': 'rkQoM7wmM', 'signatures': ['ICLR.cc/2018/Conference/Paper503/AnonReviewer1'], 'readers': ['everyone'], 'writers': ['ICLR.cc/2018/Conference/Paper503/AnonReviewer1'], 'content': {'title': 'Reply', 'comment': "1. Novelty \\n\\nThe methodology of searching has been used in Genetic Programming for a long time. The RNN controller has been used in many paper from Google Brain. This paper's contribution is using RL to search in a GP flavor. Although it is new in activation function search field, in methodology view, it is not novel.\\n\\n2. Theoretical depth\\n\\nActually, BatchNorm and ReLU provides its explanation of why they work in the original paper and the explanation was accepted by community for a long time. I understand how deep learning community's experimentally flavor, but activation function is a fundamentally problem in understanding how neural network works. Swish performs similarly or slightly better compare to the commonly used activation functions. If without any theoretical explanation, it is hard to acknowledge it as a breaking research. What's more, different activation function may requires different initialization and learning rate, I respect the authors have enough computation power to sweep, but without any theoretical explanation, the paper is more like a experiment report rather than a good ICLR paper. \\n\\n\\n"}, 'nonreaders': []}, {'tddate': None, 'ddate': None, 'tmdate': 1514982553772, 'tcdate': 1514982553772, 'number': 4, 'cdate': 1514982553772, 'id': 'rJMj2S57z', 'invitation': 'ICLR.cc/2018/Conference/-/Paper503/Official_Comment', 'forum': 'SkBYYyZRZ', 'replyto': 'rk32mXwXz', 'signatures': ['ICLR.cc/2018/Conference/Paper503/AnonReviewer3'], 'readers': ['everyone'], 'writers': ['ICLR.cc/2018/Conference/Paper503/AnonReviewer3'], 'content': {'title': 'Reply:', 'comment': 'Yes, I do agree that ReLU is one of the major reason for improvement of deep learning models. But, it is not just because ReLU was able to experimentally beat performance of existing non-linearities by a small fraction.\\n\\nThe fractional increase in performance on benchmarks can be because of various reasons, not just switching non-linearity. For example, in many cases a simple larger batch size can result in small fractional change in performance. The hyper-parameter settings in which other non-linearities might perform better can be different than the ones more suitable for proposed non-linearity. Also, I do not agree that the search factor helps researchers to save time on trying out different non-linearities, still one has to spend time on searching best \\'betas\\' (which will result in small improvement over benchmarks) for every dataset. I would rather use a more well understood non-linearity which gives reasonable results on benchmarks.\\n\\nThe properties of the non-linearities proposed in the article like "allowing information flow on the negative side and linear nature on the positive side"(also mentioned in my review) have been proven to be important for better optimization in the past by other functions like LReLU, PLReLU etc.\\n\\nThe results from the article show that Swish-1 ( or SiL from Elfwing et al. (2017)) performs same as Swish.'}, 'nonreaders': []}]
[{'tddate': None, 'ddate': None, 'tmdate': 1515211674177, 'tcdate': 1515211674177, 'number': 6, 'cdate': 1515211674177, 'id': 'Skfsiap7G', 'invitation': 'ICLR.cc/2018/Conference/-/Paper503/Official_Comment', 'forum': 'SkBYYyZRZ', 'replyto': 'rJMj2S57z', 'signatures': ['ICLR.cc/2018/Conference/Paper503/Authors'], 'readers': ['everyone'], 'writers': ['ICLR.cc/2018/Conference/Paper503/Authors'], 'content': {'title': 'Re: Reviewer3', 'comment': 'Thank you for the comment.\\n\\n[[Our activation only beats other nonlinearities by “a small fraction”]] First of all, we question the conventional wisdom that ReLU greatly outperforms tanh or sigmoid units in modern architectures. While AlexNet may benefit from the optimization properties of ReLU, modern architectures use BatchNorm, which eases optimization even for sigmoid and tanh units. The BatchNorm paper [1] reports around a 3% gap between sigmoid and ReLU (it’s unclear if the sigmoid experiment was with tuning and this experiment is done on the older Inception-v1). The PReLU paper [2], cited 1800 times, proposes PReLU and reports a gain of 1.2% (Figure 3), again on a much weaker baseline. We cannot find any evidence in recent work that suggests that gap between sigmoid / tanh units and ReLU is huge. The gains produced by Swish are around 1% on top of much harder baselines, such as Inception-ResNet-v2, is already a third of the gain produced by ReLU and on par with the gains produced by PReLU. \\n\\n[[Small fraction gained due to hyperparameter tuning]] We want to emphasize how hard it is to get improvements on these state-of-art models. The models we tried (e.g., Inception-ResNet-v2) have been **heavily tuned** using ReLUs. The fact that Swish improves on these heavily tuned models with very minor additional tuning is impressive. This result suggests that models can simply replace the ReLUs with Swish units and enjoy performance gains. We believe the drop-in-replacement property of Swish is extremely powerful because one of the key impediments to the adoption of a new technique is the need to run many additional experiments (e,g,, a lot of hyperparameter tuning). This achievement is impactful because it enables the replacement of ReLUs that are widely used across research and industry.\\n\\n[[Searching for betas]] The reviewer also misunderstands the betas in Swish. When we use Swish-beta, one does not need to search for the optimal value of beta because it can be learned by backpropagation.\\n\\n[[Gradient on the negative side]] We do not claim that Swish is the first activation function to utilize gradients in the negative preactivation regime. We simply suggested that Swish may benefit from same properties utilized by LReLU and PReLU.\\n\\n[1] Sergey Ioffe, Christian Szegedy. Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift. In JMLR, 2015. (See Figure 3: https://arxiv.org/pdf/1502.03167.pdf )\\n[2] Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun. Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification. In CVPR, 2015 (See Table 2: https://arxiv.org/pdf/1502.01852.pdf )\\n'}, 'nonreaders': []}]
370
We develop a reinforcement learning based search assistant which can assist users through a set of actions and sequence of interactions to enable them realize their intent. Our approach caters to subjective search where the user is seeking digital assets such as images which is fundamentally different from the tasks which have objective and limited search modalities. Labeled conversational data is generally not available in such search tasks and training the agent through human interactions can be time consuming. We propose a stochastic virtual user which impersonates a real user and can be used to sample user behavior efficiently to train the agent which accelerates the bootstrapping of the agent. We develop A3C algorithm based context preserving architecture which enables the agent to provide contextual assistance to the user. We compare the A3C agent with Q-learning and evaluate its performance on average rewards and state values it obtains with the virtual user in validation episodes. Our experiments show that the agent learns to achieve higher rewards and better states.
Improving Search Through A3C Reinforcement Learning Based Conversational Agent
ICLR.cc/2018/Conference
rkfbLilAb
[{'tddate': None, 'ddate': None, 'original': None, 'tmdate': 1515856439645, 'tcdate': 1511818885213, 'number': 3, 'cdate': 1511818885213, 'id': 'Hy4tIW5xf', 'invitation': 'ICLR.cc/2018/Conference/-/Paper370/Official_Review', 'forum': 'rkfbLilAb', 'replyto': 'rkfbLilAb', 'signatures': ['ICLR.cc/2018/Conference/Paper370/AnonReviewer3'], 'readers': ['everyone'], 'content': {'title': 'An interesting problem but a not convincing experimental protocol', 'rating': '5: Marginally below acceptance threshold', 'review': 'The paper "IMPROVING SEARCH THROUGH A3C REINFORCEMENT LEARNING BASED CONVERSATIONAL AGENT" proposes to define an agent to guide users in information retrieval tasks. By proposing refinements of the query, categorizations of the results or some other bookmarking actions, the agent is supposed to help the user in achieving his search. The proposed agent is learned via reinforcement learning. \\n\\nMy concern with this paper is about the experiments that are only based on simulated agents, as it is the case for learning. While it can be questionable for learning (but we understand why it is difficult to overcome), it is very problematic for the experiments to not have anything that demonstrates the usability of the approach in a real-world scenario. I have serious doubts about the performances of such an artificially learned approach for achieving real-world search tasks. Also, for me the experimental section is not sufficiently detailed, which lead to not reproducible results. Moreover, authors should have considered baselines (only the two proposed agents are compared which is clearly not sufficient). \\n\\nAlso, both models have some issues from my point of view. First, the Q-learning methods looks very complex: how could we expect to get an accurate model with 10^7 states ? No generalization about the situations is done here, examples of trajectories have to be collected for each individual considered state, which looks very huge (especially if we think about the number of possible trajectories in such an MDP). The second model is able to generalize from similar situations thanks to the neural architecture that is proposed. However, I have some concerns about it: why keeping the history of actions in the inputs since it is captured by the LSTM cell ? It is a redondant information that might disturb the process. Secondly, the proposed loss looks very heuristic for me, it is difficult to understand what is really optimized here. Particularly, the loss entropy function looks strange to me. Is it classical ? Are there some references of such a method to maintain some exploration ability. I understand the need of exploration, but including it in the loss function reduces the interpretability of the objective (wouldn\\'t it be preferable to use a more classical loss but with an epsilon greedy policy?).\\n\\n\\nOther remarks: \\n - In the begining of "varying memory capacity" section, what is "100, 150 and 250" ? Time steps ? What is the unit ? Seconds ? \\n - I did not understand the "Capturing seach context at local and global level" at all\\n - In the loss entropy formula, the two negation signs could be removed\\n \\n', 'confidence': '4: The reviewer is confident but not absolutely certain that the evaluation is correct'}, 'writers': [], 'nonreaders': []}, {'tddate': None, 'ddate': None, 'original': None, 'tmdate': 1515642440244, 'tcdate': 1511800654276, 'number': 2, 'cdate': 1511800654276, 'id': 'BkL816Ygf', 'invitation': 'ICLR.cc/2018/Conference/-/Paper370/Official_Review', 'forum': 'rkfbLilAb', 'replyto': 'rkfbLilAb', 'signatures': ['ICLR.cc/2018/Conference/Paper370/AnonReviewer2'], 'readers': ['everyone'], 'content': {'title': 'lack of details', 'rating': '3: Clear rejection', 'review': 'The paper describes reinforcement learning techniques for digital asset search. The RL techniques consist of A3C and DQN. This is an application paper since the techniques described already exist. Unfortunately, there is a lack of detail throughout the paper and therefore it is not possible for someone to reproduce the results if desired. Since there is no corpus of message response pairs to train the model, the paper trains a simulator from logs to emulate user behaviours. Unfortunately, there is no description of the algorithm used to obtain the simulator. The paper explains that the simulator is obtained from log data, but this is not sufficient. The RL problem is described at a very high level in the sense that abstract states and actions are listed, but there is no explanation about how those abstract states are recognized from the raw text and there is no explanation about how the actions are turned into text. There seems to be some confusion in the notion of state. After describing the abstract states, it is explained that actions are selected based on a history of states. This suggests that the abstract states are really abstract observations. In fact, this becomes obvious when the paper introduces the RNN where a hidden belief is computed by combining the observations. The rewards are also described at a hiogh level, but it is not clear how exactly they are computed. The digital search application is interesting, however a detailed description with comprehensive experiments are needed for the publication of an application paper.', 'confidence': '5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature'}, 'writers': [], 'nonreaders': []}, {'tddate': None, 'ddate': None, 'original': None, 'tmdate': 1515642440280, 'tcdate': 1511734121566, 'number': 1, 'cdate': 1511734121566, 'id': 'H1f_jh_ef', 'invitation': 'ICLR.cc/2018/Conference/-/Paper370/Official_Review', 'forum': 'rkfbLilAb', 'replyto': 'rkfbLilAb', 'signatures': ['ICLR.cc/2018/Conference/Paper370/AnonReviewer1'], 'readers': ['everyone'], 'content': {'title': 'Lack of context', 'rating': '2: Strong rejection', 'review': 'This paper proposes to use RL (Q-learning and A3C) to optimize the interaction strategy of a search assistant. The method is trained against a simulated user to bootstrap the learning process. The algorithm is tested on some search base of assets such as images or videos. \\n\\nMy first concern is about the proposed reward function which is composed of different terms. These are very engineered and cannot easily transfer to other tasks. Then the different algorithms are assessed according to their performance w.r.t. to these rewards. They of course improve with training since this is the purpose of RL to optimize these numbers. Assessment of a dialogue system should be done according to metrics obtained through actual interactions with users, not according to auxiliary tasks etc. \\n\\nBut above all, this paper incredibly lacks of context in both RL and dialogue systems. The authors cite a 2014 paper when it comes to refer to Q-learning (Q-learning was first published in 1989 by Watkins). The first time dialogue has been casted into a RL problem is in 1997 by E. Levin and R. Pieraccini (although it has been suggested before by M. Walker). User simulation has been proposed at the same time and further developed in the early 2000 by Schatzmann, Young, Pietquin etc. Using LSTMs to build user models has been proposed in 2016 (Interspeech) by El Asri et al. Buiding efficient reward functions for RL-based conversational systems has also been studied for more than 20 years with early work by M. Walker on PARADISE (@ACL 1997) but also via inverse RL by Chandramohan et al (2011). A2C (which is a single-agent version of A3C) has been used by Strub et al (@ IJCAI 2017) to optimize visually grounded dialogue systems. RL-based recommender systems have also been studied before (e.g. Shani in JMLR 2005). \\n\\nI think the authors should first read the state of the art in the domain before they suggest new solutions. ', 'confidence': '5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature'}, 'writers': [], 'nonreaders': []}]
meta score: 4 This paper is primarily an application paper applying known RL techniques to dialogue. Very little reference to the extensive literature in this area. Pros: - interesting application (digital search) - revised version contains subjective evaluation of experiments Cons: - limited technical novelty - very weak links to the state-of-the-art, missing many key aspects of the research domain
['Subjective search', 'Reinforcement Learning', 'Conversational Agent', 'Virtual user model', 'A3C', 'Context aggregation']
[{'tddate': None, 'ddate': None, 'tmdate': 1515160693860, 'tcdate': 1515160693860, 'number': 1, 'cdate': 1515160693860, 'id': 'HkAuVWpmz', 'invitation': 'ICLR.cc/2018/Conference/-/Paper370/Official_Comment', 'forum': 'rkfbLilAb', 'replyto': 'rkfbLilAb', 'signatures': ['ICLR.cc/2018/Conference/Paper370/Authors'], 'readers': ['everyone'], 'writers': ['ICLR.cc/2018/Conference/Paper370/Authors'], 'content': {'title': 'We evaluated our system by performing human evaluation and updated our paper with corresponding results, please refer to section 4.3 in the updated paper.', 'comment': 'We evaluated our system trained using A3C algorithm through professional designers who regularly use image search site for their design tasks and asked them to compare our system with conventional search interface in terms of engagement, time required and ease of performing the search. In addition to this we asked them to rate our system on the basis of information flow, appropriateness and repetitiveness. The evaluation shows that although we trained the bootstrapped agent through user model, it performs decently well with actual users by driving their search forward with appropriate actions without being much repetitive. The comparison with the conventional search shows that conversational search is more engaging. In terms of search time, it resulted in more search time for some designers while it reduces the time required to search the desired results in some cases, in majority cases it required about the same time. The designers are regular users of conventional search interface and well versed with it, even then majority of them did not face any cognitive load while using our system with one-third of them believing that it is easier than conventional search.'}, 'nonreaders': []}]
[{'tddate': None, 'ddate': None, 'tmdate': 1515172086452, 'tcdate': 1515163435116, 'number': 3, 'cdate': 1515163435116, 'id': 'H1QVkGpmM', 'invitation': 'ICLR.cc/2018/Conference/-/Paper370/Official_Comment', 'forum': 'rkfbLilAb', 'replyto': 'Hy4tIW5xf', 'signatures': ['ICLR.cc/2018/Conference/Paper370/Authors'], 'readers': ['everyone'], 'writers': ['ICLR.cc/2018/Conference/Paper370/Authors'], 'content': {'title': 'Q-learning and A3C System Modeling', 'comment': 'Q-Learning Model:\\nWe experimented with Q-learning approach in order to obtain baseline results for the task defined in the paper since RL has not been applied before for providing assistance in searching digital assets. The large size of the state space requires large amount training data for model to learn useful representations since number of parameters is directly proportional to the size of state space which is indicative of the complexity of the model. The number of training episodes is not a problem in our case since we leverage the user model to sample interactions between the learning agent and user. This indeed is reflected in figure 6 (left), which shows that the model converges when trained on sufficient number of episodes.\\n\\nSince our state space is discrete, we have used table storage method for Q-learning. Kindly elaborate on what does generalisation of state means in this context so that we may elaborate more and improve our paper.\\n\\n\\nA3C Model: \\n\\nWe capture the search context by including history of actions taken by the user and the agent in last ‘k’ turns explicitly in the state representation. Since a search episode can extend indefinitely and suitability & dependence of action taken by the agent can go beyond last ‘k’ turns, we include an LSTM in our model which aggregates the local context represented in state (‘local’ in terms of including only the recent user and agent actions) to capture such long term dependencies and analyse the trend in reward and state values obtained by comparing it with the case when we do not include the history of actions in the state and let the LSTM learn the context alone (section 4.1.3).\\n\\nIn varying memory capacity, by LSTM size (100,150,250), we mean dimension of the hidden state h of the LSTM. With more number of units, the LSTM can capture much richer latent representations and long term dependencies. We have explored the impact of varying the hidden state size in the experiments (section 4.1.2).\\n\\n\\nEntropy loss function has been studied to provide exploration ability to the agent while optimising its action strategy in the Actor-Critic Model [1]. While epsilon-greedy policy has been successfully used in many RL algorithms for achieving exploration vs exploitation balance, it is commonly used in off-policy algorithms like Q-learning where the policy is not represented explicitly. The model is trained on observations which are sampled following epsilon-greedy policy which is different from the actual policy learned in terms of state-action value function. \\n\\nThis is in contrast to A3C where we apply an on-policy algorithm such that the agent take actions according to the learned policy and is trained on observations which are obtained using the same policy. This policy is optimized to both maximise the expected reward in an episode as well as to incorporate the exploration behavior (which is enabled by using the exploration loss). Using epsilon-greedy policy will disturb the on-policy behavior of the learned agent since it will then learn on observations and actions sampled according to epsilon-greedy policy which will be different from the actual policy learnt which we represent as explicit output of our A3C model.\\n\\nThe loss described in the paper optimise the policy to maximise the expected reward obtained in an episode where the expectation is taken with respect to different possible trajectories that can be sampled in an episode. In A3C algorithm, the standard policy gradient methods is modified by replacing the reward term by an advantage term which is difference between reward obtained by taking an action and value of the state which is used as a baseline (complete derivation in [2]). The learned baseline enforces that parameters are updated in a way that likelihood of actions that results in rewards better than value of the state is increased while it is decreased for those which provide rewards lower than the average action in that state.\\n\\n\\n\\n[1] : Mnih, Volodymyr, et al. "Asynchronous methods for deep reinforcement learning." International Conference on Machine Learning. 2016.\\n[2] : Sutton, R. et al., Policy Gradient Methods for Reinforcement Learning with Function Approximation, NIPS, 1999)\\n\\n'}, 'nonreaders': []}, {'tddate': None, 'ddate': None, 'tmdate': 1515176941655, 'tcdate': 1515163990888, 'number': 5, 'cdate': 1515163990888, 'id': 'ryJv-MaXM', 'invitation': 'ICLR.cc/2018/Conference/-/Paper370/Official_Comment', 'forum': 'rkfbLilAb', 'replyto': 'BkL816Ygf', 'signatures': ['ICLR.cc/2018/Conference/Paper370/Authors'], 'readers': ['everyone'], 'writers': ['ICLR.cc/2018/Conference/Paper370/Authors'], 'content': {'title': 'Details of User Model', 'comment': 'Due to legal issues, we cannot not share the query session logs data. We have tried to provide details of our algorithm which can be used for obtaining user model from any given session logs data. The mapping between interactions in session log data and user actions which the agent can understand has been discussed in table 3. Using these mapping, we obtain a probabilistic user model (algorithm has been described in section 3.5). Figure 1 in the paper demonstrates how interactions in a session can be mapped to user actions. \\n\\nKindly mention the sections which are lacking details and missing information in the algorithm for user model which will help us in improving our paper.'}, 'nonreaders': []}, {'tddate': None, 'ddate': None, 'tmdate': 1515178098023, 'tcdate': 1515164959112, 'number': 6, 'cdate': 1515164959112, 'id': 'SkPmHMpXz', 'invitation': 'ICLR.cc/2018/Conference/-/Paper370/Official_Comment', 'forum': 'rkfbLilAb', 'replyto': 'BkL816Ygf', 'signatures': ['ICLR.cc/2018/Conference/Paper370/Authors'], 'readers': ['everyone'], 'writers': ['ICLR.cc/2018/Conference/Paper370/Authors'], 'content': {'title': 'State and Reward Modeling', 'comment': 'Thanks for your reviews.\\n\\nOur state representation comprises of history of actions taken by the user and the agent (along with other variables as described in the state space section 3.3) and not only the most recent action taken by the user. User action is obtained from user utterance using a rule-based Natural language unit (NLU) which uses dependency tree based syntactic parsing, stop words and pre-defined rules (as described in appendix, section 6.1.2). We capture the search context by including the history of actions taken by the user and the agent in the state representation. The state at a turn in the conversation comprises of agent and user actions in last ‘k’ turns. Since a search episode can extend indefinitely and suitability & dependence of action taken by the agent can go beyond last ‘k’ turns, we include an LSTM in our model which aggregates the local context represented in state (‘local’ in terms of state including only the recent user and agent actions) into a global context to capture such long term dependencies. We analyse the trend in reward and state values obtained by comparing it with the case when we do not include the history of actions is state and let the LSTM learn the context alone (section 4.1.3).\\n\\nOur system does not generate utterances, it instead selects an utterance based on the action taken by the agent from a corpus of possible utterances. This is because we train our agent to assist user in their search through optimising dialogue strategy and not actual dialogue utterances made by the agent. Though we aim to pursue this as future work where we generate agent utterances and train NLU for obtaining user action in addition to optimising dialogue strategy (which we have done in our current work).\\n\\nSince we aim to optimise dialogue strategy and do not generate dialogue utterances, we assign the rewards corresponding to the appropriateness of the action performed by the agent considering the state and history of the search. We have used some rewards such as task success, extrinsic rewards based on feedback signals from the user and auxiliary rewards based on performance on auxiliary tasks. These rewards have been modelled numerically on a relative scale.\\n\\nWe have evaluated our model through humans and updated the paper, please refer to section 4.3 for human evaluation results and appendix (section 6.2) for conversations between actual users and trained agent.'}, 'nonreaders': []}, {'tddate': None, 'ddate': None, 'tmdate': 1515175984295, 'tcdate': 1515173823497, 'number': 7, 'cdate': 1515173823497, 'id': 'SkDTDVamf', 'invitation': 'ICLR.cc/2018/Conference/-/Paper370/Official_Comment', 'forum': 'rkfbLilAb', 'replyto': 'H1f_jh_ef', 'signatures': ['ICLR.cc/2018/Conference/Paper370/Authors'], 'readers': ['everyone'], 'writers': ['ICLR.cc/2018/Conference/Paper370/Authors'], 'content': {'title': 'Reward Function and Evaluation', 'comment': 'Thanks for your reviews.\\n\\nWe have modeled rewards specifically for the domain of digital assets search in order to obtain a bootstrapped agent which performs reasonably well in assisting humans in their search so that it can be fine tuned further based on interaction with humans. As our problem caters to a subjective task of searching digital assets which is different from more common objective tasks such as reservation, it is difficult to determine generic rewards based on whether the agent has been able to provide exact information to the user unlike objective search tasks where rewards are measured based on required information has been provided to the user. This makes rewards transferability between subjective and objective search difficult. Though our modeled rewards are easily transferable to search tasks such as e-commerce sites where search tasks comprises of a subjective component (in addition to objective preferences such as price).\\n\\nSince we aim to optimise dialogue strategy and do not generate dialogue utterances, we assign the rewards corresponding to the appropriateness of the action performed by the agent considering the state and history of the search. We have used some rewards such as task success (based on implicit and explicit feedback from the user during the search) which is also used in PARADISE framework [1]. At the same time several metrics used by PARADISE cannot be used for modelling rewards. For instance, time required (number of turns) for user to search desired results cannot be penalised since it can be possible that user is finding the system engaging and helpful in refining the results better which may increase number of turns in the search.\\n\\nWe evaluated our system through humans and added the results to the paper, please refer to section 4.3 in the updated paper. You may refer to appendix (section 6.2) for some conversations between actual users and the trained agent.\\n\\nThanks for suggesting related references, we have updated our paper based on the suggestions. Kindly suggest any other further improvements.\\n\\n[1] Walker, Marilyn A., et al. "PARADISE: A framework for evaluating spoken dialogue agents." Proceedings of the eighth conference on European chapter of the Association for Computational Linguistics. Association for Computational Linguistics, 1997.'}, 'nonreaders': []}, {'tddate': None, 'ddate': None, 'tmdate': 1515174528619, 'tcdate': 1515163528274, 'number': 4, 'cdate': 1515163528274, 'id': 'Byxcyzp7M', 'invitation': 'ICLR.cc/2018/Conference/-/Paper370/Official_Comment', 'forum': 'rkfbLilAb', 'replyto': 'Hy4tIW5xf', 'signatures': ['ICLR.cc/2018/Conference/Paper370/Authors'], 'readers': ['everyone'], 'writers': ['ICLR.cc/2018/Conference/Paper370/Authors'], 'content': {'title': 'A3C and rollouts are better than REINFORCE', 'comment': 'Thanks for your reviews.\\n\\nStandard REINFORCE method for policy gradient has high variance in gradient estimates [1]. Moreover while optimising and weighing the likelihood for performing an action in a given state, it does not measure the reward with respect to a baseline reward due to which the agent is not able to compare different actions. This may result in gradient pointing in wrong direction since it does not know how good an action is with respect to other good actions in a given state. This may weaken the probability with which the agent takes the best action (or better actions).\\n\\nIt has been shown that if a baseline value for a state is used to critic the rewards obtained for performing different actions in that state reduces the variance in gradient estimates as well as provides correct appraisal for an action taken in a given state (good actions get a positive appraisal) without requiring to sample other actions [2]. Moreover it has been shown that if baseline value of the state is learned through function approximation, we get an an unbiased or very less biased gradient estimates with reduced variance achieving better bias-variance tradeoff. Due to these advantages we use A3C algorithm since it learns the state value function along with the policy and provides unbiased gradient estimator with reduces variance.\\n\\nIn standard policy gradient methods, multiple episodes are sampled before updating the parameters using the gradients obtained over these episodes. It has been observed that sampling gradients over multiple episodes which can span over large number of turns results in higher variance in the gradient estimates due to which the model takes more time to learn [3]. The higher variance is the result of stochastic nature of policy since taking sampling random actions initially (when the agent has not learned much) over multiple episodes before updating the parameters compounds the variance. Due to this reason, we instead use truncated rollouts where we update parameters of the policy and value model after every n-steps in an episode which are proven to be much effective and results in faster learning.\\n\\n[1] : Sehnke, Frank, et al. "Parameter-exploring policy gradients." Neural Networks 23.4 (2010): 551-559.\\n[2] : Sutton, Richard S., et al. "Policy gradient methods for reinforcement learning with function approximation." Advances in neural information processing systems. 2000\\n[3] : Tesauro, Gerald, and Gregory R. Galperin. "On-line policy improvement using Monte-Carlo search." Advances in Neural Information Processing Systems. 1997. ; Gabillon, Victor, et al. "Classification-based policy iteration with a critic." (2011).\\n\\n'}, 'nonreaders': []}, {'tddate': None, 'ddate': None, 'tmdate': 1515176639542, 'tcdate': 1515161252369, 'number': 2, 'cdate': 1515161252369, 'id': 'Hy3jUZaXf', 'invitation': 'ICLR.cc/2018/Conference/-/Paper370/Official_Comment', 'forum': 'rkfbLilAb', 'replyto': 'Hy4tIW5xf', 'signatures': ['ICLR.cc/2018/Conference/Paper370/Authors'], 'readers': ['everyone'], 'writers': ['ICLR.cc/2018/Conference/Paper370/Authors'], 'content': {'title': 'Experimental Details', 'comment': 'We evaluated our system through real humans and added the results in section 4.3. Please refer to appendix (section 6.2) for some conversations between actual users and trained agent. For performing experiments with humans, we developed chat interface where an actual user can interact with the agent during their search. The implementation details of the chat interface have been discussed in the appendix (section 6.1.1). User action is obtained from user utterance using a rule-based Natural language unit (NLU) which uses dependency tree based syntactic parsing, stop words and pre-defined rules (as described in appendix, section 6.1.2). You may refer to supplementary material (footnote-2, page-9) which contains a video demonstrating search on our conversational search interface.\\n\\nIn order to evaluate our system with the virtual user, we simulate validation episodes between the agent and the virtual user after every training episode. This simulation comprises of sequence of alternate actions between the user and the agent. The user action is sampled using the user model while the agent action is sampled using the policy learned till that point. Corresponding to a single validation episode, we determine two performance metrics. First is total reward obtained at the end of the episode. The values of the states observed in the episode is obtained using the model, average of states values observed during the validation episode is determined and used as the second performance metric. Average of these values over different validation episodes is taken and depicted in figures 3,4,5 and 6.'}, 'nonreaders': []}]
[]
[]
390
Identifying analogies across domains without supervision is a key task for artificial intelligence. Recent advances in cross domain image mapping have concentrated on translating images across domains. Although the progress made is impressive, the visual fidelity many times does not suffice for identifying the matching sample from the other domain. In this paper, we tackle this very task of finding exact analogies between datasets i.e. for every image from domain A find an analogous image in domain B. We present a matching-by-synthesis approach: AN-GAN, and show that it outperforms current techniques. We further show that the cross-domain mapping task can be broken into two parts: domain alignment and learning the mapping function. The tasks can be iteratively solved, and as the alignment is improved, the unsupervised translation function reaches quality comparable to full supervision.
Identifying Analogies Across Domains
ICLR.cc/2018/Conference
BkN_r2lR-
[{'tddate': None, 'ddate': None, 'original': None, 'tmdate': 1515642442663, 'tcdate': 1512294804088, 'number': 3, 'cdate': 1512294804088, 'id': 'ryhcYB-bG', 'invitation': 'ICLR.cc/2018/Conference/-/Paper390/Official_Review', 'forum': 'BkN_r2lR-', 'replyto': 'BkN_r2lR-', 'signatures': ['ICLR.cc/2018/Conference/Paper390/AnonReviewer2'], 'readers': ['everyone'], 'content': {'title': 'Interesting direction but unconvincing experiments and uncompelling applications', 'rating': '4: Ok but not good enough - rejection', 'review': 'This paper adds an interesting twist on top of recent unpaired image translation work. A domain-level translation function is jointly optimized with an instance-level matching objective. This yields the ability to extract corresponding image pairs out of two unpaired datasets, and also to potentially refine unpaired translation by subsequently training a paired translation function on the discovered matches. I think this is a promising direction, but the current paper has unconvincing results, and it’s not clear if the method is really solving an important problem yet.\\n\\nMy main criticism is with the experiments and results. The experiments focus almost entirely on the setting where there actually exist exact matches between the two image sets. Even the partial matching experiments in Section 4.1.2 only quantify performance on the images that have exact matches. This is a major limitation since the compelling use cases of the method are in scenarios where we do not have exact matches. It feels rather contrived to focus so much on the datasets with exact matches since, 1) these datasets actually come as paired data and, in actual practice, supervised translation can be run directly, 2) it’s hard to imagine datasets that have exact but unknown matches (I welcome the authors to put forward some such scenarios), 3) when exact matches exist, simpler methods may be sufficient, such as matching edges. There is no comparison to any such simple baselines.\\n\\nI think finding analogies that are not exact matches is much more compelling. Quantifying performance in this case may be hard, and the current paper only offers a few qualitative results. I’d like to see far more results, and some attempt at a metric. One option would be to run user studies where humans judge the quality of the matches. The results shown in Figure 2 don’t convince me, not just because they are qualitative and few, but also because I’m not sure I even agree that the proposed method is producing better results: for example, the DiscoGAN results have some artifacts but capture the texture better in row 3.\\n\\nI was also not convinced by the supervised second step in Section 4.3. Given that the first step achieves 97% alignment accuracy, it’s no surprised that running an off-the-shelf supervised method on top of this will match the performance of running on 100% correct data. In other words, this section does not really add much new information beyond what we could already infer given that the first stage alignment was so successful.\\n\\nWhat I think would be really interesting is if the method can improve performance on datasets that actually do not have ground truth exact matches. For example, the shoes and handbags dataset or even better, domain adaptation datasets like sim to real.\\n\\nI’d like to see more discussion of why the second stage supervised problem is beneficial. Would it not be sufficient to iterate alpha and T iterations enough times until alpha is one-hot and T is simply training against a supervised objective (Equation 7)?\\n\\nMinor comments:\\n1. In the intro, it would be useful to have a clear definition of “analogy” for the present context.\\n2. Page 2: a link should be provided for the Putin example, as it is not actually in Zhu et al. 2017.\\n3. Page 3: “Weakly Supervised Mapping” — I wouldn’t call this weakly supervised. Rather, I’d say it’s just another constraint / prior, similar to cycle-consistency, which was referred to under the “Unsupervised” section.\\n4. Page 4 and throughout: It’s hard to follow which variables are being optimized over when. For example, in Eqn. 7, it would be clearer to write out the min over optimization variables.\\n5. Page 6: The Maps dataset was introduced in Isola et al. 2017, not Zhu et al. 2017.\\n6. Page 7: The following sentence is confusing and should be clarified: “This shows that the distribution matching is able to map source images that are semantically similar in the target domain.”\\n7. Page 7: “This shows that a good initialization is important for this task.” — Isn’t this more than initialization? Rather, removing the distributional and cycle constraints changes the overall objective being optimized.\\n8. In Figure 2, are the outputs the matched training images, or are they outputs of the translation function?\\n9. Throughout the paper, some citations are missing enclosing parentheses.', 'confidence': '3: The reviewer is fairly confident that the evaluation is correct'}, 'writers': [], 'nonreaders': []}, {'tddate': None, 'ddate': None, 'original': None, 'tmdate': 1515642442701, 'tcdate': 1512079705086, 'number': 2, 'cdate': 1512079705086, 'id': 'HJ08-bCef', 'invitation': 'ICLR.cc/2018/Conference/-/Paper390/Official_Review', 'forum': 'BkN_r2lR-', 'replyto': 'BkN_r2lR-', 'signatures': ['ICLR.cc/2018/Conference/Paper390/AnonReviewer1'], 'readers': ['everyone'], 'content': {'title': 'The approach is interesting but the paper lacks clarity of presentation', 'rating': '5: Marginally below acceptance threshold', 'review': 'The paper presents a method for finding related images (analogies) from different domains based on matching-by-synthesis. The general idea is interesting and the results show improvements over previous approaches, such as CycleGAN (with different initializations, pre-learned or not). The algorithm is tested on three datasets.\\n\\nWhile the approach has some strong positive points, such as good experiments and theoretical insights (the idea to match by synthesis and the proposed loss which is novel, and combines the proposed concepts), the paper lacks clarity and sufficient details.\\n\\nInstead of the longer intro and related work discussion, I would prefer to see a Figure with the architecture and more illustrative examples to show that the insights are reflected in the experiments. Also, the matching part, which is discussed at the theoretical level, could be better explained and presented at a more visual level. It is hard to understand sufficiently well what the formalism means without more insight.\\n\\nAlso, the experiments need more details. For example, it is not clear what the numbers in Table 2 mean.\\n\\n\\n\\n', 'confidence': '4: The reviewer is confident but not absolutely certain that the evaluation is correct'}, 'writers': [], 'nonreaders': []}, {'tddate': None, 'ddate': None, 'original': None, 'tmdate': 1515642442743, 'tcdate': 1511913916642, 'number': 1, 'cdate': 1511913916642, 'id': 'SkHatuolz', 'invitation': 'ICLR.cc/2018/Conference/-/Paper390/Official_Review', 'forum': 'BkN_r2lR-', 'replyto': 'BkN_r2lR-', 'signatures': ['ICLR.cc/2018/Conference/Paper390/AnonReviewer3'], 'readers': ['everyone'], 'content': {'title': 'AN-GAN: match-aware translation of images across domains, new ideas for combining image matching and GANs', 'rating': '7: Good paper, accept', 'review': 'This paper presents an image-to-image cross domain translation framework based on generative adversarial networks. The contribution is the addition of an explicit exemplar constraint into the formulation which allows best matches from the other domain to be retrieved. The results show that the proposed method is superior for the task of exact correspondence identification and that AN-GAN rivals the performance of pix2pix with strong supervision.\\n\\n\\nNegatives:\\n1.) The task of exact correspondence identification seems contrived. It is not clear which real-world problems have this property of having both all inputs and all outputs in the dataset, with just the correspondence information between inputs and outputs missing.\\n2.) The supervised vs unsupervised experiment on Facades->Labels (Table 3) is only one scenario where applying a supervised method on top of AN-GAN’s matches is better than an unsupervised method. More transfer experiments of this kind would greatly benefit the paper and support the conclusion that “our self-supervised method performs similarly to the fully supervised method.” \\n\\nPositives:\\n1.) The paper does a good job motivating the need for an explicit image matching term inside a GAN framework\\n2.) The paper shows promising results on applying a supervised method on top of AN-GAN’s matches.\\n\\nMinor comments:\\n1. The paper sometimes uses L1 and sometimes L_1, it should be L_1 in all cases.\\n2. DiscoGAN should have the Kim et al citation, right after the first time it is used. I had to look up DiscoGAN to realize it is just Kim et al.', 'confidence': '4: The reviewer is confident but not absolutely certain that the evaluation is correct'}, 'writers': [], 'nonreaders': []}]
This paper builds on top of Cycle GAN ideas where the main idea is to jointly optimize the domain-level translation function with an instance-level matching objective. Initially the paper received two negative reviews (4,5) and a positive (7). After the rebuttal and several back and forth between the first reviewer and the authors, the reviewer was finally swayed by the new experiments. While not officially changing the score, the reviewer recommended acceptance. The AC agrees that the paper is interesting and of value to the ICLR audience.
['unsupervised mapping', 'cross domain mapping']
[{'tddate': None, 'ddate': None, 'tmdate': 1514987205323, 'tcdate': 1514987205323, 'number': 5, 'cdate': 1514987205323, 'id': 'rJ6aA85QG', 'invitation': 'ICLR.cc/2018/Conference/-/Paper390/Official_Comment', 'forum': 'BkN_r2lR-', 'replyto': 'BkN_r2lR-', 'signatures': ['ICLR.cc/2018/Conference/Paper390/Authors'], 'readers': ['everyone'], 'writers': ['ICLR.cc/2018/Conference/Paper390/Authors'], 'content': {'title': 'A real-world application of our method in cell biology', 'comment': 'Two reviewers were concerned that the problem of unsupervised simultaneous cross-domain alignment and mapping, while well suited to the existing ML benchmarks, may not have real-world applications. In our rebuttal, we responded to the challenge posed by AnonReviewer2 to present examples of applications with many important use cases.\\n\\nIn order to further demonstrate that the task has general scientific significance, we present results obtained using our method in the domain of single cell expression analysis. This field has emerged recently, due to new technologies that enable the measurement of gene expression at the level of individual cells. This capability already led to the discovery of quite a few previously unknown cell types and holds the potential to revolutionize cell biology. However, there are many computational challenges since the data is given as sets of unordered measurements. Here, we show how to use our method to map between gene expression of cell samples from two individuals and find interpersonal matching cells.\\n\\nFrom the data of [1], we took the expressions of blood cells (PMBC) extracted for donors A and B (available online at https://support.10xgenomics.com/single-cell-gene-expression/datasets; we used the matrices of what is called “filtered results”). These expressions are sparse matrices, denoting 3k and 7k cells in the two samples and expressions of around 32k genes. We randomly subsampled the 7k cells from donor B to 3k and reduced the dimensions of each sample from 32k to 100 via PCA. Then, we applied our method in order to align the expression of the two donors (find a transformation) and match between the cell samples in each. Needless to say, there is no supervision in the form of matching between the cells of the two donors and the order of the samples is arbitrary. However, we can expect such matches to exist. \\n\\nWe compare three methods:\\nThe mean distance between a sample in set A and a sample in set B (identity transformation). \\nThe mean distance after applying a CycleGAN to compute the transformation from A to B (CG for CycleGAN).\\nThe mean distance after applying our complete method.\\n\\nThe mean distance with the identity mapping is 3.09, CG obtains 2.67, and our method 1.18. The histograms of the distances are shown in the anonymous url:\\nhttps://imgur.com/xP3MVmq\\n\\nWe see a great potential in further applying our method in biology with applications ranging from interspecies biological network alignment [2] to drug discovery [3], i.e. aligning expression signatures of molecules to that of diseases.\\n \\n[1] Zheng et al, “Massively parallel digital transcriptional profiling of single cells”. Nature Communications, 2017.\\n\\n[2] Singh, Rohit, Jinbo Xu, and Bonnie Berger. "Global alignment of multiple protein interaction networks with application to functional orthology detection." Proceedings of the National Academy of Sciences 105.35 (2008): 12763-12768.\\n\\n[3] Gottlieb, et al. "PREDICT: a method for inferring novel drug indications with application to personalized medicine." Molecular systems biology 7.1 (2011): 496.\\n'}, 'nonreaders': []}]
[{'tddate': None, 'ddate': None, 'tmdate': 1513188478089, 'tcdate': 1513188135968, 'number': 3, 'cdate': 1513188135968, 'id': 'rklEiy1Mz', 'invitation': 'ICLR.cc/2018/Conference/-/Paper390/Official_Comment', 'forum': 'BkN_r2lR-', 'replyto': 'ryhcYB-bG', 'signatures': ['ICLR.cc/2018/Conference/Paper390/Authors'], 'readers': ['everyone'], 'writers': ['ICLR.cc/2018/Conference/Paper390/Authors'], 'content': {'title': 'Response to the rest of the comments', 'comment': 'We thank the reviewer for the extensive style and reference comments. They have been fixed in the revised version:\\n1. A definition of “analogy” for the present context added to intro.\\n2. Putin example removed for need of space.\\n3. “Weakly Supervised Mapping” previous work section removed and references merged for need of space.\\n4. Optimization variables have been explicitly added to equations.\\n5. Maps dataset citation was changed to Isola et al. 2017\\n6. Removed confusing comment: “This shows that the distribution matching is able to map source images that are semantically similar in the target domain.”\\n7. “This shows that a good initialization is important for this task.”: one way of looking at it, is that the exemplar loss optimizes the matching problem that we care about but is a hard optimization task. The two other losses are auxiliary losses that help optimization converge. Clarification added in text.\\n8. The results shown for inexact matching are as follows: For alpha iterations and ANGAN we show the matches recovered by our methods, The DiscoGAN results are the outputs of the translation function.\\n9. Parentheses added to all citations.\\n\\nWe hope that this has convinced the reviewer of the importance of this work and are keen to answer any further questions.\\n'}, 'nonreaders': []}, {'tddate': None, 'ddate': None, 'tmdate': 1513188521809, 'tcdate': 1513187814392, 'number': 2, 'cdate': 1513187814392, 'id': 'Sk0k9JkfG', 'invitation': 'ICLR.cc/2018/Conference/-/Paper390/Official_Comment', 'forum': 'BkN_r2lR-', 'replyto': 'HJ08-bCef', 'signatures': ['ICLR.cc/2018/Conference/Paper390/Authors'], 'readers': ['everyone'], 'writers': ['ICLR.cc/2018/Conference/Paper390/Authors'], 'content': {'title': 'Response', 'comment': 'Thank you for your positive feedback on the theoretical and experimental merits of this paper.\\n\\nFollowing your feedback on the clarity of presentation of the method. we included a diagram (including example images) illustrating the algorithm. To help keep the length under control, we shortened the introduction and related work section as you suggested.\\n\\nWe further clarified the text of the experiments. Specifically the numbers in Tab 2 are the top-1 accuracy for both directions (A to B and B to A) when 0%, 10% and 25% of examples do not have matches in the other domain. If some details remain unclear, we would be glad to clarify them.\\n\\nWe hope that your positive opinion of the content of the paper with the improvement in clarity of presentation will merit an acceptance.\\n'}, 'nonreaders': []}, {'tddate': None, 'ddate': None, 'tmdate': 1513188537867, 'tcdate': 1513187636001, 'number': 1, 'cdate': 1513187636001, 'id': 'Hyj4tk1GM', 'invitation': 'ICLR.cc/2018/Conference/-/Paper390/Official_Comment', 'forum': 'BkN_r2lR-', 'replyto': 'SkHatuolz', 'signatures': ['ICLR.cc/2018/Conference/Paper390/Authors'], 'readers': ['everyone'], 'writers': ['ICLR.cc/2018/Conference/Paper390/Authors'], 'content': {'title': 'Response', 'comment': 'We thank you for highlighting the novelty and successful motivation of the exemplar-based matching loss. \\n\\nWe think that the exact-analogy problem is very important. Please refer to our comment to AnonReviewer2 for an extensive discussion. \\n\\nFollowing your request, we have added AN-GAN supervised experiments for the edges2shoes and edges2handbags datasets. The results as for the Facades case are very good.\\n\\nThank you for highlighting the inconsistency in L_1 notation and the confusing reference. This has been fixed in the revised version.\\n'}, 'nonreaders': []}, {'tddate': None, 'ddate': None, 'tmdate': 1513188450517, 'tcdate': 1513188450517, 'number': 4, 'cdate': 1513188450517, 'id': 'Byqw2JyGf', 'invitation': 'ICLR.cc/2018/Conference/-/Paper390/Official_Comment', 'forum': 'BkN_r2lR-', 'replyto': 'ryhcYB-bG', 'signatures': ['ICLR.cc/2018/Conference/Paper390/Authors'], 'readers': ['everyone'], 'writers': ['ICLR.cc/2018/Conference/Paper390/Authors'], 'content': {'title': 'Response to the motivation and experimental comments', 'comment': 'Thank you for the detailed and constructive review. It highlighted motivation and experimental protocols that were further clarified in the revised version.\\n\\nThis paper is focused on exact analogy identification. A core question in the reviews was the motivation for the scenario of exact matching, and we were challenged by the reviewer to find real world applications for it. \\n\\nWe believe that finding exact matches is an important problem and occurs in multiple real-world problems. Exact or near-exact matching occurs in: \\n* 3D point cloud matching.\\n* Matching between different cameras panning the same scene in different trajectories (hard if they are in different modalities such as RGB and IR).\\n* Matching between the audio samples of two speakers uttering the same set of sentences.\\n* Two repeats of the same scripted activity (recipe, physics experiment, theatrical show)\\n* Two descriptions of the same news event in different styles (at the sentence level or at the story level).\\n* Matching parallel dictionary definitions and visual collections.\\n* Learning to play one racket sport after knowing to play another, building on the existing set of acquired movements and skills.\\n\\nIn all these cases, there are exact or near exact analogies that could play a major rule in forming unsupervised links between the domains.\\n \\nWe note that on a technical level, most numerical benchmarks in cross domain translation are already built using exact matches, and many of the unsupervised techniques could be already employing this information, even if implicitly. We show that our method is more effective at it than other methods.\\n\\nOn a more theoretical level, cognitive theories of analogy-based reasoning mostly discuss exact analogies from memory (see, e.g., G. Fauconnier, and M. Turner, “The way we think”, 2002 ). For example, a new situation is dealt with by retrieving and adopting a motor action that was performed before. Here, the chances of finding such analogies are high since the source domain is heavily populated due to life experiences. \\n\\nRegarding experiments. We believe that in some cases the requests are conflicting: we cannot provide numerical results in places for which there are no analogies and no metrics for success. We provide a large body of experiments for exact matches and show that our method far surpasses everything else. We have compared with multiple baselines covering all the reasonable successful approaches for matching between domains. \\n\\nThe experiments regarding cases without exact matches are, admittedly, less extensive, added for completeness, and not the focus of this paper.\\n\\nThe reviewer wondered if matching will likely work better with simpler methods. Our baselines test precisely this possibility and show that the simpler methods do not perform well. Specifically edge-based matches are well covered by the more general VGG feature baseline (which uses also low level maps - not just fc7). AN-GAN has easily outperformed this method. If it is possible to hand-craft a successful method for each task individually, these hand-crafted features are unlikely to generalize as well as the multi-scale VGG features or AN-GAN.\\n\\nWe put further clarification in the paper for the motivation for the second “supervised” step. In unsupervised semantic matching, larger neural architecture have been theoretically and practically shown to be less successful (due to overfitting and finding it less easy to recover the correct transformation). The distribution matching loss function (e.g. CycleGAN) is adversarial and is therefore less stable and might not optimize the quantity we care about (e.g. L1/L2 loss). Once the datasets are aligned and analogies are identified, however, the cross domain translation becomes a standard supervised deep learning problem where large architectures do well and standard loss functions can be used. This is the reason for the two steps. It might be possible to include the increase in architecture into the alpha-iterations but it’s non-trivial and we didn’t find it necessary.\\n'}, 'nonreaders': []}]
[{'tddate': None, 'ddate': None, 'tmdate': 1515816395809, 'tcdate': 1515816395809, 'number': 6, 'cdate': 1515816395809, 'id': 'ByECSWv4z', 'invitation': 'ICLR.cc/2018/Conference/-/Paper390/Official_Comment', 'forum': 'BkN_r2lR-', 'replyto': 'Byqw2JyGf', 'signatures': ['ICLR.cc/2018/Conference/Paper390/AnonReviewer2'], 'readers': ['everyone'], 'writers': ['ICLR.cc/2018/Conference/Paper390/AnonReviewer2'], 'content': {'title': 'Response to rebuttal', 'comment': "Thank you for your detailed reply. I still think the paper could be much improved with more extensive experiments and better applications. However, I agree that the problem setting is interesting and novel, the method is compelling, and the experiments provide sufficient evidence that the method actually works. Therefore I would not mind seeing this paper accepted into ICLR, and, upon reflection, I think this paper is hovering around the acceptance threshold.\\n\\nI really like the real-world examples listed! I would be excited to see the proposed method applied to some of these problems. I think that would greatly improve the paper. (Although, I would still argue that several of the listed examples are cases where the data would naturally come in a paired format, and direct supervision could be applied.)\\n\\nIt's a good point that previous unsupervised, cross domain GANs were also evaluated on contrived datasets with exact matches available at training time. However, I'd argue that these papers were convincing mainly because of extensive qualitative results on datasets without exact matches. Those qualitative results were enough to demonstrate that unpaired translation is possible. The current paper aims to go further, and show that the proposed method does _better_ at unpaired translation than previous methods. Making a comparison like this is harder than simply showing that the method can work at all, and I think it calls for quantitative metrics on real unpaired problems (like the examples listed in the rebuttal).\\n\\nThere are a number of quantitative ways to evaluate performance on datasets without exact matches. First, user studies could be run on Mechanical Turk. Second, unconditional metrics could be evaluated, such as Inception score or moment matching (do the statistics of the output distribution match the statistics of the target domain?).\\n\\nHowever, I actually think it is fine to evaluate on ground truth matches as long as the training data is less contrived. For example, I would find it compelling if the system were tested on 3D point cloud matching, even if the training data contains exact matches, as long as there is no trivial way of finding these matches.\\n\\n"}, 'nonreaders': []}]
[{'tddate': None, 'ddate': None, 'tmdate': 1516055638897, 'tcdate': 1516055638897, 'number': 7, 'cdate': 1516055638897, 'id': 'BkyDnj5VG', 'invitation': 'ICLR.cc/2018/Conference/-/Paper390/Official_Comment', 'forum': 'BkN_r2lR-', 'replyto': 'ByECSWv4z', 'signatures': ['ICLR.cc/2018/Conference/Paper390/Authors'], 'readers': ['everyone'], 'writers': ['ICLR.cc/2018/Conference/Paper390/Authors'], 'content': {'title': 'Additional experiment requested by Reviewer', 'comment': 'We are deeply thankful to AnonReviewer2 for holding an open discussion and for acknowledging the significance of the proposed problem setting, the work’s novelty, and the quality of the experiments.\\nWe are also happy that AnonReviewer2 found the list of possible applications, provided in reply to the challenge posted in the review, to be exciting. We therefore gladly accept the new challenge that was set, to demonstrate the success of our method on one of the proposed applications in the list.\\nSince the reviewer explicitly requested 3D point cloud matching, we have evaluated our method on this task. It should be noted that our method was never tested before in low-D settings, so this experiment is of particular interest.\\nSpecifically, we ran the experiment using the Bunny benchmark, exactly as is shown in “Discriminative optimization: theory and applications to point cloud registration”, CVPR’17 available as an extended version at https://arxiv.org/pdf/1707.04318.pdf, Sec. 6.2.3 . In this benchmark, the object is rotated by a random degree, and we tested the success rate of our model in achieving alignment for various ranges of rotation angles. \\nFor both CycleGAN and our method, the following architecture was used. D is a fully connected network with 2 hidden layers, each of 2048 hidden units, followed by BatchNorm and with Leaky ReLU activations. The mapping function is a linear affine matrix of size 3 * 3 with a bias term. Since in this problem, the transformation is restricted to be a rotation matrix, in both methods we added a loss term that encourages orthonormality of the weights of the mapper. Namely, ||WW^T-I||, where W are the weights of our mapping function.\\nThe table below depicts the success rate for the two methods, for each rotation angle bin, where success is defined in this benchmark as achieving an RMSE alignment accuracy of 0.05.\\nRotation angle | CycleGAN | Ours\\n============================\\n0-30 0.12000 1.00000 \\n30-60 0.12500 1.00000 \\n60-90 0.11538 0.88462 \\n90-120 0.07895 0.78947 \\n120-150 0.05882 0.64706 \\n150-180 0.10000 0.76667\\n \\nComparing to the results reported in Fig. 3 of https://arxiv.org/pdf/1707.04318.pdf, middle column, our results seem to significantly outperform the methods presented there at large angles. Therefore, the proposed method outperforms all baselines and, once again, proves to be effective as well as broadly applicable.\\nP.S. It seems that the comment we posted above, which was titled “A real-world application of our method in cell biology” (https://openreview.net/forum?id=BkN_r2lR-&noteId=rJ6aA85QG), went unnoticed. In a way, it already addressed the new challenge by presenting quantitative results on a real-world dataset for which there are no underlying ground truth matches. '}, 'nonreaders': []}]
366
"Recurrent neural networks (RNN), convolutional neural networks (CNN) and self-attention networks (S(...TRUNCATED)
Bi-Directional Block Self-Attention for Fast and Memory-Efficient Sequence Modeling
ICLR.cc/2018/Conference
H1cWzoxA-
"[{'ddate': None, 'original': None, 'tddate': 1511806221587, 'tmdate': 1515642439599, 'tcdate': 1511(...TRUNCATED)
"The proposed Bi-BloSAN is a two-levels' block SAN, which has both parallelization efficiency and me(...TRUNCATED)
"['deep learning', 'attention mechanism', 'sequence modeling', 'natural language processing', 'sente(...TRUNCATED)
"[{'tddate': None, 'ddate': None, 'tmdate': 1513062985913, 'tcdate': 1513062985913, 'number': 5, 'cd(...TRUNCATED)
"[{'tddate': None, 'ddate': None, 'tmdate': 1513062835663, 'tcdate': 1513062720964, 'number': 4, 'cd(...TRUNCATED)
[]
[]
916
"To train an inference network jointly with a deep generative topic model, making it both scalable t(...TRUNCATED)
WHAI: Weibull Hybrid Autoencoding Inference for Deep Topic Modeling
ICLR.cc/2018/Conference
S1cZsf-RW
"[{'tddate': None, 'ddate': None, 'original': None, 'tmdate': 1515642530672, 'tcdate': 1511899037154(...TRUNCATED)
"The paper proposes a new approach for scalable training of deep topic models based on amortized inf(...TRUNCATED)
[]
[]
"[{'tddate': None, 'ddate': None, 'tmdate': 1513573461760, 'tcdate': 1513573461760, 'number': 2, 'cd(...TRUNCATED)
"[{'tddate': None, 'ddate': None, 'tmdate': 1520324393097, 'tcdate': 1520324393097, 'number': 1, 'cd(...TRUNCATED)
[]
493
"We analyze the expressiveness and loss surface of practical deep convolutional\nneural networks (CN(...TRUNCATED)
The loss surface and expressivity of deep convolutional neural networks
ICLR.cc/2018/Conference
BJjquybCW
"[{'tddate': None, 'ddate': None, 'original': None, 'tmdate': 1515642456435, 'tcdate': 1513195189900(...TRUNCATED)
"Dear authors,\n\nWhile I appreciate the result that a convolutional layer can have full rank output(...TRUNCATED)
"['convolutional neural networks', 'loss surface', 'expressivity', 'critical point', 'global minima'(...TRUNCATED)
[]
"[{'tddate': None, 'ddate': None, 'tmdate': 1515186899255, 'tcdate': 1515186899255, 'number': 4, 'cd(...TRUNCATED)
[]
[]
End of preview.