repo | instance_id | base_commit | patch | test_patch | problem_statement | hints_text | created_at | version | FAIL_TO_PASS | PASS_TO_PASS | environment_setup_commit | traceback | __index_level_0__
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
DataDog/integrations-core | DataDog__integrations-core-446 | 0b9be7366a08b2fa1b83c036d823d8848762770f | diff --git a/postgres/check.py b/postgres/check.py
--- a/postgres/check.py
+++ b/postgres/check.py
@@ -651,14 +651,17 @@ def _get_custom_metrics(self, custom_metrics, key):
self.log.debug("Metric: {0}".format(m))
- for ref, (_, mtype) in m['metrics'].iteritems():
- cap_mtype = mtype.upper()
- if cap_mtype not in ('RATE', 'GAUGE', 'MONOTONIC'):
- raise CheckException("Collector method {0} is not known."
- " Known methods are RATE, GAUGE, MONOTONIC".format(cap_mtype))
-
- m['metrics'][ref][1] = getattr(PostgreSql, cap_mtype)
- self.log.debug("Method: %s" % (str(mtype)))
+ try:
+ for ref, (_, mtype) in m['metrics'].iteritems():
+ cap_mtype = mtype.upper()
+ if cap_mtype not in ('RATE', 'GAUGE', 'MONOTONIC'):
+ raise CheckException("Collector method {0} is not known."
+ " Known methods are RATE, GAUGE, MONOTONIC".format(cap_mtype))
+
+ m['metrics'][ref][1] = getattr(PostgreSql, cap_mtype)
+ self.log.debug("Method: %s" % (str(mtype)))
+ except Exception as e:
+ raise CheckException("Error processing custom metric '{}': {}".format(m, e))
self.custom_metrics[key] = custom_metrics
return custom_metrics
| [postgres] Improve config reading errors
I had this `postgres.yaml`:
```
init_config:
instances:
- host: pepepe
...
custom_metrics:
- query: SELECT %s FROM pg_locks WHERE granted = false;
metrics:
count(distinct pid): [postgresql.connections_locked]
descriptors: []
relation: false
```
with a few other hosts and custom metrics. When deploying this I got the following error:
```
2017-02-13 15:33:14 UTC | ERROR | dd.collector | checks.postgres(__init__.py:762) | Check 'postgres' instance #0 failed
Traceback (most recent call last):
File "/opt/datadog-agent/agent/checks/__init__.py", line 745, in run
self.check(copy.deepcopy(instance))
File "/opt/datadog-agent/agent/checks.d/postgres.py", line 606, in check
custom_metrics = self._get_custom_metrics(instance.get('custom_metrics', []), key)
File "/opt/datadog-agent/agent/checks.d/postgres.py", line 576, in _get_custom_metrics
for ref, (_, mtype) in m['metrics'].iteritems():
ValueError: need more than 1 value to unpack
```
This was caused by a missing metric type in the yaml above i.e. it should have been `[postgresql.connections_locked, GAUGE]`.
Because the error message is unclear and also doesn't point to the offending metric (remember I have other hosts and custom metrics), it took me a couple of hours to figure out the cause of this error.
Please consider improving the error messages around config reading.
| Thanks a lot for this report @mausch!
We can't validate the config in a consistent manner, which makes it tricky to produce a better error for something like this. We will work on making this a lot better in the future.
However, what we can do in the very near future is make the documentation both online and in the config yaml itself a lot better. The documentation for the postgres check does not make it clear how to use the custom metrics very well, so better documentation will definitely help to assuage this issue!
Thanks again for your report, we really appreciate this and I will add this to our issue board.
> We can't validate the config in a consistent manner
Not sure what this means exactly, but generally speaking a good error message should give the user enough context so that they can readily fix it.
Better docs are great, but ultimately people will always make mistakes when defining complex config so you need good error messages.
In this particular case, it could be as easy as wrapping the iteration in `_get_custom_metrics` with a `try..except` and in the exception handler wrap the exception with another one that displays the metric being processed (e.g. `raise CheckException("Error processing custom metric: " + str(m)) from e`)
More generally, avoiding partial functions (like tuple unpacking in Python) makes it much easier to validate input and report errors correctly.
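As a rough illustration of that last point (a hypothetical helper, not the shipped fix): validating the shape of each metric definition before unpacking lets the error name the offending entry.
```python
def validate_custom_metrics(metrics):
    # Each value must be a two-element [metric_name, collection_method] pair.
    for ref, spec in metrics.items():
        if not isinstance(spec, (list, tuple)) or len(spec) != 2:
            raise ValueError(
                "Custom metric {!r} must be [name, method], got {!r}".format(ref, spec)
            )

# The config from the report above trips the check with a readable message:
validate_custom_metrics({'count(distinct pid)': ['postgresql.connections_locked']})
```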
Adding to our queue, this would make the life of support engineers much easier, thanks for reporting and for the suggestions. | 2017-05-29T13:10:25Z | [] | [] |
Traceback (most recent call last):
File "/opt/datadog-agent/agent/checks/__init__.py", line 745, in run
self.check(copy.deepcopy(instance))
File "/opt/datadog-agent/agent/checks.d/postgres.py", line 606, in check
custom_metrics = self._get_custom_metrics(instance.get('custom_metrics', []), key)
File "/opt/datadog-agent/agent/checks.d/postgres.py", line 576, in _get_custom_metrics
for ref, (_, mtype) in m['metrics'].iteritems():
ValueError: need more than 1 value to unpack
| 29 |
|||
DataDog/integrations-core | DataDog__integrations-core-5659 | 3b850d826a2f245e9dcc8a1d87d5e2343123882e | diff --git a/datadog_checks_base/datadog_checks/base/checks/win/wmi/__init__.py b/datadog_checks_base/datadog_checks/base/checks/win/wmi/__init__.py
--- a/datadog_checks_base/datadog_checks/base/checks/win/wmi/__init__.py
+++ b/datadog_checks_base/datadog_checks/base/checks/win/wmi/__init__.py
@@ -114,14 +114,15 @@ def _get_tag_query_tag(self, sampler, wmi_obj, tag_query):
target_class, target_property, filters = self._format_tag_query(sampler, wmi_obj, tag_query)
# Create a specific sampler
- tag_query_sampler = WMISampler(self.log, target_class, [target_property], filters=filters, **sampler.connection)
+ with WMISampler(
+ self.log, target_class, [target_property], filters=filters, **sampler.connection
+ ) as tag_query_sampler:
+ tag_query_sampler.sample()
- tag_query_sampler.sample()
+ # Extract tag
+ self._raise_on_invalid_tag_query_result(tag_query_sampler, wmi_obj, tag_query)
- # Extract tag
- self._raise_on_invalid_tag_query_result(tag_query_sampler, wmi_obj, tag_query)
-
- link_value = str(tag_query_sampler[0][target_property]).lower()
+ link_value = str(tag_query_sampler[0][target_property]).lower()
tag = "{tag_name}:{tag_value}".format(tag_name=target_property.lower(), tag_value="_".join(link_value.split()))
@@ -235,14 +236,17 @@ def _get_instance_key(self, host, namespace, wmi_class, other=None):
return "{host}:{namespace}:{wmi_class}".format(host=host, namespace=namespace, wmi_class=wmi_class)
- def _get_wmi_sampler(self, instance_key, wmi_class, properties, tag_by="", **kwargs):
+ def _get_running_wmi_sampler(self, instance_key, wmi_class, properties, tag_by="", **kwargs):
"""
- Create and cache a WMISampler for the given (class, properties)
+ Return a running WMISampler for the given (class, properties).
+
+ If no matching WMISampler is running yet, start one and cache it.
"""
properties = list(properties) + [tag_by] if tag_by else list(properties)
if instance_key not in self.wmi_samplers:
wmi_sampler = WMISampler(self.log, wmi_class, properties, **kwargs)
+ wmi_sampler.start()
self.wmi_samplers[instance_key] = wmi_sampler
return self.wmi_samplers[instance_key]
diff --git a/datadog_checks_base/datadog_checks/base/checks/win/wmi/sampler.py b/datadog_checks_base/datadog_checks/base/checks/win/wmi/sampler.py
--- a/datadog_checks_base/datadog_checks/base/checks/win/wmi/sampler.py
+++ b/datadog_checks_base/datadog_checks/base/checks/win/wmi/sampler.py
@@ -105,6 +105,7 @@ def __init__(
# Sampling state
self._sampling = False
+ self._stopping = False
self.logger = logger
@@ -146,12 +147,35 @@ def __init__(
self._runSampleEvent = Event()
self._sampleCompleteEvent = Event()
- thread = Thread(target=self._query_sample_loop, name=class_name)
- thread.daemon = True
+ def start(self):
+ """
+ Start internal thread for sampling
+ """
+ thread = Thread(target=self._query_sample_loop, name=self.class_name)
+ thread.daemon = True # Python 2 does not support daemon as Thread constructor parameter
thread.start()
+ def stop(self):
+ """
+ Dispose of the internal thread
+ """
+ self._stopping = True
+ self._runSampleEvent.set()
+ self._sampleCompleteEvent.wait()
+
+ def __enter__(self):
+ self.start()
+ return self
+
+ def __exit__(self, type, value, traceback):
+ self.stop()
+
def _query_sample_loop(self):
try:
+ # Initialize COM for the current (dedicated) thread
+ # WARNING: any python COM object (locator, connection, etc) created in a thread
+ # shouldn't be used in other threads (can lead to memory/handle leaks if done
+ # without a deep knowledge of COM's threading model).
pythoncom.CoInitialize()
except Exception as e:
self.logger.info("exception in CoInitialize: %s", e)
@@ -159,6 +183,11 @@ def _query_sample_loop(self):
while True:
self._runSampleEvent.wait()
+ if self._stopping:
+ self.logger.debug("_query_sample_loop stopping")
+ self._sampleCompleteEvent.set()
+ return
+
self._runSampleEvent.clear()
if self.is_raw_perf_class and not self._previous_sample:
self._current_sample = self._query()
@@ -335,11 +364,6 @@ def get_connection(self):
self.username,
)
- # Initialize COM for the current thread
- # WARNING: any python COM object (locator, connection, etc) created in a thread
- # shouldn't be used in other threads (can lead to memory/handle leaks if done
- # without a deep knowledge of COM's threading model). Because of this and given
- # that we run each query in its own thread, we don't cache connections
additional_args = []
if self.provider != ProviderArchitecture.DEFAULT:
diff --git a/win32_event_log/datadog_checks/win32_event_log/win32_event_log.py b/win32_event_log/datadog_checks/win32_event_log/win32_event_log.py
--- a/win32_event_log/datadog_checks/win32_event_log/win32_event_log.py
+++ b/win32_event_log/datadog_checks/win32_event_log/win32_event_log.py
@@ -115,7 +115,7 @@ def check(self, instance):
filters.append(query)
- wmi_sampler = self._get_wmi_sampler(
+ wmi_sampler = self._get_running_wmi_sampler(
instance_key,
self.EVENT_CLASS,
event_properties,
diff --git a/wmi_check/datadog_checks/wmi_check/wmi_check.py b/wmi_check/datadog_checks/wmi_check/wmi_check.py
--- a/wmi_check/datadog_checks/wmi_check/wmi_check.py
+++ b/wmi_check/datadog_checks/wmi_check/wmi_check.py
@@ -52,7 +52,7 @@ def check(self, instance):
metric_name_and_type_by_property, properties = self._get_wmi_properties(instance_key, metrics, tag_queries)
- wmi_sampler = self._get_wmi_sampler(
+ wmi_sampler = self._get_running_wmi_sampler(
instance_key,
wmi_class,
properties,
| WMI integration throws Exception: SWbemLocator Not enough storage is available to process this command
```text
===============
Agent (v7.16.0)
===============
Status date: 2020-02-05 15:56:45.740020 GMT
Agent start: 2020-02-05 15:03:08.601503 GMT
Pid: 25188
Go Version: go1.12.9
Python Version: 3.7.4
Build arch: amd64
Host Info
=========
bootTime: 2020-01-30 09:06:55.000000 GMT
os: windows
platform: Windows Server 2016 Datacenter
platformFamily: Windows Server 2016 Datacenter
platformVersion: 10.0 Build 14393
procs: 255
uptime: 149h56m12s
wmi_check (1.6.0)
```
**Steps to reproduce the issue:**
The WMI Check integration is configured to capture metrics for multiple instances of a specific process and tag them using the command line, as below
```yaml
- class: Win32_PerfFormattedData_PerfProc_Process
metrics:
- - ThreadCount
- proc.threads.count
- gauge
- - VirtualBytes
- proc.mem.virtual
- gauge
- - PrivateBytes
- proc.mem.private
- gauge
- - WorkingSet
- proc.mem.workingset
- gauge
- - PageFaultsPerSec
- proc.mem.page_faults_per_sec
- gauge
- - PercentProcessorTime
- proc.cpu_pct
- gauge
- - IOReadBytesPerSec
- proc.io.bytes_read
- gauge
- - IOWriteBytesPerSec
- proc.io.bytes_written
- gauge
filters:
- Name: Calastone.Core.MessageAdapter.Console%
tag_by: Name
tag_queries:
- [IDProcess, Win32_Process, Handle, CommandLine]
```
There are 17 instances of the process running.
**Describe the results you received:**
- After a period of time (can be 40+ minutes) the following error starts to be logged
```
2020-02-04 16:31:29 GMT | CORE | WARN | (pkg/collector/python/datadog_agent.go:118 in LogMessage) | wmi_check:a7174f61bd7a5360 | (sampler.py:469) | Failed to execute WMI query (Select CommandLine from Win32_Process WHERE ( Handle = '8408' ))
Traceback (most recent call last):
File "C:\Program Files\Datadog\Datadog Agent\embedded3\lib\site-packages\datadog_checks\base\checks\win\wmi\sampler.py", line 464, in _query
raw_results = self.get_connection().ExecQuery(wql, "WQL", query_flags)
File "C:\Program Files\Datadog\Datadog Agent\embedded3\lib\site-packages\datadog_checks\base\checks\win\wmi\sampler.py", line 351, in get_connection
connection = locator.ConnectServer(self.host, self.namespace, self.username, self.password, *additional_args)
File "<COMObject WbemScripting.SWbemLocator>", line 5, in ConnectServer
File "C:\Program Files\Datadog\Datadog Agent\embedded3\lib\site-packages\win32com\client\dynamic.py", line 287, in _ApplyTypes_
result = self._oleobj_.InvokeTypes(*(dispid, LCID, wFlags, retType, argTypes) + args)
pywintypes.com_error: (-2147352567, 'Exception occurred.', (0, 'SWbemLocator', 'Not enough storage is available to process this command. ', None, 0, -2147024888), None)
2020-02-04 16:31:29 GMT | CORE | WARN | (pkg/collector/python/datadog_agent.go:118 in LogMessage) | wmi_check:a7174f61bd7a5360 | (__init__.py:88) | Failed to extract a tag from `tag_queries` parameter: no result was returned. wmi_object={'threadcount': 27.0, 'virtualbytes': 823386112.0, 'privatebytes': 304635904.0, 'workingset': 367628288.0, 'pagefaultspersec': 0.0, 'percentprocessortime': 0.0, 'ioreadbytespersec': 0.0, 'iowritebytespersec': 0.0, 'idprocess': 8408.0, 'name': 'Calastone.Core.MessageAdapter.Console#3'} - query=['IDProcess', 'Win32_Process', 'Handle', 'CommandLine']
2020-02-04 16:31:29 GMT | CORE | WARN | (pkg/collector/python/datadog_agent.go:118 in LogMessage) | wmi_check:a7174f61bd7a5360 | (sampler.py:469) | Failed to execute WMI query (Select CommandLine from Win32_Process WHERE ( Handle = '14836' ))
```
- The number of threads used by the agent process is observed to be rocketing (> 1700)
- The server becomes unresponsive
**Diagnosis:**
This issue didn't occur on the previous version of the agent we were using (6.7.0).
Looking at the source code suggests the problem was introduced as part of #3987
https://github.com/DataDog/integrations-core/blob/010ed622d62c9dd7de28d76f1191a4be5960a965/datadog_checks_base/datadog_checks/base/checks/win/wmi/__init__.py#L117 creates a WMISampler for EVERY tag query that needs to be run. With the new logic that creates a thread for each query that is never released!
**Solution:**
The following hack fixes the problem. I'll put it into a PR.
Change `sampler.py`
```python
def _query_sample_loop(self):
...
while True:
self._runSampleEvent.wait()
if self._stopping:
return
def dispose(self):
"""
Dispose of the internal thread
"""
self._stopping = True
self._runSampleEvent.set()
```
Change `__init__.py`
```python
def _get_tag_query_tag(self, sampler, wmi_obj, tag_query):
...
tag = "{tag_name}:{tag_value}".format(tag_name=target_property.lower(), tag_value="_".join(link_value.split()))
tag_query_sampler.dispose()
```
There also looks to be scope to cache these WMISampler classes like the main metric samplers. Also the connection created in `get_connection` could be created in the sampler thread method since it is now bound to that thread
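For illustration, a minimal sketch of how a short-lived, per-query sampler looks with the context-manager form of this idea (mirroring the `start`/`stop`/`__enter__`/`__exit__` methods in the diff above; the import path is taken from the traceback, the filter shape is an assumption, and the snippet is untested):
```python
import logging

from datadog_checks.base.checks.win.wmi.sampler import WMISampler

log = logging.getLogger(__name__)

# Entering the block starts the dedicated sampling thread; leaving it signals
# the thread to exit, so no thread is leaked per tag query.
with WMISampler(log, "Win32_Process", ["CommandLine"], filters=[{"Handle": "8408"}]) as sampler:
    sampler.sample()
    command_line = sampler[0]["CommandLine"]
```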
| 2020-02-06T12:16:14Z | [] | [] |
Traceback (most recent call last):
File "C:\Program Files\Datadog\Datadog Agent\embedded3\lib\site-packages\datadog_checks\base\checks\win\wmi\sampler.py", line 464, in _query
raw_results = self.get_connection().ExecQuery(wql, "WQL", query_flags)
File "C:\Program Files\Datadog\Datadog Agent\embedded3\lib\site-packages\datadog_checks\base\checks\win\wmi\sampler.py", line 351, in get_connection
connection = locator.ConnectServer(self.host, self.namespace, self.username, self.password, *additional_args)
File "<COMObject WbemScripting.SWbemLocator>", line 5, in ConnectServer
File "C:\Program Files\Datadog\Datadog Agent\embedded3\lib\site-packages\win32com\client\dynamic.py", line 287, in _ApplyTypes_
result = self._oleobj_.InvokeTypes(*(dispid, LCID, wFlags, retType, argTypes) + args)
pywintypes.com_error: (-2147352567, 'Exception occurred.', (0, 'SWbemLocator', 'Not enough storage is available to process this command. ', None, 0, -2147024888), None)
| 36 |
||||
DataDog/integrations-core | DataDog__integrations-core-9857 | 8006a053c108af2cf1988efe23db8f58df8262dc | diff --git a/mongo/datadog_checks/mongo/collectors/custom_queries.py b/mongo/datadog_checks/mongo/collectors/custom_queries.py
--- a/mongo/datadog_checks/mongo/collectors/custom_queries.py
+++ b/mongo/datadog_checks/mongo/collectors/custom_queries.py
@@ -56,8 +56,10 @@ def _collect_custom_metrics_for_query(self, api, raw_query):
mongo_query = deepcopy(raw_query.get('query'))
if not mongo_query: # no cov
raise ValueError("Custom query field `query` is required")
+ # The mongo command to run (find, aggregate, count...)
mongo_command = self._extract_command_from_mongo_query(mongo_query)
- collection_name = mongo_query[mongo_command]
+ # The value of the command, it is usually the collection name on which to run the query.
+ mongo_command_value = mongo_query[mongo_command]
del mongo_query[mongo_command]
if mongo_command not in ALLOWED_CUSTOM_QUERIES_COMMANDS:
raise ValueError("Custom query command must be of type {}".format(ALLOWED_CUSTOM_QUERIES_COMMANDS))
@@ -90,20 +92,26 @@ def _collect_custom_metrics_for_query(self, api, raw_query):
if field_type not in ALLOWED_CUSTOM_METRICS_TYPES + ['tag']:
raise ValueError('Field `type` must be one of {}'.format(ALLOWED_CUSTOM_METRICS_TYPES + ['tag']))
- tags = list(tags)
- tags.extend(raw_query.get('tags', []))
- tags.append('collection:{}'.format(collection_name))
-
try:
# This is where it is necessary to extract the command and its argument from the query to pass it as the
# first two params.
- result = db.command(mongo_command, collection_name, **mongo_query)
+ result = db.command(mongo_command, mongo_command_value, **mongo_query)
if result['ok'] == 0:
raise pymongo.errors.PyMongoError(result['errmsg'])
except pymongo.errors.PyMongoError:
self.log.error("Failed to run custom query for metric %s", metric_prefix)
raise
+ # `1` is Mongo default value for commands that are collection agnostics.
+ if str(mongo_command_value) == '1':
+ # https://github.com/mongodb/mongo-python-driver/blob/01e34cebdb9aac96c72ddb649e9b0040a0dfd3a0/pymongo/aggregation.py#L208
+ collection_name = '{}.{}'.format(db_name, mongo_command)
+ else:
+ collection_name = mongo_command_value
+
+ tags.append('collection:{}'.format(collection_name))
+ tags.extend(raw_query.get('tags', []))
+
if mongo_command == 'count':
# A count query simply returns a number, no need to iterate through it.
submit_method(metric_prefix, result['n'], tags)
| MongoDB: Collection-agnostic aggregations like $currentOp doesn't work
Agent 7.29.1, running on Ubuntu Linux 18.04.
**Steps to reproduce the issue:**
Add the following configuration to `/etc/datadog-agent/conf.d/mongo.d/conf.yaml` and restart the agent:
```
custom_queries:
- metric_prefix: mongodb.custom.queries_slower_than_60sec
run_on_secondary: true
query: { "aggregate": 1, "maxTimeMS": 1000, "pipeline": [ { "$currentOp": { "allUsers": true }}, { "$match": { "active": true, "secs_running": {"$gt": 60}}} ], "cursor": {}}
fields:
- field_name: secs_running
name: secs_running
type: gauge
- field_name: appName
name: app_name
type: tag
- field_name: ns
name: mongo_op_namespace
type: tag
```
**Describe the results you received:**
When Datadog attempts to run this command, it produces an error (found via `journalctl`):
```
Traceback (most recent call last):
2021-07-22 06:44:38 UTC | CORE | WARN | (pkg/collector/python/datadog_agent.go:122 in LogMessage) | mongo:375a6f2e54dabf11 | (custom_queries.py:153) | Errors while collecting custom metrics with prefix mongodb.custom.queries_slower_than_60sec
TypeError: name must be an instance of str
raise TypeError("name must be an instance "
File "/opt/datadog-agent/embedded/lib/python3.8/site-packages/pymongo/collection.py", line 160, in __init__
pymongo.collection.Collection(db, collection_name), result['cursor'], None
File "/opt/datadog-agent/embedded/lib/python3.8/site-packages/datadog_checks/mongo/collectors/custom_queries.py", line 113, in _collect_custom_metrics_for_query
self._collect_custom_metrics_for_query(api, raw_query)
File "/opt/datadog-agent/embedded/lib/python3.8/site-packages/datadog_checks/mongo/collectors/custom_queries.py", line 150, in collect
```
**Describe the results you expected:**
I would like to be able to send information about slow queries to Datadog.
**Additional information you deem important (e.g. issue happens only occasionally):**
It seems like the problem here is that when using this syntax to run an admin aggregation like `$currentOp`, you have to specify `"aggregate": 1` in the query to indicate that there is no associated collection. However, the API that Datadog is calling in pymongo expects the collection name to always be a string. Unfortunately, `"aggregate": "1"` is not equivalent and will fail.
More details on the syntax: https://docs.mongodb.com/manual/reference/command/aggregate/
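For reference, a minimal pymongo sketch of issuing the same collection-agnostic aggregation directly (the connection URI is a placeholder); the key detail is that the command value is the integer `1`, not a collection name:
```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # placeholder URI
admin_db = client["admin"]  # $currentOp has to run against the admin database

result = admin_db.command(
    "aggregate",
    1,  # literal integer 1 -> no collection; the string "1" is not equivalent
    pipeline=[
        {"$currentOp": {"allUsers": True}},
        {"$match": {"active": True, "secs_running": {"$gt": 60}}},
    ],
    cursor={},
)
for op in result["cursor"]["firstBatch"]:
    print(op.get("ns"), op.get("secs_running"))
```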
| Hey @atodd-circleci
Acknowledging the limitation, I'm able to reproduce.
I'm thinking we should be able to work around that by putting `$cmd.aggregate` instead of "1" as the collection name here: https://github.com/DataDog/integrations-core/blob/master/mongo/datadog_checks/mongo/collectors/custom_queries.py#L113 but I'd have to confirm that
@FlorianVeaux Thanks for taking a look so quickly. I manually edited `custom_queries.py` on my installation to replace `collection_name` with the literal `$cmd.aggregate`. It seems to have worked. When I start the agent, I see this in the log:
```
Exception: Custom query returned an empty result set.
raise Exception('Custom query returned an empty result set.')
File "/opt/datadog-agent/embedded/lib/python3.8/site-packages/datadog_checks/mongo/collectors/custom_queries.py", line 145, in _collect_custom_metrics_for_query
self._collect_custom_metrics_for_query(api, raw_query)
File "/opt/datadog-agent/embedded/lib/python3.8/site-packages/datadog_checks/mongo/collectors/custom_queries.py", line 150, in collect
Traceback (most recent call last):
2021-07-27 05:20:05 UTC | CORE | WARN | (pkg/collector/python/datadog_agent.go:122 in LogMessage) | mongo:<redacted> | (custom_queries.py:153) | Errors while collecting custom metrics with prefix mongodb.custom.queries_slower_than_60sec
```
I'm not expecting any results, so this is good. I can't really go around manually editing our installations this way, though, so I'm looking forward to a more permanent fix.
(I am a little concerned about having all of these exceptions in the system log, as well. I'll have to look at using [$count](https://docs.mongodb.com/manual/reference/operator/aggregation/count/) to always output a count instead of what I'm doing now). | 2021-08-05T15:17:59Z | [] | [] |
Traceback (most recent call last):
2021-07-22 06:44:38 UTC | CORE | WARN | (pkg/collector/python/datadog_agent.go:122 in LogMessage) | mongo:375a6f2e54dabf11 | (custom_queries.py:153) | Errors while collecting custom metrics with prefix mongodb.custom.queries_slower_than_60sec
TypeError: name must be an instance of str
| 58 |
|||
Lightning-AI/lightning | Lightning-AI__lightning-1360 | ebd9fc9530242e1c9b5f3093dc62ceb4185735b0 | diff --git a/pytorch_lightning/loggers/wandb.py b/pytorch_lightning/loggers/wandb.py
--- a/pytorch_lightning/loggers/wandb.py
+++ b/pytorch_lightning/loggers/wandb.py
@@ -65,10 +65,11 @@ def __init__(self, name: Optional[str] = None, save_dir: Optional[str] = None,
def __getstate__(self):
state = self.__dict__.copy()
+ # args needed to reload correct experiment
+ state['_id'] = self._experiment.id if self._experiment is not None else None
+
# cannot be pickled
state['_experiment'] = None
- # args needed to reload correct experiment
- state['_id'] = self.experiment.id
return state
@property
@@ -87,7 +88,7 @@ def experiment(self) -> Run:
os.environ['WANDB_MODE'] = 'dryrun'
self._experiment = wandb.init(
name=self._name, dir=self._save_dir, project=self._project, anonymous=self._anonymous,
- id=self._id, resume='allow', tags=self._tags, entity=self._entity)
+ reinit=True, id=self._id, resume='allow', tags=self._tags, entity=self._entity)
# save checkpoints in wandb dir to upload on W&B servers
if self._log_model:
self.save_dir = self._experiment.dir
@@ -109,8 +110,11 @@ def log_metrics(self, metrics: Dict[str, float], step: Optional[int] = None) ->
@property
def name(self) -> str:
- return self.experiment.project_name()
+ # don't create an experiment if we don't have one
+ name = self._experiment.project_name() if self._experiment else None
+ return name
@property
def version(self) -> str:
- return self.experiment.id
+ # don't create an experiment if we don't have one
+ return self._experiment.id if self._experiment else None
| WandbLogger cannot be used with 'ddp'
## 🐛 Bug
wandb modifies `init` such that a child process calling init returns None if the master process has called init. This seems to cause a bug with ddp, and results in rank zero having experiment = None, which crashes the program.
### To Reproduce
Can be reproduced with the basic MNIST gpu template, simply add a WandbLogger and pass 'ddp' as the distributed backend.
```
-- Process 0 terminated with the following error:
Traceback (most recent call last):
File "/home/rmrao/anaconda3/lib/python3.6/site-packages/torch/multiprocessing/spawn.py", line 19, in _wrap
fn(i, *args)
File "/home/rmrao/anaconda3/lib/python3.6/site-packages/pytorch_lightning/trainer/distrib_data_parallel.py", line 331, in ddp_train
self.run_pretrain_routine(model)
File "/home/rmrao/anaconda3/lib/python3.6/site-packages/pytorch_lightning/trainer/trainer.py", line 757, in run_pretrain_routine
self.logger.log_hyperparams(ref_model.hparams)
File "/home/rmrao/anaconda3/lib/python3.6/site-packages/pytorch_lightning/logging/base.py", line 14, in wrapped_fn
fn(self, *args, **kwargs)
File "/home/rmrao/anaconda3/lib/python3.6/site-packages/pytorch_lightning/logging/wandb.py", line 79, in log_hyperparams
self.experiment.config.update(params)
AttributeError: 'NoneType' object has no attribute 'config'
```
This occurs with the latest wandb version and with pytorch-lightning 0.6.
| Hi! thanks for your contribution!, great first issue!
Some hacky solutions: calling `reinit=True` for wandb, or adding this terrible hack:
```python
def init_ddp_connection(self, *args, **kwargs):
super().init_ddp_connection(*args, **kwargs)
if torch.distributed.get_rank() == 0:
import wandb
wandb.run = None
```
These both seem to only kind-of work and result in multiple independent calls to wandb.init. I think the ideal solution is that the experiment is only ever initialized on rank zero. *However* doing this means that wandb *cannot* be initialized in the master thread at all.
Better than this probably requires some changes to the wandb API.
Following up slightly - my hacky solution doesn't really work. It's easy enough though to get the rank zero only solution to work. If this seems like a reasonable solution, let me know and I'll submit a PR.
well, have observed some issues with `wandb` earlier #906 could you check it?
Hmm, I think this is a slightly different issue (I'm running on Ubuntu so I don't think that's the issue). Pickling also works correctly.
This particular problem I think stems from this part of the `wandb.init(...)` code:
```python
def init(...):
...
# If a thread calls wandb.init() it will get the same Run object as
# the parent. If a child process with distinct memory space calls
# wandb.init(), it won't get an error, but it will get a result of
# None.
# This check ensures that a child process can safely call wandb.init()
# after a parent has (only the parent will create the Run object).
# This doesn't protect against the case where the parent doesn't call
# wandb.init but two children do.
if run or os.getenv(env.INITED):
return run
```
Child processes end up getting `None` for the wandb run object, which causes logging to fail. There are probably two reasonable and complementary solutions:
1. The main thread should avoid creating a wandb experiment unless absolutely necessary.
Right now, [this](https://github.com/PyTorchLightning/pytorch-lightning/blob/e586ed47674fd78b158322bb7b14d00aeb912abd/pytorch_lightning/loggers/wandb.py#L63-L69) is the only part of the logging code that the parent thread calls (I assume it's called when pickling):
```python
def __getstate__(self):
state = self.__dict__.copy()
# cannot be pickled
state['_experiment'] = None
# args needed to reload correct experiment
state['_id'] = self.experiment.id
return state
```
If this is changed to:
```python
def __getstate__(self):
state = self.__dict__.copy()
# args needed to reload correct experiment
if self._experiment is not None:
state['_id'] = self._experiment.id
else:
state['_id'] = None
# cannot be pickled
state['_experiment'] = None
return state
```
That will ensure that unless the user explicitly logs something or creates the wandb experiment first, then the main thread will not try to create an experiment. Since subsequent logging / saving code is wrapped by the `@rank_zero_only` decorator, this will generally solve the issue in the base case.
It's also possible that [these properties](https://github.com/PyTorchLightning/pytorch-lightning/blob/e586ed47674fd78b158322bb7b14d00aeb912abd/pytorch_lightning/loggers/wandb.py#L112-L118) are also called by master. Ideally they would be wrapped to not create the experiment unless it had been already created (i.e. experiment should only be created by a function that is wrapped with the `@rank_zero_only` decorator).
2. If the main thread *has* created an experiment, rank zero should be passed the re-init argument.
`wandb` does allow you to reinitialize the experiment. I tried to play around with this a little bit and got some errors, but in theory adding this:
```python
wandb.init(..., reinit=dist.is_available() and dist.is_initialized() and dist.get_rank() == 0)
```
should force a re-initialization when wandb is already initialzed for rank zero.
| 2020-04-03T13:32:07Z | [] | [] |
Traceback (most recent call last):
File "/home/rmrao/anaconda3/lib/python3.6/site-packages/torch/multiprocessing/spawn.py", line 19, in _wrap
fn(i, *args)
File "/home/rmrao/anaconda3/lib/python3.6/site-packages/pytorch_lightning/trainer/distrib_data_parallel.py", line 331, in ddp_train
self.run_pretrain_routine(model)
File "/home/rmrao/anaconda3/lib/python3.6/site-packages/pytorch_lightning/trainer/trainer.py", line 757, in run_pretrain_routine
self.logger.log_hyperparams(ref_model.hparams)
File "/home/rmrao/anaconda3/lib/python3.6/site-packages/pytorch_lightning/logging/base.py", line 14, in wrapped_fn
fn(self, *args, **kwargs)
File "/home/rmrao/anaconda3/lib/python3.6/site-packages/pytorch_lightning/logging/wandb.py", line 79, in log_hyperparams
self.experiment.config.update(params)
AttributeError: 'NoneType' object has no attribute 'config'
| 104 |
|||
Lightning-AI/lightning | Lightning-AI__lightning-1377 | b8ff9bc1d242a18f5e7147f34d63f43fcdd0e50a | diff --git a/pytorch_lightning/loggers/tensorboard.py b/pytorch_lightning/loggers/tensorboard.py
--- a/pytorch_lightning/loggers/tensorboard.py
+++ b/pytorch_lightning/loggers/tensorboard.py
@@ -9,6 +9,7 @@
from torch.utils.tensorboard import SummaryWriter
from pytorch_lightning.loggers.base import LightningLoggerBase, rank_zero_only
+from pytorch_lightning import _logger as log
class TensorBoardLogger(LightningLoggerBase):
@@ -163,6 +164,11 @@ def version(self) -> int:
def _get_next_version(self):
root_dir = os.path.join(self.save_dir, self.name)
+
+ if not os.path.isdir(root_dir):
+ log.warning('Missing logger folder: %s', root_dir)
+ return 0
+
existing_versions = []
for d in os.listdir(root_dir):
if os.path.isdir(os.path.join(root_dir, d)) and d.startswith("version_"):
| Tensorboard logger error: lightning_logs directory not exists in multi-node DDP on nodes with rank != 0
## 🐛 Bug
In multi-node DDP training mode, an error appears at the start of training on all nodes except rank 0, caused by the TensorBoard logger accessing the lightning_logs directory, which does not exist at that moment.
### To Reproduce
Steps to reproduce the behavior:
1. setup multi-node cluster (without SLURM)
2. set environment variables on each node:
```
export MASTER_ADDR=<rank 0 node IP>
export MASTER_PORT=23456
export RANK=<node id>
export SLURM_NODEID=<node id>
export WORLD_SIZE=<world-size>
```
3. install dependencies:
```
pip install torch torchvision hydra-core pytorch-lightning
```
4. copy app.py and conf.yaml to each node
5. run script on each node
```
python app.py
```
6. see the error:
```
Exception:
-- Process 0 terminated with the following error:
Traceback (most recent call last):
File "/home/ubuntu/anaconda3/envs/nightly_pt/lib/python3.6/site-packages/torch/multiprocessing/spawn.py", line 19, in _wrap
fn(i, *args)
File "/home/ubuntu/anaconda3/envs/nightly_pt/lib/python3.6/site-packages/pytorch_lightning/trainer/distrib_data_parallel.py", line 342, in ddp_train
self.run_pretrain_routine(model)
File "/home/ubuntu/anaconda3/envs/nightly_pt/lib/python3.6/site-packages/pytorch_lightning/trainer/trainer.py", line 777, in run_pretrain_routine
self.configure_checkpoint_callback()
File "/home/ubuntu/anaconda3/envs/nightly_pt/lib/python3.6/site-packages/pytorch_lightning/trainer/callback_config.py", line 45, in configure_checkpoint_callback
f'version_{self.logger.version}',
File "/home/ubuntu/anaconda3/envs/nightly_pt/lib/python3.6/site-packages/pytorch_lightning/loggers/tensorboard.py", line 161, in version
self._version = self._get_next_version()
File "/home/ubuntu/anaconda3/envs/nightly_pt/lib/python3.6/site-packages/pytorch_lightning/loggers/tensorboard.py", line 167, in _get_next_version
for d in os.listdir(root_dir):
FileNotFoundError: [Errno 2] No such file or directory: '/home/ubuntu/pytorch-lightning-intro-guide/outputs/2020-04-04/15-53-26/lightning_logs'
```
#### Code sample
app.py:
```
import pathlib
import hydra
import pytorch_lightning as pl
import torch
from omegaconf import OmegaConf
from torch.nn import functional as F
from torch.optim import Adam
from torch.utils.data import DataLoader, random_split
from torchvision import datasets, transforms
class LitMNIST(pl.LightningModule):
def __init__(self):
super().__init__()
self.layer_1 = torch.nn.Linear(28 * 28, 128)
self.layer_2 = torch.nn.Linear(128, 256)
self.layer_3 = torch.nn.Linear(256, 10)
self.train_dataset = None
self.val_dataset = None
self.test_dataset = None
def forward(self, x):
batch_size, channels, width, height = x.size()
x = x.view(batch_size, -1)
x = self.layer_1(x)
x = F.relu(x)
x = self.layer_2(x)
x = F.relu(x)
x = self.layer_3(x)
x = F.log_softmax(x, dim=1)
return x
def prepare_data(self):
# transform
transform = transforms.Compose(
[transforms.ToTensor(), transforms.Normalize((0.1307,), (0.3081,))])
# download
data_dir = pathlib.Path.home() / 'data'
mnist_train = datasets.MNIST(data_dir, train=True,
download=True, transform=transform)
mnist_test = datasets.MNIST(data_dir, train=False,
download=True, transform=transform)
# train/val split
mnist_train, mnist_val = random_split(mnist_train, [55000, 5000])
# assign to use in dataloaders
self.train_dataset = mnist_train
self.val_dataset = mnist_val
self.test_dataset = mnist_test
def train_dataloader(self):
return DataLoader(self.train_dataset, batch_size=64)
def val_dataloader(self):
return DataLoader(self.val_dataset, batch_size=64)
def test_dataloader(self):
return DataLoader(self.test_dataset, batch_size=64)
def configure_optimizers(self):
return Adam(self.parameters(), lr=1e-3)
def training_step(self, batch, batch_idx):
x, y = batch
logits = self(x)
loss = F.nll_loss(logits, y)
# add logging
logs = {'loss': loss}
return {'loss': loss, 'log': logs}
def validation_step(self, batch, batch_idx):
x, y = batch
logits = self(x)
loss = F.nll_loss(logits, y)
return {'val_loss': loss}
def validation_epoch_end(self, outputs):
avg_loss = torch.stack( # pylint: disable=no-member
[x['val_loss'] for x in outputs]).mean()
tensorboard_logs = {'val_loss': avg_loss}
return {'avg_val_loss': avg_loss, 'log': tensorboard_logs}
def test_step(self, batch, batch_idx):
x, y = batch
logits = self(x)
loss = F.nll_loss(logits, y)
return {'val_loss': loss}
def test_epoch_end(self, outputs):
avg_loss = torch.stack( # pylint: disable=no-member
[x['val_loss'] for x in outputs]).mean()
tensorboard_logs = {'val_loss': avg_loss}
return {'avg_val_loss': avg_loss, 'log': tensorboard_logs}
def init_ddp_connection(self, proc_rank: int, world_size: int) -> None:
torch.distributed.init_process_group(
'nccl', rank=proc_rank, world_size=world_size)
@hydra.main(config_path='conf.yaml')
def main(conf: OmegaConf):
model = LitMNIST()
trainer = pl.Trainer(gpus=conf.gpus,
num_nodes=conf.num_nodes,
distributed_backend=conf.distributed_backend,
max_epochs=3)
trainer.fit(model)
if __name__ == '__main__':
main() # pylint: disable=no-value-for-parameter
```
conf.yaml:
```
gpus: 1
num_nodes: 2
distributed_backend: ddp
```
### Expected behavior
Train should go without error
### Environment
```
cuda:
GPU:
Tesla K80
Tesla K80
Tesla K80
Tesla K80
Tesla K80
Tesla K80
Tesla K80
Tesla K80
available: True
version: 10.1
packages:
numpy: 1.18.1
pyTorch_debug: False
pyTorch_version: 1.4.0
pytorch-lightning: 0.7.1
tensorboard: 2.2.0
tqdm: 4.45.0
system:
OS: Linux
architecture:
64bit
processor: x86_64
python: 3.6.10
version: #113-Ubuntu SMP Wed Jan 29 14:54:54 UTC 2020
```
| 2020-04-04T16:35:26Z | [] | [] |
Traceback (most recent call last):
File "/home/ubuntu/anaconda3/envs/nightly_pt/lib/python3.6/site-packages/torch/multiprocessing/spawn.py", line 19, in _wrap
fn(i, *args)
File "/home/ubuntu/anaconda3/envs/nightly_pt/lib/python3.6/site-packages/pytorch_lightning/trainer/distrib_data_parallel.py", line 342, in ddp_train
self.run_pretrain_routine(model)
File "/home/ubuntu/anaconda3/envs/nightly_pt/lib/python3.6/site-packages/pytorch_lightning/trainer/trainer.py", line 777, in run_pretrain_routine
self.configure_checkpoint_callback()
File "/home/ubuntu/anaconda3/envs/nightly_pt/lib/python3.6/site-packages/pytorch_lightning/trainer/callback_config.py", line 45, in configure_checkpoint_callback
f'version_{self.logger.version}',
File "/home/ubuntu/anaconda3/envs/nightly_pt/lib/python3.6/site-packages/pytorch_lightning/loggers/tensorboard.py", line 161, in version
self._version = self._get_next_version()
File "/home/ubuntu/anaconda3/envs/nightly_pt/lib/python3.6/site-packages/pytorch_lightning/loggers/tensorboard.py", line 167, in _get_next_version
for d in os.listdir(root_dir):
FileNotFoundError: [Errno 2] No such file or directory: '/home/ubuntu/pytorch-lightning-intro-guide/outputs/2020-04-04/15-53-26/lightning_logs'
| 105 |
||||
Lightning-AI/lightning | Lightning-AI__lightning-1385 | 4ed3027309fe1882554e9b7ffe33f1aa92c88106 | diff --git a/pytorch_lightning/trainer/distrib_data_parallel.py b/pytorch_lightning/trainer/distrib_data_parallel.py
--- a/pytorch_lightning/trainer/distrib_data_parallel.py
+++ b/pytorch_lightning/trainer/distrib_data_parallel.py
@@ -363,15 +363,19 @@ def load_spawn_weights(self, original_model):
:param model:
:return:
"""
- # load weights saved in ddp
- path = os.path.join(self.default_save_path, '__temp_weight_ddp_end.ckpt')
- loaded_model = original_model.__class__.load_from_checkpoint(path)
- # copy loaded weights to old model
- original_model.load_state_dict(loaded_model.state_dict())
+ loaded_model = original_model
- # remove ddp weights
- os.remove(path)
+ if self.proc_rank == 0:
+ # load weights saved in ddp
+ path = os.path.join(self.default_save_path, '__temp_weight_ddp_end.ckpt')
+ loaded_model = original_model.__class__.load_from_checkpoint(path)
+
+ # copy loaded weights to old model
+ original_model.load_state_dict(loaded_model.state_dict())
+
+ # remove ddp weights
+ os.remove(path)
return loaded_model
| Trainer DDP should invoke load_spawn_weights() only in proc_rank == 0
## 🐛 Bug
Trainer DDP `load_spawn_weights` should happen only in proc_rank == 0, since only in this process (node) does `save_spawn_weights` actually save the checkpoint.
### To Reproduce
Steps to reproduce the behavior:
1. setup two-node cluster.
2. set SLURM_NODEID on each node: '0' on node 0 and '1' on node 1.
3. run the script `python app.py` on each node.
4. see stdout on the node 1:
```
Traceback (most recent call last):
File "app.py", line 166, in <module>
main_() # pylint: disable=no-value-for-parameter
File "app.py", line 162, in main_
trainer.fit(model)
File "/home/ubuntu/anaconda3/envs/nightly_pt/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 593, in fit
self.load_spawn_weights(model)
File "/home/ubuntu/anaconda3/envs/nightly_pt/lib/python3.7/site-packages/pytorch_lightning/trainer/distrib_data_parallel.py", line 368, in load_spawn_weights
loaded_model = original_model.__class__.load_from_checkpoint(path)
File "/home/ubuntu/anaconda3/envs/nightly_pt/lib/python3.7/site-packages/pytorch_lightning/core/lightning.py", line 1353, in load_from_checkpoint
checkpoint = torch.load(checkpoint_path, map_location=lambda storage, loc: storage)
File "/home/ubuntu/anaconda3/envs/nightly_pt/lib/python3.7/site-packages/torch/serialization.py", line 525, in load
with _open_file_like(f, 'rb') as opened_file:
File "/home/ubuntu/anaconda3/envs/nightly_pt/lib/python3.7/site-packages/torch/serialization.py", line 212, in _open_file_like
return _open_file(name_or_buffer, mode)
File "/home/ubuntu/anaconda3/envs/nightly_pt/lib/python3.7/site-packages/torch/serialization.py", line 193, in __init__
super(_open_file, self).__init__(open(name, mode))
FileNotFoundError: [Errno 2] No such file or directory: '/home/ubuntu/pytorch-lightning-intro-guide/__temp_weight_ddp_end.ckpt'
```
#### Code sample
app.py:
```
import pathlib
import pytorch_lightning as pl
import torch
from torch.nn import functional as F
from torch.optim import Adam
from torch.utils.data import DataLoader, random_split
from torchvision import datasets, transforms
class LitMNIST(pl.LightningModule):
def __init__(self):
super().__init__()
self.layer_1 = torch.nn.Linear(28 * 28, 128)
self.layer_2 = torch.nn.Linear(128, 256)
self.layer_3 = torch.nn.Linear(256, 10)
self.train_dataset = None
self.val_dataset = None
self.test_dataset = None
def forward(self, x):
batch_size, channels, width, height = x.size()
x = x.view(batch_size, -1)
x = self.layer_1(x)
x = F.relu(x)
x = self.layer_2(x)
x = F.relu(x)
x = self.layer_3(x)
x = F.log_softmax(x, dim=1)
return x
def prepare_data(self):
# transform
transform = transforms.Compose(
[transforms.ToTensor(), transforms.Normalize((0.1307,), (0.3081,))])
# download
data_dir = pathlib.Path.home() / 'data'
mnist_train = datasets.MNIST(data_dir, train=True,
download=True, transform=transform)
mnist_test = datasets.MNIST(data_dir, train=False,
download=True, transform=transform)
# train/val split
mnist_train, mnist_val = random_split(mnist_train, [55000, 5000])
# assign to use in dataloaders
self.train_dataset = mnist_train
self.val_dataset = mnist_val
self.test_dataset = mnist_test
def train_dataloader(self):
return DataLoader(self.train_dataset, batch_size=64)
def val_dataloader(self):
return DataLoader(self.val_dataset, batch_size=64)
def test_dataloader(self):
return DataLoader(self.test_dataset, batch_size=64)
def configure_optimizers(self):
return Adam(self.parameters(), lr=1e-3)
def training_step(self, batch, batch_idx):
x, y = batch
logits = self(x)
loss = F.nll_loss(logits, y)
# add logging
logs = {'loss': loss}
return {'loss': loss, 'log': logs}
def validation_step(self, batch, batch_idx):
x, y = batch
logits = self(x)
loss = F.nll_loss(logits, y)
return {'val_loss': loss}
def test_step(self, batch, batch_idx):
x, y = batch
logits = self(x)
loss = F.nll_loss(logits, y)
return {'val_loss': loss}
def test_epoch_end(self, outputs):
avg_loss = torch.stack( # pylint: disable=no-member
[x['val_loss'] for x in outputs]).mean()
tensorboard_logs = {'val_loss': avg_loss}
return {'avg_val_loss': avg_loss, 'log': tensorboard_logs}
def init_ddp_connection(self, proc_rank: int, world_size: int) -> None:
torch.distributed.init_process_group(
'nccl', rank=proc_rank, world_size=world_size)
def main():
model = LitMNIST()
gpus = 1
num_nodes = 2
trainer = pl.Trainer(gpus=gpus,
num_nodes=num_nodes,
distributed_backend='ddp',
max_epochs=3)
trainer.fit(model)
if __name__ == '__main__':
main()
```
### Expected behavior
All workers on all nodes should finish without errors.
### Environment
On each node:
```
cuda:
GPU:
Tesla K80
Tesla K80
Tesla K80
Tesla K80
Tesla K80
Tesla K80
Tesla K80
Tesla K80
available: True
version: 10.1
packages:
numpy: 1.16.6
pyTorch_debug: False
pyTorch_version: 1.4.0
pytorch-lightning: 0.7.1
tensorboard: 2.2.0
tqdm: 4.44.1
system:
OS: Linux
architecture:
64bit
processor: x86_64
python: 3.7.7
version: #113-Ubuntu SMP Wed Jan 29 14:54:54 UTC 2020
```
| 2020-04-05T23:51:47Z | [] | [] |
Traceback (most recent call last):
File "app.py", line 166, in <module>
main_() # pylint: disable=no-value-for-parameter
File "app.py", line 162, in main_
trainer.fit(model)
File "/home/ubuntu/anaconda3/envs/nightly_pt/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 593, in fit
self.load_spawn_weights(model)
File "/home/ubuntu/anaconda3/envs/nightly_pt/lib/python3.7/site-packages/pytorch_lightning/trainer/distrib_data_parallel.py", line 368, in load_spawn_weights
loaded_model = original_model.__class__.load_from_checkpoint(path)
File "/home/ubuntu/anaconda3/envs/nightly_pt/lib/python3.7/site-packages/pytorch_lightning/core/lightning.py", line 1353, in load_from_checkpoint
checkpoint = torch.load(checkpoint_path, map_location=lambda storage, loc: storage)
File "/home/ubuntu/anaconda3/envs/nightly_pt/lib/python3.7/site-packages/torch/serialization.py", line 525, in load
with _open_file_like(f, 'rb') as opened_file:
File "/home/ubuntu/anaconda3/envs/nightly_pt/lib/python3.7/site-packages/torch/serialization.py", line 212, in _open_file_like
return _open_file(name_or_buffer, mode)
File "/home/ubuntu/anaconda3/envs/nightly_pt/lib/python3.7/site-packages/torch/serialization.py", line 193, in __init__
super(_open_file, self).__init__(open(name, mode))
FileNotFoundError: [Errno 2] No such file or directory: '/home/ubuntu/pytorch-lightning-intro-guide/__temp_weight_ddp_end.ckpt'
| 107 |
||||
Lightning-AI/lightning | Lightning-AI__lightning-1423 | 3f1e4b953f84ecdac7dada0c6b57d908efc9c3d3 | diff --git a/pytorch_lightning/trainer/distrib_parts.py b/pytorch_lightning/trainer/distrib_parts.py
--- a/pytorch_lightning/trainer/distrib_parts.py
+++ b/pytorch_lightning/trainer/distrib_parts.py
@@ -566,7 +566,7 @@ def check_gpus_data_type(gpus):
:return: return unmodified gpus variable
"""
- if gpus is not None and type(gpus) not in (int, str, list):
+ if gpus is not None and (not isinstance(gpus, (int, str, list)) or isinstance(gpus, bool)):
raise MisconfigurationException("GPUs must be int, string or list of ints or None.")
| Use isinstance() instead of type() in trainer.distrib_parts.check_gpus_data_type
## 🐛 Bug
When instantiating a `Trainer` object, it makes sense to be able to pass a subclass of `list`.
Ideally, this would be something even more general like `collections.abc.Sequence`, but I'm not too familiar with Lightning's codebase and that change would have a greater likelihood of breaking things.
### To Reproduce
Instantiate a `Trainer` with the `gpus` parameter being a subclass of `list`.
#### Code sample
```python
>>> from pytorch_lightning import Trainer
>>> class MyList(list):
... pass
...
>>> gpus = MyList([0])
>>> t = Trainer(gpus=gpus)
```
This produces
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/opt/anaconda/miniconda3/envs/ai/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 366, in __init__
self.data_parallel_device_ids = parse_gpu_ids(self.gpus)
File "/opt/anaconda/miniconda3/envs/ai/lib/python3.7/site-packages/pytorch_lightning/trainer/distrib_parts.py", line 613, in parse_gpu_ids
check_gpus_data_type(gpus)
File "/opt/anaconda/miniconda3/envs/ai/lib/python3.7/site-packages/pytorch_lightning/trainer/distrib_parts.py", line 561, in check_gpus_data_type
raise MisconfigurationException("GPUs must be int, string or list of ints or None.")
pytorch_lightning.utilities.debugging.MisconfigurationException: GPUs must be int, string or list of ints or None.
```
### Expected behavior
`Trainer` is instantiated normally as it would had a list been passed.
### Environment
- PyTorch Version: 1.4.0
- PyTorch Lightning Version: 0.7.1
- OS: Ubuntu 19.10
- How you installed PyTorch: `pip`
- Python version: 3.7
### Potential Fix
In `pytorch_lightning/trainer/distrib_parts.py` check types using `isinstance()` instead of `type()`:
```python
def check_gpus_data_type(gpus):
# if gpus is not None and type(gpus) not in (int, str, list):
if gpus is not None and not isinstance(gpus, (int, str, list)):
raise MisconfigurationException("GPUs must be int, string or list of ints or None.")
```
I'll put in a PR if this change sounds good
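One wrinkle worth noting (and the reason the check in the diff at the top of this entry also rejects `bool` explicitly): `isinstance` accepts subclasses, and `bool` is a subclass of `int`, so a bare `isinstance(gpus, (int, str, list))` would silently accept `gpus=True`:
```python
>>> class MyList(list):
...     pass
...
>>> isinstance(MyList([0]), (int, str, list))  # subclass now accepted
True
>>> isinstance(True, (int, str, list))         # ...but so is a plain bool
True
```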
| Hi! thanks for your contribution!, great first issue!
I do like this shift from `type` to `isinstance`, which extends the accepted types to child classes as well...
as always, a good PR is always welcome
cc: @PyTorchLightning/core-contributors @jeremyjordan | 2020-04-09T09:44:35Z | [] | [] |
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/opt/anaconda/miniconda3/envs/ai/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 366, in __init__
self.data_parallel_device_ids = parse_gpu_ids(self.gpus)
File "/opt/anaconda/miniconda3/envs/ai/lib/python3.7/site-packages/pytorch_lightning/trainer/distrib_parts.py", line 613, in parse_gpu_ids
check_gpus_data_type(gpus)
File "/opt/anaconda/miniconda3/envs/ai/lib/python3.7/site-packages/pytorch_lightning/trainer/distrib_parts.py", line 561, in check_gpus_data_type
raise MisconfigurationException("GPUs must be int, string or list of ints or None.")
pytorch_lightning.utilities.debugging.MisconfigurationException: GPUs must be int, string or list of ints or None.
| 111 |
|||
Lightning-AI/lightning | Lightning-AI__lightning-1513 | 9b31272cf0f3079a244944096b4a81eec20fe555 | diff --git a/pytorch_lightning/trainer/data_loading.py b/pytorch_lightning/trainer/data_loading.py
--- a/pytorch_lightning/trainer/data_loading.py
+++ b/pytorch_lightning/trainer/data_loading.py
@@ -61,6 +61,7 @@ class TrainerDataLoadingMixin(ABC):
train_percent_check: float
val_percent_check: float
test_percent_check: float
+ replace_sampler_ddp: bool
@abstractmethod
def is_overriden(self, *args):
@@ -88,10 +89,8 @@ def auto_add_sampler(self, dataloader: DataLoader, train: bool) -> DataLoader:
# don't do anything if it's not a dataloader
if not isinstance(dataloader, DataLoader):
return dataloader
-
- need_dist_sampler = self.use_ddp or self.use_ddp2 or self.use_tpu
-
- if need_dist_sampler:
+ need_dist_sampler = (self.use_ddp or self.use_ddp2 or self.use_tpu)
+ if self.replace_sampler_ddp and need_dist_sampler:
skip_keys = ['sampler', 'batch_sampler', 'dataset_kind']
diff --git a/pytorch_lightning/trainer/trainer.py b/pytorch_lightning/trainer/trainer.py
--- a/pytorch_lightning/trainer/trainer.py
+++ b/pytorch_lightning/trainer/trainer.py
@@ -127,6 +127,7 @@ def __init__(
benchmark: bool = False,
reload_dataloaders_every_epoch: bool = False,
auto_lr_find: Union[bool, str] = False,
+ replace_sampler_ddp: bool = True,
default_save_path=None, # backward compatible, todo: remove in v0.8.0
gradient_clip=None, # backward compatible, todo: remove in v0.8.0
nb_gpu_nodes=None, # backward compatible, todo: remove in v0.8.0
@@ -282,6 +283,9 @@ def __init__(
rate in self.hparams.lr | self.hparams.learning_rate in the lightning module.
To use a different key, set a string instead of True with the key name.
+ replace_sampler_ddp: Explicitly enables or disables sampler replacement.
+ If not specified this will toggled automatically ddp is used
+
benchmark: If true enables cudnn.benchmark.
terminate_on_nan: If set to True, will terminate training (by raising a `ValueError`) at the
@@ -362,6 +366,7 @@ def __init__(
self.reload_dataloaders_every_epoch = reload_dataloaders_every_epoch
self.auto_lr_find = auto_lr_find
+ self.replace_sampler_ddp = replace_sampler_ddp
self.truncated_bptt_steps = truncated_bptt_steps
self.resume_from_checkpoint = resume_from_checkpoint
| 0.7.3 breaks reusable dataloaders in DDP
## 🐛 Bug
0.7.3 breaks reusable dataloaders in DDP
```
Traceback (most recent call last):
File "/opt/conda/lib/python3.6/site-packages/torch/multiprocessing/spawn.py", line 19, in _wrap
fn(i, *args)
File "/opt/conda/lib/python3.6/site-packages/pytorch_lightning/trainer/distrib_data_parallel.py", line 345, in ddp_train
self.run_pretrain_routine(model)
File "/opt/conda/lib/python3.6/site-packages/pytorch_lightning/trainer/trainer.py", line 864, in run_pretrain_routine
self.train()
File "/opt/conda/lib/python3.6/site-packages/pytorch_lightning/trainer/training_loop.py", line 296, in train
self.reset_train_dataloader(model)
File "/opt/conda/lib/python3.6/site-packages/pytorch_lightning/trainer/data_loading.py", line 128, in reset_train_dataloader
self.train_dataloader = self.auto_add_sampler(self.train_dataloader, train=True)
File "/opt/conda/lib/python3.6/site-packages/pytorch_lightning/trainer/data_loading.py", line 112, in auto_add_sampler
dataloader = type(dataloader)(**dl_args)
File "../main/dataset.py", line 15, in __init__
super().__init__(*args, **kwargs)
TypeError: __init__() got an unexpected keyword argument 'iterator'
```
#### Code sample
```
class _RepeatSampler(object):
def __init__(self, sampler):
self.sampler = sampler
def __iter__(self):
while True:
yield from iter(self.sampler)
class FastDataLoader(torch.utils.data.dataloader.DataLoader):
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
object.__setattr__(self, 'batch_sampler', _RepeatSampler(self.batch_sampler))
self.iterator = super().__iter__()
def __len__(self):
return len(self.batch_sampler.sampler)
def __iter__(self):
for i in range(len(self)):
yield next(self.iterator)
```
replace Dataloader with FastDataLoader in lightning
(this snippet is from https://github.com/pytorch/pytorch/issues/15849)
### Expected behavior
Dataloaders initialize correctly and are reused between train/val/epochs (works as expected in 0.7.1)
### Probable Cause
https://github.com/PyTorchLightning/pytorch-lightning/pull/1425
ummm yeah. we should either change the dataloader swap to re-initialize the dataloader from its class, or not swap the dataloader at all and just set the correct sampler.
@justusschock any ideas?
This is a mixture of #1425 and #1346
And I don't think we can prevent this when we want to set correct samplers also in subclasses of `DataLoader`. We use all public attributes for reinitialization.
The probably easiest fix for you, would be to change `self.iterator` to `self._iterator` to avoid passing this argument in reinit.
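A minimal sketch of that rename (assuming the `_RepeatSampler` from the snippet above; untested): keeping the cached iterator under a private name means it is not among the public attributes collected for re-initialization, so it is never passed back to `__init__`.
```python
import torch


class FastDataLoader(torch.utils.data.dataloader.DataLoader):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        object.__setattr__(self, 'batch_sampler', _RepeatSampler(self.batch_sampler))
        # Private attribute -> skipped when the loader is rebuilt from its
        # public attributes, unlike the original `self.iterator`.
        self._iterator = super().__iter__()

    def __len__(self):
        return len(self.batch_sampler.sampler)

    def __iter__(self):
        for i in range(len(self)):
            yield next(self._iterator)
```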
If we just change the sampler, this might yield unexpected behaviour. | 2020-04-17T07:59:07Z | [] | [] |
Traceback (most recent call last):
File "/opt/conda/lib/python3.6/site-packages/torch/multiprocessing/spawn.py", line 19, in _wrap
fn(i, *args)
File "/opt/conda/lib/python3.6/site-packages/pytorch_lightning/trainer/distrib_data_parallel.py", line 345, in ddp_train
self.run_pretrain_routine(model)
File "/opt/conda/lib/python3.6/site-packages/pytorch_lightning/trainer/trainer.py", line 864, in run_pretrain_routine
self.train()
File "/opt/conda/lib/python3.6/site-packages/pytorch_lightning/trainer/training_loop.py", line 296, in train
self.reset_train_dataloader(model)
File "/opt/conda/lib/python3.6/site-packages/pytorch_lightning/trainer/data_loading.py", line 128, in reset_train_dataloader
self.train_dataloader = self.auto_add_sampler(self.train_dataloader, train=True)
File "/opt/conda/lib/python3.6/site-packages/pytorch_lightning/trainer/data_loading.py", line 112, in auto_add_sampler
dataloader = type(dataloader)(**dl_args)
File "../main/dataset.py", line 15, in __init__
super().__init__(*args, **kwargs)
TypeError: __init__() got an unexpected keyword argument 'iterator'
| 128 |
|||
Lightning-AI/lightning | Lightning-AI__lightning-1582 | 5ab5084f7b9e137c1e7769228aaed8da92eaad6e | diff --git a/pytorch_lightning/loggers/base.py b/pytorch_lightning/loggers/base.py
--- a/pytorch_lightning/loggers/base.py
+++ b/pytorch_lightning/loggers/base.py
@@ -280,6 +280,7 @@ class LoggerCollection(LightningLoggerBase):
Args:
logger_iterable: An iterable collection of loggers
"""
+
def __init__(self, logger_iterable: Iterable[LightningLoggerBase]):
super().__init__()
self._logger_iterable = logger_iterable
@@ -347,20 +348,28 @@ def merge_dicts(
Examples:
>>> import pprint
- >>> d1 = {'a': 1.7, 'b': 2.0, 'c': 1}
- >>> d2 = {'a': 1.1, 'b': 2.2, 'v': 1}
- >>> d3 = {'a': 1.1, 'v': 2.3}
+ >>> d1 = {'a': 1.7, 'b': 2.0, 'c': 1, 'd': {'d1': 1, 'd3': 3}}
+ >>> d2 = {'a': 1.1, 'b': 2.2, 'v': 1, 'd': {'d1': 2, 'd2': 3}}
+ >>> d3 = {'a': 1.1, 'v': 2.3, 'd': {'d3': 3, 'd4': {'d5': 1}}}
>>> dflt_func = min
- >>> agg_funcs = {'a': np.mean, 'v': max}
+ >>> agg_funcs = {'a': np.mean, 'v': max, 'd': {'d1': sum}}
>>> pprint.pprint(merge_dicts([d1, d2, d3], agg_funcs, dflt_func))
- {'a': 1.3, 'b': 2.0, 'c': 1, 'v': 2.3}
+ {'a': 1.3,
+ 'b': 2.0,
+ 'c': 1,
+ 'd': {'d1': 3, 'd2': 3, 'd3': 3, 'd4': {'d5': 1}},
+ 'v': 2.3}
"""
-
+ agg_key_funcs = agg_key_funcs or dict()
keys = list(functools.reduce(operator.or_, [set(d.keys()) for d in dicts]))
d_out = {}
for k in keys:
- fn = agg_key_funcs.get(k, default_func) if agg_key_funcs else default_func
- agg_val = fn([v for v in [d_in.get(k) for d_in in dicts] if v is not None])
- d_out[k] = agg_val
+ fn = agg_key_funcs.get(k)
+ values_to_agg = [v for v in [d_in.get(k) for d_in in dicts] if v is not None]
+
+ if isinstance(values_to_agg[0], dict):
+ d_out[k] = merge_dicts(values_to_agg, fn, default_func)
+ else:
+ d_out[k] = (fn or default_func)(values_to_agg)
return d_out
| After update from 0.5.x to 0.7.3 merge_dicts #1278 sometimes breaks training
## 🐛 Bug
After I updated from a quite old Lightning version to the newest one, I sometimes get a TypeError from merge_dicts. I guess it's related to MR #1278. This TypeError is deterministic, meaning it always occurs at the same global step during training. It also seems to be related to val_check_interval: for some data, changing this value leads to no error, but for other datasets this does not help. It only happens during a training step, and I suspect it is the training step right after validating.
### To Reproduce
Steps to reproduce the behavior:
I have no Idea.
```
File "/home/sebastian/.cache/pypoetry/virtualenvs/forgerydetection-iC5ox0X1-py3.7/lib/python3.7/site-packages/pytorch_lightning/trainer/training_loop.py", line 363, in train
self.run_training_epoch()
File "/home/sebastian/.cache/pypoetry/virtualenvs/forgerydetection-iC5ox0X1-py3.7/lib/python3.7/site-packages/pytorch_lightning/trainer/training_loop.py", line 470, in run_training_epoch
self.log_metrics(batch_step_metrics, grad_norm_dic)
File "/home/sebastian/.cache/pypoetry/virtualenvs/forgerydetection-iC5ox0X1-py3.7/lib/python3.7/site-packages/pytorch_lightning/trainer/logging.py", line 74, in log_metrics
self.logger.agg_and_log_metrics(scalar_metrics, step=step)
File "/home/sebastian/.cache/pypoetry/virtualenvs/forgerydetection-iC5ox0X1-py3.7/lib/python3.7/site-packages/pytorch_lightning/loggers/base.py", line 128, in agg_and_log_metrics
agg_step, metrics_to_log = self._aggregate_metrics(metrics=metrics, step=step)
File "/home/sebastian/.cache/pypoetry/virtualenvs/forgerydetection-iC5ox0X1-py3.7/lib/python3.7/site-packages/pytorch_lightning/loggers/base.py", line 101, in _aggregate_metrics
agg_step, agg_mets = self._finalize_agg_metrics()
File "/home/sebastian/.cache/pypoetry/virtualenvs/forgerydetection-iC5ox0X1-py3.7/lib/python3.7/site-packages/pytorch_lightning/loggers/base.py", line 116, in _finalize_agg_metrics
agg_mets = merge_dicts(self._metrics_to_agg, self._agg_key_funcs, self._agg_default_func)
File "/home/sebastian/.cache/pypoetry/virtualenvs/forgerydetection-iC5ox0X1-py3.7/lib/python3.7/site-packages/pytorch_lightning/loggers/base.py", line 347, in merge_dicts
agg_val = fn([v for v in [d_in.get(k) for d_in in dicts] if v is not None])
File "/home/sebastian/.cache/pypoetry/virtualenvs/forgerydetection-iC5ox0X1-py3.7/lib/python3.7/site-packages/numpy/core/fromnumeric.py", line 3118, in mean
out=out, **kwargs)
File "/home/sebastian/.cache/pypoetry/virtualenvs/forgerydetection-iC5ox0X1-py3.7/lib/python3.7/site-packages/numpy/core/_methods.py", line 75, in _mean
ret = umr_sum(arr, axis, dtype, out, keepdims)
TypeError: unsupported operand type(s) for +: 'dict' and 'dict'
```
Sometimes it's also 'dict' and 'int'.
### Expected behavior
At the very least it should not break training, but maybe give a more verbose message about what is wrong. It's quite hard for me to debug, as the structure of the logs I'm returning to Lightning does not change.
### Environment
```
cuda:
GPU:
GeForce RTX 2080 Ti
GeForce RTX 2080 Ti
GeForce RTX 2080 Ti
GeForce RTX 2080 Ti
GeForce RTX 2080 Ti
GeForce RTX 2080 Ti
GeForce RTX 2080 Ti
GeForce RTX 2080 Ti
available: True
version: 10.1.243
packages:
numpy: 1.16.4
pyTorch_debug: False
pyTorch_version: 1.3.0
pytorch-lightning: 0.7.3
tensorboard: 2.2.0
tqdm: 4.45.0
system:
OS: Linux
architecture:
64bit
ELF
processor: x86_64
python: 3.7.7
version: #97~16.04.1-Ubuntu SMP Wed Apr 1 03:03:31 UTC 2020
```
### Additional context
Also for some reason some runs have an issue with multiprocessing, but it does not break the training:
```
Traceback (most recent call last):████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 9/9 [00:00<00:00, 8.76it/s]
File "/home/sebastian/.pyenv/versions/3.7.7/lib/python3.7/multiprocessing/util.py", line 277, in _run_finalizers
finalizer()
File "/home/sebastian/.pyenv/versions/3.7.7/lib/python3.7/multiprocessing/util.py", line 201, in __call__
res = self._callback(*self._args, **self._kwargs)
File "/home/sebastian/.pyenv/versions/3.7.7/lib/python3.7/multiprocessing/util.py", line 110, in _remove_temp_dir
rmtree(tempdir)
File "/home/sebastian/.pyenv/versions/3.7.7/lib/python3.7/shutil.py", line 498, in rmtree
onerror(os.rmdir, path, sys.exc_info())
File "/home/sebastian/.pyenv/versions/3.7.7/lib/python3.7/shutil.py", line 496, in rmtree
os.rmdir(path)
OSError: [Errno 39] Directory not empty: '/tmp/pymp-jcqai2xr'
```
| Did you pass any 'agg_key_funcs' to the logger class? If I understand the code correctly, np.mean is used by default to aggregate the dict values returned during training. Maybe numpy's mean function tries to *add* (+) values which can't be summed up?
Can you maybe post the code snippets where you return the metrics to log in the lightning module and the initialization of the logger if you use one? If you don't use a logger, you can disable it by passing logger=False to the trainer (don't know if your previous version had logger on by default).
Hope I can help :)
Thanks for the quick reply!
No I'm not using any 'agg_key_funcs' that I know of.
> If I understand the code correctly, by default np.mean is used to aggregate the dict values returned during training.
This only happens when there is a point in time where something is logged twice, right? So my guess is that at some point two logs have to be "unified", but this fails because I'm using "dicts in dicts". I need this though, because I want to have e.g. train and val loss in the same graph.
I'm using the TestTubeLogger:
` logger = TestTubeLogger(save_dir=log_dir, name=name, description=description)
`
and just pass this to the Trainer.
The metric logging to lightning is a bit scattered:
1. train_step in model:
```
x, target = batch
pred = self.forward(x)
loss = self.loss(pred, target)
lightning_log = {"loss": loss}
with torch.no_grad():
train_acc = self.calculate_accuracy(pred, target)
tensorboard_log = {"loss": loss, "acc": train_acc}
return tensorboard_log, lightning_log
```
2. this is passed to a function that lets me add train and val to same graph:
```
def _construct_lightning_log(
self,
tensorboard_log: dict,
lightning_log: dict = None,
suffix: str = "train",
prefix: str = "metrics",
):
lightning_log = lightning_log or {}
fixed_log = {}
for metric, value in tensorboard_log.items():
if isinstance(value, dict):
fixed_log[f"{prefix}/{metric}"] = value
else:
fixed_log[f"{prefix}/{metric}"] = {suffix: value}
return {"log": fixed_log, **lightning_log}
```
Do you pass it after training_step or training_epoch_end? I think lightning collects your logs and tries to aggregate it to one value. I can't test it now. Maybe tomorrow.
But when I quickly type this into python interpreter:
```
>>> d={}
>>> np.mean([d,d])
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<__array_function__ internals>", line 5, in mean
File "/usr/lib/python3.8/site-packages/numpy/core/fromnumeric.py", line 3334, in mean
return _methods._mean(a, axis=axis, dtype=dtype,
File "/usr/lib/python3.8/site-packages/numpy/core/_methods.py", line 151, in _mean
ret = umr_sum(arr, axis, dtype, out, keepdims)
TypeError: unsupported operand type(s) for +: 'dict' and 'dict'
```
Seems like getting your error.
Maybe print what you exactly return and when it crashes. When I have time tomorrow, I will also make some tests.
After training_step. I do not have a training_epoch_end or training_end method defined.
> I think lightning collects your logs and tries to aggregate it to one value.
Yes I think so as well.
Ok I return something like this:
`{'metrics/aud_std': {'test': tensor(1.6337, device='cuda:0')},
'metrics/class_loss_diff': {'test': tensor(nan)},
'metrics/class_loss_val': {'0': tensor(nan), '1': tensor(91.5485)},
'metrics/loss': {'test': tensor(45.7742, device='cuda:0')},
'metrics/vid_std': {'test': tensor(1.6506, device='cuda:0')}}`
What do you mean by when it crashes exactly? I think when it crashes it's always the train step after a validation step (keep in mind I'm validating several times during one epoch). If I change the val_check_interval, the error either disappears or happens at a different batch number.
Hello.
I think the problem is in your metrics type. Metrics must have the `Dict[str, float]` type. But in your case, the `metrics` is a nested dict. So that's why the values fail to be aggregated.
Is it possible for you to flatten the dictionary?
@alexeykarnachev Hey! Ah yes that's what I thought. Do you know why the metrics dict is enforced to be of this type? In 0.5.x this was not an issue as far as I know.
I mean, yes I can flatten it but I want to have i.e. val/loss and train/loss in the same graph. It's basically this: https://pytorch.org/docs/stable/tensorboard.html#torch.utils.tensorboard.writer.SummaryWriter.add_scalars
I know that here https://github.com/PyTorchLightning/pytorch-lightning/issues/1144#issuecomment-599089378 It was said that this should not be done, but for me this is essential.
Is there a way that I can overwrite the merge_dicts function? If so how would I do that?
@fellnerse Okay, I got your point, let's ask Borda's advice)
@Borda, what do you think? Is it possible to combine nested metrics dictionaries with metrics aggregation logic? At first sight, it doesn't look like a big problem. Maybe you can see any side effects of tracking aggregated metrics with nested dictionaries? If no, I can try to fix this issue
I guess it can be used, we just need to take care of the depth, and the aggregation will be a bit complicated... | 2020-04-23T20:27:40Z | [] | [] |
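As a rough illustration of that recursive aggregation (my own sketch mirroring the approach taken in the patch above, not Lightning's exact code), sub-dicts are merged recursively and only leaf values are handed to the aggregation function:
```python
import numpy as np

def merge_nested(dicts, agg=np.mean):
    """Aggregate a list of (possibly nested) metric dicts leaf by leaf."""
    keys = {k for d in dicts for k in d}
    out = {}
    for k in keys:
        values = [d[k] for d in dicts if k in d]
        if isinstance(values[0], dict):
            out[k] = merge_nested(values, agg)  # recurse into nested metrics
        else:
            out[k] = agg(values)                # e.g. np.mean over the collected steps
    return out

print(merge_nested([{'loss': {'train': 1.0}}, {'loss': {'train': 3.0}}]))
# {'loss': {'train': 2.0}}
```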
Traceback (most recent call last):████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 9/9 [00:00<00:00, 8.76it/s]
File "/home/sebastian/.pyenv/versions/3.7.7/lib/python3.7/multiprocessing/util.py", line 277, in _run_finalizers
finalizer()
File "/home/sebastian/.pyenv/versions/3.7.7/lib/python3.7/multiprocessing/util.py", line 201, in __call__
res = self._callback(*self._args, **self._kwargs)
File "/home/sebastian/.pyenv/versions/3.7.7/lib/python3.7/multiprocessing/util.py", line 110, in _remove_temp_dir
rmtree(tempdir)
File "/home/sebastian/.pyenv/versions/3.7.7/lib/python3.7/shutil.py", line 498, in rmtree
onerror(os.rmdir, path, sys.exc_info())
File "/home/sebastian/.pyenv/versions/3.7.7/lib/python3.7/shutil.py", line 496, in rmtree
os.rmdir(path)
OSError: [Errno 39] Directory not empty: '/tmp/pymp-jcqai2xr'
| 140 |
|||
Lightning-AI/lightning | Lightning-AI__lightning-1589 | 79196246cfcc73391de1be71bfb27d4366daf75a | diff --git a/pytorch_lightning/trainer/distrib_parts.py b/pytorch_lightning/trainer/distrib_parts.py
--- a/pytorch_lightning/trainer/distrib_parts.py
+++ b/pytorch_lightning/trainer/distrib_parts.py
@@ -461,10 +461,15 @@ def __transfer_data_to_device(self, batch, device, gpu_id=None):
# when tuple
if isinstance(batch, tuple):
- batch = list(batch)
- for i, x in enumerate(batch):
- batch[i] = self.__transfer_data_to_device(x, device, gpu_id)
- return tuple(batch)
+ # when namedtuple
+ if hasattr(batch, '_fields'):
+ elem_type = type(batch)
+ return elem_type(*(self.__transfer_data_to_device(x, device, gpu_id) for x in batch))
+ else:
+ batch = list(batch)
+ for i, x in enumerate(batch):
+ batch[i] = self.__transfer_data_to_device(x, device, gpu_id)
+ return tuple(batch)
# when dict
if isinstance(batch, dict):
| Namedtuples converted to regular tuples when sent to the GPU.
<!--
### Common bugs:
1. Tensorboard not showing in Jupyter-notebook see [issue 79](https://github.com/PyTorchLightning/pytorch-lightning/issues/79).
2. PyTorch 1.1.0 vs 1.2.0 support [see FAQ](https://github.com/PyTorchLightning/pytorch-lightning#faq)
-->
## 🐛 Bug
<!-- A clear and concise description of what the bug is. -->
Named tuples returned from `Dataset` get converted to regular tuples when sent to the gpu.
This happens because `isinstance(instance_of_a_named_tuple, tuple)` evaluates to True in `distrib_parts.py`
https://github.com/PyTorchLightning/pytorch-lightning/blob/67d5f4dc392250d23bfeb11aba45e919a99ff1c0/pytorch_lightning/trainer/distrib_parts.py#L463
### To Reproduce
```python
import pytorch_lightning as pl
from collections import namedtuple
import torch
import numpy
NamedTupleDemoInput = namedtuple('DemoInput', ['x1', 'x2', 'y'])
class NamedTupleDemoDataset:
def __len__(self):
return 30000
def __getitem__(self, index):
x1 = numpy.random.uniform(0, 100)
x2 = numpy.random.uniform(0, 100)
y = 2*x1 + 3*x2 + numpy.random.normal(0, 0.05)
return NamedTupleDemoInput(x1, x2, y)
class WeightedSum(torch.nn.Module):
def __init__(self):
super(WeightedSum, self).__init__()
self.a = torch.nn.Parameter(torch.zeros(1))
self.b = torch.nn.Parameter(torch.zeros(1))
def forward(self, x1, x2):
return self.a * x1 + self.b * x2
class NamedTupleDemo(pl.LightningModule):
def __init__(self):
super(NamedTupleDemo, self).__init__()
self.model = WeightedSum()
def forward(self, x1, x2):
return self.model(x1, x2)
def train_dataloader(self):
return torch.utils.data.DataLoader(NamedTupleDemoDataset(), batch_size=128)
def training_step(self, batch, batch_index):
yhat = self.forward(batch.x1, batch.x2)
return {'loss': torch.nn.functional.mse_loss(batch.y, yhat)}
def configure_optimizers(self):
return torch.optim.Adam(self.parameters(), lr=1e-2)
if __name__ == '__main__':
module = NamedTupleDemo()
pl.Trainer(max_epochs=20, gpus=1).fit(module)
print(f'a={float(module.model.a)} b={float(module.model.b)}')
```
<!-- If you have a code sample, error messages, stack traces, please provide it here as well -->
```
Traceback (most recent call last):
File "demo.py", line 48, in <module>
pl.Trainer(max_epochs=20, gpus=1).fit(module)
File "/home/n/repos/pytorch-lightning/pytorch_lightning/trainer/trainer.py", line 749, in fit
self.single_gpu_train(model)
File "/home/n/repos/pytorch-lightning/pytorch_lightning/trainer/distrib_parts.py", line 491, in single_gpu_train
self.run_pretrain_routine(model)
File "/home/n/repos/pytorch-lightning/pytorch_lightning/trainer/trainer.py", line 910, in run_pretrain_routine
self.train()
File "/home/n/repos/pytorch-lightning/pytorch_lightning/trainer/training_loop.py", line 384, in train
self.run_training_epoch()
File "/home/n/repos/pytorch-lightning/pytorch_lightning/trainer/training_loop.py", line 456, in run_training_epoch
_outputs = self.run_training_batch(batch, batch_idx)
File "/home/n/repos/pytorch-lightning/pytorch_lightning/trainer/training_loop.py", line 633, in run_training_batch
loss, batch_output = optimizer_closure()
File "/home/n/repos/pytorch-lightning/pytorch_lightning/trainer/training_loop.py", line 597, in optimizer_closure
output_dict = self.training_forward(split_batch, batch_idx, opt_idx, self.hiddens)
File "/home/n/repos/pytorch-lightning/pytorch_lightning/trainer/training_loop.py", line 770, in training_forward
output = self.model.training_step(*args)
File "demo.py", line 40, in training_step
yhat = self.forward(batch.x1, batch.x2)
AttributeError: 'tuple' object has no attribute 'x1'
```
<!-- Ideally attach a minimal code sample to reproduce the decried issue.
Minimal means having the shortest code but still preserving the bug. -->
### Expected behavior
Namedtuples returned from the dataset should keep their original fields.
### Environment
* CUDA:
- GPU:
- GeForce RTX 2080 Ti
- available: True
- version: 10.2
* Packages:
- numpy: 1.18.3
- pyTorch_debug: False
- pyTorch_version: 1.5.0
- pytorch-lightning: 0.7.4rc5
- tensorboard: 2.2.1
- tqdm: 4.45.0
* System:
- OS: Linux
- architecture:
- 64bit
- ELF
- processor:
- python: 3.8.2
- version: #1 SMP PREEMPT Sun, 05 Apr 2020 05:13:14 +0000
<!-- Add any other context about the problem here. -->
| 2020-04-24T03:49:56Z | [] | [] |
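For illustration, here is a minimal sketch (not Lightning's implementation) of the namedtuple handling introduced by the patch above: a namedtuple is detected via its `_fields` attribute and rebuilt as the same type after each element has been transferred:
```python
from collections import namedtuple
import torch

def transfer(batch, device):
    if isinstance(batch, torch.Tensor):
        return batch.to(device)
    if isinstance(batch, tuple) and hasattr(batch, '_fields'):  # namedtuple
        return type(batch)(*(transfer(x, device) for x in batch))
    if isinstance(batch, tuple):
        return tuple(transfer(x, device) for x in batch)
    return batch

Demo = namedtuple('Demo', ['x1', 'x2'])
moved = transfer(Demo(torch.zeros(2), torch.ones(2)), torch.device('cpu'))
print(type(moved).__name__, moved.x1)  # still a Demo namedtuple, fields intact
```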
Traceback (most recent call last):
File "demo.py", line 48, in <module>
pl.Trainer(max_epochs=20, gpus=1).fit(module)
File "/home/n/repos/pytorch-lightning/pytorch_lightning/trainer/trainer.py", line 749, in fit
self.single_gpu_train(model)
File "/home/n/repos/pytorch-lightning/pytorch_lightning/trainer/distrib_parts.py", line 491, in single_gpu_train
self.run_pretrain_routine(model)
File "/home/n/repos/pytorch-lightning/pytorch_lightning/trainer/trainer.py", line 910, in run_pretrain_routine
self.train()
File "/home/n/repos/pytorch-lightning/pytorch_lightning/trainer/training_loop.py", line 384, in train
self.run_training_epoch()
File "/home/n/repos/pytorch-lightning/pytorch_lightning/trainer/training_loop.py", line 456, in run_training_epoch
_outputs = self.run_training_batch(batch, batch_idx)
File "/home/n/repos/pytorch-lightning/pytorch_lightning/trainer/training_loop.py", line 633, in run_training_batch
loss, batch_output = optimizer_closure()
File "/home/n/repos/pytorch-lightning/pytorch_lightning/trainer/training_loop.py", line 597, in optimizer_closure
output_dict = self.training_forward(split_batch, batch_idx, opt_idx, self.hiddens)
File "/home/n/repos/pytorch-lightning/pytorch_lightning/trainer/training_loop.py", line 770, in training_forward
output = self.model.training_step(*args)
File "demo.py", line 40, in training_step
yhat = self.forward(batch.x1, batch.x2)
AttributeError: 'tuple' object has no attribute 'x1'
| 141 |
||||
Lightning-AI/lightning | Lightning-AI__lightning-2014 | 8b9b923ca8ad9fdb0ae22928de0029e7c2e7a782 | diff --git a/pl_examples/domain_templates/computer_vision_fine_tuning.py b/pl_examples/domain_templates/computer_vision_fine_tuning.py
--- a/pl_examples/domain_templates/computer_vision_fine_tuning.py
+++ b/pl_examples/domain_templates/computer_vision_fine_tuning.py
@@ -450,5 +450,4 @@ def get_args() -> argparse.Namespace:
if __name__ == '__main__':
-
main(get_args())
diff --git a/pl_examples/domain_templates/generative_adversarial_net.py b/pl_examples/domain_templates/generative_adversarial_net.py
--- a/pl_examples/domain_templates/generative_adversarial_net.py
+++ b/pl_examples/domain_templates/generative_adversarial_net.py
@@ -7,7 +7,7 @@
tensorboard --logdir default
"""
import os
-from argparse import ArgumentParser
+from argparse import ArgumentParser, Namespace
from collections import OrderedDict
import numpy as np
@@ -183,7 +183,7 @@ def on_epoch_end(self):
self.logger.experiment.add_image('generated_images', grid, self.current_epoch)
-def main(args):
+def main(args: Namespace) -> None:
# ------------------------
# 1 INIT LIGHTNING MODEL
# ------------------------
diff --git a/pl_examples/domain_templates/imagenet.py b/pl_examples/domain_templates/imagenet.py
--- a/pl_examples/domain_templates/imagenet.py
+++ b/pl_examples/domain_templates/imagenet.py
@@ -1,7 +1,7 @@
"""
This example is largely adapted from https://github.com/pytorch/examples/blob/master/imagenet/main.py
"""
-import argparse
+from argparse import ArgumentParser, Namespace
import os
import random
from collections import OrderedDict
@@ -183,7 +183,7 @@ def val_dataloader(self):
@staticmethod
def add_model_specific_args(parent_parser): # pragma: no-cover
- parser = argparse.ArgumentParser(parents=[parent_parser])
+ parser = ArgumentParser(parents=[parent_parser])
parser.add_argument('-a', '--arch', metavar='ARCH', default='resnet18', choices=MODEL_NAMES,
help='model architecture: ' +
' | '.join(MODEL_NAMES) +
@@ -210,7 +210,7 @@ def add_model_specific_args(parent_parser): # pragma: no-cover
def get_args():
- parent_parser = argparse.ArgumentParser(add_help=False)
+ parent_parser = ArgumentParser(add_help=False)
parent_parser.add_argument('--data-path', metavar='DIR', type=str,
help='path to dataset')
parent_parser.add_argument('--save-path', metavar='DIR', default=".", type=str,
@@ -228,20 +228,23 @@ def get_args():
return parser.parse_args()
-def main(hparams):
- model = ImageNetLightningModel(hparams)
- if hparams.seed is not None:
- random.seed(hparams.seed)
- torch.manual_seed(hparams.seed)
+def main(args: Namespace) -> None:
+ model = ImageNetLightningModel(**vars(args))
+
+ if args.seed is not None:
+ random.seed(args.seed)
+ torch.manual_seed(args.seed)
cudnn.deterministic = True
+
trainer = pl.Trainer(
- default_root_dir=hparams.save_path,
- gpus=hparams.gpus,
- max_epochs=hparams.epochs,
- distributed_backend=hparams.distributed_backend,
- precision=16 if hparams.use_16bit else 32,
+ default_root_dir=args.save_path,
+ gpus=args.gpus,
+ max_epochs=args.epochs,
+ distributed_backend=args.distributed_backend,
+ precision=16 if args.use_16bit else 32,
)
- if hparams.evaluate:
+
+ if args.evaluate:
trainer.run_evaluation()
else:
trainer.fit(model)
| Bug in GAN example
Bug in https://github.com/PyTorchLightning/pytorch-lightning/blob/master/pl_examples/domain_templates/generative_adversarial_net.py
When I run `python generative_adversarial_net.py `
I get
```
Traceback (most recent call last):
File "generative_adversarial_net.py", line 218, in <module>
main(hparams)
File "generative_adversarial_net.py", line 192, in main
model = GAN(hparams)
File "generative_adversarial_net.py", line 90, in __init__
self.generator = Generator(latent_dim=self.latent_dim, img_shape=mnist_shape)
File "generative_adversarial_net.py", line 39, in __init__
*block(latent_dim, 128, normalize=False),
File "generative_adversarial_net.py", line 32, in block
layers = [nn.Linear(in_feat, out_feat)]
File "/home/vladimir/anaconda3/lib/python3.7/site-packages/torch/nn/modules/linear.py", line 72, in __init__
self.weight = Parameter(torch.Tensor(out_features, in_features))
TypeError: new(): argument 'size' must be tuple of ints, but found element of type Namespace at pos 2
```
| Replace with `model = GAN(**vars(hparams))` [here](https://github.com/PyTorchLightning/pytorch-lightning/blob/fdbbe968256f6c68a5dbb840a2004b77a618ef61/pl_examples/domain_templates/generative_adversarial_net.py#L192). Same bug in [imagenet script](https://github.com/PyTorchLightning/pytorch-lightning/blob/fdbbe968256f6c68a5dbb840a2004b77a618ef61/pl_examples/domain_templates/imagenet.py#L232) also.
@ternaus @rohitgr7 mind submitting a PR to fix? :) | 2020-05-30T12:26:09Z | [] | [] |
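A small illustration of why the suggested one-liner works (`gan_init` below is just a hypothetical stand-in for the constructor's keyword arguments, not code from the example): `vars()` turns an argparse Namespace into a dict, so `**vars(hparams)` passes each hyperparameter as an individual keyword argument instead of one Namespace object.
```python
from argparse import Namespace

hparams = Namespace(latent_dim=100, lr=0.0002)
print(vars(hparams))              # {'latent_dim': 100, 'lr': 0.0002}

def gan_init(latent_dim, lr):     # stand-in for keyword hyperparameters
    return latent_dim, lr

print(gan_init(**vars(hparams)))  # (100, 0.0002)
```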
Traceback (most recent call last):
File "generative_adversarial_net.py", line 218, in <module>
main(hparams)
File "generative_adversarial_net.py", line 192, in main
model = GAN(hparams)
File "generative_adversarial_net.py", line 90, in __init__
self.generator = Generator(latent_dim=self.latent_dim, img_shape=mnist_shape)
File "generative_adversarial_net.py", line 39, in __init__
*block(latent_dim, 128, normalize=False),
File "generative_adversarial_net.py", line 32, in block
layers = [nn.Linear(in_feat, out_feat)]
File "/home/vladimir/anaconda3/lib/python3.7/site-packages/torch/nn/modules/linear.py", line 72, in __init__
self.weight = Parameter(torch.Tensor(out_features, in_features))
TypeError: new(): argument 'size' must be tuple of ints, but found element of type Namespace at pos 2
| 177 |
|||
Lightning-AI/lightning | Lightning-AI__lightning-2115 | 0bd7780adc4d68007946cf380a6a24e1a08d99d1 | diff --git a/pytorch_lightning/trainer/data_loading.py b/pytorch_lightning/trainer/data_loading.py
--- a/pytorch_lightning/trainer/data_loading.py
+++ b/pytorch_lightning/trainer/data_loading.py
@@ -139,6 +139,7 @@ def _get_distributed_sampler(self, dataloader):
else:
world_size = {
'ddp': self.num_nodes * self.num_processes,
+ 'ddp_spawn': self.num_nodes * self.num_processes,
'ddp2': self.num_nodes,
'ddp_cpu': self.num_processes * self.num_nodes
}
diff --git a/pytorch_lightning/trainer/distrib_data_parallel.py b/pytorch_lightning/trainer/distrib_data_parallel.py
--- a/pytorch_lightning/trainer/distrib_data_parallel.py
+++ b/pytorch_lightning/trainer/distrib_data_parallel.py
@@ -221,7 +221,7 @@ def set_distributed_mode(self, distributed_backend):
elif self.num_gpus > 1:
self.use_dp = True
- elif distributed_backend == "ddp":
+ elif distributed_backend in ['ddp', 'ddp_spawn']:
if self.num_gpus == 0:
if self.num_nodes > 1 or self.num_processes > 1:
self.use_ddp = True # ddp_cpu
@@ -378,6 +378,7 @@ def spawn_ddp_children(self, model):
self.interactive_ddp_procs = []
for local_rank in range(1, self.num_processes):
+ print('launching local_rank', local_rank)
env_copy = os.environ.copy()
env_copy['LOCAL_RANK'] = f'{local_rank}'
@@ -394,7 +395,7 @@ def spawn_ddp_children(self, model):
local_rank = 0
self.ddp_train(local_rank, model, is_master=True)
- def ddp_train(self, process_idx, model, is_master=False):
+ def ddp_train(self, process_idx, model, is_master=False, proc_offset=0):
"""
Entry point into a DP thread
:param gpu_idx:
@@ -402,6 +403,9 @@ def ddp_train(self, process_idx, model, is_master=False):
:param cluster_obj:
:return:
"""
+ # offset the process id if requested
+ process_idx = process_idx + proc_offset
+
# show progressbar only on progress_rank 0
if (self.node_rank != 0 or process_idx != 0) and self.progress_bar_callback is not None:
self.progress_bar_callback.disable()
@@ -454,7 +458,7 @@ def ddp_train(self, process_idx, model, is_master=False):
self.reinit_scheduler_properties(self.optimizers, self.lr_schedulers)
# DDP2 uses all GPUs on the machine
- if self.distributed_backend == 'ddp':
+ if self.distributed_backend == 'ddp' or self.distributed_backend == 'ddp_spawn':
device_ids = [self.root_gpu]
elif self.use_ddp2:
device_ids = self.data_parallel_device_ids
diff --git a/pytorch_lightning/trainer/trainer.py b/pytorch_lightning/trainer/trainer.py
--- a/pytorch_lightning/trainer/trainer.py
+++ b/pytorch_lightning/trainer/trainer.py
@@ -246,7 +246,7 @@ def __init__(
Use `row_log_interval` instead. Will remove 0.9.0.
- distributed_backend: The distributed backend to use.
+ distributed_backend: The distributed backend to use (dp, ddp, ddp2, ddp_spawn)
use_amp:
.. warning:: .. deprecated:: 0.7.0
@@ -876,9 +876,16 @@ def fit(
self.ddp_train(task, model)
elif self.distributed_backend == 'cpu_ddp':
+ self.__set_random_port()
self.model = model
mp.spawn(self.ddp_train, nprocs=self.num_processes, args=(model,))
+ elif self.distributed_backend == 'ddp_spawn':
+ model.share_memory()
+
+ # spin up peers
+ mp.spawn(self.ddp_train, nprocs=self.num_processes, args=(model, ))
+
elif self.distributed_backend == 'ddp':
self.spawn_ddp_children(model)
| verify ddp and ddp_spawn implementation
CUDA error: an illegal memory access was encountered after updating to the latest stable packages
Can anyone help with this CUDA error: an illegal memory access was encountered ??
It runs fine for several iterations...
## 🐛 Bug
```
Traceback (most recent call last):
File "train_gpu.py", line 237, in <module>
main_local(hparam_trial)
File "train_gpu.py", line 141, in main_local
trainer.fit(model)
File "/shared/storage/cs/staffstore/username/anaconda3/envs/sh1/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 859, in fit
self.single_gpu_train(model)
File "/shared/storage/cs/staffstore/username/anaconda3/envs/sh1/lib/python3.7/site-packages/pytorch_lightning/trainer/distrib_parts.py", line 503, in single_gpu_train
self.run_pretrain_routine(model)
File "/shared/storage/cs/staffstore/username/anaconda3/envs/sh1/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 1015, in run_pretrain_routine
self.train()
File "/shared/storage/cs/staffstore/username/anaconda3/envs/sh1/lib/python3.7/site-packages/pytorch_lightning/trainer/training_loop.py", line 347, in train
self.run_training_epoch()
File "/shared/storage/cs/staffstore/username/anaconda3/envs/sh1/lib/python3.7/site-packages/pytorch_lightning/trainer/training_loop.py", line 419, in run_training_epoch
_outputs = self.run_training_batch(batch, batch_idx)
File "/shared/storage/cs/staffstore/username/anaconda3/envs/sh1/lib/python3.7/site-packages/pytorch_lightning/trainer/training_loop.py", line 604, in run_training_batch
self.batch_loss_value.append(loss)
File "/shared/storage/cs/staffstore/username/anaconda3/envs/sh1/lib/python3.7/site-packages/pytorch_lightning/trainer/supporters.py", line 44, in append
x = x.to(self.memory)
RuntimeError: CUDA error: an illegal memory access was encountered
```
### To Reproduce
### Environment
* CUDA:
- GPU:
- Quadro P6000
- available: True
- version: 10.2
* Packages:
- numpy: 1.18.1
- pyTorch_debug: False
- pyTorch_version: 1.5.0
- pytorch-lightning: 0.7.6
- tensorboard: 2.2.2
- tqdm: 4.46.1
* System:
- OS: Linux
- architecture:
- 64bit
-
- processor: x86_64
- python: 3.7.0
- version: #47~18.04.1-Ubuntu SMP Thu May 7 13:10:50 UTC 2020
| 2020-06-08T15:37:16Z | [] | [] |
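For context, a minimal hedged sketch (my own, on CPU with the gloo backend, not Lightning's code) of what the `ddp_spawn` path added in the patch above boils down to: share the model's memory, then spawn one worker per process with `torch.multiprocessing.spawn`, each joining the same process group.
```python
import os
import torch
import torch.distributed as dist
import torch.multiprocessing as mp

def worker(process_idx, world_size, model):
    os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
    os.environ.setdefault("MASTER_PORT", "29500")   # hypothetical fixed port
    dist.init_process_group("gloo", rank=process_idx, world_size=world_size)
    # ... the per-rank training loop would go here ...
    dist.destroy_process_group()

if __name__ == "__main__":
    world_size = 2
    model = torch.nn.Linear(4, 4)
    model.share_memory()                            # as in the patch above
    mp.spawn(worker, nprocs=world_size, args=(world_size, model))
```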
Traceback (most recent call last):
File "train_gpu.py", line 237, in <module>
main_local(hparam_trial)
File "train_gpu.py", line 141, in main_local
trainer.fit(model)
File "/shared/storage/cs/staffstore/username/anaconda3/envs/sh1/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 859, in fit
self.single_gpu_train(model)
File "/shared/storage/cs/staffstore/username/anaconda3/envs/sh1/lib/python3.7/site-packages/pytorch_lightning/trainer/distrib_parts.py", line 503, in single_gpu_train
self.run_pretrain_routine(model)
File "/shared/storage/cs/staffstore/username/anaconda3/envs/sh1/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 1015, in run_pretrain_routine
self.train()
File "/shared/storage/cs/staffstore/username/anaconda3/envs/sh1/lib/python3.7/site-packages/pytorch_lightning/trainer/training_loop.py", line 347, in train
self.run_training_epoch()
File "/shared/storage/cs/staffstore/username/anaconda3/envs/sh1/lib/python3.7/site-packages/pytorch_lightning/trainer/training_loop.py", line 419, in run_training_epoch
_outputs = self.run_training_batch(batch, batch_idx)
File "/shared/storage/cs/staffstore/username/anaconda3/envs/sh1/lib/python3.7/site-packages/pytorch_lightning/trainer/training_loop.py", line 604, in run_training_batch
self.batch_loss_value.append(loss)
File "/shared/storage/cs/staffstore/username/anaconda3/envs/sh1/lib/python3.7/site-packages/pytorch_lightning/trainer/supporters.py", line 44, in append
x = x.to(self.memory)
RuntimeError: CUDA error: an illegal memory access was encountered
| 188 |
||||
Lightning-AI/lightning | Lightning-AI__lightning-2216 | e780072961562ab1d89bad871918fcc422ad0ac6 | diff --git a/pytorch_lightning/loggers/base.py b/pytorch_lightning/loggers/base.py
--- a/pytorch_lightning/loggers/base.py
+++ b/pytorch_lightning/loggers/base.py
@@ -3,13 +3,11 @@
import operator
from abc import ABC, abstractmethod
from argparse import Namespace
-from typing import Union, Optional, Dict, Iterable, Any, Callable, List, Sequence, Mapping, Tuple
+from typing import Union, Optional, Dict, Iterable, Any, Callable, List, Sequence, Mapping, Tuple, MutableMapping
import numpy as np
import torch
-from pytorch_lightning.utilities import rank_zero_only
-
class LightningLoggerBase(ABC):
"""
@@ -174,9 +172,9 @@ def _flatten_dict(params: Dict[str, Any], delimiter: str = '/') -> Dict[str, Any
def _dict_generator(input_dict, prefixes=None):
prefixes = prefixes[:] if prefixes else []
- if isinstance(input_dict, dict):
+ if isinstance(input_dict, MutableMapping):
for key, value in input_dict.items():
- if isinstance(value, (dict, Namespace)):
+ if isinstance(value, (MutableMapping, Namespace)):
value = vars(value) if isinstance(value, Namespace) else value
for d in _dict_generator(value, prefixes + [key]):
yield d
| Hydra MLFlow Clash
<!--
### Common bugs:
1. Tensorboard not showing in Jupyter-notebook see [issue 79](https://github.com/PyTorchLightning/pytorch-lightning/issues/79).
2. PyTorch 1.1.0 vs 1.2.0 support [see FAQ](https://github.com/PyTorchLightning/pytorch-lightning#faq)
-->
## 🐛 Bug
When using the MLFlow logger with Hydra, because the parameters passed to the LightningModule are a `DictConfig`, the condition in `loggers/base.py` is not met.
https://github.com/PyTorchLightning/pytorch-lightning/blob/8211256c46430e43e0c27e4f078c72085bb4ea34/pytorch_lightning/loggers/base.py#L177
### To Reproduce
Use Hydra and MLFlow together.
<!-- If you have a code sample, error messages, stack traces, please provide it here as well -->
```python
Traceback (most recent call last):
File "/home/siavash/KroniKare/kwae2/kwae_ma/models/pl_train_segmentation_model.py", line 115, in <module>
main()
File "/home/siavash/anaconda3/envs/kwae-ma/lib/python3.7/site-packages/hydra/main.py", line 24, in decorated_main
strict=strict,
File "/home/siavash/anaconda3/envs/kwae-ma/lib/python3.7/site-packages/hydra/_internal/utils.py", line 174, in run_hydra
overrides=args.overrides,
File "/home/siavash/anaconda3/envs/kwae-ma/lib/python3.7/site-packages/hydra/_internal/hydra.py", line 86, in run
job_subdir_key=None,
File "/home/siavash/anaconda3/envs/kwae-ma/lib/python3.7/site-packages/hydra/plugins/common/utils.py", line 109, in run_job
ret.return_value = task_function(task_cfg)
File "/home/siavash/KroniKare/kwae2/kwae_ma/models/pl_train_segmentation_model.py", line 111, in main
trainer.fit(wound_seg_pl)
File "/home/siavash/anaconda3/envs/kwae-ma/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 765, in fit
self.single_gpu_train(model)
File "/home/siavash/anaconda3/envs/kwae-ma/lib/python3.7/site-packages/pytorch_lightning/trainer/distrib_parts.py", line 492, in single_gpu_train
self.run_pretrain_routine(model)
File "/home/siavash/anaconda3/envs/kwae-ma/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 843, in run_pretrain_routine
self.logger.log_hyperparams(ref_model.hparams)
File "/home/siavash/anaconda3/envs/kwae-ma/lib/python3.7/site-packages/pytorch_lightning/loggers/base.py", line 275, in log_hyperparams
[logger.log_hyperparams(params) for logger in self._logger_iterable]
File "/home/siavash/anaconda3/envs/kwae-ma/lib/python3.7/site-packages/pytorch_lightning/loggers/base.py", line 275, in <listcomp>
[logger.log_hyperparams(params) for logger in self._logger_iterable]
File "/home/siavash/anaconda3/envs/kwae-ma/lib/python3.7/site-packages/pytorch_lightning/utilities/distributed.py", line 10, in wrapped_fn
return fn(*args, **kwargs)
File "/home/siavash/anaconda3/envs/kwae-ma/lib/python3.7/site-packages/pytorch_lightning/loggers/mlflow.py", line 105, in log_hyperparams
self.experiment.log_param(self.run_id, k, v)
File "/home/siavash/anaconda3/envs/kwae-ma/lib/python3.7/site-packages/mlflow/tracking/client.py", line 206, in log_param
self._tracking_client.log_param(run_id, key, value)
File "/home/siavash/anaconda3/envs/kwae-ma/lib/python3.7/site-packages/mlflow/tracking/_tracking_service/client.py", line 177, in log_param
_validate_param_name(key)
File "/home/siavash/anaconda3/envs/kwae-ma/lib/python3.7/site-packages/mlflow/utils/validation.py", line 120, in _validate_param_name
INVALID_PARAMETER_VALUE)
mlflow.exceptions.MlflowException: Invalid parameter name: ''. Names may be treated as files in certain cases, and must not resolve to other names when treated as such. This name would resolve to '.'
```
### Expected behavior
Check whether the instance is `dict` or `DictConfig` in the given line.
| Hi! thanks for your contribution!, great first issue!
> Check whether the instance is `dict` or `DictConfig` in the given line.
@ssakhavi that sounds like a reasonable solution, mind sending a PR with the fix and its test? | 2020-06-17T03:24:11Z | [] | [] |
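To make the idea concrete, here is a hedged sketch (mine, not the library's `_flatten_dict`) of flattening against `collections.abc.MutableMapping` instead of the concrete `dict` type; the assumption, which the merged patch relies on, is that dict-like containers such as OmegaConf's `DictConfig` register as `MutableMapping`:
```python
from collections.abc import MutableMapping

def flatten(params, prefix=''):
    out = {}
    for key, value in params.items():
        name = f'{prefix}{key}'
        if isinstance(value, MutableMapping):   # covers dict-like containers too
            out.update(flatten(value, prefix=f'{name}/'))
        else:
            out[name] = value
    return out

print(flatten({'optimizer': {'name': 'adam', 'lr': 1e-3}}))
# {'optimizer/name': 'adam', 'optimizer/lr': 0.001}
```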
Traceback (most recent call last):
File "/home/siavash/KroniKare/kwae2/kwae_ma/models/pl_train_segmentation_model.py", line 115, in <module>
main()
File "/home/siavash/anaconda3/envs/kwae-ma/lib/python3.7/site-packages/hydra/main.py", line 24, in decorated_main
strict=strict,
File "/home/siavash/anaconda3/envs/kwae-ma/lib/python3.7/site-packages/hydra/_internal/utils.py", line 174, in run_hydra
overrides=args.overrides,
File "/home/siavash/anaconda3/envs/kwae-ma/lib/python3.7/site-packages/hydra/_internal/hydra.py", line 86, in run
job_subdir_key=None,
File "/home/siavash/anaconda3/envs/kwae-ma/lib/python3.7/site-packages/hydra/plugins/common/utils.py", line 109, in run_job
ret.return_value = task_function(task_cfg)
File "/home/siavash/KroniKare/kwae2/kwae_ma/models/pl_train_segmentation_model.py", line 111, in main
trainer.fit(wound_seg_pl)
File "/home/siavash/anaconda3/envs/kwae-ma/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 765, in fit
self.single_gpu_train(model)
File "/home/siavash/anaconda3/envs/kwae-ma/lib/python3.7/site-packages/pytorch_lightning/trainer/distrib_parts.py", line 492, in single_gpu_train
self.run_pretrain_routine(model)
File "/home/siavash/anaconda3/envs/kwae-ma/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 843, in run_pretrain_routine
self.logger.log_hyperparams(ref_model.hparams)
File "/home/siavash/anaconda3/envs/kwae-ma/lib/python3.7/site-packages/pytorch_lightning/loggers/base.py", line 275, in log_hyperparams
[logger.log_hyperparams(params) for logger in self._logger_iterable]
File "/home/siavash/anaconda3/envs/kwae-ma/lib/python3.7/site-packages/pytorch_lightning/loggers/base.py", line 275, in <listcomp>
[logger.log_hyperparams(params) for logger in self._logger_iterable]
File "/home/siavash/anaconda3/envs/kwae-ma/lib/python3.7/site-packages/pytorch_lightning/utilities/distributed.py", line 10, in wrapped_fn
return fn(*args, **kwargs)
File "/home/siavash/anaconda3/envs/kwae-ma/lib/python3.7/site-packages/pytorch_lightning/loggers/mlflow.py", line 105, in log_hyperparams
self.experiment.log_param(self.run_id, k, v)
File "/home/siavash/anaconda3/envs/kwae-ma/lib/python3.7/site-packages/mlflow/tracking/client.py", line 206, in log_param
self._tracking_client.log_param(run_id, key, value)
File "/home/siavash/anaconda3/envs/kwae-ma/lib/python3.7/site-packages/mlflow/tracking/_tracking_service/client.py", line 177, in log_param
_validate_param_name(key)
File "/home/siavash/anaconda3/envs/kwae-ma/lib/python3.7/site-packages/mlflow/utils/validation.py", line 120, in _validate_param_name
INVALID_PARAMETER_VALUE)
mlflow.exceptions.MlflowException: Invalid parameter name: ''. Names may be treated as files in certain cases, and must not resolve to other names when treated as such. This name would resolve to '.'
| 201 |
|||
Lightning-AI/lightning | Lightning-AI__lightning-2255 | b5a2f1ec4463064394dc6d977ffd246aa11158af | diff --git a/pl_examples/basic_examples/gpu_template.py b/pl_examples/basic_examples/gpu_template.py
--- a/pl_examples/basic_examples/gpu_template.py
+++ b/pl_examples/basic_examples/gpu_template.py
@@ -23,7 +23,7 @@ def main(hparams):
# ------------------------
# 1 INIT LIGHTNING MODEL
# ------------------------
- model = LightningTemplateModel(hparams)
+ model = LightningTemplateModel(**vars(hparams))
# ------------------------
# 2 INIT TRAINER
@@ -61,7 +61,7 @@ def main(hparams):
'--distributed_backend',
type=str,
default='dp',
- help='supports three options dp, ddp, ddp2'
+ help='supports four options dp, ddp, ddp2, ddp_spawn'
)
parent_parser.add_argument(
'--use_16bit',
| CPU/GPU Template
## 🐛 Bug
The GPU and CPU templates do not currently run on master after the changes that introduced the setup hook.
```
python -m pl_examples.basic_examples.gpu_template --gpus 4 --distributed_backend ddp
python -m pl_examples.basic_examples.cpu_template
```
CPU Template Error:
```
Traceback (most recent call last):
File "/usr/lib/python3.6/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/usr/lib/python3.6/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/home/anthony/Downloads/pytorch-lightning/pl_examples/basic_examples/cpu_template.py", line 53, in <module>
main(args)
File "/home/anthony/Downloads/pytorch-lightning/pl_examples/basic_examples/cpu_template.py", line 34, in main
trainer.fit(model)
File "/home/anthony/Downloads/pytorch-lightning/pytorch_lightning/trainer/trainer.py", line 952, in fit
self.run_pretrain_routine(model)
File "/home/anthony/Downloads/pytorch-lightning/pytorch_lightning/trainer/trainer.py", line 1063, in run_pretrain_routine
self.reset_val_dataloader(ref_model)
File "/home/anthony/Downloads/pytorch-lightning/pytorch_lightning/trainer/data_loading.py", line 331, in reset_val_dataloader
self._reset_eval_dataloader(model, 'val')
File "/home/anthony/Downloads/pytorch-lightning/pytorch_lightning/trainer/data_loading.py", line 253, in _reset_eval_dataloader
dataloaders = self.request_dataloader(getattr(model, f'{mode}_dataloader'))
File "/home/anthony/Downloads/pytorch-lightning/pytorch_lightning/trainer/data_loading.py", line 352, in request_dataloader
dataloader = dataloader_fx()
File "/home/anthony/Downloads/pytorch-lightning/pl_examples/models/lightning_template.py", line 158, in val_dataloader
return DataLoader(self.mnist_test, batch_size=self.batch_size, num_workers=4)
File "/home/anthony/.cache/pypoetry/virtualenvs/robotics-zp-60jGk-py3.6/lib/python3.6/site-packages/torch/nn/modules/module.py", line 594, in __getattr__
type(self).__name__, name))
AttributeError: 'LightningTemplateModel' object has no attribute 'mnist_test'
```
GPU Template Error:
```
File "/home/anthony/Downloads/pytorch-lightning/pl_examples/models/lightning_template.py", line 64, in __init__
self.c_d1_drop = nn.Dropout(self.drop_prob)
File "/home/anthony/.cache/pypoetry/virtualenvs/robotics-zp-60jGk-py3.6/lib/python3.6/site-packages/torch/nn/modules/dropout.py", line 10, in __init__
if p < 0 or p > 1:
TypeError: '<' not supported between instances of 'Namespace' and 'int'
```
### Environment
* CUDA:
- GPU:
- GeForce RTX 2080 Ti
- GeForce RTX 2080 Ti
- GeForce RTX 2080 Ti
- GeForce RTX 2080 Ti
- available: True
- version: 10.2
* Packages:
- numpy: 1.18.4
- pyTorch_debug: False
- pyTorch_version: 1.5.0
- pytorch-lightning: 0.8.0
- tensorboard: 2.2.1
- tqdm: 4.46.0
* System:
- OS: Linux
- architecture:
- 64bit
- ELF
- processor: x86_64
- python: 3.6.8
- version: #44~18.04.2-Ubuntu SMP Thu Apr 23 14:27:18 UTC 2020
| try again?
> try again?
it is in master now... :( | 2020-06-19T02:43:10Z | [] | [] |
Traceback (most recent call last):
File "/usr/lib/python3.6/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/usr/lib/python3.6/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/home/anthony/Downloads/pytorch-lightning/pl_examples/basic_examples/cpu_template.py", line 53, in <module>
main(args)
File "/home/anthony/Downloads/pytorch-lightning/pl_examples/basic_examples/cpu_template.py", line 34, in main
trainer.fit(model)
File "/home/anthony/Downloads/pytorch-lightning/pytorch_lightning/trainer/trainer.py", line 952, in fit
self.run_pretrain_routine(model)
File "/home/anthony/Downloads/pytorch-lightning/pytorch_lightning/trainer/trainer.py", line 1063, in run_pretrain_routine
self.reset_val_dataloader(ref_model)
File "/home/anthony/Downloads/pytorch-lightning/pytorch_lightning/trainer/data_loading.py", line 331, in reset_val_dataloader
self._reset_eval_dataloader(model, 'val')
File "/home/anthony/Downloads/pytorch-lightning/pytorch_lightning/trainer/data_loading.py", line 253, in _reset_eval_dataloader
dataloaders = self.request_dataloader(getattr(model, f'{mode}_dataloader'))
File "/home/anthony/Downloads/pytorch-lightning/pytorch_lightning/trainer/data_loading.py", line 352, in request_dataloader
dataloader = dataloader_fx()
File "/home/anthony/Downloads/pytorch-lightning/pl_examples/models/lightning_template.py", line 158, in val_dataloader
return DataLoader(self.mnist_test, batch_size=self.batch_size, num_workers=4)
File "/home/anthony/.cache/pypoetry/virtualenvs/robotics-zp-60jGk-py3.6/lib/python3.6/site-packages/torch/nn/modules/module.py", line 594, in __getattr__
type(self).__name__, name))
AttributeError: 'LightningTemplateModel' object has no attribute 'mnist_test'
| 209 |
|||
Lightning-AI/lightning | Lightning-AI__lightning-2293 | 3256fe4e5a405db1ab00d4cf4d48cbbfc7730959 | diff --git a/pytorch_lightning/trainer/data_loading.py b/pytorch_lightning/trainer/data_loading.py
--- a/pytorch_lightning/trainer/data_loading.py
+++ b/pytorch_lightning/trainer/data_loading.py
@@ -52,6 +52,8 @@ def _has_len(dataloader: DataLoader) -> bool:
return True
except TypeError:
return False
+ except NotImplementedError: # e.g. raised by torchtext if a batch_size_fn is used
+ return False
class TrainerDataLoadingMixin(ABC):
| _has_len does not handle NotImplementedError (raised by torchtext)
<!--
### Common bugs:
1. Tensorboard not showing in Jupyter-notebook see [issue 79](https://github.com/PyTorchLightning/pytorch-lightning/issues/79).
2. PyTorch 1.1.0 vs 1.2.0 support [see FAQ](https://github.com/PyTorchLightning/pytorch-lightning#faq)
-->
## 🐛 Bug
When using torchtext.data.Iterator with a batch_size_fn function, the __len__ method raises a NotImplementedError, which is not caught by the _has_len function.
A bug-fix is **very simple**: just return False if a NotImplementedError is raised. This is unlikely to have any negative side effects, since it corresponds with what _has_len is expected to do. The fix allowed me to train my model using torchtext.
I plan to submit a pull request with the fix above.
There are no additional dependencies required; however this problem occurred when using torchtext.
Example stack trace:
```
Traceback (most recent call last):
File "/Users/thomas/scm/OakDataPrep/oakSkipThoughtTrainer.py", line 18, in <module>
trainer.fit(model)
File "/Users/thomas/virtualenv/Python3/PyTorch/env/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 952, in fit
self.run_pretrain_routine(model)
File "/Users/thomas/virtualenv/Python3/PyTorch/env/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 1091, in run_pretrain_routine
self.train()
File "/Users/thomas/virtualenv/Python3/PyTorch/env/lib/python3.7/site-packages/pytorch_lightning/trainer/training_loop.py", line 334, in train
self.reset_train_dataloader(model)
File "/Users/thomas/virtualenv/Python3/PyTorch/env/lib/python3.7/site-packages/pytorch_lightning/trainer/data_loading.py", line 201, in reset_train_dataloader
if not _has_len(self.train_dataloader):
File "/Users/thomas/virtualenv/Python3/PyTorch/env/lib/python3.7/site-packages/pytorch_lightning/trainer/data_loading.py", line 49, in _has_len
if len(dataloader) == 0:
File "/Users/thomas/virtualenv/Python3/PyTorch/env/lib/python3.7/site-packages/torchtext/data/iterator.py", line 136, in __len__
raise NotImplementedError
NotImplementedError
```
### To Reproduce
Sorry I currently don't have a minimal example. The issue will always occur when torchtext.data.Iterator gets a batch_size_fn passed in. If the fix is not convincing I can take the time and construct a code example. Hope this is not necessary.
#### Code sample
I created my own Iterator for a Skip-Thought model that dynamically batches sentences together. This might be unnecessarily complex, or even not really useful; however, it revealed the issue described above when using torchtext. For context, here is a code excerpt that creates the issue:
```
import torchtext
...
global max_src_in_batch, max_tgt_in_batch
def batch_size_fn(new, count, sofar):
"Keep augmenting batch and calculate total number of tokens + padding."
global max_src_in_batch, max_tgt_in_batch
if count == 1:
max_src_in_batch = 0
max_tgt_in_batch = 0
max_src_in_batch = max(max_src_in_batch, len(new.current))
max_tgt_in_batch = max(max_tgt_in_batch, len(new.next) + 2)
src_elements = count * max_src_in_batch
tgt_elements = count * max_tgt_in_batch
return max(src_elements, tgt_elements)
class MyIterator(torchtext.data.Iterator):
def create_batches(self):
if self.train:
def pool(d, random_shuffler):
for p in data.batch(d, self.batch_size * 100):
p_batch = data.batch(
sorted(p, key=self.sort_key),
self.batch_size, self.batch_size_fn)
for b in random_shuffler(list(p_batch)):
yield b
self.batches = pool(self.data(), self.random_shuffler)
else:
self.batches = []
for b in data.batch(self.data(), self.batch_size,
self.batch_size_fn):
self.batches.append(sorted(b, key=self.sort_key))
...
class SkipThoughts(pl.LightningModule):
...
@pl.data_loader
def train_dataloader(self):
train_iter = MyIterator(self.my_train_dataloader, batch_size=self.batch_size, repeat=False,
sort_key=lambda x:
data.interleave_keys(len(x.current),
data.interleave_keys(len(x.prev),
len(x.next))),
batch_size_fn=batch_size_fn, train=True,
shuffle=True)
return train_iter
```
But this happens whenever a batch_size_fn is used in torchtext. Because it is unknown how many batches the dataset will have, torchtext's __len__ method raises a NotImplementedError. See the code snippet below:
```
def __len__(self):
if self.batch_size_fn is not None:
raise NotImplementedError
return math.ceil(len(self.dataset) / self.batch_size)
```
### Expected behavior
The function _has_len tests whether len() is available and then returns True, otherwise False. It should return False if a NotImplementedError is raised.
### Environment
/Users/thomas/virtualenv/Python3/PyTorch/env/bin/python /Users/thomas/scm/OakDataPrep/collect_env_details.py
* CUDA:
- GPU:
- available: False
- version: None
* Packages:
- numpy: 1.18.2
- pyTorch_debug: False
- pyTorch_version: 1.5.0
- pytorch-lightning: 0.8.0
- tensorboard: 2.2.0
- tqdm: 4.45.0
* System:
- OS: Darwin
- architecture:
- 64bit
-
- processor: i386
- python: 3.7.7
- version: Darwin Kernel Version 19.5.0: Tue May 26 20:41:44 PDT 2020; root:xnu-6153.121.2~2/RELEASE_X86_64
Process finished with exit code 0
### Additional context
The issue occurs with PyTorch-Lightning 0.8 and torchtext 0.6
<!-- Add any other context about the problem here. -->
| 2020-06-19T23:57:59Z | [] | [] |
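A simplified, self-contained sketch of the proposed behaviour (my own illustration following the patch above; the real `_has_len` also raises on an empty dataloader, which is omitted here). `NoLenIterator` is a hypothetical stand-in for a torchtext iterator built with a `batch_size_fn`:
```python
def has_len(dataloader) -> bool:
    """Return True only if the loader exposes a usable, non-zero __len__."""
    try:
        return len(dataloader) > 0
    except (TypeError, NotImplementedError):
        # e.g. torchtext iterators with a batch_size_fn raise NotImplementedError
        return False

class NoLenIterator:
    def __len__(self):
        raise NotImplementedError   # mimics torchtext.data.Iterator with batch_size_fn

    def __iter__(self):
        yield from range(3)

print(has_len(NoLenIterator()))     # False -> treated as unknown-length loader
print(has_len([1, 2, 3]))           # True
```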
Traceback (most recent call last):
File "/Users/thomas/scm/OakDataPrep/oakSkipThoughtTrainer.py", line 18, in <module>
trainer.fit(model)
File "/Users/thomas/virtualenv/Python3/PyTorch/env/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 952, in fit
self.run_pretrain_routine(model)
File "/Users/thomas/virtualenv/Python3/PyTorch/env/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 1091, in run_pretrain_routine
self.train()
File "/Users/thomas/virtualenv/Python3/PyTorch/env/lib/python3.7/site-packages/pytorch_lightning/trainer/training_loop.py", line 334, in train
self.reset_train_dataloader(model)
File "/Users/thomas/virtualenv/Python3/PyTorch/env/lib/python3.7/site-packages/pytorch_lightning/trainer/data_loading.py", line 201, in reset_train_dataloader
if not _has_len(self.train_dataloader):
File "/Users/thomas/virtualenv/Python3/PyTorch/env/lib/python3.7/site-packages/pytorch_lightning/trainer/data_loading.py", line 49, in _has_len
if len(dataloader) == 0:
File "/Users/thomas/virtualenv/Python3/PyTorch/env/lib/python3.7/site-packages/torchtext/data/iterator.py", line 136, in __len__
raise NotImplementedError
NotImplementedError
| 213 |
||||
Lightning-AI/lightning | Lightning-AI__lightning-2356 | 220bb6db57e7181e857a128e245ce242b6cf429f | diff --git a/pytorch_lightning/trainer/optimizers.py b/pytorch_lightning/trainer/optimizers.py
--- a/pytorch_lightning/trainer/optimizers.py
+++ b/pytorch_lightning/trainer/optimizers.py
@@ -111,15 +111,25 @@ def configure_schedulers(self, schedulers: list):
def reinit_scheduler_properties(self, optimizers: list, schedulers: list):
# Reinitialize optimizer.step properties added by schedulers
for scheduler in schedulers:
+ scheduler = scheduler['scheduler']
+
for optimizer in optimizers:
- scheduler = scheduler['scheduler']
# check that we dont mix users optimizers and schedulers
if scheduler.optimizer == optimizer:
# Find the mro belonging to the base lr scheduler class
for i, mro in enumerate(scheduler.__class__.__mro__):
- if mro == optim.lr_scheduler._LRScheduler:
+ if (
+ mro == optim.lr_scheduler._LRScheduler
+ or mro == optim.lr_scheduler.ReduceLROnPlateau
+ ):
idx = i
- scheduler.__class__.__mro__[idx].__init__(scheduler, optimizer)
+ state = scheduler.state_dict()
+ else:
+ state = None
+
+ scheduler.__class__.__mro__[idx].__init__(scheduler, optimizer)
+ if state is not None:
+ scheduler.load_state_dict(state)
class _MockOptimizer(Optimizer):
| Trainer(precision=16) fails with optim.lr_scheduler.ReduceLROnPlateau
<!--
### Common bugs:
1. Tensorboard not showing in Jupyter-notebook see [issue 79](https://github.com/PyTorchLightning/pytorch-lightning/issues/79).
2. PyTorch 1.1.0 vs 1.2.0 support [see FAQ](https://github.com/PyTorchLightning/pytorch-lightning#faq)
-->
## 🐛 Bug
<!-- A clear and concise description of what the bug is. -->
### To Reproduce
Steps to reproduce the behavior:
1. Create a `pl.LightningModule` that returns your optimizer along with a `optim.lr_scheduler.ReduceLROnPlateau` scheduler from `configure_optimizers`
2. Create a `pl.Trainer` with `precision=16`
3. Run your training (i.e., `trainer.fit(model)`)
4. See error
```console
Traceback (most recent call last):
File "main.py", line 65, in <module>
main()
File "main.py", line 61, in main
trainer.fit(model)
File "/workspace/pytorch-lightning/pytorch_lightning/trainer/trainer.py", line 889, in fit
self.dp_train(model)
File "/workspace/pytorch-lightning/pytorch_lightning/trainer/distrib_parts.py", line 223, in dp_train
self.reinit_scheduler_properties(optimizers, self.lr_schedulers)
File "/workspace/pytorch-lightning/pytorch_lightning/trainer/optimizers.py", line 122, in reinit_scheduler_properties
scheduler.__class__.__mro__[idx].__init__(scheduler, optimizer)
UnboundLocalError: local variable 'idx' referenced before assignment
```
<!-- If you have a code sample, error messages, stack traces, please provide it here as well -->
<!-- #### Code sample -->
<!-- Ideally attach a minimal code sample to reproduce the decried issue.
Minimal means having the shortest code but still preserving the bug. -->
<!-- ### Expected behavior -->
<!-- A clear and concise description of what you expected to happen. -->
<!-- ### Environment
Please copy and paste the output from our
[environment collection script](https://raw.githubusercontent.com/PyTorchLightning/pytorch-lightning/master/tests/collect_env_details.py)
(or fill out the checklist below manually).
You can get the script and run it with:
```
wget https://raw.githubusercontent.com/PyTorchLightning/pytorch-lightning/master/tests/collect_env_details.py
# For security purposes, please check the contents of collect_env_details.py before running it.
python collect_env_details.py
```
- PyTorch Version (1.5):
- OS (Linux):
### Additional context
-->
<!-- Add any other context about the problem here. -->
The error occurs in `pytorch-lightning/pytorch_lightning/trainer/optimizers.py", line 122`.
```python
def reinit_scheduler_properties(self, optimizers: list, schedulers: list):
# Reinitialize optimizer.step properties added by schedulers
for scheduler in schedulers:
for optimizer in optimizers:
scheduler = scheduler['scheduler']
# check that we dont mix users optimizers and schedulers
if scheduler.optimizer == optimizer:
# Find the mro belonging to the base lr scheduler class
for i, mro in enumerate(scheduler.__class__.__mro__):
if mro == optim.lr_scheduler._LRScheduler:
idx = i
scheduler.__class__.__mro__[idx].__init__(scheduler, optimizer)
```
The `idx` local variable is unassigned because `optim.lr_scheduler.ReduceLROnPlateau` is not a subclass of `optim.lr_scheduler._LRScheduler`.
I could work around the error by adding a specific check for `optim.lr_scheduler.ReduceLROnPlateau` but I'm not sure if this is a good solution.
```python
def reinit_scheduler_properties(self, optimizers: list, schedulers: list):
# Reinitialize optimizer.step properties added by schedulers
for scheduler in schedulers:
for optimizer in optimizers:
scheduler = scheduler['scheduler']
# check that we dont mix users optimizers and schedulers
if scheduler.optimizer == optimizer:
# Find the mro belonging to the base lr scheduler class
for i, mro in enumerate(scheduler.__class__.__mro__):
if mro == optim.lr_scheduler._LRScheduler:
idx = i
elif mro == optim.lr_scheduler.ReduceLROnPlateau:
idx = i
scheduler.__class__.__mro__[idx].__init__(scheduler, optimizer)
```
### Related issue in PyTorch:
ReduceLROnPlateau parent class is not _LRScheduler #21981
https://github.com/pytorch/pytorch/issues/21981
| Hi! thanks for your contribution!, great first issue!
@naokishibuya good catch. It seems like a problem that should be solved upstream in pytorch, but for now we can solve this locally. Would you be up for a PR?
When I tried this fix, it solved the error but unfortunately `ReduceLROnPlateau` stopped working for me (i.e. there was no indication of the LR decreasing with `verbose=True` or on TensorBoard). If I switched back to `precision=32`, it works normally again
I think that the fix is actually working, however only calling `__init__(scheduler, optimizer)` will reset all other arguments (patience, mode, ect) to default values for the `ReduceLrOnPlauteau` scheduler. A solution to this is to copy over these properties:
```
__init__(scheduler, optimizer, patience=scheduler.patience,mode=scheduler.mode,...)
```
Again I think this is a bit hacky, and a proper solution upstream in pytorch is better.
I think this does the trick for me:
```python
def reinit_scheduler_properties(self, optimizers: list, schedulers: list):
# Reinitialize optimizer.step properties added by schedulers
for scheduler in schedulers:
for optimizer in optimizers:
scheduler = scheduler["scheduler"]
# check that we dont mix users optimizers and schedulers
if scheduler.optimizer == optimizer:
# Find the mro belonging to the base lr scheduler class
for i, mro in enumerate(scheduler.__class__.__mro__):
if (
mro == optim.lr_scheduler._LRScheduler
or mro == optim.lr_scheduler.ReduceLROnPlateau
):
idx = i
state = scheduler.state_dict()
else:
state = None
scheduler.__class__.__mro__[idx].__init__(scheduler, optimizer)
if state is not None:
scheduler.load_state_dict(state)
```
Happy to open a PR if it looks ok to you guys | 2020-06-25T02:42:06Z | [] | [] |
Traceback (most recent call last):
File "main.py", line 65, in <module>
main()
File "main.py", line 61, in main
trainer.fit(model)
File "/workspace/pytorch-lightning/pytorch_lightning/trainer/trainer.py", line 889, in fit
self.dp_train(model)
File "/workspace/pytorch-lightning/pytorch_lightning/trainer/distrib_parts.py", line 223, in dp_train
self.reinit_scheduler_properties(optimizers, self.lr_schedulers)
File "/workspace/pytorch-lightning/pytorch_lightning/trainer/optimizers.py", line 122, in reinit_scheduler_properties
scheduler.__class__.__mro__[idx].__init__(scheduler, optimizer)
UnboundLocalError: local variable 'idx' referenced before assignment
| 219 |
|||
Lightning-AI/lightning | Lightning-AI__lightning-2358 | a5f45787eabddfec4559983f8e6ba1c8317f62f1 | diff --git a/pl_examples/basic_examples/gpu_template.py b/pl_examples/basic_examples/gpu_template.py
--- a/pl_examples/basic_examples/gpu_template.py
+++ b/pl_examples/basic_examples/gpu_template.py
@@ -61,7 +61,8 @@ def main(hparams):
'--distributed_backend',
type=str,
default='dp',
- help='supports four options dp, ddp, ddp2, ddp_spawn'
+ help='supports four options dp, ddp, ddp2, ddp_spawn, ...',
+ choices=['dp', 'ddp', 'ddp2', 'ddp_spawn', 'ddp_cpu'],
)
parent_parser.add_argument(
'--use_16bit',
diff --git a/pytorch_lightning/core/saving.py b/pytorch_lightning/core/saving.py
--- a/pytorch_lightning/core/saving.py
+++ b/pytorch_lightning/core/saving.py
@@ -279,7 +279,7 @@ def load_hparams_from_tags_csv(tags_csv: str) -> Dict[str, Any]:
"""Load hparams from a file.
>>> hparams = Namespace(batch_size=32, learning_rate=0.001, data_root='./any/path/here')
- >>> path_csv = './testing-hparams.csv'
+ >>> path_csv = os.path.join('.', 'testing-hparams.csv')
>>> save_hparams_to_tags_csv(path_csv, hparams)
>>> hparams_new = load_hparams_from_tags_csv(path_csv)
>>> vars(hparams) == hparams_new
@@ -304,7 +304,7 @@ def save_hparams_to_tags_csv(tags_csv: str, hparams: Union[dict, Namespace]) ->
if isinstance(hparams, Namespace):
hparams = vars(hparams)
- with open(tags_csv, 'w') as fp:
+ with open(tags_csv, 'w', newline='') as fp:
fieldnames = ['key', 'value']
writer = csv.DictWriter(fp, fieldnames=fieldnames)
writer.writerow({'key': 'key', 'value': 'value'})
diff --git a/pytorch_lightning/metrics/converters.py b/pytorch_lightning/metrics/converters.py
--- a/pytorch_lightning/metrics/converters.py
+++ b/pytorch_lightning/metrics/converters.py
@@ -10,8 +10,16 @@
import numpy as np
import torch
from torch.utils.data._utils.collate import np_str_obj_array_pattern
-
from pytorch_lightning.utilities.apply_func import apply_to_collection
+from pytorch_lightning.utilities import rank_zero_warn
+
+try:
+ from torch.distributed import ReduceOp
+except ImportError:
+ class ReduceOp:
+ SUM = None
+
+ rank_zero_warn('Unsupported `ReduceOp` for distributed computing.')
def _apply_to_inputs(func_to_apply: Callable, *dec_args, **dec_kwargs) -> Callable:
@@ -217,7 +225,7 @@ def _tensor_collection_metric_conversion(func_to_decorate: Callable) -> Callable
def _sync_ddp_if_available(result: Union[torch.Tensor],
group: Optional[Any] = None,
- reduce_op: Optional[torch.distributed.ReduceOp] = None,
+ reduce_op: Optional[ReduceOp] = None,
) -> torch.Tensor:
"""
Function to reduce the tensors from several ddp processes to one master process
@@ -247,7 +255,7 @@ def _sync_ddp_if_available(result: Union[torch.Tensor],
def sync_ddp(group: Optional[Any] = None,
- reduce_op: Optional[torch.distributed.ReduceOp] = None) -> Callable:
+ reduce_op: Optional[ReduceOp] = None) -> Callable:
"""
This decorator syncs a functions outputs across different processes for DDP.
@@ -269,7 +277,7 @@ def decorator_fn(func_to_decorate):
def numpy_metric(group: Optional[Any] = None,
- reduce_op: Optional[torch.distributed.ReduceOp] = None) -> Callable:
+ reduce_op: Optional[ReduceOp] = None) -> Callable:
"""
This decorator shall be used on all function metrics working on numpy arrays.
It handles the argument conversion and DDP reduction for metrics working on numpy.
@@ -292,7 +300,7 @@ def decorator_fn(func_to_decorate):
def tensor_metric(group: Optional[Any] = None,
- reduce_op: Optional[torch.distributed.ReduceOp] = None) -> Callable:
+ reduce_op: Optional[ReduceOp] = None) -> Callable:
"""
This decorator shall be used on all function metrics working on tensors.
It handles the argument conversion and DDP reduction for metrics working on tensors.
@@ -314,7 +322,7 @@ def decorator_fn(func_to_decorate):
def tensor_collection_metric(group: Optional[Any] = None,
- reduce_op: Optional[torch.distributed.ReduceOp] = None) -> Callable:
+ reduce_op: Optional[ReduceOp] = None) -> Callable:
"""
This decorator shall be used on all function metrics working on tensors and returning collections
that cannot be converted to tensors.
diff --git a/pytorch_lightning/metrics/sklearns.py b/pytorch_lightning/metrics/sklearns.py
--- a/pytorch_lightning/metrics/sklearns.py
+++ b/pytorch_lightning/metrics/sklearns.py
@@ -5,6 +5,18 @@
from pytorch_lightning import _logger as lightning_logger
from pytorch_lightning.metrics.metric import NumpyMetric
+from pytorch_lightning.utilities import rank_zero_warn
+
+try:
+ from torch.distributed import ReduceOp, group
+except ImportError:
+ class ReduceOp:
+ SUM = None
+
+ class group:
+ WORLD = None
+
+ rank_zero_warn('Unsupported `ReduceOp` for distributed computing.')
class SklearnMetric(NumpyMetric):
@@ -20,8 +32,8 @@ class SklearnMetric(NumpyMetric):
def __init__(
self,
metric_name: str,
- reduce_group: Any = torch.distributed.group.WORLD,
- reduce_op: Any = torch.distributed.ReduceOp.SUM,
+ reduce_group: Any = group.WORLD,
+ reduce_op: Any = ReduceOp.SUM,
**kwargs,
):
"""
@@ -82,8 +94,8 @@ class Accuracy(SklearnMetric):
def __init__(
self,
normalize: bool = True,
- reduce_group: Any = torch.distributed.group.WORLD,
- reduce_op: Any = torch.distributed.ReduceOp.SUM,
+ reduce_group: Any = group.WORLD,
+ reduce_op: Any = ReduceOp.SUM,
):
"""
Args:
@@ -136,8 +148,8 @@ class AUC(SklearnMetric):
"""
def __init__(
self,
- reduce_group: Any = torch.distributed.group.WORLD,
- reduce_op: Any = torch.distributed.ReduceOp.SUM,
+ reduce_group: Any = group.WORLD,
+ reduce_op: Any = ReduceOp.SUM,
):
"""
Args:
@@ -174,8 +186,8 @@ class AveragePrecision(SklearnMetric):
def __init__(
self,
average: Optional[str] = 'macro',
- reduce_group: Any = torch.distributed.group.WORLD,
- reduce_op: Any = torch.distributed.ReduceOp.SUM,
+ reduce_group: Any = group.WORLD,
+ reduce_op: Any = ReduceOp.SUM,
):
"""
Args:
@@ -240,8 +252,8 @@ class ConfusionMatrix(SklearnMetric):
"""
def __init__(
self, labels: Optional[Sequence] = None,
- reduce_group: Any = torch.distributed.group.WORLD,
- reduce_op: Any = torch.distributed.ReduceOp.SUM,
+ reduce_group: Any = group.WORLD,
+ reduce_op: Any = ReduceOp.SUM,
):
"""
Args:
@@ -304,8 +316,8 @@ def __init__(
self, labels: Optional[Sequence] = None,
pos_label: Union[str, int] = 1,
average: Optional[str] = 'macro',
- reduce_group: Any = torch.distributed.group.WORLD,
- reduce_op: Any = torch.distributed.ReduceOp.SUM,
+ reduce_group: Any = group.WORLD,
+ reduce_op: Any = ReduceOp.SUM,
):
"""
Args:
@@ -397,8 +409,8 @@ def __init__(
labels: Optional[Sequence] = None,
pos_label: Union[str, int] = 1,
average: Optional[str] = 'macro',
- reduce_group: Any = torch.distributed.group.WORLD,
- reduce_op: Any = torch.distributed.ReduceOp.SUM,
+ reduce_group: Any = group.WORLD,
+ reduce_op: Any = ReduceOp.SUM,
):
"""
Args:
@@ -488,8 +500,8 @@ def __init__(
labels: Optional[Sequence] = None,
pos_label: Union[str, int] = 1,
average: Optional[str] = 'macro',
- reduce_group: Any = torch.distributed.group.WORLD,
- reduce_op: Any = torch.distributed.ReduceOp.SUM,
+ reduce_group: Any = group.WORLD,
+ reduce_op: Any = ReduceOp.SUM,
):
"""
Args:
@@ -576,8 +588,8 @@ def __init__(
labels: Optional[Sequence] = None,
pos_label: Union[str, int] = 1,
average: Optional[str] = 'macro',
- reduce_group: Any = torch.distributed.group.WORLD,
- reduce_op: Any = torch.distributed.ReduceOp.SUM,
+ reduce_group: Any = group.WORLD,
+ reduce_op: Any = ReduceOp.SUM,
):
"""
Args:
@@ -663,8 +675,8 @@ class PrecisionRecallCurve(SklearnMetric):
def __init__(
self,
pos_label: Union[str, int] = 1,
- reduce_group: Any = torch.distributed.group.WORLD,
- reduce_op: Any = torch.distributed.ReduceOp.SUM,
+ reduce_group: Any = group.WORLD,
+ reduce_op: Any = ReduceOp.SUM,
):
"""
Args:
@@ -737,8 +749,8 @@ class ROC(SklearnMetric):
def __init__(
self,
pos_label: Union[str, int] = 1,
- reduce_group: Any = torch.distributed.group.WORLD,
- reduce_op: Any = torch.distributed.ReduceOp.SUM,
+ reduce_group: Any = group.WORLD,
+ reduce_op: Any = ReduceOp.SUM,
):
"""
Args:
@@ -795,8 +807,8 @@ class AUROC(SklearnMetric):
def __init__(
self,
average: Optional[str] = 'macro',
- reduce_group: Any = torch.distributed.group.WORLD,
- reduce_op: Any = torch.distributed.ReduceOp.SUM,
+ reduce_group: Any = group.WORLD,
+ reduce_op: Any = ReduceOp.SUM,
):
"""
Args:
diff --git a/pytorch_lightning/trainer/data_loading.py b/pytorch_lightning/trainer/data_loading.py
--- a/pytorch_lightning/trainer/data_loading.py
+++ b/pytorch_lightning/trainer/data_loading.py
@@ -35,7 +35,7 @@
try:
import horovod.torch as hvd
-except ImportError:
+except (ModuleNotFoundError, ImportError):
HOROVOD_AVAILABLE = False
else:
HOROVOD_AVAILABLE = True
diff --git a/pytorch_lightning/trainer/distrib_data_parallel.py b/pytorch_lightning/trainer/distrib_data_parallel.py
--- a/pytorch_lightning/trainer/distrib_data_parallel.py
+++ b/pytorch_lightning/trainer/distrib_data_parallel.py
@@ -139,7 +139,7 @@ def train_fx(trial_hparams, cluster_manager, _):
try:
import horovod.torch as hvd
-except ImportError:
+except (ModuleNotFoundError, ImportError):
HOROVOD_AVAILABLE = False
else:
HOROVOD_AVAILABLE = True
diff --git a/pytorch_lightning/trainer/distrib_parts.py b/pytorch_lightning/trainer/distrib_parts.py
--- a/pytorch_lightning/trainer/distrib_parts.py
+++ b/pytorch_lightning/trainer/distrib_parts.py
@@ -38,7 +38,7 @@
try:
import horovod.torch as hvd
-except ImportError:
+except (ModuleNotFoundError, ImportError):
HOROVOD_AVAILABLE = False
else:
HOROVOD_AVAILABLE = True
diff --git a/pytorch_lightning/trainer/evaluation_loop.py b/pytorch_lightning/trainer/evaluation_loop.py
--- a/pytorch_lightning/trainer/evaluation_loop.py
+++ b/pytorch_lightning/trainer/evaluation_loop.py
@@ -144,7 +144,7 @@
try:
import horovod.torch as hvd
-except ImportError:
+except (ModuleNotFoundError, ImportError):
HOROVOD_AVAILABLE = False
else:
HOROVOD_AVAILABLE = True
diff --git a/pytorch_lightning/trainer/trainer.py b/pytorch_lightning/trainer/trainer.py
--- a/pytorch_lightning/trainer/trainer.py
+++ b/pytorch_lightning/trainer/trainer.py
@@ -52,7 +52,7 @@
try:
import horovod.torch as hvd
-except ImportError:
+except (ModuleNotFoundError, ImportError):
HOROVOD_AVAILABLE = False
else:
HOROVOD_AVAILABLE = True
@@ -255,7 +255,7 @@ def __init__(
Use `row_log_interval` instead. Will remove 0.9.0.
- distributed_backend: The distributed backend to use (dp, ddp, ddp2, ddp_spawn)
+ distributed_backend: The distributed backend to use (dp, ddp, ddp2, ddp_spawn, ddp_cpu)
use_amp:
.. warning:: .. deprecated:: 0.7.0
@@ -885,7 +885,7 @@ def fit(
task = int(os.environ['LOCAL_RANK'])
self.ddp_train(task, model)
- elif self.distributed_backend == 'cpu_ddp':
+ elif self.distributed_backend == 'ddp_cpu':
self.set_random_port()
self.model = model
mp.spawn(self.ddp_train, nprocs=self.num_processes, args=(model,))
diff --git a/pytorch_lightning/trainer/training_io.py b/pytorch_lightning/trainer/training_io.py
--- a/pytorch_lightning/trainer/training_io.py
+++ b/pytorch_lightning/trainer/training_io.py
@@ -114,7 +114,7 @@
try:
import horovod.torch as hvd
-except ImportError:
+except (ModuleNotFoundError, ImportError):
HOROVOD_AVAILABLE = False
else:
HOROVOD_AVAILABLE = True
diff --git a/pytorch_lightning/trainer/training_loop.py b/pytorch_lightning/trainer/training_loop.py
--- a/pytorch_lightning/trainer/training_loop.py
+++ b/pytorch_lightning/trainer/training_loop.py
@@ -183,7 +183,7 @@ def training_step(self, batch, batch_idx):
try:
import horovod.torch as hvd
-except ImportError:
+except (ModuleNotFoundError, ImportError):
HOROVOD_AVAILABLE = False
else:
HOROVOD_AVAILABLE = True
diff --git a/pytorch_lightning/utilities/cloud_io.py b/pytorch_lightning/utilities/cloud_io.py
--- a/pytorch_lightning/utilities/cloud_io.py
+++ b/pytorch_lightning/utilities/cloud_io.py
@@ -5,8 +5,7 @@
def load(path_or_url: str, map_location=None):
- parsed = urlparse(path_or_url)
- if parsed.scheme == '' or Path(path_or_url).is_file():
- # no scheme or local file
+ if urlparse(path_or_url).scheme == '' or Path(path_or_url).drive: # no scheme or with a drive letter
return torch.load(path_or_url, map_location=map_location)
- return torch.hub.load_state_dict_from_url(path_or_url, map_location=map_location)
+ else:
+ return torch.hub.load_state_dict_from_url(path_or_url, map_location=map_location)
| accuracy metric dosen't support windows
## 🐛 Bug
Pytorch Metric.Accuracy uses `ReduceOp` from 'torch.distribution' but torch.distributrion doesn't support `windows`
- https://github.com/pytorch/pytorch/blob/cf8a9b50cacb1702f5855859c657a5358976437b/torch/distributed/__init__.py#L10 : `torch.distributed is available on Linux and MacOS.`
### To Reproduce
Use Metric.Accuracy in Windows environment
<!-- If you have a code sample, error messages, stack traces, please provide it here as well -->
#### Code sample
- I use code sample from `https://github.com/PyTorchLightning/pytorch-lightning/issues/2355`
### Expected behavior
add check OS in `metric.accuracy` and use condition for import different module
```
try:
return platform.linux_distribution()
except:
return "N/A"
```
or warning to windows user, they can't use `metric.accuracy`
### Environment
```
* CUDA:
- GPU:
- GeForce RTX 2080 Ti
- GeForce GTX 1080 Ti
- available: True
- version: 10.1
* Packages:
- numpy: 1.18.1
- pyTorch_debug: False
- pyTorch_version: 1.5.1
- pytorch-lightning: 0.8.1
- tensorboard: 2.2.1
- tqdm: 4.46.0
* System:
- OS: Windows
- architecture:
- 64bit
- WindowsPE
- processor: Intel64 Family 6 Model 158 Stepping 10, GenuineIntel
- python: 3.6.10
- version: 10.0.18362
```
### Additional context
```
Traceback (most recent call last):
File "test.py", line 11, in <module>
from pytorch_lightning.metrics.functional import accuracy
File "C:\Users\dcho\Anaconda3\envs\torch_py36\lib\site-packages\pytorch_lightning\metrics\__init__.py", line 1, in <module>
from pytorch_lightning.metrics.converters import numpy_metric, tensor_metric
File "C:\Users\dcho\Anaconda3\envs\torch_py36\lib\site-packages\pytorch_lightning\metrics\converters.py", line 220, in <module>
reduce_op: Optional[torch.distributed.ReduceOp] = None,
AttributeError: module 'torch.distributed' has no attribute 'ReduceOp'
```
<!-- Add any other context about the problem here. -->
Always thanks for developing & maintaining the cool framework
| 2020-06-25T07:51:08Z | [] | [] |
Traceback (most recent call last):
File "test.py", line 11, in <module>
from pytorch_lightning.metrics.functional import accuracy
File "C:\Users\dcho\Anaconda3\envs\torch_py36\lib\site-packages\pytorch_lightning\metrics\__init__.py", line 1, in <module>
from pytorch_lightning.metrics.converters import numpy_metric, tensor_metric
File "C:\Users\dcho\Anaconda3\envs\torch_py36\lib\site-packages\pytorch_lightning\metrics\converters.py", line 220, in <module>
reduce_op: Optional[torch.distributed.ReduceOp] = None,
AttributeError: module 'torch.distributed' has no attribute 'ReduceOp'
| 220 |
||||
Lightning-AI/lightning | Lightning-AI__lightning-2360 | f2710bb500be017d48ccc6cf596bbed6cc9bdad5 | diff --git a/pytorch_lightning/trainer/trainer.py b/pytorch_lightning/trainer/trainer.py
--- a/pytorch_lightning/trainer/trainer.py
+++ b/pytorch_lightning/trainer/trainer.py
@@ -1193,7 +1193,8 @@ def test(
self.teardown('test')
if self.is_function_implemented('teardown'):
- self.model.teardown('test')
+ model_ref = self.get_model()
+ model_ref.teardown('test')
def check_model_configuration(self, model: LightningModule):
r"""
| AttributeError: 'LightningDataParallel' object has no attribute 'teardown'
<!--
### Common bugs:
1. Tensorboard not showing in Jupyter-notebook see [issue 79](https://github.com/PyTorchLightning/pytorch-lightning/issues/79).
2. PyTorch 1.1.0 vs 1.2.0 support [see FAQ](https://github.com/PyTorchLightning/pytorch-lightning#faq)
-->
## 🐛 Bug
<!-- A clear and concise description of what the bug is. -->
### To Reproduce
Steps to reproduce the behavior:
trainer = pytorch_lightning.Trainer(
gpus=2,
distributed_backend='dp'
)
model = BaseModel.load_from_checkpoint(...)
trainer.test(model)
Traceback (most recent call last):
File "run_kitti.py", line 351, in <module>
trainer.test(model)
File "/opt/conda/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 1198, in test
self.model.teardown('test')
File "/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py", line 594, in __getattr__
type(self).__name__, name))
AttributeError: 'LightningDataParallel' object has no attribute 'teardown'
<!-- If you have a code sample, error messages, stack traces, please provide it here as well -->
#### Code sample
<!-- Ideally attach a minimal code sample to reproduce the decried issue.
Minimal means having the shortest code but still preserving the bug. -->
### Expected behavior
<!-- A clear and concise description of what you expected to happen. -->
### Environment
* CUDA:
- GPU:
- GeForce GTX 1080 Ti
- GeForce GTX 1080 Ti
- available: True
- version: 10.1
* Packages:
- numpy: 1.18.1
- pyTorch_debug: False
- pyTorch_version: 1.5.1
- pytorch-lightning: 0.8.1
- tensorboard: 2.2.2
- tqdm: 4.46.0
* System:
- OS: Linux
- architecture:
- 64bit
-
- processor: x86_64
- python: 3.7.7
- version: #53~18.04.1-Ubuntu SMP Thu Jun 4 14:58:26 UTC 2020
### Additional context
<!-- Add any other context about the problem here. -->
If I'm not missing something, this AttributeError is a bug on your side.
| Hi! thanks for your contribution!, great first issue!
+1 on this issue.
Also confirm this issue. | 2020-06-25T14:11:42Z | [] | [] |
Traceback (most recent call last):
File "run_kitti.py", line 351, in <module>
trainer.test(model)
File "/opt/conda/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 1198, in test
self.model.teardown('test')
File "/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py", line 594, in __getattr__
type(self).__name__, name))
AttributeError: 'LightningDataParallel' object has no attribute 'teardown'
| 221 |
|||
Lightning-AI/lightning | Lightning-AI__lightning-2428 | a75398530c3447ecf13f043a1bc817929b90fd65 | diff --git a/pytorch_lightning/trainer/training_loop.py b/pytorch_lightning/trainer/training_loop.py
--- a/pytorch_lightning/trainer/training_loop.py
+++ b/pytorch_lightning/trainer/training_loop.py
@@ -776,6 +776,7 @@ def optimizer_closure(self, split_batch, batch_idx, opt_idx, optimizer, hiddens)
# PROCESS THE RESULT
# ----------------------------
# format and reduce outputs accordingly
+ training_step_output_for_epoch_end = training_step_output
training_step_output = self.process_output(training_step_output, train=True)
# TODO: temporary part of structured results PR
@@ -788,7 +789,7 @@ def optimizer_closure(self, split_batch, batch_idx, opt_idx, optimizer, hiddens)
)
# if the user decides to finally reduce things in epoch_end, save raw output without graphs
- training_step_output_for_epoch_end = recursive_detach(training_step_output)
+ training_step_output_for_epoch_end = recursive_detach(training_step_output_for_epoch_end)
# accumulate loss
# (if accumulate_grad_batches = 1 no effect)
| training_epoch_end's outputs doesn't have 'loss' key
pytorch-lightning: build from master
```
Traceback (most recent call last):
File "main.py", line 140, in <module>
main(hparams)
File "main.py", line 72, in main
trainer.fit(model)
File "/mnt/lustre/maxiao1/anaconda3/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 881, in fit
self.ddp_train(task, model)
File "/mnt/lustre/maxiao1/anaconda3/lib/python3.7/site-packages/pytorch_lightning/trainer/distrib_data_parallel.py", line 539, in ddp_train
self.run_pretrain_routine(model)
File "/mnt/lustre/maxiao1/anaconda3/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 1091, in run_pretrain_routine
self.train()
File "/mnt/lustre/maxiao1/anaconda3/lib/python3.7/site-packages/pytorch_lightning/trainer/training_loop.py", line 376, in train
self.run_training_epoch()
File "/mnt/lustre/maxiao1/anaconda3/lib/python3.7/site-packages/pytorch_lightning/trainer/training_loop.py", line 510, in run_training_epoch
self.run_training_epoch_end(epoch_output)
File "/mnt/lustre/maxiao1/anaconda3/lib/python3.7/site-packages/pytorch_lightning/trainer/training_loop.py", line 535, in run_training_epoch_end
epoch_output = model.training_epoch_end(epoch_output)
File "/mnt/lustre/maxiao1/PVM/models/baseline.py", line 335, in training_epoch_end
avg_loss = torch.stack([x['loss'] for x in outputs]).mean()
File "/mnt/lustre/maxiao1/PVM/models/baseline.py", line 335, in <listcomp>
avg_loss = torch.stack([x['loss'] for x in outputs]).mean()
KeyError: 'loss'
```
This is my code:
```
def training_step(self, batch, batch_idx):
...
return {'loss': loss, "train_acc": acc}
def training_epoch_end(self, outputs):
avg_loss = torch.stack([x['loss'] for x in outputs]).mean()
avg_acc = torch.stack([x['train_acc'] for x in outputs]).mean()
logs = {'loss': avg_loss, 'train_acc': avg_acc}
progress_bar = {'train_loss': avg_loss, 'train_acc': avg_acc}
results = {
'log': logs,
'progress_bar': progress_bar
}
return results
```
| Try: `avg_loss = torch.stack([x['batch_loss'] for x in outputs]).mean()`
Thanks๏ผ it works
but 'train_acc' key doesn't exist, neither do `batch_train_acc`. How to access other keys returned in training_step?
As of now in lightning you can access them using `x['callback_metrics']['loss']` and `x['callback_metrics']['train_acc']`, but I think it should be handled in a similar way we do this with `validation_epoch_end` and `test_epoch_end`.
Hi! One hint: for me it works with "loss" under windows but not under ubuntu.
Weird!! Why is this think platform dependent?? :thinking:
@Pet222 , are u sure that versions on ubuntu and windows are same?
Hey @williamFalcon is this intended behaviour? I was surprised to see this breaking change being introduced with no warning.
If it is intended, why not have consistent behaviour over `validation_epoch_end` and `test_epoch_end`.
If it is not intended, as it seems due to the "bug fix" tag, are you working on it or should I make a PR for this?
what is the behavior? that the "loss" key is not in training_epoch_end? If so, that's a bug because it should be there
@williamFalcon , on the latest version, the `loss` key was changed to the `batch_loss`. I think it was changed [here](https://github.com/PyTorchLightning/pytorch-lightning/commit/0f073819d3e0df8db7602eab489b1bad0fc0949c#diff-c45bd21c331565cbe62aaa12fa43aa0aR717)
Yes, the fact that you need to access it through 'callback metrics'.
Got it!
On Tue, 30 Jun 2020 at 12:44, William Falcon <notifications@github.com>
wrote:
> what is the behavior? that the "loss" key is not in training_epoch_end? If
> so, that's a bug because it should be there
>
> —
> You are receiving this because you commented.
> Reply to this email directly, view it on GitHub
> <https://github.com/PyTorchLightning/pytorch-lightning/issues/2372#issuecomment-651740702>,
> or unsubscribe
> <https://github.com/notifications/unsubscribe-auth/ABKWP6XTUJDTEDJ2NZQ3RKTRZHFY5ANCNFSM4OJKX4KQ>
> .
>
--
Best Regards,
Miguel Vera
+351 915 198 452
miguel.coimbra.vera@protonmail.com
Github/Captainvera <http://www.github.com/captainvera>
@captainvera would love a PR :) | 2020-06-30T13:23:18Z | [] | [] |
Traceback (most recent call last):
File "main.py", line 140, in <module>
main(hparams)
File "main.py", line 72, in main
trainer.fit(model)
File "/mnt/lustre/maxiao1/anaconda3/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 881, in fit
self.ddp_train(task, model)
File "/mnt/lustre/maxiao1/anaconda3/lib/python3.7/site-packages/pytorch_lightning/trainer/distrib_data_parallel.py", line 539, in ddp_train
self.run_pretrain_routine(model)
File "/mnt/lustre/maxiao1/anaconda3/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 1091, in run_pretrain_routine
self.train()
File "/mnt/lustre/maxiao1/anaconda3/lib/python3.7/site-packages/pytorch_lightning/trainer/training_loop.py", line 376, in train
self.run_training_epoch()
File "/mnt/lustre/maxiao1/anaconda3/lib/python3.7/site-packages/pytorch_lightning/trainer/training_loop.py", line 510, in run_training_epoch
self.run_training_epoch_end(epoch_output)
File "/mnt/lustre/maxiao1/anaconda3/lib/python3.7/site-packages/pytorch_lightning/trainer/training_loop.py", line 535, in run_training_epoch_end
epoch_output = model.training_epoch_end(epoch_output)
File "/mnt/lustre/maxiao1/PVM/models/baseline.py", line 335, in training_epoch_end
avg_loss = torch.stack([x['loss'] for x in outputs]).mean()
File "/mnt/lustre/maxiao1/PVM/models/baseline.py", line 335, in <listcomp>
avg_loss = torch.stack([x['loss'] for x in outputs]).mean()
KeyError: 'loss'
| 230 |
|||
Lightning-AI/lightning | Lightning-AI__lightning-2433 | d4a02e3bd8471946c606fef7512ce44d42f07d3a | diff --git a/pytorch_lightning/trainer/training_loop.py b/pytorch_lightning/trainer/training_loop.py
--- a/pytorch_lightning/trainer/training_loop.py
+++ b/pytorch_lightning/trainer/training_loop.py
@@ -802,9 +802,22 @@ def optimizer_closure(self, split_batch, batch_idx, opt_idx, optimizer, hiddens)
if self.precision == 16 and not self.on_tpu:
closure_loss = model_ref.amp_scale_loss(closure_loss, optimizer, opt_idx)
+ # enter amp context
+ if not NATIVE_AMP_AVALAIBLE:
+ context = closure_loss
+ closure_loss = closure_loss.__enter__()
+
# do backward pass
model_ref.backward(self, closure_loss, optimizer, opt_idx)
+ # exit amp context
+ if self.precision == 16 and not NATIVE_AMP_AVALAIBLE:
+ a, b, c = None, None, None
+ error = context.__exit__(a, b, c)
+ if error:
+ rank_zero_warn(a, b, c)
+ raise Exception('apex unscale error')
+
# once backward has been applied, release graph
closure_loss = closure_loss.detach()
training_step_output.batch_loss = training_step_output.batch_loss.detach()
| 0.8.2 calls backward on '_GeneratorContextManager'
## 🐛 Bug
0.8.2 calls backward on '_GeneratorContextManager' and crashes training.
0.8.1 works correctly. my `training_step` returns `{'loss':loss, 'log':{'learn_rate':self.lr}}`
```
Traceback (most recent call last):
File "/opt/conda/lib/python3.6/site-packages/torch/multiprocessing/spawn.py", line 20, in _wrap
fn(i, *args)
File "/opt/conda/lib/python3.6/site-packages/pytorch_lightning/trainer/distrib_data_parallel.py", line 538, in ddp_train
self.run_pretrain_routine(model)
File "/opt/conda/lib/python3.6/site-packages/pytorch_lightning/trainer/trainer.py", line 1100, in run_pretrain_routine
self.train()
File "/opt/conda/lib/python3.6/site-packages/pytorch_lightning/trainer/training_loop.py", line 370, in train
self.run_training_epoch()
File "/opt/conda/lib/python3.6/site-packages/pytorch_lightning/trainer/training_loop.py", line 452, in run_training_epoch
batch_output = self.run_training_batch(batch, batch_idx)
File "/opt/conda/lib/python3.6/site-packages/pytorch_lightning/trainer/training_loop.py", line 630, in run_training_batch
self.hiddens
File "/opt/conda/lib/python3.6/site-packages/pytorch_lightning/trainer/training_loop.py", line 804, in optimizer_closure
model_ref.backward(self, closure_loss, optimizer, opt_idx)
File "/opt/conda/lib/python3.6/site-packages/pytorch_lightning/core/hooks.py", line 189, in backward
loss.backward()
AttributeError: '_GeneratorContextManager' object has no attribute 'backward'
```
### Expected behavior
backward is called on the loss and training runs correctly
| did you override optimizer step?
could you try master? we just pushed a fix to a typo we had
Can confirm this happens on 0.8.3
ok. Can you post a colab example that replicates this?
@Anjum48 @s-rog
colab please
@williamFalcon my optimizer step was untouched, I can't run more testing atm but I'll get to it as soon as I can
@williamFalcon Hi I also encountered this, with normal Adam optimizer. I don't have a colab to replicate this atm but from what I saw earlier, this can be replicated with any setting as long as the Trainer is set to precision=16 when using Apex. Under this condition, the following lines from training_loop.py and hooks.py will run:
`if self.precision == 16 and not self.on_tpu
closure_loss = model_ref.amp_scale_loss(closure_loss, optimizer, opt_idx) `
`scaled_loss = amp.scale_loss(unscaled_loss, optimizer)`
will cause the closure_loss be a _GeneratorContextManager object. Which then cannot have a **backward()** method.
It seems under the current design, pytorch lighting's **scale_loss** function can only be used as a context?
@williamFalcon Here's a colab example (my first time using colab so let me know if you have issues seeing it) https://colab.research.google.com/drive/1G08jVDpx-T-5HE2c89RLJdq4u67mM2-o?usp=sharing
I suspect the issue lies with Apex AMP as suggested above by @aeryen
ummm. I think this is an apex issue. I can't replicate it with 16-bit native.
![image](https://user-images.githubusercontent.com/3640001/86135032-4c97ff80-bab8-11ea-942e-ffaae17aff07.png)
@aeryen min share a minimal example to reproduce?
hi sorry for the delay: https://colab.research.google.com/drive/1rjaRRwgBTm4CKPfe9po_WSxnKqY4jDRv?usp=sharing
I agree this is an apex issue, i.e. only occur when NATIVE_AMP_AVALAIBLE is false in the hooks.py | 2020-06-30T18:33:09Z | [] | [] |
Traceback (most recent call last):
File "/opt/conda/lib/python3.6/site-packages/torch/multiprocessing/spawn.py", line 20, in _wrap
fn(i, *args)
File "/opt/conda/lib/python3.6/site-packages/pytorch_lightning/trainer/distrib_data_parallel.py", line 538, in ddp_train
self.run_pretrain_routine(model)
File "/opt/conda/lib/python3.6/site-packages/pytorch_lightning/trainer/trainer.py", line 1100, in run_pretrain_routine
self.train()
File "/opt/conda/lib/python3.6/site-packages/pytorch_lightning/trainer/training_loop.py", line 370, in train
self.run_training_epoch()
File "/opt/conda/lib/python3.6/site-packages/pytorch_lightning/trainer/training_loop.py", line 452, in run_training_epoch
batch_output = self.run_training_batch(batch, batch_idx)
File "/opt/conda/lib/python3.6/site-packages/pytorch_lightning/trainer/training_loop.py", line 630, in run_training_batch
self.hiddens
File "/opt/conda/lib/python3.6/site-packages/pytorch_lightning/trainer/training_loop.py", line 804, in optimizer_closure
model_ref.backward(self, closure_loss, optimizer, opt_idx)
File "/opt/conda/lib/python3.6/site-packages/pytorch_lightning/core/hooks.py", line 189, in backward
loss.backward()
AttributeError: '_GeneratorContextManager' object has no attribute 'backward'
| 231 |
|||
Lightning-AI/lightning | Lightning-AI__lightning-2565 | e1bc208f66891e22f0139619a1be5c06235a0f34 | diff --git a/pytorch_lightning/trainer/distrib_data_parallel.py b/pytorch_lightning/trainer/distrib_data_parallel.py
--- a/pytorch_lightning/trainer/distrib_data_parallel.py
+++ b/pytorch_lightning/trainer/distrib_data_parallel.py
@@ -189,6 +189,7 @@ class TrainerDDPMixin(ABC):
num_nodes: int
node_rank: int
tpu_cores: int
+ testing: bool
@property
@abstractmethod
@@ -555,15 +556,35 @@ def ddp_train(self, process_idx, q, model, is_master=False, proc_offset=0):
# continue training routine
results = self.run_pretrain_routine(model)
+ # persist info in ddp_spawn
+ self.__transfer_ddp_spawn_state_on_fit_end(model, q, results)
+
# clean up memory
torch.cuda.empty_cache()
+ if self.global_rank == 0 and self.distributed_backend not in ['ddp_spawn', 'ddp_cpu']:
+ return results
+
+ def __transfer_ddp_spawn_state_on_fit_end(self, model, q, results):
+ if not self.distributed_backend in ['ddp_spawn', 'ddp_cpu']:
+ return
+
+ # track the best model path
+ best_model_path = None
+ if self.checkpoint_callback is not None:
+ best_model_path = self.checkpoint_callback.best_model_path
+
if self.global_rank == 0 and q is not None:
- q.put(self.checkpoint_callback.best_model_path)
+ rank_zero_warn('cleaning up ddp environment...')
+ q.put(best_model_path)
q.put(results)
- if self.global_rank == 0 and self.distributed_backend != 'ddp_spawn':
- return results
+ # save the last weights
+ last_path = None
+ if not self.testing:
+ last_path = os.path.join(self.default_root_dir, '__temp_weight_ddp_end.ckpt')
+ torch.save(model.state_dict(), last_path)
+ q.put(last_path)
def save_spawn_weights(self, model):
"""
@@ -574,6 +595,7 @@ def save_spawn_weights(self, model):
if self.is_global_zero:
path = os.path.join(self.default_root_dir, '__temp_weight_ddp_end.ckpt')
self.save_checkpoint(path)
+ return path
def load_spawn_weights(self, original_model):
"""
diff --git a/pytorch_lightning/trainer/trainer.py b/pytorch_lightning/trainer/trainer.py
--- a/pytorch_lightning/trainer/trainer.py
+++ b/pytorch_lightning/trainer/trainer.py
@@ -35,7 +35,7 @@
from pytorch_lightning.utilities import rank_zero_warn, parsing, rank_zero_info, rank_zero_only
import warnings
-# warnings to ignore
+# warnings to ignore in trainer
warnings.filterwarnings('ignore', message='torch.distributed.reduce_op is deprecated, '
'please use torch.distributed.ReduceOp instead')
@@ -1063,9 +1063,14 @@ def __run_ddp_spawn(self, model, nprocs):
# restore main state with best weights
best_path = q.get()
results = q.get()
- if best_path is not None and len(best_path) > 0:
- self.checkpoint_callback.best_model_path = best_path
- model.load_from_checkpoint(best_path)
+ last_path = q.get()
+
+ # transfer back the best path to the trainer
+ self.checkpoint_callback.best_model_path = best_path
+
+ # load last weights
+ if last_path is not None and not self.testing:
+ torch.load(last_path, map_location=lambda storage, loc: storage)
self.model = model
return results
| Can't use None (anymore) in checkpoint_callback
## 🐛 Bug
using None in checkpoint_callback now errors out
```
-- Process 0 terminated with the following error:
Traceback (most recent call last):
File "/opt/conda/lib/python3.6/site-packages/torch/multiprocessing/spawn.py", line 20, in _wrap
fn(i, *args)
File "/opt/conda/lib/python3.6/site-packages/pytorch_lightning/trainer/distrib_data_parallel.py", line 562, in ddp_train
q.put(self.checkpoint_callback.best_model_path)
AttributeError: 'NoneType' object has no attribute 'best_model_path'
```
### To Reproduce
`trainer = Trainer(checkpoint_callback=None)`
Ran into this issue from upgrading to masters, was using masters from a few commits ago before
Edit: `False` casuses the same error as well
| 2020-07-09T10:46:34Z | [] | [] |
Traceback (most recent call last):
File "/opt/conda/lib/python3.6/site-packages/torch/multiprocessing/spawn.py", line 20, in _wrap
fn(i, *args)
File "/opt/conda/lib/python3.6/site-packages/pytorch_lightning/trainer/distrib_data_parallel.py", line 562, in ddp_train
q.put(self.checkpoint_callback.best_model_path)
AttributeError: 'NoneType' object has no attribute 'best_model_path'
| 250 |
||||
Lightning-AI/lightning | Lightning-AI__lightning-2572 | c197b74289997fa11cd372b51adb637f3e3846ec | diff --git a/pytorch_lightning/core/memory.py b/pytorch_lightning/core/memory.py
--- a/pytorch_lightning/core/memory.py
+++ b/pytorch_lightning/core/memory.py
@@ -209,7 +209,7 @@ def _forward_example_input(self) -> None:
input_ = model.example_input_array
input_ = model.transfer_batch_to_device(input_, model.device)
- if trainer is not None and trainer.use_amp:
+ if trainer is not None and trainer.use_amp and not trainer.use_tpu:
if NATIVE_AMP_AVALAIBLE:
model.forward = torch.cuda.amp.autocast()(model.forward)
diff --git a/pytorch_lightning/trainer/distrib_parts.py b/pytorch_lightning/trainer/distrib_parts.py
--- a/pytorch_lightning/trainer/distrib_parts.py
+++ b/pytorch_lightning/trainer/distrib_parts.py
@@ -240,14 +240,14 @@ def dp_train(self, model):
# hack forward to do autocast for the user
model_autocast_original_forward = model.forward
- if self.use_amp and NATIVE_AMP_AVALAIBLE:
+ if self.use_amp and NATIVE_AMP_AVALAIBLE and not self.use_tpu:
# wrap the user's forward in autocast and give it back at the end
model.forward = torch.cuda.amp.autocast()(model.forward)
# TODO: remove with dropping NVIDIA AMP support
# check for this bug (amp + dp + !01 doesn't work)
# https://github.com/NVIDIA/apex/issues/227
- if self.use_dp and self.use_amp and not NATIVE_AMP_AVALAIBLE:
+ if self.use_dp and self.use_amp and not NATIVE_AMP_AVALAIBLE and not self.use_tpu:
if self.amp_level == 'O2':
raise MisconfigurationException(
f'Amp level {self.amp_level} with DataParallel is not supported.'
diff --git a/pytorch_lightning/trainer/evaluation_loop.py b/pytorch_lightning/trainer/evaluation_loop.py
--- a/pytorch_lightning/trainer/evaluation_loop.py
+++ b/pytorch_lightning/trainer/evaluation_loop.py
@@ -286,7 +286,7 @@ def _evaluate(
# -----------------
# RUN EVALUATION STEP
# -----------------
- if self.use_amp and NATIVE_AMP_AVALAIBLE:
+ if self.use_amp and NATIVE_AMP_AVALAIBLE and not self.use_tpu:
with torch.cuda.amp.autocast():
output = self.evaluation_forward(model, batch, batch_idx, dataloader_idx, test_mode)
else:
diff --git a/pytorch_lightning/trainer/trainer.py b/pytorch_lightning/trainer/trainer.py
--- a/pytorch_lightning/trainer/trainer.py
+++ b/pytorch_lightning/trainer/trainer.py
@@ -1118,7 +1118,7 @@ def run_pretrain_routine(self, model: LightningModule):
self.copy_trainer_model_properties(ref_model)
# init amp. Must be done here instead of __init__ to allow ddp to work
- if NATIVE_AMP_AVALAIBLE and self.precision == 16:
+ if NATIVE_AMP_AVALAIBLE and self.precision == 16 and not self.use_tpu:
self.scaler = torch.cuda.amp.GradScaler()
# log hyper-parameters
@@ -1300,6 +1300,11 @@ def __test_using_best_weights(self, ckpt_path, test_dataloaders):
if ckpt_path == 'best':
ckpt_path = self.checkpoint_callback.best_model_path
+ if len(ckpt_path) == 0:
+ rank_zero_warn(f'.test() found no path for the best weights, {ckpt_path}. Please '
+ f'specify a path for a checkpoint .test(ckpt_path=PATH)')
+ return {}
+
ckpt = torch.load(ckpt_path, map_location=lambda storage, loc: storage)
model.load_state_dict(ckpt['state_dict'])
diff --git a/pytorch_lightning/trainer/training_io.py b/pytorch_lightning/trainer/training_io.py
--- a/pytorch_lightning/trainer/training_io.py
+++ b/pytorch_lightning/trainer/training_io.py
@@ -358,7 +358,7 @@ def dump_checkpoint(self, weights_only: bool = False) -> dict:
checkpoint['lr_schedulers'] = lr_schedulers
# save native amp scaling
- if self.use_amp and NATIVE_AMP_AVALAIBLE:
+ if self.use_amp and NATIVE_AMP_AVALAIBLE and not self.use_tpu:
checkpoint['native_amp_scaling_state'] = self.scaler.state_dict()
# add the module_arguments and state_dict from the model
diff --git a/pytorch_lightning/trainer/training_loop.py b/pytorch_lightning/trainer/training_loop.py
--- a/pytorch_lightning/trainer/training_loop.py
+++ b/pytorch_lightning/trainer/training_loop.py
@@ -702,7 +702,7 @@ def run_batch_backward_pass(self, split_batch, batch_idx, opt_idx, optimizer):
# ------------------
# CLIP GRADS
# ------------------
- if self.use_amp and NATIVE_AMP_AVALAIBLE:
+ if self.use_amp and NATIVE_AMP_AVALAIBLE and not self.use_tpu:
self.scaler.unscale_(optimizer)
self.clip_gradients()
@@ -750,7 +750,7 @@ def call_optimizer_step(self, optimizer, opt_idx, batch_idx, split_batch):
using_native_amp=native_amp)
# in native 16-bit we need to update scaler after optimizer step
- if self.use_amp and NATIVE_AMP_AVALAIBLE:
+ if self.use_amp and NATIVE_AMP_AVALAIBLE and not self.use_tpu:
self.scaler.update()
# model hook
@@ -767,7 +767,7 @@ def optimizer_closure(self, split_batch, batch_idx, opt_idx, optimizer, hiddens)
# FORWARD
# ---------------------------
with self.profiler.profile('model_forward'):
- if self.use_amp and NATIVE_AMP_AVALAIBLE:
+ if self.use_amp and NATIVE_AMP_AVALAIBLE and not self.use_tpu:
with torch.cuda.amp.autocast():
training_step_output = self.training_forward(split_batch, batch_idx,
opt_idx, hiddens)
@@ -817,7 +817,7 @@ def optimizer_closure(self, split_batch, batch_idx, opt_idx, optimizer, hiddens)
model_ref.backward(self, closure_loss, optimizer, opt_idx)
# exit amp context
- if self.precision == 16 and not NATIVE_AMP_AVALAIBLE:
+ if self.precision == 16 and not NATIVE_AMP_AVALAIBLE and not self.on_tpu:
a, b, c = None, None, None
error = context.__exit__(a, b, c)
if error:
| TPU fp16 requires apex installed
<!--
## 🐛 Bug
<!-- A clear and concise description of what the bug is. -->
When I tried to use precision=16 on TPU, pytorch-lightning is trying to find amp, which is unnecessary.
The backtrace is
```
GPU available: False, used: False
TPU available: True, using: 8 TPU cores
Traceback (most recent call last):
File "bert_ner/light/fp16_debug.py", line 16, in <module>
trainer = pl.Trainer(tpu_cores=8, precision=16)
File "/anaconda3/envs/torch-xla-1.5/lib/python3.6/site-packages/pytorch_lightning/trainer/trainer.py", line 607, in __init__
self.init_amp()
File "/anaconda3/envs/torch-xla-1.5/lib/python3.6/site-packages/pytorch_lightning/trainer/auto_mix_precision.py", line 27, in init_amp
"You set `use_amp=True` but do not have apex installed."
ModuleNotFoundError: You set `use_amp=True` but do not have apex installed.Install apex first using this guide and rerun with use_amp=True:https://github.com/NVIDIA/apex#linux his run will NOT use 16 bit precision
```
### To Reproduce
Steps to reproduce the behavior:
build a whatever Trainer in TPU and use fp16
#### Code sample
<!-- Ideally attach a minimal code sample to reproduce the decried issue.
Minimal means having the shortest code but still preserving the bug. -->
```
import pytorch_lightning as pl
trainer = pl.Trainer(tpu_cores=8, precision=16)
```
### Expected behavior
<!-- A clear and concise description of what you expected to happen. -->
Should have nothing error.
### Environment
- PyTorch Version (e.g., 1.5.0):
- OS (e.g., Linux): Linux
- How you installed PyTorch (`conda`, `pip`, source): conda
- Build command you used (if compiling from source):
- Python version:
- CUDA/cuDNN version:
- GPU models and configuration:
- Any other relevant information: actually I directly use pytorch-xla-1.5 docker on Google Cloud
### Additional context
<!-- Add any other context about the problem here. -->
| Hi! thanks for your contribution!, great first issue!
If you want to do 16 bit precision training, you either need to have the nightly version of pytorch install or have apex installed. Based on the traceback I guess that you do not have any of them.
I could get this working using nightly version of pytorch:
```
pl.Trainer(precision=16, tpu_cores=8)
>>>GPU available: False, used: False
>>>TPU available: True, using: 8 TPU cores
>>>Using native 16bit precision.
```
> If you want to do 16 bit precision training, you either need to have the nightly version of pytorch install or have apex installed. Based on the traceback I guess that you do not have any of them.
> I could get this working using nightly version of pytorch:
>
> ```
> pl.Trainer(precision=16, tpu_cores=8)
> >>>GPU available: False, used: False
> >>>TPU available: True, using: 8 TPU cores
> >>>Using native 16bit precision.
> ```
Thanks for the quick reply. But [the document](https://pytorch-lightning.readthedocs.io/en/latest/apex.html) does not point out that I must have nightly version of pytorch installed or have apex installed when training on TPU with fp16. Maybe it's better to revise that part of document?
Yes, I agree that from the documentation it would look like it is only a requirement for gpu training. I guess that the specific requirement for TPU is to have pytorch version 1.6 or higher. | 2020-07-10T01:17:22Z | [] | [] |
Traceback (most recent call last):
File "bert_ner/light/fp16_debug.py", line 16, in <module>
trainer = pl.Trainer(tpu_cores=8, precision=16)
File "/anaconda3/envs/torch-xla-1.5/lib/python3.6/site-packages/pytorch_lightning/trainer/trainer.py", line 607, in __init__
self.init_amp()
File "/anaconda3/envs/torch-xla-1.5/lib/python3.6/site-packages/pytorch_lightning/trainer/auto_mix_precision.py", line 27, in init_amp
"You set `use_amp=True` but do not have apex installed."
ModuleNotFoundError: You set `use_amp=True` but do not have apex installed.Install apex first using this guide and rerun with use_amp=True:https://github.com/NVIDIA/apex#linux his run will NOT use 16 bit precision
| 252 |
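A short usage note to close the preview: rows like the ones above are easiest to explore programmatically rather than by scrolling, since each cell can hold a full multi-line diff, issue body, or traceback. The sketch below shows one way to load and skim such a dataset with the Hugging Face `datasets` library; it is only an illustration, and the repository id, split name, and truncation length are placeholder assumptions rather than values taken from this page.

```python
# Minimal sketch, not an official loader for this dataset.
# Assumptions: the dataset lives on the Hugging Face Hub under the placeholder
# repo id below and exposes a "train" split; substitute the real values.
from datasets import load_dataset

ds = load_dataset("your-org/your-swe-bench-style-dataset", split="train")  # hypothetical repo id

print(ds.column_names)  # list the available fields

# Print a truncated, single-line preview of each field of the first row so that
# long cells (patches, problem statements, tracebacks) stay readable.
row = ds[0]
for key, value in row.items():
    text = str(value).replace("\n", " ")
    print(f"{key}: {text[:200]}{'...' if len(text) > 200 else ''}")
```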