Unnamed: 0 (int64, 0–2.93k) | code (string, 101–62.2k chars) | docs (string, 51–10.7k chars) | doc_len (int64, 4–1.74k) | words (int64, 4–4.82k) | lang (string, 1 class) | prompt (string, 320–71.2k chars) |
---|---|---|---|---|---|---|
2,700 | def format_command(self) -> str:
command = 'ansible-test %s' % self.command
if self.test:
command += ' --test %s' % self.test
if self.python_version:
command += ' --python %s' % self.python_version
return command
| Return a string representing the CLI command associated with the test failure. | 12 | 30 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def format_command(self) -> str:
command = 'ansible-test %s' % self.command
if self.test:
command += ' --test %s' % self.test
if self.python_version:
command += ' --python %s' % self.python_version
return command
```
###Assistant : Return a string representing the CLI command associated with the test failure.
|
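A minimal, self-contained sketch of the first sample's behaviour. The `Failure` class name and the attribute values (`sanity`, `pep8`, `3.10`) are hypothetical stand-ins, not values taken from ansible-test itself.
```Python
# Hypothetical stand-in for the object format_command() is bound to.
class Failure:
    def __init__(self, command, test=None, python_version=None):
        self.command = command
        self.test = test
        self.python_version = python_version

    def format_command(self) -> str:
        command = 'ansible-test %s' % self.command
        if self.test:
            command += ' --test %s' % self.test
        if self.python_version:
            command += ' --python %s' % self.python_version
        return command

print(Failure('sanity', test='pep8', python_version='3.10').format_command())
# -> ansible-test sanity --test pep8 --python 3.10
```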
2,701 | def act(self):
obs = self.observation
reply = {'text': INVALID, 'id': self.getID(), 'episode_done': False}
if obs is None or obs['text'] == DO_NOT_RETRIEVE:
return Message(reply)
# construct the search query
labels = obs.get('labels', obs.get('eval_labels', None))
search_query = self.construct_search_query(labels)
if (
self.opt['min_num_search_words'] > 0
and len(search_query[0].split()) <= self.opt['min_num_search_words']
):
return Message(reply)
# retrieve
self.search_engine.set_search_queries(search_query)
retrieved, _ = self.search_engine.retrieve_and_score(self.dummy)
all_docs = [d.get_tokenization_str() for d in retrieved[0]] # batched
# Find the right doc
best_f1, best_doc, best_doc_idx = self.get_best_doc(all_docs, labels)
if best_doc:
assert best_doc_idx is not None
reply['knowledge'] = f'{TOKEN_KNOWLEDGE}{best_doc}{TOKEN_END_KNOWLEDGE}'
reply['f1_overlap'] = best_f1
reply['text'] = labels[0]
reply['retrieved_docs'] = all_docs
reply['gold_doc'] = all_docs[best_doc_idx]
reply['search_query'] = search_query[0]
return Message(reply)
|
Search for overlap with the observation label.
Return the best fitting document. A document is valid if the f1 is above the
threshold AND the f1 is less than 1.0 AND the target label is not in the
document.
| 39 | 102 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def act(self):
obs = self.observation
reply = {'text': INVALID, 'id': self.getID(), 'episode_done': False}
if obs is None or obs['text'] == DO_NOT_RETRIEVE:
return Message(reply)
# construct the search query
labels = obs.get('labels', obs.get('eval_labels', None))
search_query = self.construct_search_query(labels)
if (
self.opt['min_num_search_words'] > 0
and len(search_query[0].split()) <= self.opt['min_num_search_words']
):
return Message(reply)
# retrieve
self.search_engine.set_search_queries(search_query)
retrieved, _ = self.search_engine.retrieve_and_score(self.dummy)
all_docs = [d.get_tokenization_str() for d in retrieved[0]] # batched
# Find the right doc
best_f1, best_doc, best_doc_idx = self.get_best_doc(all_docs, labels)
if best_doc:
assert best_doc_idx is not None
reply['knowledge'] = f'{TOKEN_KNOWLEDGE}{best_doc}{TOKEN_END_KNOWLEDGE}'
reply['f1_overlap'] = best_f1
reply['text'] = labels[0]
reply['retrieved_docs'] = all_docs
reply['gold_doc'] = all_docs[best_doc_idx]
reply['search_query'] = search_query[0]
return Message(reply)
```
###Assistant :
Search for overlap with the observation label.
Return the best fitting document. A document is valid if the f1 is above the
threshold AND the f1 is less than 1.0 AND the target label is not in the
document.
|
2,702 | def print_help(self):
help_text = f
console.print(text=help_text, menu="Forex - Quantitative Analysis")
| Print help[cmds]
pick pick target column for analysis[/cmds]
[param]Pair: [/param]{self.ticker}
[param]Target Column: [/param]{self.target}
[cmds]
[info]Statistics:[/info]
summary brief summary statistics of loaded pair.
normality normality statistics and tests
unitroot unit root test for stationarity (ADF, KPSS)
[info]Plots:[/info]
line line plot of selected target
hist histogram with density plot
cdf cumulative distribution function
bw box and whisker plot
acf (partial) auto-correlation function differentials of prices
qqplot residuals against standard normal curve
[info]Rolling Metrics:[/info]
rolling rolling mean and std deviation of prices
spread rolling variance and std deviation of prices
quantile rolling median and quantile of prices
skew rolling skewness of distribution of prices
kurtosis rolling kurtosis of distribution of prices
[info]Risk:[/info]
var display value at risk
es display expected shortfall
[info]Other:[/info]
raw print raw data
decompose decomposition in cyclic-trend, season, and residuals of prices
cusum detects abrupt changes using cumulative sum algorithm of prices[/cmds]
| 142 | 10 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def print_help(self):
help_text = f
console.print(text=help_text, menu="Forex - Quantitative Analysis")
```
###Assistant : Print help[cmds]
pick pick target column for analysis[/cmds]
[param]Pair: [/param]{self.ticker}
[param]Target Column: [/param]{self.target}
[cmds]
[info]Statistics:[/info]
summary brief summary statistics of loaded pair.
normality normality statistics and tests
unitroot unit root test for stationarity (ADF, KPSS)
[info]Plots:[/info]
line line plot of selected target
hist histogram with density plot
cdf cumulative distribution function
bw box and whisker plot
acf (partial) auto-correlation function differentials of prices
qqplot residuals against standard normal curve
[info]Rolling Metrics:[/info]
rolling rolling mean and std deviation of prices
spread rolling variance and std deviation of prices
quantile rolling median and quantile of prices
skew rolling skewness of distribution of prices
kurtosis rolling kurtosis of distribution of prices
[info]Risk:[/info]
var display value at risk
es display expected shortfall
[info]Other:[/info]
raw print raw data
decompose decomposition in cyclic-trend, season, and residuals of prices
cusum detects abrupt changes using cumulative sum algorithm of prices[/cmds]
|
2,703 | def chain(self, klass=None):
obj = self.clone()
if klass and obj.__class__ != klass:
obj.__class__ = klass
if not obj.filter_is_sticky:
obj.used_aliases = set()
obj.filter_is_sticky = False
if hasattr(obj, "_setup_query"):
obj._setup_query()
return obj
|
Return a copy of the current Query that's ready for another operation.
The klass argument changes the type of the Query, e.g. UpdateQuery.
| 23 | 30 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def chain(self, klass=None):
obj = self.clone()
if klass and obj.__class__ != klass:
obj.__class__ = klass
if not obj.filter_is_sticky:
obj.used_aliases = set()
obj.filter_is_sticky = False
if hasattr(obj, "_setup_query"):
obj._setup_query()
return obj
```
###Assistant :
Return a copy of the current Query that's ready for another operation.
The klass argument changes the type of the Query, e.g. UpdateQuery.
|
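A toy sketch of the `__class__` swap that `chain()` performs; the `Query`/`UpdateQuery` classes below are simplified stand-ins rather than Django's real query classes.
```Python
import copy

# Simplified stand-ins illustrating how chain(klass=...) retypes the clone.
class Query:
    def clone(self):
        return copy.copy(self)

class UpdateQuery(Query):
    pass

q = Query()
obj = q.clone()
obj.__class__ = UpdateQuery  # same data, different behaviour class
print(type(obj).__name__)    # UpdateQuery
```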
2,704 | def run_test_gbm_non_number_inputs(tmpdir, backend_config):
input_features = [binary_feature(), category_feature(encoder={"reduce_output": "sum"})]
output_feature = binary_feature()
output_features = [output_feature]
csv_filename = os.path.join(tmpdir, "training.csv")
dataset_filename = generate_data(input_features, output_features, csv_filename, num_examples=100)
config = {
MODEL_TYPE: "gbm",
"input_features": input_features,
"output_features": output_features,
TRAINER: {"num_boost_round": 2},
}
model = LudwigModel(config, backend=backend_config)
_, _, output_directory = model.train(
dataset=dataset_filename,
output_directory=tmpdir,
skip_save_processed_input=True,
skip_save_progress=True,
skip_save_unprocessed_output=True,
skip_save_log=True,
)
model.load(os.path.join(tmpdir, "api_experiment_run", "model"))
preds, _ = model.predict(dataset=dataset_filename, output_directory=output_directory)
prob_col = preds[output_feature["name"] + "_probabilities"]
if backend_config["type"] == "ray":
prob_col = prob_col.compute()
assert len(prob_col.iloc[0]) == 2
assert prob_col.apply(sum).mean() == pytest.approx(1.0)
| Test that the GBM model can train and predict with non-number inputs. | 12 | 81 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def run_test_gbm_non_number_inputs(tmpdir, backend_config):
input_features = [binary_feature(), category_feature(encoder={"reduce_output": "sum"})]
output_feature = binary_feature()
output_features = [output_feature]
csv_filename = os.path.join(tmpdir, "training.csv")
dataset_filename = generate_data(input_features, output_features, csv_filename, num_examples=100)
config = {
MODEL_TYPE: "gbm",
"input_features": input_features,
"output_features": output_features,
TRAINER: {"num_boost_round": 2},
}
model = LudwigModel(config, backend=backend_config)
_, _, output_directory = model.train(
dataset=dataset_filename,
output_directory=tmpdir,
skip_save_processed_input=True,
skip_save_progress=True,
skip_save_unprocessed_output=True,
skip_save_log=True,
)
model.load(os.path.join(tmpdir, "api_experiment_run", "model"))
preds, _ = model.predict(dataset=dataset_filename, output_directory=output_directory)
prob_col = preds[output_feature["name"] + "_probabilities"]
if backend_config["type"] == "ray":
prob_col = prob_col.compute()
assert len(prob_col.iloc[0]) == 2
assert prob_col.apply(sum).mean() == pytest.approx(1.0)
```
###Assistant : Test that the GBM model can train and predict with non-number inputs.
|
2,705 | def i2len(self, pkt, val):
fld_len = self.fld.i2len(pkt, val)
return fld_len + self.padlen(fld_len, pkt)
| get the length of the field, including the padding length | 10 | 13 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def i2len(self, pkt, val):
fld_len = self.fld.i2len(pkt, val)
return fld_len + self.padlen(fld_len, pkt)
```
###Assistant : get the length of the field, including the padding length
|
2,706 | def get_kerning(self, next):
advance = self._metrics.advance - self.width
kern = 0.
if isinstance(next, Char):
kern = self.fontset.get_kern(
self.font, self.font_class, self.c, self.fontsize,
next.font, next.font_class, next.c, next.fontsize,
self.dpi)
return advance + kern
|
Return the amount of kerning between this and the given character.
This method is called when characters are strung together into `Hlist`
to create `Kern` nodes.
| 26 | 30 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def get_kerning(self, next):
advance = self._metrics.advance - self.width
kern = 0.
if isinstance(next, Char):
kern = self.fontset.get_kern(
self.font, self.font_class, self.c, self.fontsize,
next.font, next.font_class, next.c, next.fontsize,
self.dpi)
return advance + kern
```
###Assistant :
Return the amount of kerning between this and the given character.
This method is called when characters are strung together into `Hlist`
to create `Kern` nodes.
|
2,707 | def _get_num_samples_or_steps(data, steps_per_epoch):
flat_inputs = tf.nest.flatten(data)
if hasattr(flat_inputs[0], "shape"):
return int(flat_inputs[0].shape[0]), False
return steps_per_epoch, True
| Returns number of samples or steps, and whether to use steps count mode. | 13 | 15 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def _get_num_samples_or_steps(data, steps_per_epoch):
flat_inputs = tf.nest.flatten(data)
if hasattr(flat_inputs[0], "shape"):
return int(flat_inputs[0].shape[0]), False
return steps_per_epoch, True
```
###Assistant : Returns number of samples or steps, and whether to use steps count mode.
|
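A rough, framework-free illustration of the two branches above, assuming the same contract: inputs that expose a `shape` yield a sample count, anything else falls back to steps-count mode. The hand-rolled flattening stands in for `tf.nest.flatten`.
```Python
import numpy as np

def num_samples_or_steps(data, steps_per_epoch):
    # Stand-in for tf.nest.flatten: wrap non-sequences in a list.
    flat = list(data) if isinstance(data, (list, tuple)) else [data]
    if hasattr(flat[0], "shape"):
        return int(flat[0].shape[0]), False  # sample-count mode
    return steps_per_epoch, True             # steps-count mode

print(num_samples_or_steps(np.zeros((32, 4)), steps_per_epoch=10))  # (32, False)
print(num_samples_or_steps(iter(range(5)), steps_per_epoch=10))     # (10, True)
```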
2,708 | def test_retrieve_product_attributes_input_type(staff_api_client, product, channel_USD):
query =
variables = {"channel": channel_USD.slug}
found_products = get_graphql_content(
staff_api_client.post_graphql(query, variables)
)["data"]["products"]["edges"]
assert len(found_products) == 1
for gql_attr in found_products[0]["node"]["attributes"]:
assert len(gql_attr["values"]) == 1
assert gql_attr["values"][0]["inputType"] == "DROPDOWN"
ATTRIBUTES_RESORT_QUERY =
|
query ($channel: String){
products(first: 10, channel: $channel) {
edges {
node {
attributes {
values {
inputType
}
}
}
}
}
}
mutation ProductTypeReorderAttributes(
$productTypeId: ID!
$moves: [ReorderInput!]!
$type: ProductAttributeType!
) {
productTypeReorderAttributes(
productTypeId: $productTypeId
moves: $moves
type: $type
) {
productType {
id
variantAttributes {
id
slug
}
productAttributes {
id
}
}
errors {
field
message
code
attributes
}
}
}
| 64 | 34 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def test_retrieve_product_attributes_input_type(staff_api_client, product, channel_USD):
query =
variables = {"channel": channel_USD.slug}
found_products = get_graphql_content(
staff_api_client.post_graphql(query, variables)
)["data"]["products"]["edges"]
assert len(found_products) == 1
for gql_attr in found_products[0]["node"]["attributes"]:
assert len(gql_attr["values"]) == 1
assert gql_attr["values"][0]["inputType"] == "DROPDOWN"
ATTRIBUTES_RESORT_QUERY =
```
###Assistant :
query ($channel: String){
products(first: 10, channel: $channel) {
edges {
node {
attributes {
values {
inputType
}
}
}
}
}
}
mutation ProductTypeReorderAttributes(
$productTypeId: ID!
$moves: [ReorderInput!]!
$type: ProductAttributeType!
) {
productTypeReorderAttributes(
productTypeId: $productTypeId
moves: $moves
type: $type
) {
productType {
id
variantAttributes {
id
slug
}
productAttributes {
id
}
}
errors {
field
message
code
attributes
}
}
}
|
2,709 | def apply(self, project_state, schema_editor, collect_sql=False):
for operation in self.operations:
# If this operation cannot be represented as SQL, place a comment
# there instead
if collect_sql:
schema_editor.collected_sql.append("--")
if not operation.reduces_to_sql:
schema_editor.collected_sql.append(
"-- MIGRATION NOW PERFORMS OPERATION THAT CANNOT BE WRITTEN AS SQL:"
)
schema_editor.collected_sql.append("-- %s" % operation.describe())
schema_editor.collected_sql.append("--")
if not operation.reduces_to_sql:
continue
# Save the state before the operation has run
old_state = project_state.clone()
operation.state_forwards(self.app_label, project_state)
# Run the operation
atomic_operation = operation.atomic or (
self.atomic and operation.atomic is not False
)
if not schema_editor.atomic_migration and atomic_operation:
# Force a transaction on a non-transactional-DDL backend or an
# atomic operation inside a non-atomic migration.
with atomic(schema_editor.connection.alias):
operation.database_forwards(
self.app_label, schema_editor, old_state, project_state
)
else:
# Normal behaviour
operation.database_forwards(
self.app_label, schema_editor, old_state, project_state
)
return project_state
|
Take a project_state representing all migrations prior to this one
and a schema_editor for a live database and apply the migration
in a forwards order.
Return the resulting project state for efficient reuse by following
Migrations.
| 36 | 124 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def apply(self, project_state, schema_editor, collect_sql=False):
for operation in self.operations:
# If this operation cannot be represented as SQL, place a comment
# there instead
if collect_sql:
schema_editor.collected_sql.append("--")
if not operation.reduces_to_sql:
schema_editor.collected_sql.append(
"-- MIGRATION NOW PERFORMS OPERATION THAT CANNOT BE WRITTEN AS SQL:"
)
schema_editor.collected_sql.append("-- %s" % operation.describe())
schema_editor.collected_sql.append("--")
if not operation.reduces_to_sql:
continue
# Save the state before the operation has run
old_state = project_state.clone()
operation.state_forwards(self.app_label, project_state)
# Run the operation
atomic_operation = operation.atomic or (
self.atomic and operation.atomic is not False
)
if not schema_editor.atomic_migration and atomic_operation:
# Force a transaction on a non-transactional-DDL backend or an
# atomic operation inside a non-atomic migration.
with atomic(schema_editor.connection.alias):
operation.database_forwards(
self.app_label, schema_editor, old_state, project_state
)
else:
# Normal behaviour
operation.database_forwards(
self.app_label, schema_editor, old_state, project_state
)
return project_state
```
###Assistant :
Take a project_state representing all migrations prior to this one
and a schema_editor for a live database and apply the migration
in a forwards order.
Return the resulting project state for efficient reuse by following
Migrations.
|
2,710 | def safe_to_scale(self) -> bool:
# Get the list of nodes.
node_set = set(self.node_data_dict.keys())
worker_groups = self._raycluster["spec"].get("workerGroupSpecs", [])
# Accumulates the indices of worker groups with non-empty workersToDelete
non_empty_worker_group_indices = []
for group_index, worker_group in enumerate(worker_groups):
workersToDelete = worker_group.get("scaleStrategy", {}).get(
"workersToDelete", []
)
if workersToDelete:
non_empty_worker_group_indices.append(group_index)
for worker in workersToDelete:
if worker in node_set:
# The operator hasn't removed this worker yet. Abort
# the autoscaler update.
logger.warning(f"Waiting for operator to remove worker {worker}.")
return False
# All required workersToDelete have been removed.
# Clean up the workersToDelete field.
patch_payload = []
for group_index in non_empty_worker_group_indices:
patch = worker_delete_patch(group_index, workers_to_delete=[])
patch_payload.append(patch)
if patch_payload:
logger.info("Cleaning up workers to delete.")
logger.info(f"Submitting patch {patch_payload}.")
self._submit_raycluster_patch(patch_payload)
# It's safe to proceed with the autoscaler update.
return True
| Returns False iff non_terminated_nodes contains any pods in the RayCluster's
workersToDelete lists.
Explanation:
If there are any workersToDelete which are non-terminated,
we should wait for the operator to do its job and delete those
pods. Therefore, we back off the autoscaler update.
If, on the other hand, all of the workersToDelete have already been cleaned up,
then we patch away the workersToDelete lists and return True.
In the future, we may consider having the operator clean up workersToDelete
on its own:
https://github.com/ray-project/kuberay/issues/733
Note (Dmitri):
It is stylistically bad that this function has a side effect.
| 95 | 122 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def safe_to_scale(self) -> bool:
# Get the list of nodes.
node_set = set(self.node_data_dict.keys())
worker_groups = self._raycluster["spec"].get("workerGroupSpecs", [])
# Accumulates the indices of worker groups with non-empty workersToDelete
non_empty_worker_group_indices = []
for group_index, worker_group in enumerate(worker_groups):
workersToDelete = worker_group.get("scaleStrategy", {}).get(
"workersToDelete", []
)
if workersToDelete:
non_empty_worker_group_indices.append(group_index)
for worker in workersToDelete:
if worker in node_set:
# The operator hasn't removed this worker yet. Abort
# the autoscaler update.
logger.warning(f"Waiting for operator to remove worker {worker}.")
return False
# All required workersToDelete have been removed.
# Clean up the workersToDelete field.
patch_payload = []
for group_index in non_empty_worker_group_indices:
patch = worker_delete_patch(group_index, workers_to_delete=[])
patch_payload.append(patch)
if patch_payload:
logger.info("Cleaning up workers to delete.")
logger.info(f"Submitting patch {patch_payload}.")
self._submit_raycluster_patch(patch_payload)
# It's safe to proceed with the autoscaler update.
return True
```
###Assistant : Returns False iff non_terminated_nodes contains any pods in the RayCluster's
workersToDelete lists.
Explanation:
If there are any workersToDelete which are non-terminated,
we should wait for the operator to do its job and delete those
pods. Therefore, we back off the autoscaler update.
If, on the other hand, all of the workersToDelete have already been cleaned up,
then we patch away the workersToDelete lists and return True.
In the future, we may consider having the operator clean up workersToDelete
on its own:
https://github.com/ray-project/kuberay/issues/733
Note (Dmitri):
It is stylistically bad that this function has a side effect.
|
2,711 | def get_distance(self, f, value, lookup_type):
if not value:
return []
value = value[0]
if isinstance(value, Distance):
if f.geodetic(self.connection):
dist_param = value.m
else:
dist_param = getattr(
value, Distance.unit_attname(f.units_name(self.connection))
)
else:
dist_param = value
# dwithin lookups on Oracle require a special string parameter
# that starts with "distance=".
if lookup_type == "dwithin":
dist_param = "distance=%s" % dist_param
return [dist_param]
|
Return the distance parameters given the value and the lookup type.
On Oracle, geometry columns with a geodetic coordinate system behave
implicitly like a geography column, and thus meters will be used as
the distance parameter on them.
| 38 | 58 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def get_distance(self, f, value, lookup_type):
if not value:
return []
value = value[0]
if isinstance(value, Distance):
if f.geodetic(self.connection):
dist_param = value.m
else:
dist_param = getattr(
value, Distance.unit_attname(f.units_name(self.connection))
)
else:
dist_param = value
# dwithin lookups on Oracle require a special string parameter
# that starts with "distance=".
if lookup_type == "dwithin":
dist_param = "distance=%s" % dist_param
return [dist_param]
```
###Assistant :
Return the distance parameters given the value and the lookup type.
On Oracle, geometry columns with a geodetic coordinate system behave
implicitly like a geography column, and thus meters will be used as
the distance parameter on them.
|
2,712 | def test_standard_get_document_model(self):
del settings.WAGTAILDOCS_DOCUMENT_MODEL
from wagtail.documents.models import Document
self.assertIs(get_document_model(), Document)
| Test get_document_model with no WAGTAILDOCS_DOCUMENT_MODEL | 5 | 10 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def test_standard_get_document_model(self):
del settings.WAGTAILDOCS_DOCUMENT_MODEL
from wagtail.documents.models import Document
self.assertIs(get_document_model(), Document)
```
###Assistant : Test get_document_model with no WAGTAILDOCS_DOCUMENT_MODEL
|
2,713 | def _safe_assign(X, values, *, row_indexer=None, column_indexer=None):
row_indexer = slice(None, None, None) if row_indexer is None else row_indexer
column_indexer = (
slice(None, None, None) if column_indexer is None else column_indexer
)
if hasattr(X, "iloc"): # pandas dataframe
X.iloc[row_indexer, column_indexer] = values
else: # numpy array or sparse matrix
X[row_indexer, column_indexer] = values
| Safe assignment to a numpy array, sparse matrix, or pandas dataframe.
Parameters
----------
X : {ndarray, sparse-matrix, dataframe}
Array to be modified. It is expected to be 2-dimensional.
values : ndarray
The values to be assigned to `X`.
row_indexer : array-like, dtype={int, bool}, default=None
A 1-dimensional array to select the rows of interest. If `None`, all
rows are selected.
column_indexer : array-like, dtype={int, bool}, default=None
A 1-dimensional array to select the columns of interest. If `None`, all
columns are selected.
| 80 | 51 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def _safe_assign(X, values, *, row_indexer=None, column_indexer=None):
row_indexer = slice(None, None, None) if row_indexer is None else row_indexer
column_indexer = (
slice(None, None, None) if column_indexer is None else column_indexer
)
if hasattr(X, "iloc"): # pandas dataframe
X.iloc[row_indexer, column_indexer] = values
else: # numpy array or sparse matrix
X[row_indexer, column_indexer] = values
```
###Assistant : Safe assignment to a numpy array, sparse matrix, or pandas dataframe.
Parameters
----------
X : {ndarray, sparse-matrix, dataframe}
Array to be modified. It is expected to be 2-dimensional.
values : ndarray
The values to be assigned to `X`.
row_indexer : array-like, dtype={int, bool}, default=None
A 1-dimensional array to select the rows of interest. If `None`, all
rows are selected.
column_indexer : array-like, dtype={int, bool}, default=None
A 1-dimensional array to select the columns of interest. If `None`, all
columns are selected.
|
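The branch on `hasattr(X, "iloc")` is the whole point of `_safe_assign`; a small sketch of the two assignment paths (column names and values here are made up):
```Python
import numpy as np
import pandas as pd

values = np.ones(3)

# pandas branch: positional assignment must go through .iloc
X_df = pd.DataFrame(np.zeros((3, 2)), columns=["a", "b"])
X_df.iloc[:, 0] = values

# numpy branch: plain indexing works directly
X_arr = np.zeros((3, 2))
X_arr[:, 0] = values

print(X_df["a"].tolist())  # [1.0, 1.0, 1.0]
print(X_arr[:, 0])         # [1. 1. 1.]
```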
2,714 | def copy_sign(self, a, b):
a = _convert_other(a, raiseit=True)
return a.copy_sign(b)
| Copies the second operand's sign to the first one.
In detail, it returns a copy of the first operand with the sign
equal to the sign of the second operand.
>>> ExtendedContext.copy_sign(Decimal( '1.50'), Decimal('7.33'))
Decimal('1.50')
>>> ExtendedContext.copy_sign(Decimal('-1.50'), Decimal('7.33'))
Decimal('1.50')
>>> ExtendedContext.copy_sign(Decimal( '1.50'), Decimal('-7.33'))
Decimal('-1.50')
>>> ExtendedContext.copy_sign(Decimal('-1.50'), Decimal('-7.33'))
Decimal('-1.50')
>>> ExtendedContext.copy_sign(1, -2)
Decimal('-1')
>>> ExtendedContext.copy_sign(Decimal(1), -2)
Decimal('-1')
>>> ExtendedContext.copy_sign(1, Decimal(-2))
Decimal('-1')
| 60 | 10 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def copy_sign(self, a, b):
a = _convert_other(a, raiseit=True)
return a.copy_sign(b)
```
###Assistant : Copies the second operand's sign to the first one.
In detail, it returns a copy of the first operand with the sign
equal to the sign of the second operand.
>>> ExtendedContext.copy_sign(Decimal( '1.50'), Decimal('7.33'))
Decimal('1.50')
>>> ExtendedContext.copy_sign(Decimal('-1.50'), Decimal('7.33'))
Decimal('1.50')
>>> ExtendedContext.copy_sign(Decimal( '1.50'), Decimal('-7.33'))
Decimal('-1.50')
>>> ExtendedContext.copy_sign(Decimal('-1.50'), Decimal('-7.33'))
Decimal('-1.50')
>>> ExtendedContext.copy_sign(1, -2)
Decimal('-1')
>>> ExtendedContext.copy_sign(Decimal(1), -2)
Decimal('-1')
>>> ExtendedContext.copy_sign(1, Decimal(-2))
Decimal('-1')
|
2,715 | def get_validated_ordering(self):
orderable_fields = self.orderable_fields or ()
ordering = {}
if self.is_export:
# Revert to CSV order_by submit_time ascending for backwards compatibility
default_ordering = self.ordering_csv or ()
else:
default_ordering = self.ordering or ()
if isinstance(default_ordering, str):
default_ordering = (default_ordering,)
ordering_strs = self.request.GET.getlist('order_by') or list(default_ordering)
for order in ordering_strs:
try:
_, prefix, field_name = order.rpartition('-')
if field_name in orderable_fields:
ordering[field_name] = (
prefix, 'descending' if prefix == '-' else 'ascending'
)
except (IndexError, ValueError):
continue # invalid ordering specified, skip it
return ordering
| Return a dict of field names with ordering labels if ordering is valid | 13 | 82 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def get_validated_ordering(self):
orderable_fields = self.orderable_fields or ()
ordering = {}
if self.is_export:
# Revert to CSV order_by submit_time ascending for backwards compatibility
default_ordering = self.ordering_csv or ()
else:
default_ordering = self.ordering or ()
if isinstance(default_ordering, str):
default_ordering = (default_ordering,)
ordering_strs = self.request.GET.getlist('order_by') or list(default_ordering)
for order in ordering_strs:
try:
_, prefix, field_name = order.rpartition('-')
if field_name in orderable_fields:
ordering[field_name] = (
prefix, 'descending' if prefix == '-' else 'ascending'
)
except (IndexError, ValueError):
continue # invalid ordering specified, skip it
return ordering
```
###Assistant : Return a dict of field names with ordering labels if ordering is valid
|
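The `rpartition('-')` trick above is easy to misread; a self-contained sketch with hypothetical field names shows how prefixes map to ascending/descending and how unknown fields are skipped:
```Python
orderable_fields = ("name", "submit_time")  # hypothetical orderable fields
ordering = {}
for order in ("-submit_time", "name", "bogus"):
    _, prefix, field_name = order.rpartition('-')
    if field_name in orderable_fields:
        ordering[field_name] = (prefix, 'descending' if prefix == '-' else 'ascending')

print(ordering)
# {'submit_time': ('-', 'descending'), 'name': ('', 'ascending')}
```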
2,716 | def test_chordal_cycle_graph(p):
G = nx.chordal_cycle_graph(p)
assert len(G) == p
# TODO The second largest eigenvalue should be smaller than a constant,
# independent of the number of nodes in the graph:
#
# eigs = sorted(sp.linalg.eigvalsh(nx.adjacency_matrix(G).toarray()))
# assert_less(eigs[-2], ...)
#
@pytest.mark.parametrize("p", (3, 5, 7, 11, 13)) # Primes | Test for the :func:`networkx.chordal_cycle_graph` function. | 5 | 48 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def test_chordal_cycle_graph(p):
G = nx.chordal_cycle_graph(p)
assert len(G) == p
# TODO The second largest eigenvalue should be smaller than a constant,
# independent of the number of nodes in the graph:
#
# eigs = sorted(sp.linalg.eigvalsh(nx.adjacency_matrix(G).toarray()))
# assert_less(eigs[-2], ...)
#
@pytest.mark.parametrize("p", (3, 5, 7, 11, 13)) # Primes
```
###Assistant : Test for the :func:`networkx.chordal_cycle_graph` function.
|
2,717 | def convert_dataset_split_sizes(left_size,right_size,total_size):
left_size_type = type(left_size)
right_size_type = type(right_size)
if left_size is not None and left_size_type not in [int,float]:
raise ValueError(f'Invalid `left_size` type Got {left_size_type}'
'It should be one of float,int or None')
if right_size is not None and right_size_type not in [int,float]:
raise ValueError(f'Invalid `right_size` type Got {right_size_type}'
'It should be one of float,int or None')
if (left_size_type == int
and (left_size <= 0 or left_size>= total_size)
or left_size_type == float
and (left_size <= 0 or left_size>= 1) ):
raise ValueError('`left_size` should be either a positive integer'
f'and smaller than {total_size} or a float '
'within the range `[0, 1]`')
if (right_size_type == int
and (right_size <= 0 or right_size>= total_size)
or right_size_type == float
and (right_size <= 0 or right_size>= 1)):
raise ValueError('`right_size` should be either a positive integer '
f'and smaller than {total_size} or'
'a float within the range `[0, 1]`')
if right_size_type == left_size_type == float and right_size + left_size > 1:
raise ValueError('sum of `left_size` and `right_size`'
' should be within `[0,1]`'
f'Got {right_size + left_size} ,'
'reduce the `left_size` or `right_size`')
if left_size_type == float:
left_size = math.ceil(left_size*total_size)
else:
left_size = float(left_size)
if right_size_type == float:
right_size = math.ceil(right_size*total_size)
else:
right_size = float(right_size)
if left_size is None:
left_size = total_size - right_size
elif right_size is None:
right_size = total_size - left_size
if left_size + right_size > total_size:
raise ValueError('The sum of `left_size` and `right_size`'
f' should be smaller than the samples {total_size} '
' reduce `left_size` or `right_size` ' )
if left_size == 0:
raise ValueError(f'with dataset of length={total_size}'
'`left_size`={left_size} and `right_size`={right_size} '
'resulting left dataset split will be empty, '
'adjust any of the aforementioned parameters')
left_size,right_size = int(left_size) ,int(right_size)
return left_size,right_size
| Helper function to convert left_size/right_size relative to dataset's size
| 9 | 278 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def convert_dataset_split_sizes(left_size,right_size,total_size):
left_size_type = type(left_size)
right_size_type = type(right_size)
if left_size is not None and left_size_type not in [int,float]:
raise ValueError(f'Invalid `left_size` type Got {left_size_type}'
'It should be one of float,int or None')
if right_size is not None and right_size_type not in [int,float]:
raise ValueError(f'Invalid `right_size` type Got {right_size_type}'
'It should be one of float,int or None')
if (left_size_type == int
and (left_size <= 0 or left_size>= total_size)
or left_size_type == float
and (left_size <= 0 or left_size>= 1) ):
raise ValueError('`left_size` should be either a positive integer'
f'and smaller than {total_size} or a float '
'within the range `[0, 1]`')
if (right_size_type == int
and (right_size <= 0 or right_size>= total_size)
or right_size_type == float
and (right_size <= 0 or right_size>= 1)):
raise ValueError('`right_size` should be either a positive integer '
f'and smaller than {total_size} or'
'a float within the range `[0, 1]`')
if right_size_type == left_size_type == float and right_size + left_size > 1:
raise ValueError('sum of `left_size` and `right_size`'
' should be within `[0,1]`'
f'Got {right_size + left_size} ,'
'reduce the `left_size` or `right_size`')
if left_size_type == float:
left_size = math.ceil(left_size*total_size)
else:
left_size = float(left_size)
if right_size_type == float:
right_size = math.ceil(right_size*total_size)
else:
right_size = float(right_size)
if left_size is None:
left_size = total_size - right_size
elif right_size is None:
right_size = total_size - left_size
if left_size + right_size > total_size:
raise ValueError('The sum of `left_size` and `right_size`'
f' should be smaller than the samples {total_size} '
' reduce `left_size` or `right_size` ' )
if left_size == 0:
raise ValueError(f'with dataset of length={total_size}'
'`left_size`={left_size} and `right_size`={right_size} '
'resulting left dataset split will be empty, '
'adjust any of the aforementioned parameters')
left_size,right_size = int(left_size) ,int(right_size)
return left_size,right_size
```
###Assistant : Helper function to convert left_size/right_size relative to dataset's size
|
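A hedged arithmetic check of the conversion rules above: fractional sizes are scaled by the dataset length and rounded up with `math.ceil`, and a side passed as None is filled with the complement of the other.
```Python
import math

total_size = 10

# float inputs -> absolute counts, rounded up
left = math.ceil(0.8 * total_size)   # 8
right = math.ceil(0.2 * total_size)  # 2
print(left, right)                   # 8 2

# one side None -> complement of the other
right_only = 3
print(total_size - right_only, right_only)  # 7 3
```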
2,718 | def active_count(self):
return self.order_by().exclude(inventory_sources__source='controller').values(name_lower=Lower('name')).distinct().count()
| Return count of active, unique hosts for licensing.
Construction of query involves:
- remove any ordering specified in model's Meta
- Exclude hosts sourced from another Tower
- Restrict the query to only return the name column
- Only consider results that are unique
- Return the count of this query
| 51 | 4 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def active_count(self):
return self.order_by().exclude(inventory_sources__source='controller').values(name_lower=Lower('name')).distinct().count()
```
###Assistant : Return count of active, unique hosts for licensing.
Construction of query involves:
- remove any ordering specified in model's Meta
- Exclude hosts sourced from another Tower
- Restrict the query to only return the name column
- Only consider results that are unique
- Return the count of this query
|
2,719 | async def test_set_avatar_incorrect_mime_type(self) -> None:
handler = self.hs.get_sso_handler()
# any random user works since image check is supposed to fail
user_id = "@sso-user:test"
self.assertFalse(
self.get_success(handler.set_avatar(user_id, "http://my.server/me.png"))
)
| Tests that saving an avatar fails when its mime type is not allowed | 13 | 27 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
async def test_set_avatar_incorrect_mime_type(self) -> None:
handler = self.hs.get_sso_handler()
# any random user works since image check is supposed to fail
user_id = "@sso-user:test"
self.assertFalse(
self.get_success(handler.set_avatar(user_id, "http://my.server/me.png"))
)
```
###Assistant : Tests that saving an avatar fails when its mime type is not allowed
|
2,720 | def layer_uses_training_bool(layer):
if layer._expects_training_arg: # pylint: disable=protected-access
return True
visited = {layer}
to_visit = list_all_layers(layer)
while to_visit:
layer = to_visit.pop()
if layer in visited:
continue
if getattr(layer, "_expects_training_arg", True):
return True
visited.add(layer)
to_visit.extend(list_all_layers(layer))
return False
| Returns whether this layer or any of its children uses the training arg. | 13 | 35 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def layer_uses_training_bool(layer):
if layer._expects_training_arg: # pylint: disable=protected-access
return True
visited = {layer}
to_visit = list_all_layers(layer)
while to_visit:
layer = to_visit.pop()
if layer in visited:
continue
if getattr(layer, "_expects_training_arg", True):
return True
visited.add(layer)
to_visit.extend(list_all_layers(layer))
return False
```
###Assistant : Returns whether this layer or any of its children uses the training arg.
|
2,721 | async def test_edgeql_for_in_computable_09(self):
# This is basically test_edgeql_for_in_computable_01 but with
# a WITH binding in front of the whole shape
await self.assert_query_result(
r |
WITH
U := (
SELECT User {
select_deck := (
FOR letter IN {'I', 'B'}
UNION (
SELECT User.deck {
name,
# just define an ad-hoc link prop | 28 | 23 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
async def test_edgeql_for_in_computable_09(self):
# This is basically test_edgeql_for_in_computable_01 but with
# a WITH binding in front of the whole shape
await self.assert_query_result(
r
```
###Assistant :
WITH
U := (
SELECT User {
select_deck := (
FOR letter IN {'I', 'B'}
UNION (
SELECT User.deck {
name,
# just define an ad-hoc link prop
|
2,722 | def get_actual_details(name, filters):
budget_against = frappe.scrub(filters.get("budget_against"))
cond = ""
if filters.get("budget_against") == "Cost Center":
cc_lft, cc_rgt = frappe.db.get_value("Cost Center", name, ["lft", "rgt"])
cond = .format(
lft=cc_lft, rgt=cc_rgt
)
ac_details = frappe.db.sql(
.format(
tab=filters.budget_against, budget_against=budget_against, cond=cond
),
(filters.from_fiscal_year, filters.to_fiscal_year, name),
as_dict=1,
)
cc_actual_details = {}
for d in ac_details:
cc_actual_details.setdefault(d.account, []).append(d)
return cc_actual_details
|
and lft >= "{lft}"
and rgt <= "{rgt}"
select
gl.account,
gl.debit,
gl.credit,
gl.fiscal_year,
MONTHNAME(gl.posting_date) as month_name,
b.{budget_against} as budget_against
from
`tabGL Entry` gl,
`tabBudget Account` ba,
`tabBudget` b
where
b.name = ba.parent
and b.docstatus = 1
and ba.account=gl.account
and b.{budget_against} = gl.{budget_against}
and gl.fiscal_year between %s and %s
and b.{budget_against} = %s
and exists(
select
name
from
`tab{tab}`
where
name = gl.{budget_against}
{cond}
)
group by
gl.name
order by gl.fiscal_year
| 70 | 52 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def get_actual_details(name, filters):
budget_against = frappe.scrub(filters.get("budget_against"))
cond = ""
if filters.get("budget_against") == "Cost Center":
cc_lft, cc_rgt = frappe.db.get_value("Cost Center", name, ["lft", "rgt"])
cond = .format(
lft=cc_lft, rgt=cc_rgt
)
ac_details = frappe.db.sql(
.format(
tab=filters.budget_against, budget_against=budget_against, cond=cond
),
(filters.from_fiscal_year, filters.to_fiscal_year, name),
as_dict=1,
)
cc_actual_details = {}
for d in ac_details:
cc_actual_details.setdefault(d.account, []).append(d)
return cc_actual_details
```
###Assistant :
and lft >= "{lft}"
and rgt <= "{rgt}"
select
gl.account,
gl.debit,
gl.credit,
gl.fiscal_year,
MONTHNAME(gl.posting_date) as month_name,
b.{budget_against} as budget_against
from
`tabGL Entry` gl,
`tabBudget Account` ba,
`tabBudget` b
where
b.name = ba.parent
and b.docstatus = 1
and ba.account=gl.account
and b.{budget_against} = gl.{budget_against}
and gl.fiscal_year between %s and %s
and b.{budget_against} = %s
and exists(
select
name
from
`tab{tab}`
where
name = gl.{budget_against}
{cond}
)
group by
gl.name
order by gl.fiscal_year
|
2,723 | def print_index(toc):
dash = "-"*(100 - 7)
space = " "*47
print(f"{space}INDEX")
print(f"\n\nName : {dash} PageNo.\n\n\n")
for topic in toc:
eq_dash = "-"*(100 - len(topic[1]))
print(f"{topic[1]} {eq_dash} {topic[2]}")
| Prints out the index in proper format with title name and page number
Args:
toc (nested list): toc[1] - Topic name
toc[2] - Page number
| 25 | 28 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def print_index(toc):
dash = "-"*(100 - 7)
space = " "*47
print(f"{space}INDEX")
print(f"\n\nName : {dash} PageNo.\n\n\n")
for topic in toc:
eq_dash = "-"*(100 - len(topic[1]))
print(f"{topic[1]} {eq_dash} {topic[2]}")
```
###Assistant : Prints out the index in proper format with title name and page number
Args:
toc (nested list): toc[1] - Topic name
toc[2] - Page number
|
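A self-contained run of the helper with a made-up table of contents; each entry follows the `[_, title, page]` layout the docstring describes.
```Python
def print_index(toc):
    dash = "-" * (100 - 7)
    space = " " * 47
    print(f"{space}INDEX")
    print(f"\n\nName : {dash} PageNo.\n\n\n")
    for topic in toc:
        eq_dash = "-" * (100 - len(topic[1]))
        print(f"{topic[1]} {eq_dash} {topic[2]}")

# Hypothetical toc entries, purely for illustration.
print_index([(1, "Introduction", 1), (2, "Getting Started", 5)])
```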
2,724 | def pie(self, X, win=None, env=None, opts=None):
X = np.squeeze(X)
assert X.ndim == 1, "X should be one-dimensional"
assert np.all(np.greater_equal(X, 0)), "X cannot contain negative values"
opts = {} if opts is None else opts
_title2str(opts)
_assert_opts(opts)
data = [
{
"values": X.tolist(),
"labels": opts.get("legend"),
"type": "pie",
}
]
return self._send(
{
"data": data,
"win": win,
"eid": env,
"layout": _opts2layout(opts),
"opts": opts,
}
)
|
This function draws a pie chart based on the `N` tensor `X`.
The following `opts` are supported:
- `opts.legend`: `list` containing legend names
| 23 | 63 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def pie(self, X, win=None, env=None, opts=None):
X = np.squeeze(X)
assert X.ndim == 1, "X should be one-dimensional"
assert np.all(np.greater_equal(X, 0)), "X cannot contain negative values"
opts = {} if opts is None else opts
_title2str(opts)
_assert_opts(opts)
data = [
{
"values": X.tolist(),
"labels": opts.get("legend"),
"type": "pie",
}
]
return self._send(
{
"data": data,
"win": win,
"eid": env,
"layout": _opts2layout(opts),
"opts": opts,
}
)
```
###Assistant :
This function draws a pie chart based on the `N` tensor `X`.
The following `opts` are supported:
- `opts.legend`: `list` containing legend names
|
2,725 | def dry_run(self) -> None:
pod = self.build_pod_request_obj()
print(yaml.dump(prune_dict(pod.to_dict(), mode='strict')))
|
Prints out the pod definition that would be created by this operator.
Does not include labels specific to the task instance (since there isn't
one in a dry_run) and excludes all empty elements.
| 33 | 9 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def dry_run(self) -> None:
pod = self.build_pod_request_obj()
print(yaml.dump(prune_dict(pod.to_dict(), mode='strict')))
```
###Assistant :
Prints out the pod definition that would be created by this operator.
Does not include labels specific to the task instance (since there isn't
one in a dry_run) and excludes all empty elements.
|
2,726 | def ExponentialPower(name, mu, alpha, beta):
r
return rv(name, ExponentialPowerDistribution, (mu, alpha, beta))
#-------------------------------------------------------------------------------
# F distribution ---------------------------------------------------------------
|
Create a Continuous Random Variable with Exponential Power distribution.
This distribution is known also as Generalized Normal
distribution version 1.
Explanation
===========
The density of the Exponential Power distribution is given by
.. math::
f(x) := \frac{\beta}{2\alpha\Gamma(\frac{1}{\beta})}
e^{{-(\frac{|x - \mu|}{\alpha})^{\beta}}}
with :math:`x \in [ - \infty, \infty ]`.
Parameters
==========
mu : Real number
A location.
alpha : Real number,`\alpha > 0`
A scale.
beta : Real number, `\beta > 0`
A shape.
Returns
=======
RandomSymbol
Examples
========
>>> from sympy.stats import ExponentialPower, density, cdf
>>> from sympy import Symbol, pprint
>>> z = Symbol("z")
>>> mu = Symbol("mu")
>>> alpha = Symbol("alpha", positive=True)
>>> beta = Symbol("beta", positive=True)
>>> X = ExponentialPower("x", mu, alpha, beta)
>>> pprint(density(X)(z), use_unicode=False)
beta
/|mu - z|\
-|--------|
\ alpha /
beta*e
---------------------
/ 1 \
2*alpha*Gamma|----|
\beta/
>>> cdf(X)(z)
1/2 + lowergamma(1/beta, (Abs(mu - z)/alpha)**beta)*sign(-mu + z)/(2*gamma(1/beta))
References
==========
.. [1] https://reference.wolfram.com/language/ref/ExponentialPowerDistribution.html
.. [2] https://en.wikipedia.org/wiki/Generalized_normal_distribution#Version_1
| 152 | 17 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def ExponentialPower(name, mu, alpha, beta):
r
return rv(name, ExponentialPowerDistribution, (mu, alpha, beta))
#-------------------------------------------------------------------------------
# F distribution ---------------------------------------------------------------
```
###Assistant :
Create a Continuous Random Variable with Exponential Power distribution.
This distribution is known also as Generalized Normal
distribution version 1.
Explanation
===========
The density of the Exponential Power distribution is given by
.. math::
f(x) := \frac{\beta}{2\alpha\Gamma(\frac{1}{\beta})}
e^{{-(\frac{|x - \mu|}{\alpha})^{\beta}}}
with :math:`x \in [ - \infty, \infty ]`.
Parameters
==========
mu : Real number
A location.
alpha : Real number,`\alpha > 0`
A scale.
beta : Real number, `\beta > 0`
A shape.
Returns
=======
RandomSymbol
Examples
========
>>> from sympy.stats import ExponentialPower, density, cdf
>>> from sympy import Symbol, pprint
>>> z = Symbol("z")
>>> mu = Symbol("mu")
>>> alpha = Symbol("alpha", positive=True)
>>> beta = Symbol("beta", positive=True)
>>> X = ExponentialPower("x", mu, alpha, beta)
>>> pprint(density(X)(z), use_unicode=False)
beta
/|mu - z|\
-|--------|
\ alpha /
beta*e
---------------------
/ 1 \
2*alpha*Gamma|----|
\beta/
>>> cdf(X)(z)
1/2 + lowergamma(1/beta, (Abs(mu - z)/alpha)**beta)*sign(-mu + z)/(2*gamma(1/beta))
References
==========
.. [1] https://reference.wolfram.com/language/ref/ExponentialPowerDistribution.html
.. [2] https://en.wikipedia.org/wiki/Generalized_normal_distribution#Version_1
|
2,727 | def get_palette(num_cls):
n = num_cls
palette = [0] * (n * 3)
for j in range(0, n):
lab = j
palette[j * 3 + 0] = 0
palette[j * 3 + 1] = 0
palette[j * 3 + 2] = 0
i = 0
while lab:
palette[j * 3 + 0] |= (((lab >> 0) & 1) << (7 - i))
palette[j * 3 + 1] |= (((lab >> 1) & 1) << (7 - i))
palette[j * 3 + 2] |= (((lab >> 2) & 1) << (7 - i))
i += 1
lab >>= 3
return palette
| Returns the color map for visualizing the segmentation mask.
Args:
num_cls: Number of classes
Returns:
The color map
| 18 | 99 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def get_palette(num_cls):
n = num_cls
palette = [0] * (n * 3)
for j in range(0, n):
lab = j
palette[j * 3 + 0] = 0
palette[j * 3 + 1] = 0
palette[j * 3 + 2] = 0
i = 0
while lab:
palette[j * 3 + 0] |= (((lab >> 0) & 1) << (7 - i))
palette[j * 3 + 1] |= (((lab >> 1) & 1) << (7 - i))
palette[j * 3 + 2] |= (((lab >> 2) & 1) << (7 - i))
i += 1
lab >>= 3
return palette
```
###Assistant : Returns the color map for visualizing the segmentation mask.
Args:
num_cls: Number of classes
Returns:
The color map
|
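A self-contained check of the bit-spreading loop above for a single class index: bit k of the label lands in bit (7 - i) of colour channel k on iteration i.
```Python
# Class index 5 = 0b101: bit 0 -> R, bit 1 -> G, bit 2 -> B, high bit first.
lab, rgb, i = 5, [0, 0, 0], 0
while lab:
    rgb[0] |= ((lab >> 0) & 1) << (7 - i)
    rgb[1] |= ((lab >> 1) & 1) << (7 - i)
    rgb[2] |= ((lab >> 2) & 1) << (7 - i)
    i += 1
    lab >>= 3
print(rgb)  # [128, 0, 128] -> class 5 gets a magenta-ish colour
```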
2,728 | def _Net_forward_backward_all(self, blobs=None, diffs=None, **kwargs):
# Batch blobs and diffs.
all_outs = {out: [] for out in set(self.outputs + (blobs or []))}
all_diffs = {diff: [] for diff in set(self.inputs + (diffs or []))}
forward_batches = self._batch({in_: kwargs[in_]
for in_ in self.inputs if in_ in kwargs})
backward_batches = self._batch({out: kwargs[out]
for out in self.outputs if out in kwargs})
# Collect outputs from batches (and heed lack of forward/backward batches).
for fb, bb in izip_longest(forward_batches, backward_batches, fillvalue={}):
batch_blobs = self.forward(blobs=blobs, **fb)
batch_diffs = self.backward(diffs=diffs, **bb)
for out, out_blobs in six.iteritems(batch_blobs):
all_outs[out].extend(out_blobs.copy())
for diff, out_diffs in six.iteritems(batch_diffs):
all_diffs[diff].extend(out_diffs.copy())
# Package in ndarray.
for out, diff in zip(all_outs, all_diffs):
all_outs[out] = np.asarray(all_outs[out])
all_diffs[diff] = np.asarray(all_diffs[diff])
# Discard padding at the end and package in ndarray.
pad = len(six.next(six.itervalues(all_outs))) - len(six.next(six.itervalues(kwargs)))
if pad:
for out, diff in zip(all_outs, all_diffs):
all_outs[out] = all_outs[out][:-pad]
all_diffs[diff] = all_diffs[diff][:-pad]
return all_outs, all_diffs
|
Run net forward + backward in batches.
Parameters
----------
blobs: list of blobs to extract as in forward()
diffs: list of diffs to extract as in backward()
kwargs: Keys are input (for forward) and output (for backward) blob names
and values are ndarrays. Refer to forward() and backward().
Prefilled variants are called for lack of input or output blobs.
Returns
-------
all_blobs: {blob name: blob ndarray} dict.
all_diffs: {blob name: diff ndarray} dict.
| 73 | 144 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def _Net_forward_backward_all(self, blobs=None, diffs=None, **kwargs):
# Batch blobs and diffs.
all_outs = {out: [] for out in set(self.outputs + (blobs or []))}
all_diffs = {diff: [] for diff in set(self.inputs + (diffs or []))}
forward_batches = self._batch({in_: kwargs[in_]
for in_ in self.inputs if in_ in kwargs})
backward_batches = self._batch({out: kwargs[out]
for out in self.outputs if out in kwargs})
# Collect outputs from batches (and heed lack of forward/backward batches).
for fb, bb in izip_longest(forward_batches, backward_batches, fillvalue={}):
batch_blobs = self.forward(blobs=blobs, **fb)
batch_diffs = self.backward(diffs=diffs, **bb)
for out, out_blobs in six.iteritems(batch_blobs):
all_outs[out].extend(out_blobs.copy())
for diff, out_diffs in six.iteritems(batch_diffs):
all_diffs[diff].extend(out_diffs.copy())
# Package in ndarray.
for out, diff in zip(all_outs, all_diffs):
all_outs[out] = np.asarray(all_outs[out])
all_diffs[diff] = np.asarray(all_diffs[diff])
# Discard padding at the end and package in ndarray.
pad = len(six.next(six.itervalues(all_outs))) - len(six.next(six.itervalues(kwargs)))
if pad:
for out, diff in zip(all_outs, all_diffs):
all_outs[out] = all_outs[out][:-pad]
all_diffs[diff] = all_diffs[diff][:-pad]
return all_outs, all_diffs
```
###Assistant :
Run net forward + backward in batches.
Parameters
----------
blobs: list of blobs to extract as in forward()
diffs: list of diffs to extract as in backward()
kwargs: Keys are input (for forward) and output (for backward) blob names
and values are ndarrays. Refer to forward() and backward().
Prefilled variants are called for lack of input or output blobs.
Returns
-------
all_blobs: {blob name: blob ndarray} dict.
all_diffs: {blob name: diff ndarray} dict.
|
2,729 | def test_user_does_not_exist(self) -> None:
url = "/_synapse/admin/v2/users/@unknown_person:test/devices"
channel = self.make_request(
"GET",
url,
access_token=self.admin_user_tok,
)
self.assertEqual(404, channel.code, msg=channel.json_body)
self.assertEqual(Codes.NOT_FOUND, channel.json_body["errcode"])
|
Tests that a lookup for a user that does not exist returns a 404
| 14 | 19 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def test_user_does_not_exist(self) -> None:
url = "/_synapse/admin/v2/users/@unknown_person:test/devices"
channel = self.make_request(
"GET",
url,
access_token=self.admin_user_tok,
)
self.assertEqual(404, channel.code, msg=channel.json_body)
self.assertEqual(Codes.NOT_FOUND, channel.json_body["errcode"])
```
###Assistant :
Tests that a lookup for a user that does not exist returns a 404
|
2,730 | def send_ping(self) -> None:
now = self.clock.time_msec()
if self.time_we_closed:
if now - self.time_we_closed > PING_TIMEOUT_MS:
logger.info(
"[%s] Failed to close connection gracefully, aborting", self.id()
)
assert self.transport is not None
self.transport.abortConnection()
else:
if now - self.last_sent_command >= PING_TIME:
self.send_command(PingCommand(str(now)))
if (
self.received_ping
and now - self.last_received_command > PING_TIMEOUT_MS
):
logger.info(
"[%s] Connection hasn't received command in %r ms. Closing.",
self.id(),
now - self.last_received_command,
)
self.send_error("ping timeout")
| Periodically sends a ping and checks if we should close the connection
due to the other side timing out.
| 19 | 66 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def send_ping(self) -> None:
now = self.clock.time_msec()
if self.time_we_closed:
if now - self.time_we_closed > PING_TIMEOUT_MS:
logger.info(
"[%s] Failed to close connection gracefully, aborting", self.id()
)
assert self.transport is not None
self.transport.abortConnection()
else:
if now - self.last_sent_command >= PING_TIME:
self.send_command(PingCommand(str(now)))
if (
self.received_ping
and now - self.last_received_command > PING_TIMEOUT_MS
):
logger.info(
"[%s] Connection hasn't received command in %r ms. Closing.",
self.id(),
now - self.last_received_command,
)
self.send_error("ping timeout")
```
###Assistant : Periodically sends a ping and checks if we should close the connection
due to the other side timing out.
|
2,731 | def workflow_logging_context(job_id) -> None:
node = ray.worker._global_node
original_out_file, original_err_file = node.get_log_file_handles(
get_worker_log_file_name("WORKER")
)
out_file, err_file = node.get_log_file_handles(
get_worker_log_file_name("WORKER", job_id)
)
try:
configure_log_file(out_file, err_file)
yield
finally:
configure_log_file(original_out_file, original_err_file)
| Initialize the workflow logging context.
Workflow executions are running as remote functions from
WorkflowManagementActor. Without logging redirection, workflow
inner execution logs will be pushed to the driver that initially
created WorkflowManagementActor rather than the driver that
actually submits the current workflow execution.
We use this context manager to re-configure the log files to send
the logs to the correct driver, and to restore the log files once
the execution is done.
Args:
job_id: The ID of the job that submits the workflow execution.
| 83 | 27 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def workflow_logging_context(job_id) -> None:
node = ray.worker._global_node
original_out_file, original_err_file = node.get_log_file_handles(
get_worker_log_file_name("WORKER")
)
out_file, err_file = node.get_log_file_handles(
get_worker_log_file_name("WORKER", job_id)
)
try:
configure_log_file(out_file, err_file)
yield
finally:
configure_log_file(original_out_file, original_err_file)
```
###Assistant : Initialize the workflow logging context.
Workflow executions are running as remote functions from
WorkflowManagementActor. Without logging redirection, workflow
inner execution logs will be pushed to the driver that initially
created WorkflowManagementActor rather than the driver that
actually submits the current workflow execution.
We use this context manager to re-configure the log files to send
the logs to the correct driver, and to restore the log files once
the execution is done.
Args:
job_id: The ID of the job that submits the workflow execution.
|
2,732 | def console_entry_point():
if "--profile" in sys.argv:
with cProfile.Profile() as profile:
entry_point()
stats = pstats.Stats(profile)
stats.sort_stats(pstats.SortKey.TIME)
# Use snakeviz to visualize the profile
stats.dump_stats("spotdl.profile")
else:
entry_point()
|
Wrapper around `entry_point` so we can profile the code
| 9 | 25 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def console_entry_point():
if "--profile" in sys.argv:
with cProfile.Profile() as profile:
entry_point()
stats = pstats.Stats(profile)
stats.sort_stats(pstats.SortKey.TIME)
# Use snakeviz to visualize the profile
stats.dump_stats("spotdl.profile")
else:
entry_point()
```
###Assistant :
Wrapper around `entry_point` so we can profile the code
|
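The same cProfile/pstats pattern in isolation, with a throwaway `work()` function standing in for the real `entry_point()`:
```Python
import cProfile
import pstats

def work():
    # Placeholder workload instead of the real entry_point().
    sum(i * i for i in range(100_000))

with cProfile.Profile() as profile:
    work()

stats = pstats.Stats(profile)
stats.sort_stats(pstats.SortKey.TIME)
stats.dump_stats("example.profile")  # can be visualized with snakeviz
```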
2,733 | def callbacks(self, callbacks_class) -> "TrainerConfig":
self.callbacks_class = callbacks_class
return self
| Sets the callbacks configuration.
Args:
callbacks_class: Callbacks class, whose methods will be run during
various phases of training and environment sample collection.
See the `DefaultCallbacks` class and
`examples/custom_metrics_and_callbacks.py` for more usage information.
Returns:
This updated TrainerConfig object.
| 37 | 10 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def callbacks(self, callbacks_class) -> "TrainerConfig":
self.callbacks_class = callbacks_class
return self
```
###Assistant : Sets the callbacks configuration.
Args:
callbacks_class: Callbacks class, whose methods will be run during
various phases of training and environment sample collection.
See the `DefaultCallbacks` class and
`examples/custom_metrics_and_callbacks.py` for more usage information.
Returns:
This updated TrainerConfig object.
|
2,734 | def test_get_dynamic_sampling_after_migrating_to_new_plan_default_biases(self):
self.project.update_option("sentry:dynamic_sampling", self.dynamic_sampling_data)
with Feature(
{
self.universal_ds_flag: True,
self.old_ds_flag: True,
self.new_ds_flag: True,
}
):
response = self.get_success_response(
self.organization.slug, self.project.slug, method="get"
)
assert response.data["dynamicSampling"] is None
assert response.data["dynamicSamplingBiases"] == DEFAULT_BIASES
|
Tests the case when an organization was in EA/LA and had previously set up Dynamic Sampling rules,
and now they have migrated to an AM2 plan, but haven't manipulated the bias toggles yet so they get the
default biases. This also ensures that they no longer receive the deprecated dynamic sampling rules.
| 51 | 30 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def test_get_dynamic_sampling_after_migrating_to_new_plan_default_biases(self):
self.project.update_option("sentry:dynamic_sampling", self.dynamic_sampling_data)
with Feature(
{
self.universal_ds_flag: True,
self.old_ds_flag: True,
self.new_ds_flag: True,
}
):
response = self.get_success_response(
self.organization.slug, self.project.slug, method="get"
)
assert response.data["dynamicSampling"] is None
assert response.data["dynamicSamplingBiases"] == DEFAULT_BIASES
```
###Assistant :
Tests the case when an organization was in EA/LA and had previously set up Dynamic Sampling rules,
and now they have migrated to an AM2 plan, but haven't manipulated the bias toggles yet so they get the
default biases. This also ensures that they no longer receive the deprecated dynamic sampling rules.
|
2,735 | def strtobool(val):
# type: (str) -> int
val = val.lower()
if val in ("y", "yes", "t", "true", "on", "1"):
return 1
elif val in ("n", "no", "f", "false", "off", "0"):
return 0
else:
raise ValueError(f"invalid truth value {val!r}")
| Convert a string representation of truth to true (1) or false (0).
True values are 'y', 'yes', 't', 'true', 'on', and '1'; false values
are 'n', 'no', 'f', 'false', 'off', and '0'. Raises ValueError if
'val' is anything else.
| 39 | 38 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def strtobool(val):
# type: (str) -> int
val = val.lower()
if val in ("y", "yes", "t", "true", "on", "1"):
return 1
elif val in ("n", "no", "f", "false", "off", "0"):
return 0
else:
raise ValueError(f"invalid truth value {val!r}")
```
###Assistant : Convert a string representation of truth to true (1) or false (0).
True values are 'y', 'yes', 't', 'true', 'on', and '1'; false values
are 'n', 'no', 'f', 'false', 'off', and '0'. Raises ValueError if
'val' is anything else.
|
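Since the function body is fully shown above, a quick usage check is easy to run; the sample inputs are chosen only for illustration.
```Python
def strtobool(val):
    # Same logic as the entry above.
    val = val.lower()
    if val in ("y", "yes", "t", "true", "on", "1"):
        return 1
    elif val in ("n", "no", "f", "false", "off", "0"):
        return 0
    raise ValueError(f"invalid truth value {val!r}")

assert strtobool("YES") == 1        # case-insensitive
assert strtobool("off") == 0
try:
    strtobool("maybe")              # anything else raises
except ValueError as exc:
    print(exc)                      # invalid truth value 'maybe'
```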
2,736 | def convert_xunits(self, x):
ax = getattr(self, 'axes', None)
if ax is None or ax.xaxis is None:
return x
return ax.xaxis.convert_units(x)
|
Convert *x* using the unit type of the xaxis.
If the artist is not contained in an Axes or if the xaxis does not
have units, *x* itself is returned.
| 30 | 20 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def convert_xunits(self, x):
ax = getattr(self, 'axes', None)
if ax is None or ax.xaxis is None:
return x
return ax.xaxis.convert_units(x)
```
###Assistant :
Convert *x* using the unit type of the xaxis.
If the artist is not contained in an Axes or if the xaxis does not
have units, *x* itself is returned.
|
2,737 | def value_as_datetime(self) -> tp.Tuple[datetime, datetime] | None:
if self.value is None:
return None
v1, v2 = self.value
if isinstance(v1, numbers.Number):
d1 = datetime.utcfromtimestamp(v1 / 1000)
else:
d1 = v1
if isinstance(v2, numbers.Number):
d2 = datetime.utcfromtimestamp(v2 / 1000)
else:
d2 = v2
return d1, d2
value = NonNullable(Tuple(Datetime, Datetime), help=)
value_throttled = Readonly(NonNullable(Tuple(Datetime, Datetime)), help=)
start = NonNullable(Datetime, help=)
end = NonNullable(Datetime, help=)
step = Int(default=3_600_000, help=)
format = Override(default="%d %b %Y %H:%M:%S")
#-----------------------------------------------------------------------------
# Private API
#-----------------------------------------------------------------------------
#-----------------------------------------------------------------------------
# Code
#-----------------------------------------------------------------------------
| Convenience property to retrieve the value tuple as a tuple of
datetime objects.
Initial or selected range.
Initial or selected value, throttled to report only on mouseup.
The minimum allowable value.
The maximum allowable value.
The step between consecutive values, in units of milliseconds.
Default is one hour.
| 48 | 81 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def value_as_datetime(self) -> tp.Tuple[datetime, datetime] | None:
if self.value is None:
return None
v1, v2 = self.value
if isinstance(v1, numbers.Number):
d1 = datetime.utcfromtimestamp(v1 / 1000)
else:
d1 = v1
if isinstance(v2, numbers.Number):
d2 = datetime.utcfromtimestamp(v2 / 1000)
else:
d2 = v2
return d1, d2
value = NonNullable(Tuple(Datetime, Datetime), help=)
value_throttled = Readonly(NonNullable(Tuple(Datetime, Datetime)), help=)
start = NonNullable(Datetime, help=)
end = NonNullable(Datetime, help=)
step = Int(default=3_600_000, help=)
format = Override(default="%d %b %Y %H:%M:%S")
#-----------------------------------------------------------------------------
# Private API
#-----------------------------------------------------------------------------
#-----------------------------------------------------------------------------
# Code
#-----------------------------------------------------------------------------
```
###Assistant : Convenience property to retrieve the value tuple as a tuple of
datetime objects.
Initial or selected range.
Initial or selected value, throttled to report only on mouseup.
The minimum allowable value.
The maximum allowable value.
The step between consecutive values, in units of milliseconds.
Default is one hour.
|
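The conversion in `value_as_datetime` is simply "epoch milliseconds to `datetime`". A standalone sketch of that step, with an arbitrary example timestamp (the Bokeh widget itself is not needed to follow the arithmetic):
```Python
from datetime import datetime
import numbers

def to_datetime(value):
    # Bokeh sliders report datetimes as milliseconds since the epoch.
    if isinstance(value, numbers.Number):
        return datetime.utcfromtimestamp(value / 1000)
    return value

start_ms = 1_600_000_000_000                  # example slider value in ms
print(to_datetime(start_ms))                  # 2020-09-13 12:26:40
print(to_datetime(datetime(2022, 1, 1)))      # already a datetime, returned as-is
```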
2,738 | def test_first_event_with_minified_stack_trace_received(self, record_analytics):
now = timezone.now()
project = self.create_project(first_event=now)
project_created.send(project=project, user=self.user, sender=type(project))
url = "http://localhost:3000"
data = load_data("javascript")
data["tags"] = [("url", url)]
data["exception"] = {
"values": [
{
**data["exception"]["values"][0],
"raw_stacktrace": {
"frames": [
{
"function": "o",
"filename": "/_static/dist/sentry/chunks/vendors-node_modules_emotion_is-prop-valid_node_modules_emotion_memoize_dist_memoize_browser_-4fe4bd.255071ceadabfb67483c.js",
"abs_path": "https://s1.sentry-cdn.com/_static/dist/sentry/chunks/vendors-node_modules_emotion_is-prop-valid_node_modules_emotion_memoize_dist_memoize_browser_-4fe4bd.255071ceadabfb67483c.js",
"lineno": 2,
"colno": 37098,
"pre_context": [
"/*! For license information please see vendors-node_modules_emotion_is-prop-valid_node_modules_emotion_memoize_dist_memoize_browser_-4fe4bd. {snip}"
],
"context_line": "{snip} .apply(this,arguments);const i=o.map((e=>c(e,t)));return e.apply(this,i)}catch(e){throw l(),(0,i.$e)((n=>{n.addEventProcessor((e=>(t.mechani {snip}",
"post_context": [
"//# sourceMappingURL=../sourcemaps/vendors-node_modules_emotion_is-prop-valid_node_modules_emotion_memoize_dist_memoize_browser_-4fe4bd.fe32 {snip}"
],
"in_app": False,
},
],
},
}
]
}
self.store_event(
project_id=project.id,
data=data,
)
record_analytics.assert_called_with(
"first_event_with_minified_stack_trace_for_project.sent",
user_id=self.user.id,
organization_id=project.organization_id,
project_id=project.id,
platform=data["platform"],
url=url,
)
|
Test that an analytics event is recorded when
a first event with minified stack trace is received
| 17 | 88 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def test_first_event_with_minified_stack_trace_received(self, record_analytics):
now = timezone.now()
project = self.create_project(first_event=now)
project_created.send(project=project, user=self.user, sender=type(project))
url = "http://localhost:3000"
data = load_data("javascript")
data["tags"] = [("url", url)]
data["exception"] = {
"values": [
{
**data["exception"]["values"][0],
"raw_stacktrace": {
"frames": [
{
"function": "o",
"filename": "/_static/dist/sentry/chunks/vendors-node_modules_emotion_is-prop-valid_node_modules_emotion_memoize_dist_memoize_browser_-4fe4bd.255071ceadabfb67483c.js",
"abs_path": "https://s1.sentry-cdn.com/_static/dist/sentry/chunks/vendors-node_modules_emotion_is-prop-valid_node_modules_emotion_memoize_dist_memoize_browser_-4fe4bd.255071ceadabfb67483c.js",
"lineno": 2,
"colno": 37098,
"pre_context": [
"/*! For license information please see vendors-node_modules_emotion_is-prop-valid_node_modules_emotion_memoize_dist_memoize_browser_-4fe4bd. {snip}"
],
"context_line": "{snip} .apply(this,arguments);const i=o.map((e=>c(e,t)));return e.apply(this,i)}catch(e){throw l(),(0,i.$e)((n=>{n.addEventProcessor((e=>(t.mechani {snip}",
"post_context": [
"//# sourceMappingURL=../sourcemaps/vendors-node_modules_emotion_is-prop-valid_node_modules_emotion_memoize_dist_memoize_browser_-4fe4bd.fe32 {snip}"
],
"in_app": False,
},
],
},
}
]
}
self.store_event(
project_id=project.id,
data=data,
)
record_analytics.assert_called_with(
"first_event_with_minified_stack_trace_for_project.sent",
user_id=self.user.id,
organization_id=project.organization_id,
project_id=project.id,
platform=data["platform"],
url=url,
)
```
###Assistant :
Test that an analytics event is recorded when
a first event with minified stack trace is received
|
2,739 | def arange(start, /, stop=None, step=1, *, dtype=None, meta=None, **kwargs):
raise NotImplementedError
| Create an ascending or descending array
Returns evenly spaced values within the half-open interval
``[start, stop)`` as a one-dimensional array.
| 20 | 11 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def arange(start, /, stop=None, step=1, *, dtype=None, meta=None, **kwargs):
raise NotImplementedError
```
###Assistant : Create an ascending or descending array
Returns evenly spaced values within the half-open interval
``[start, stop)`` as a one-dimensional array.
|
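The stub above only documents the intended semantics. The half-open `[start, stop)` behaviour it describes matches `numpy.arange`, which can serve as a reference point:
```Python
import numpy as np

# stop is excluded, so 10 never appears
print(np.arange(0, 10, 2))        # [0 2 4 6 8]
# a descending range uses a negative step
print(np.arange(5, 0, -1))        # [5 4 3 2 1]
```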
2,740 | def best_checkpoints(self):
checkpoints = sorted(self._top_persisted_checkpoints, key=lambda c: c.priority)
return [wrapped.tracked_checkpoint for wrapped in checkpoints]
| Returns best PERSISTENT checkpoints, sorted by score. | 7 | 14 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def best_checkpoints(self):
checkpoints = sorted(self._top_persisted_checkpoints, key=lambda c: c.priority)
return [wrapped.tracked_checkpoint for wrapped in checkpoints]
```
###Assistant : Returns best PERSISTENT checkpoints, sorted by score.
|
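A small self-contained sketch of the sort-by-priority idea; the `_Wrapped` dataclass and the sample scores are assumptions standing in for Ray Tune's internal wrapper objects.
```Python
from dataclasses import dataclass

@dataclass
class _Wrapped:
    priority: float          # e.g. the metric used to rank checkpoints
    tracked_checkpoint: str  # stand-in for the real checkpoint object

_top_persisted_checkpoints = [
    _Wrapped(0.71, "ckpt_a"),
    _Wrapped(0.93, "ckpt_b"),
    _Wrapped(0.55, "ckpt_c"),
]

best = [w.tracked_checkpoint
        for w in sorted(_top_persisted_checkpoints, key=lambda c: c.priority)]
print(best)  # ['ckpt_c', 'ckpt_a', 'ckpt_b'] -- ascending priority order
```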
2,741 | def get_config_directory() -> Path:
if os.getenv('NNI_CONFIG_DIR') is not None:
config_dir = Path(os.getenv('NNI_CONFIG_DIR')) # type: ignore
elif sys.prefix != sys.base_prefix or Path(sys.prefix, 'conda-meta').is_dir():
config_dir = Path(sys.prefix, 'nni')
elif sys.platform == 'win32':
config_dir = Path(os.environ['APPDATA'], 'nni')
else:
config_dir = Path.home() / '.config/nni'
config_dir.mkdir(parents=True, exist_ok=True)
return config_dir
|
Get NNI config directory.
Create it if it does not exist.
| 9 | 44 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def get_config_directory() -> Path:
if os.getenv('NNI_CONFIG_DIR') is not None:
config_dir = Path(os.getenv('NNI_CONFIG_DIR')) # type: ignore
elif sys.prefix != sys.base_prefix or Path(sys.prefix, 'conda-meta').is_dir():
config_dir = Path(sys.prefix, 'nni')
elif sys.platform == 'win32':
config_dir = Path(os.environ['APPDATA'], 'nni')
else:
config_dir = Path.home() / '.config/nni'
config_dir.mkdir(parents=True, exist_ok=True)
return config_dir
```
###Assistant :
Get NNI config directory.
Create it if it does not exist.
|
2,742 | def test_background_update_min_batch_set_in_config(self):
# a very long-running individual update
duration_ms = 50
self.get_success(
self.store.db_pool.simple_insert(
"background_updates",
values={"update_name": "test_update", "progress_json": '{"my_key": 1}'},
)
)
# Run the update with the long-running update item |
Test that the minimum batch size set in the config is used
| 12 | 30 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def test_background_update_min_batch_set_in_config(self):
# a very long-running individual update
duration_ms = 50
self.get_success(
self.store.db_pool.simple_insert(
"background_updates",
values={"update_name": "test_update", "progress_json": '{"my_key": 1}'},
)
)
# Run the update with the long-running update item
```
###Assistant :
Test that the minimum batch size set in the config is used
|
2,743 | def swap_memory():
mem = cext.virtual_mem()
total_phys = mem[0]
free_phys = mem[1]
total_system = mem[2]
free_system = mem[3]
# Despite the name PageFile refers to total system memory here
# thus physical memory values need to be subtracted to get swap values
total = total_system - total_phys
free = min(total, free_system - free_phys)
used = total - free
percent = usage_percent(used, total, round_=1)
return _common.sswap(total, used, free, percent, 0, 0)
# =====================================================================
# --- disk
# =====================================================================
disk_io_counters = cext.disk_io_counters
| Swap system memory as a (total, used, free, sin, sout) tuple. | 11 | 79 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def swap_memory():
mem = cext.virtual_mem()
total_phys = mem[0]
free_phys = mem[1]
total_system = mem[2]
free_system = mem[3]
# Despite the name PageFile refers to total system memory here
# thus physical memory values need to be subtracted to get swap values
total = total_system - total_phys
free = min(total, free_system - free_phys)
used = total - free
percent = usage_percent(used, total, round_=1)
return _common.sswap(total, used, free, percent, 0, 0)
# =====================================================================
# --- disk
# =====================================================================
disk_io_counters = cext.disk_io_counters
```
###Assistant : Swap system memory as a (total, used, free, sin, sout) tuple.
|
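The subtraction in `swap_memory` is easy to check with made-up numbers; the byte values below are purely illustrative.
```Python
def swap_from_counters(total_phys, free_phys, total_system, free_system):
    # The "PageFile" counters include physical RAM, so subtract it out.
    total = total_system - total_phys
    free = min(total, free_system - free_phys)
    used = total - free
    percent = round(used / total * 100, 1) if total else 0.0
    return total, used, free, percent

# 16 GiB RAM plus a 4 GiB page file reported as a 20 GiB "system" total
print(swap_from_counters(
    total_phys=16 * 2**30, free_phys=6 * 2**30,
    total_system=20 * 2**30, free_system=9 * 2**30,
))  # (4294967296, 1073741824, 3221225472, 25.0)
```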
2,744 | def get_supplier_invoice_details():
inv_details = {}
for d in frappe.db.sql(
,
as_dict=1,
):
inv_details[d.name] = d.bill_no
return inv_details
| select name, bill_no from `tabPurchase Invoice`
where docstatus = 1 and bill_no is not null and bill_no != '' | 19 | 17 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def get_supplier_invoice_details():
inv_details = {}
for d in frappe.db.sql(
,
as_dict=1,
):
inv_details[d.name] = d.bill_no
return inv_details
```
###Assistant : select name, bill_no from `tabPurchase Invoice`
where docstatus = 1 and bill_no is not null and bill_no != ''
|
2,745 | def dce_rpc_endianess(pkt):
if pkt.endianness == 0: # big endian
return ">"
elif pkt.endianness == 1: # little endian
return "<"
else:
return "!"
| Determine the right endianness sign for a given DCE/RPC packet | 10 | 23 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def dce_rpc_endianess(pkt):
if pkt.endianness == 0: # big endian
return ">"
elif pkt.endianness == 1: # little endian
return "<"
else:
return "!"
```
###Assistant : Determine the right endianness sign for a given DCE/RPC packet
|
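The returned character is a `struct` byte-order prefix. A hedged sketch of how such a flag could drive packing of a 32-bit field, re-expressed to take the raw endianness value instead of a Scapy packet object:
```Python
import struct

def dce_rpc_endianess(endianness):
    # 0 -> big endian, 1 -> little endian, anything else -> network order
    if endianness == 0:
        return ">"
    elif endianness == 1:
        return "<"
    return "!"

value = 0x01020304
print(struct.pack(dce_rpc_endianess(0) + "I", value).hex())  # 01020304
print(struct.pack(dce_rpc_endianess(1) + "I", value).hex())  # 04030201
```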
2,746 | def deserialize_object(model, fields, pk=None):
content_type = ContentType.objects.get_for_model(model)
if 'custom_fields' in fields:
fields['custom_field_data'] = fields.pop('custom_fields')
data = {
'model': '.'.join(content_type.natural_key()),
'pk': pk,
'fields': fields,
}
instance = list(serializers.deserialize('python', [data]))[0]
return instance
|
Instantiate an object from the given model and field data. Functions as
the complement to serialize_object().
| 16 | 30 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def deserialize_object(model, fields, pk=None):
content_type = ContentType.objects.get_for_model(model)
if 'custom_fields' in fields:
fields['custom_field_data'] = fields.pop('custom_fields')
data = {
'model': '.'.join(content_type.natural_key()),
'pk': pk,
'fields': fields,
}
instance = list(serializers.deserialize('python', [data]))[0]
return instance
```
###Assistant :
Instantiate an object from the given model and field data. Functions as
the complement to serialize_object().
|
2,747 | async def test_component_not_installed_if_requirement_fails(hass):
hass.config.skip_pip = False
mock_integration(hass, MockModule("comp", requirements=["package==0.0.1"]))
with patch("homeassistant.util.package.install_package", return_value=False):
assert not await setup.async_setup_component(hass, "comp", {})
assert "comp" not in hass.config.components
| Component setup should fail if requirement can't install. | 8 | 23 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
async def test_component_not_installed_if_requirement_fails(hass):
hass.config.skip_pip = False
mock_integration(hass, MockModule("comp", requirements=["package==0.0.1"]))
with patch("homeassistant.util.package.install_package", return_value=False):
assert not await setup.async_setup_component(hass, "comp", {})
assert "comp" not in hass.config.components
```
###Assistant : Component setup should fail if requirement can't install.
|
2,748 | def use_numba_cb(key) -> None:
from pandas.core.util import numba_
numba_.set_use_numba(cf.get_option(key))
with cf.config_prefix("compute"):
cf.register_option(
"use_bottleneck",
True,
use_bottleneck_doc,
validator=is_bool,
cb=use_bottleneck_cb,
)
cf.register_option(
"use_numexpr", True, use_numexpr_doc, validator=is_bool, cb=use_numexpr_cb
)
cf.register_option(
"use_numba", False, use_numba_doc, validator=is_bool, cb=use_numba_cb
)
#
# options from the "display" namespace
pc_precision_doc =
pc_colspace_doc =
pc_max_rows_doc =
pc_min_rows_doc =
pc_max_cols_doc =
pc_max_categories_doc =
pc_max_info_cols_doc =
pc_nb_repr_h_doc =
pc_pprint_nest_depth =
pc_multi_sparse_doc =
float_format_doc =
max_colwidth_doc =
colheader_justify_doc =
pc_expand_repr_doc =
pc_show_dimensions_doc =
pc_east_asian_width_doc =
pc_ambiguous_as_wide_doc =
pc_latex_repr_doc =
pc_table_schema_doc =
pc_html_border_doc =
pc_html_use_mathjax_doc =
pc_max_dir_items =
pc_width_doc =
pc_chop_threshold_doc =
pc_max_seq_items =
pc_max_info_rows_doc =
pc_large_repr_doc =
pc_memory_usage_doc =
pc_latex_escape =
pc_latex_longtable =
pc_latex_multicolumn =
pc_latex_multicolumn_format =
pc_latex_multirow =
|
: int
Floating point output precision in terms of number of places after the
decimal, for regular formatting as well as scientific notation. Similar
to ``precision`` in :meth:`numpy.set_printoptions`.
: int
Default space for DataFrame columns.
: int
If max_rows is exceeded, switch to truncate view. Depending on
`large_repr`, objects are either centrally truncated or printed as
a summary view. 'None' value means unlimited.
In case python/IPython is running in a terminal and `large_repr`
equals 'truncate' this can be set to 0 and pandas will auto-detect
the height of the terminal and print a truncated object which fits
the screen height. The IPython notebook, IPython qtconsole, or
IDLE do not run in a terminal and hence it is not possible to do
correct auto-detection.
: int
The numbers of rows to show in a truncated view (when `max_rows` is
exceeded). Ignored when `max_rows` is set to None or 0. When set to
None, follows the value of `max_rows`.
: int
If max_cols is exceeded, switch to truncate view. Depending on
`large_repr`, objects are either centrally truncated or printed as
a summary view. 'None' value means unlimited.
In case python/IPython is running in a terminal and `large_repr`
equals 'truncate' this can be set to 0 and pandas will auto-detect
the width of the terminal and print a truncated object which fits
the screen width. The IPython notebook, IPython qtconsole, or IDLE
do not run in a terminal and hence it is not possible to do
correct auto-detection.
: int
This sets the maximum number of categories pandas should output when
printing out a `Categorical` or a Series of dtype "category".
: int
max_info_columns is used in DataFrame.info method to decide if
per column information will be printed.
: boolean
When True, IPython notebook will use html representation for
pandas objects (if it is available).
: int
Controls the number of nested levels to process when pretty-printing
: boolean
"sparsify" MultiIndex display (don't display repeated
elements in outer levels within groups)
: callable
The callable should accept a floating point number and return
a string with the desired format of the number. This is used
in some places like SeriesFormatter.
See formats.format.EngFormatter for an example.
: int or None
The maximum width in characters of a column in the repr of
a pandas data structure. When the column overflows, a "..."
placeholder is embedded in the output. A 'None' value means unlimited.
: 'left'/'right'
Controls the justification of column headers. used by DataFrameFormatter.
: boolean
Whether to print out the full DataFrame repr for wide DataFrames across
multiple lines, `max_columns` is still respected, but the output will
wrap-around across multiple "pages" if its width exceeds `display.width`.
: boolean or 'truncate'
Whether to print out dimensions at the end of DataFrame repr.
If 'truncate' is specified, only print out the dimensions if the
frame is truncated (e.g. not display all rows and/or columns)
: boolean
Whether to use the Unicode East Asian Width to calculate the display text
width.
Enabling this may affect the performance (default: False)
: boolean
Whether to handle Unicode characters belonging to Ambiguous as Wide (width=2)
(default: False)
: boolean
Whether to produce a latex DataFrame representation for jupyter
environments that support it.
(default: False)
: boolean
Whether to publish a Table Schema representation for frontends
that support it.
(default: False)
: int
A ``border=value`` attribute is inserted in the ``<table>`` tag
for the DataFrame HTML repr.
\
: boolean
When True, Jupyter notebook will process table contents using MathJax,
rendering mathematical expressions enclosed by the dollar symbol.
(default: True)
\
: int
The number of items that will be added to `dir(...)`. 'None' value means
unlimited. Because dir is cached, changing this option will not immediately
affect already existing dataframes until a column is deleted or added.
This is for instance used to suggest columns from a dataframe to tab
completion.
: int
Width of the display in characters. In case python/IPython is running in
a terminal this can be set to None and pandas will correctly auto-detect
the width.
Note that the IPython notebook, IPython qtconsole, or IDLE do not run in a
terminal and hence it is not possible to correctly detect the width.
: float or None
if set to a float value, all float values smaller than the given threshold
will be displayed as exactly 0 by repr and friends.
: int or None
When pretty-printing a long sequence, no more than `max_seq_items`
will be printed. If items are omitted, they will be denoted by the
addition of "..." to the resulting string.
If set to None, the number of items to be printed is unlimited.
: int or None
df.info() will usually show null-counts for each column.
For large frames this can be quite slow. max_info_rows and max_info_cols
limit this null check only to frames with smaller dimensions than
specified.
: 'truncate'/'info'
For DataFrames exceeding max_rows/max_cols, the repr (and HTML repr) can
show a truncated table (the default from 0.13), or switch to the view from
df.info() (the behaviour in earlier versions of pandas).
: bool, string or None
This specifies if the memory usage of a DataFrame should be displayed when
df.info() is called. Valid values True,False,'deep'
: bool
This specifies if the to_latex method of a Dataframe uses escapes special
characters.
Valid values: False,True
:bool
This specifies if the to_latex method of a Dataframe uses the longtable
format.
Valid values: False,True
: bool
This specifies if the to_latex method of a Dataframe uses multicolumns
to pretty-print MultiIndex columns.
Valid values: False,True
: string
This specifies the format for multicolumn headers.
Can be surrounded with '|'.
Valid values: 'l', 'c', 'r', 'p{<width>}'
: bool
This specifies if the to_latex method of a Dataframe uses multirows
to pretty-print MultiIndex rows.
Valid values: False,True
| 960 | 105 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def use_numba_cb(key) -> None:
from pandas.core.util import numba_
numba_.set_use_numba(cf.get_option(key))
with cf.config_prefix("compute"):
cf.register_option(
"use_bottleneck",
True,
use_bottleneck_doc,
validator=is_bool,
cb=use_bottleneck_cb,
)
cf.register_option(
"use_numexpr", True, use_numexpr_doc, validator=is_bool, cb=use_numexpr_cb
)
cf.register_option(
"use_numba", False, use_numba_doc, validator=is_bool, cb=use_numba_cb
)
#
# options from the "display" namespace
pc_precision_doc =
pc_colspace_doc =
pc_max_rows_doc =
pc_min_rows_doc =
pc_max_cols_doc =
pc_max_categories_doc =
pc_max_info_cols_doc =
pc_nb_repr_h_doc =
pc_pprint_nest_depth =
pc_multi_sparse_doc =
float_format_doc =
max_colwidth_doc =
colheader_justify_doc =
pc_expand_repr_doc =
pc_show_dimensions_doc =
pc_east_asian_width_doc =
pc_ambiguous_as_wide_doc =
pc_latex_repr_doc =
pc_table_schema_doc =
pc_html_border_doc =
pc_html_use_mathjax_doc =
pc_max_dir_items =
pc_width_doc =
pc_chop_threshold_doc =
pc_max_seq_items =
pc_max_info_rows_doc =
pc_large_repr_doc =
pc_memory_usage_doc =
pc_latex_escape =
pc_latex_longtable =
pc_latex_multicolumn =
pc_latex_multicolumn_format =
pc_latex_multirow =
```
###Assistant :
: int
Floating point output precision in terms of number of places after the
decimal, for regular formatting as well as scientific notation. Similar
to ``precision`` in :meth:`numpy.set_printoptions`.
: int
Default space for DataFrame columns.
: int
If max_rows is exceeded, switch to truncate view. Depending on
`large_repr`, objects are either centrally truncated or printed as
a summary view. 'None' value means unlimited.
In case python/IPython is running in a terminal and `large_repr`
equals 'truncate' this can be set to 0 and pandas will auto-detect
the height of the terminal and print a truncated object which fits
the screen height. The IPython notebook, IPython qtconsole, or
IDLE do not run in a terminal and hence it is not possible to do
correct auto-detection.
: int
The numbers of rows to show in a truncated view (when `max_rows` is
exceeded). Ignored when `max_rows` is set to None or 0. When set to
None, follows the value of `max_rows`.
: int
If max_cols is exceeded, switch to truncate view. Depending on
`large_repr`, objects are either centrally truncated or printed as
a summary view. 'None' value means unlimited.
In case python/IPython is running in a terminal and `large_repr`
equals 'truncate' this can be set to 0 and pandas will auto-detect
the width of the terminal and print a truncated object which fits
the screen width. The IPython notebook, IPython qtconsole, or IDLE
do not run in a terminal and hence it is not possible to do
correct auto-detection.
: int
This sets the maximum number of categories pandas should output when
printing out a `Categorical` or a Series of dtype "category".
: int
max_info_columns is used in DataFrame.info method to decide if
per column information will be printed.
: boolean
When True, IPython notebook will use html representation for
pandas objects (if it is available).
: int
Controls the number of nested levels to process when pretty-printing
: boolean
"sparsify" MultiIndex display (don't display repeated
elements in outer levels within groups)
: callable
The callable should accept a floating point number and return
a string with the desired format of the number. This is used
in some places like SeriesFormatter.
See formats.format.EngFormatter for an example.
: int or None
The maximum width in characters of a column in the repr of
a pandas data structure. When the column overflows, a "..."
placeholder is embedded in the output. A 'None' value means unlimited.
: 'left'/'right'
Controls the justification of column headers. used by DataFrameFormatter.
: boolean
Whether to print out the full DataFrame repr for wide DataFrames across
multiple lines, `max_columns` is still respected, but the output will
wrap-around across multiple "pages" if its width exceeds `display.width`.
: boolean or 'truncate'
Whether to print out dimensions at the end of DataFrame repr.
If 'truncate' is specified, only print out the dimensions if the
frame is truncated (e.g. not display all rows and/or columns)
: boolean
Whether to use the Unicode East Asian Width to calculate the display text
width.
Enabling this may affect the performance (default: False)
: boolean
Whether to handle Unicode characters belonging to Ambiguous as Wide (width=2)
(default: False)
: boolean
Whether to produce a latex DataFrame representation for jupyter
environments that support it.
(default: False)
: boolean
Whether to publish a Table Schema representation for frontends
that support it.
(default: False)
: int
A ``border=value`` attribute is inserted in the ``<table>`` tag
for the DataFrame HTML repr.
\
: boolean
When True, Jupyter notebook will process table contents using MathJax,
rendering mathematical expressions enclosed by the dollar symbol.
(default: True)
\
: int
The number of items that will be added to `dir(...)`. 'None' value means
unlimited. Because dir is cached, changing this option will not immediately
affect already existing dataframes until a column is deleted or added.
This is for instance used to suggest columns from a dataframe to tab
completion.
: int
Width of the display in characters. In case python/IPython is running in
a terminal this can be set to None and pandas will correctly auto-detect
the width.
Note that the IPython notebook, IPython qtconsole, or IDLE do not run in a
terminal and hence it is not possible to correctly detect the width.
: float or None
if set to a float value, all float values smaller than the given threshold
will be displayed as exactly 0 by repr and friends.
: int or None
When pretty-printing a long sequence, no more than `max_seq_items`
will be printed. If items are omitted, they will be denoted by the
addition of "..." to the resulting string.
If set to None, the number of items to be printed is unlimited.
: int or None
df.info() will usually show null-counts for each column.
For large frames this can be quite slow. max_info_rows and max_info_cols
limit this null check only to frames with smaller dimensions than
specified.
: 'truncate'/'info'
For DataFrames exceeding max_rows/max_cols, the repr (and HTML repr) can
show a truncated table (the default from 0.13), or switch to the view from
df.info() (the behaviour in earlier versions of pandas).
: bool, string or None
This specifies if the memory usage of a DataFrame should be displayed when
df.info() is called. Valid values True,False,'deep'
: bool
This specifies if the to_latex method of a Dataframe uses escapes special
characters.
Valid values: False,True
:bool
This specifies if the to_latex method of a Dataframe uses the longtable
format.
Valid values: False,True
: bool
This specifies if the to_latex method of a Dataframe uses multicolumns
to pretty-print MultiIndex columns.
Valid values: False,True
: string
This specifies the format for multicolumn headers.
Can be surrounded with '|'.
Valid values: 'l', 'c', 'r', 'p{<width>}'
: bool
This specifies if the to_latex method of a Dataframe uses multirows
to pretty-print MultiIndex rows.
Valid values: False,True
|
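All of the help strings above end up attached to registered options. From the user side they are reached through `pandas.get_option`, `set_option` and `option_context`; a brief illustration (the default of 60 for `display.max_rows` is typical but version-dependent):
```Python
import pandas as pd

print(pd.get_option("display.max_rows"))      # usually 60
pd.set_option("display.precision", 3)         # floats now print with 3 decimals

# option_context restores the previous values on exit
with pd.option_context("display.max_rows", 5, "display.max_columns", 10):
    print(pd.get_option("display.max_rows"))  # 5
print(pd.get_option("display.max_rows"))      # back to the earlier value
```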
2,749 | def test_twitter_tag(self) -> None:
html = b
tree = decode_body(html, "http://example.com/test.html")
og = parse_html_to_open_graph(tree)
self.assertEqual(
og,
{
"og:title": None,
"og:description": "Description",
"og:site_name": "@matrixdotorg",
},
)
# But they shouldn't override Open Graph values.
html = b
tree = decode_body(html, "http://example.com/test.html")
og = parse_html_to_open_graph(tree)
self.assertEqual(
og,
{
"og:title": None,
"og:description": "Real Description",
"og:site_name": "matrix.org",
},
)
| Twitter card tags should be used if nothing else is available.
<html>
<meta name="twitter:card" content="summary">
<meta name="twitter:description" content="Description">
<meta name="twitter:site" content="@matrixdotorg">
</html>
<html>
<meta name="twitter:card" content="summary">
<meta name="twitter:description" content="Description">
<meta property="og:description" content="Real Description">
<meta name="twitter:site" content="@matrixdotorg">
<meta property="og:site_name" content="matrix.org">
</html>
| 40 | 55 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def test_twitter_tag(self) -> None:
html = b
tree = decode_body(html, "http://example.com/test.html")
og = parse_html_to_open_graph(tree)
self.assertEqual(
og,
{
"og:title": None,
"og:description": "Description",
"og:site_name": "@matrixdotorg",
},
)
# But they shouldn't override Open Graph values.
html = b
tree = decode_body(html, "http://example.com/test.html")
og = parse_html_to_open_graph(tree)
self.assertEqual(
og,
{
"og:title": None,
"og:description": "Real Description",
"og:site_name": "matrix.org",
},
)
```
###Assistant : Twitter card tags should be used if nothing else is available.
<html>
<meta name="twitter:card" content="summary">
<meta name="twitter:description" content="Description">
<meta name="twitter:site" content="@matrixdotorg">
</html>
<html>
<meta name="twitter:card" content="summary">
<meta name="twitter:description" content="Description">
<meta property="og:description" content="Real Description">
<meta name="twitter:site" content="@matrixdotorg">
<meta property="og:site_name" content="matrix.org">
</html>
|
2,750 | def get_tax_template(posting_date, args):
args = frappe._dict(args)
conditions = []
if posting_date:
conditions.append(
f
)
else:
conditions.append("(from_date is null) and (to_date is null)")
conditions.append(
"ifnull(tax_category, '') = {0}".format(frappe.db.escape(cstr(args.get("tax_category"))))
)
if "tax_category" in args.keys():
del args["tax_category"]
for key, value in args.items():
if key == "use_for_shopping_cart":
conditions.append("use_for_shopping_cart = {0}".format(1 if value else 0))
elif key == "customer_group":
if not value:
value = get_root_of("Customer Group")
customer_group_condition = get_customer_group_condition(value)
conditions.append("ifnull({0}, '') in ('', {1})".format(key, customer_group_condition))
else:
conditions.append("ifnull({0}, '') in ('', {1})".format(key, frappe.db.escape(cstr(value))))
tax_rule = frappe.db.sql(
.format(
" and ".join(conditions)
),
as_dict=True,
)
if not tax_rule:
return None
for rule in tax_rule:
rule.no_of_keys_matched = 0
for key in args:
if rule.get(key):
rule.no_of_keys_matched += 1
def cmp(a, b):
	# reference: https://docs.python.org/3.0/whatsnew/3.0.html#ordering-comparisons
return int(a > b) - int(a < b)
rule = sorted(
tax_rule,
key=functools.cmp_to_key(
lambda b, a: cmp(a.no_of_keys_matched, b.no_of_keys_matched) or cmp(a.priority, b.priority)
),
)[0]
tax_template = rule.sales_tax_template or rule.purchase_tax_template
doctype = "{0} Taxes and Charges Template".format(rule.tax_type)
if frappe.db.get_value(doctype, tax_template, "disabled") == 1:
return None
return tax_template
| Get matching tax rule(from_date is null or from_date <= '{posting_date}')
and (to_date is null or to_date >= '{posting_date}')select * from `tabTax Rule`
where {0} | 24 | 159 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def get_tax_template(posting_date, args):
args = frappe._dict(args)
conditions = []
if posting_date:
conditions.append(
f
)
else:
conditions.append("(from_date is null) and (to_date is null)")
conditions.append(
"ifnull(tax_category, '') = {0}".format(frappe.db.escape(cstr(args.get("tax_category"))))
)
if "tax_category" in args.keys():
del args["tax_category"]
for key, value in args.items():
if key == "use_for_shopping_cart":
conditions.append("use_for_shopping_cart = {0}".format(1 if value else 0))
elif key == "customer_group":
if not value:
value = get_root_of("Customer Group")
customer_group_condition = get_customer_group_condition(value)
conditions.append("ifnull({0}, '') in ('', {1})".format(key, customer_group_condition))
else:
conditions.append("ifnull({0}, '') in ('', {1})".format(key, frappe.db.escape(cstr(value))))
tax_rule = frappe.db.sql(
.format(
" and ".join(conditions)
),
as_dict=True,
)
if not tax_rule:
return None
for rule in tax_rule:
rule.no_of_keys_matched = 0
for key in args:
if rule.get(key):
rule.no_of_keys_matched += 1
def cmp(a, b):
	# reference: https://docs.python.org/3.0/whatsnew/3.0.html#ordering-comparisons
return int(a > b) - int(a < b)
rule = sorted(
tax_rule,
key=functools.cmp_to_key(
lambda b, a: cmp(a.no_of_keys_matched, b.no_of_keys_matched) or cmp(a.priority, b.priority)
),
)[0]
tax_template = rule.sales_tax_template or rule.purchase_tax_template
doctype = "{0} Taxes and Charges Template".format(rule.tax_type)
if frappe.db.get_value(doctype, tax_template, "disabled") == 1:
return None
return tax_template
```
###Assistant : Get matching tax rule(from_date is null or from_date <= '{posting_date}')
and (to_date is null or to_date >= '{posting_date}')select * from `tabTax Rule`
where {0}
|
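The interesting part of `get_tax_template` is the ranking step: count how many argument keys each rule matches, then sort by that count and by `priority`. A standalone sketch of just that ranking, with invented rules and field names:
```Python
import functools

def cmp(a, b):
    return int(a > b) - int(a < b)

def pick_rule(rules, args):
    # Count matching keys per rule, mirroring get_tax_template above.
    for rule in rules:
        rule["matched"] = sum(1 for key in args if rule.get(key))
    return sorted(
        rules,
        key=functools.cmp_to_key(
            lambda b, a: cmp(a["matched"], b["matched"]) or cmp(a["priority"], b["priority"])
        ),
    )[0]

rules = [
    {"name": "generic", "priority": 1, "customer_group": ""},
    {"name": "eu-retail", "priority": 2, "customer_group": "Retail", "billing_country": "DE"},
]
print(pick_rule(rules, {"customer_group": "Retail", "billing_country": "DE"})["name"])
# eu-retail -- it matches two keys, so it outranks the generic rule
```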
2,751 | def wheel_graph(n, create_using=None):
_, nodes = n
G = empty_graph(nodes, create_using)
if G.is_directed():
raise NetworkXError("Directed Graph not supported")
if len(nodes) > 1:
hub, *rim = nodes
G.add_edges_from((hub, node) for node in rim)
if len(rim) > 1:
G.add_edges_from(pairwise(rim, cyclic=True))
return G
| Return the wheel graph
The wheel graph consists of a hub node connected to a cycle of (n-1) nodes.
Parameters
----------
n : int or iterable
If an integer, node labels are 0 to n with center 0.
If an iterable of nodes, the center is the first.
create_using : NetworkX graph constructor, optional (default=nx.Graph)
Graph type to create. If graph instance, then cleared before populated.
Node labels are the integers 0 to n - 1.
| 76 | 40 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def wheel_graph(n, create_using=None):
_, nodes = n
G = empty_graph(nodes, create_using)
if G.is_directed():
raise NetworkXError("Directed Graph not supported")
if len(nodes) > 1:
hub, *rim = nodes
G.add_edges_from((hub, node) for node in rim)
if len(rim) > 1:
G.add_edges_from(pairwise(rim, cyclic=True))
return G
```
###Assistant : Return the wheel graph
The wheel graph consists of a hub node connected to a cycle of (n-1) nodes.
Parameters
----------
n : int or iterable
If an integer, node labels are 0 to n with center 0.
If an iterable of nodes, the center is the first.
create_using : NetworkX graph constructor, optional (default=nx.Graph)
Graph type to create. If graph instance, then cleared before populated.
Node labels are the integers 0 to n - 1.
|
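NetworkX ships this generator as `networkx.wheel_graph`, so the structure described above is easy to inspect directly:
```Python
import networkx as nx

G = nx.wheel_graph(5)                            # hub node 0 plus a 4-node rim
print(G.number_of_nodes(), G.number_of_edges())  # 5 8
print(sorted(G[0]))                              # [1, 2, 3, 4] -- hub reaches every rim node
print(G.has_edge(1, 4))                          # True -- the rim closes into a cycle
```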
2,752 | def get_changes():
with open(HISTORY) as f:
lines = f.readlines()
block = []
# eliminate the part preceding the first block
for i, line in enumerate(lines):
line = lines.pop(0)
if line.startswith('===='):
break
lines.pop(0)
for i, line in enumerate(lines):
line = lines.pop(0)
line = line.rstrip()
if re.match(r"^- \d+_", line):
line = re.sub(r"^- (\d+)_", r"- #\1", line)
if line.startswith('===='):
break
block.append(line)
# eliminate bottom empty lines
block.pop(-1)
while not block[-1]:
block.pop(-1)
return "\n".join(block)
| Get the most recent changes for this release by parsing
HISTORY.rst file.
| 12 | 70 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def get_changes():
with open(HISTORY) as f:
lines = f.readlines()
block = []
# eliminate the part preceding the first block
for i, line in enumerate(lines):
line = lines.pop(0)
if line.startswith('===='):
break
lines.pop(0)
for i, line in enumerate(lines):
line = lines.pop(0)
line = line.rstrip()
if re.match(r"^- \d+_", line):
line = re.sub(r"^- (\d+)_", r"- #\1", line)
if line.startswith('===='):
break
block.append(line)
# eliminate bottom empty lines
block.pop(-1)
while not block[-1]:
block.pop(-1)
return "\n".join(block)
```
###Assistant : Get the most recent changes for this release by parsing
HISTORY.rst file.
|
2,753 | def _check_valid_data(self) -> bool:
logger.debug("Validating data. %s",
{key: len(val) for key, val in self._display_data.stats.items()})
if any(len(val) == 0 # pylint:disable=len-as-condition
for val in self._display_data.stats.values()):
return False
return True
| Check that the selections hold valid data to display
NB: len-as-condition is used as data could be a list or a numpy array
Returns
-------
bool
``True`` if there is data to be displayed, otherwise ``False``
| 36 | 28 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def _check_valid_data(self) -> bool:
logger.debug("Validating data. %s",
{key: len(val) for key, val in self._display_data.stats.items()})
if any(len(val) == 0 # pylint:disable=len-as-condition
for val in self._display_data.stats.values()):
return False
return True
```
###Assistant : Check that the selections hold valid data to display
NB: len-as-condition is used as data could be a list or a numpy array
Returns
-------
bool
``True`` if there is data to be displayed, otherwise ``False``
|
2,754 | def _async_check_unavailable_groups_with_random_macs(self) -> None:
now = MONOTONIC_TIME()
gone_unavailable = [
group_id
for group_id in self._group_ids_random_macs
if group_id not in self._unavailable_group_ids
and (service_info := self._last_seen_by_group_id.get(group_id))
and (
                # We will not get callbacks for iBeacons with random macs
                # that rotate infrequently since their advertisement data
                # does not change, as the bluetooth.async_register_callback API
# suppresses callbacks for duplicate advertisements to avoid
# exposing integrations to the firehose of bluetooth advertisements.
#
# To solve this we need to ask for the latest service info for
# the address we last saw to get the latest timestamp.
#
# If there is no last service info for the address we know that
# the device is no longer advertising.
not (
latest_service_info := bluetooth.async_last_service_info(
self.hass, service_info.address, connectable=False
)
)
or now - latest_service_info.time > UNAVAILABLE_TIMEOUT
)
]
for group_id in gone_unavailable:
self._unavailable_group_ids.add(group_id)
async_dispatcher_send(self.hass, signal_unavailable(group_id))
| Check for random mac groups that have not been seen in a while and mark them as unavailable. | 18 | 144 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def _async_check_unavailable_groups_with_random_macs(self) -> None:
now = MONOTONIC_TIME()
gone_unavailable = [
group_id
for group_id in self._group_ids_random_macs
if group_id not in self._unavailable_group_ids
and (service_info := self._last_seen_by_group_id.get(group_id))
and (
                # We will not get callbacks for iBeacons with random macs
                # that rotate infrequently since their advertisement data
                # does not change, as the bluetooth.async_register_callback API
# suppresses callbacks for duplicate advertisements to avoid
# exposing integrations to the firehose of bluetooth advertisements.
#
# To solve this we need to ask for the latest service info for
# the address we last saw to get the latest timestamp.
#
# If there is no last service info for the address we know that
# the device is no longer advertising.
not (
latest_service_info := bluetooth.async_last_service_info(
self.hass, service_info.address, connectable=False
)
)
or now - latest_service_info.time > UNAVAILABLE_TIMEOUT
)
]
for group_id in gone_unavailable:
self._unavailable_group_ids.add(group_id)
async_dispatcher_send(self.hass, signal_unavailable(group_id))
```
###Assistant : Check for random mac groups that have not been seen in a while and mark them as unavailable.
|
2,755 | def restore_optimizers_and_schedulers(self) -> None:
if not self._loaded_checkpoint:
return
if self.trainer.strategy.lightning_restore_optimizer:
# validation
if "optimizer_states" not in self._loaded_checkpoint:
raise KeyError(
"Trying to restore optimizer state but checkpoint contains only the model."
" This is probably due to `ModelCheckpoint.save_weights_only` being set to `True`."
)
self.restore_optimizers()
if "lr_schedulers" not in self._loaded_checkpoint:
raise KeyError(
"Trying to restore learning rate scheduler state but checkpoint contains only the model."
" This is probably due to `ModelCheckpoint.save_weights_only` being set to `True`."
)
self.restore_lr_schedulers()
| Restores the optimizers and learning rate scheduler states from the pre-loaded checkpoint. | 12 | 76 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def restore_optimizers_and_schedulers(self) -> None:
if not self._loaded_checkpoint:
return
if self.trainer.strategy.lightning_restore_optimizer:
# validation
if "optimizer_states" not in self._loaded_checkpoint:
raise KeyError(
"Trying to restore optimizer state but checkpoint contains only the model."
" This is probably due to `ModelCheckpoint.save_weights_only` being set to `True`."
)
self.restore_optimizers()
if "lr_schedulers" not in self._loaded_checkpoint:
raise KeyError(
"Trying to restore learning rate scheduler state but checkpoint contains only the model."
" This is probably due to `ModelCheckpoint.save_weights_only` being set to `True`."
)
self.restore_lr_schedulers()
```
###Assistant : Restores the optimizers and learning rate scheduler states from the pre-loaded checkpoint.
|
2,756 | def urldefragauth(url):
scheme, netloc, path, params, query, fragment = urlparse(url)
# see func:`prepend_scheme_if_needed`
if not netloc:
netloc, path = path, netloc
netloc = netloc.rsplit("@", 1)[-1]
return urlunparse((scheme, netloc, path, params, query, ""))
|
Given a url remove the fragment and the authentication part.
:rtype: str
| 12 | 32 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def urldefragauth(url):
scheme, netloc, path, params, query, fragment = urlparse(url)
# see func:`prepend_scheme_if_needed`
if not netloc:
netloc, path = path, netloc
netloc = netloc.rsplit("@", 1)[-1]
return urlunparse((scheme, netloc, path, params, query, ""))
```
###Assistant :
Given a url remove the fragment and the authentication part.
:rtype: str
|
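A quick check of the behaviour using only the standard library; the function body is reproduced from the entry above and the example URL is invented.
```Python
from urllib.parse import urlparse, urlunparse

def urldefragauth(url):
    scheme, netloc, path, params, query, fragment = urlparse(url)
    if not netloc:
        netloc, path = path, netloc
    netloc = netloc.rsplit("@", 1)[-1]
    return urlunparse((scheme, netloc, path, params, query, ""))

print(urldefragauth("https://user:secret@example.com/repo.git#readme"))
# https://example.com/repo.git -- credentials and fragment removed
```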
2,757 | def test_approval_not_required(self) -> None:
self.get_success(self.store.register_user(self.user_id, self.pwhash))
user = self.get_success(self.store.get_user_by_id(self.user_id))
assert user is not None
self.assertTrue(user["approved"])
approved = self.get_success(self.store.is_user_approved(self.user_id))
self.assertTrue(approved)
| Tests that if we don't require approval for new accounts, newly created
accounts are automatically marked as approved.
| 18 | 19 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def test_approval_not_required(self) -> None:
self.get_success(self.store.register_user(self.user_id, self.pwhash))
user = self.get_success(self.store.get_user_by_id(self.user_id))
assert user is not None
self.assertTrue(user["approved"])
approved = self.get_success(self.store.is_user_approved(self.user_id))
self.assertTrue(approved)
```
###Assistant : Tests that if we don't require approval for new accounts, newly created
accounts are automatically marked as approved.
|
2,758 | def get_all(self, name, failobj=None):
values = []
name = name.lower()
for k, v in self._headers:
if k.lower() == name:
values.append(self.policy.header_fetch_parse(k, v))
if not values:
return failobj
return values
| Return a list of all the values for the named field.
These will be sorted in the order they appeared in the original
message, and may contain duplicates. Any fields deleted and
re-inserted are always appended to the header list.
If no such fields exist, failobj is returned (defaults to None).
| 51 | 28 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def get_all(self, name, failobj=None):
values = []
name = name.lower()
for k, v in self._headers:
if k.lower() == name:
values.append(self.policy.header_fetch_parse(k, v))
if not values:
return failobj
return values
```
###Assistant : Return a list of all the values for the named field.
These will be sorted in the order they appeared in the original
message, and may contain duplicates. Any fields deleted and
re-inserted are always appended to the header list.
If no such fields exist, failobj is returned (defaults to None).
|
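The same behaviour is visible on the standard-library `email.message.Message` class, which this method mirrors:
```Python
from email.message import Message

msg = Message()
msg["To"] = "alice@example.com"
msg["To"] = "bob@example.com"        # duplicate field names are kept in order
msg["Subject"] = "demo"

print(msg.get_all("to"))             # ['alice@example.com', 'bob@example.com']
print(msg.get_all("cc", failobj=[])) # [] -- failobj returned when the field is absent
```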
2,759 | def test_explorer_private_child(self):
response = self.client.get(
reverse("wagtailadmin_explore", args=(self.private_child_page.id,))
)
# Check the response
self.assertEqual(response.status_code, 200)
# Check the privacy indicator is public
self.assertTemplateUsed(response, "wagtailadmin/pages/_privacy_switch.html")
self.assertContains(response, '<div class="privacy-indicator private">')
self.assertNotContains(response, '<div class="privacy-indicator public">')
|
This tests that the privacy indicator on the private child pages explore view is set to "PRIVATE"
| 17 | 31 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def test_explorer_private_child(self):
response = self.client.get(
reverse("wagtailadmin_explore", args=(self.private_child_page.id,))
)
# Check the response
self.assertEqual(response.status_code, 200)
# Check the privacy indicator is public
self.assertTemplateUsed(response, "wagtailadmin/pages/_privacy_switch.html")
self.assertContains(response, '<div class="privacy-indicator private">')
self.assertNotContains(response, '<div class="privacy-indicator public">')
```
###Assistant :
This tests that the privacy indicator on the private child pages explore view is set to "PRIVATE"
|
2,760 | def find_backend(line):
if _re_test_backend.search(line) is None:
return None
backends = [b[0] for b in _re_backend.findall(line)]
backends.sort()
return "_and_".join(backends)
| Find one (or multiple) backend in a code line of the init. | 12 | 18 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def find_backend(line):
if _re_test_backend.search(line) is None:
return None
backends = [b[0] for b in _re_backend.findall(line)]
backends.sort()
return "_and_".join(backends)
```
###Assistant : Find one (or multiple) backend in a code line of the init.
|
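Neither `_re_backend` nor `_re_test_backend` appears in the entry, so the sketch below supplies assumed stand-in patterns just to make the behaviour concrete; the trailing empty group in `_re_backend` makes `findall` return tuples, which is what the `b[0]` indexing above expects.
```Python
import re

# Assumed stand-ins for the real module-level patterns.
_re_backend = re.compile(r"is_([a-z_]*)_available()")
_re_test_backend = re.compile(r"^\s*if\s+is_[a-z_]*_available\(\)")

def find_backend(line):
    if _re_test_backend.search(line) is None:
        return None
    backends = [b[0] for b in _re_backend.findall(line)]
    backends.sort()
    return "_and_".join(backends)

print(find_backend("if is_torch_available() and is_vision_available():"))
# torch_and_vision
print(find_backend("import os"))
# None
```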
2,761 | def subgraph_view(G, filter_node=no_filter, filter_edge=no_filter):
newG = nx.freeze(G.__class__())
newG._NODE_OK = filter_node
newG._EDGE_OK = filter_edge
# create view by assigning attributes from G
newG._graph = G
newG.graph = G.graph
newG._node = FilterAtlas(G._node, filter_node)
if G.is_multigraph():
Adj = FilterMultiAdjacency
| View of `G` applying a filter on nodes and edges.
`subgraph_view` provides a read-only view of the input graph that excludes
nodes and edges based on the outcome of two filter functions `filter_node`
and `filter_edge`.
The `filter_node` function takes one argument --- the node --- and returns
`True` if the node should be included in the subgraph, and `False` if it
should not be included.
The `filter_edge` function takes two (or three arguments if `G` is a
multi-graph) --- the nodes describing an edge, plus the edge-key if
parallel edges are possible --- and returns `True` if the edge should be
included in the subgraph, and `False` if it should not be included.
Both node and edge filter functions are called on graph elements as they
are queried, meaning there is no up-front cost to creating the view.
Parameters
----------
G : networkx.Graph
A directed/undirected graph/multigraph
filter_node : callable, optional
A function taking a node as input, which returns `True` if the node
should appear in the view.
filter_edge : callable, optional
A function taking as input the two nodes describing an edge (plus the
edge-key if `G` is a multi-graph), which returns `True` if the edge
should appear in the view.
Returns
-------
graph : networkx.Graph
A read-only graph view of the input graph.
Examples
--------
>>> G = nx.path_graph(6)
Filter functions operate on the node, and return `True` if the node should
appear in the view:
>>> def filter_node(n1):
... return n1 != 5
...
>>> view = nx.subgraph_view(G, filter_node=filter_node)
>>> view.nodes()
NodeView((0, 1, 2, 3, 4))
We can use a closure pattern to filter graph elements based on additional
data --- for example, filtering on edge data attached to the graph:
>>> G[3][4]["cross_me"] = False
>>> def filter_edge(n1, n2):
... return G[n1][n2].get("cross_me", True)
...
>>> view = nx.subgraph_view(G, filter_edge=filter_edge)
>>> view.edges()
EdgeView([(0, 1), (1, 2), (2, 3), (4, 5)])
>>> view = nx.subgraph_view(G, filter_node=filter_node, filter_edge=filter_edge,)
>>> view.nodes()
NodeView((0, 1, 2, 3, 4))
>>> view.edges()
EdgeView([(0, 1), (1, 2), (2, 3)])
| 333 | 36 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def subgraph_view(G, filter_node=no_filter, filter_edge=no_filter):
newG = nx.freeze(G.__class__())
newG._NODE_OK = filter_node
newG._EDGE_OK = filter_edge
# create view by assigning attributes from G
newG._graph = G
newG.graph = G.graph
newG._node = FilterAtlas(G._node, filter_node)
if G.is_multigraph():
Adj = FilterMultiAdjacency
```
###Assistant : View of `G` applying a filter on nodes and edges.
`subgraph_view` provides a read-only view of the input graph that excludes
nodes and edges based on the outcome of two filter functions `filter_node`
and `filter_edge`.
The `filter_node` function takes one argument --- the node --- and returns
`True` if the node should be included in the subgraph, and `False` if it
should not be included.
The `filter_edge` function takes two (or three arguments if `G` is a
multi-graph) --- the nodes describing an edge, plus the edge-key if
parallel edges are possible --- and returns `True` if the edge should be
included in the subgraph, and `False` if it should not be included.
Both node and edge filter functions are called on graph elements as they
are queried, meaning there is no up-front cost to creating the view.
Parameters
----------
G : networkx.Graph
A directed/undirected graph/multigraph
filter_node : callable, optional
A function taking a node as input, which returns `True` if the node
should appear in the view.
filter_edge : callable, optional
A function taking as input the two nodes describing an edge (plus the
edge-key if `G` is a multi-graph), which returns `True` if the edge
should appear in the view.
Returns
-------
graph : networkx.Graph
A read-only graph view of the input graph.
Examples
--------
>>> G = nx.path_graph(6)
Filter functions operate on the node, and return `True` if the node should
appear in the view:
>>> def filter_node(n1):
... return n1 != 5
...
>>> view = nx.subgraph_view(G, filter_node=filter_node)
>>> view.nodes()
NodeView((0, 1, 2, 3, 4))
We can use a closure pattern to filter graph elements based on additional
data --- for example, filtering on edge data attached to the graph:
>>> G[3][4]["cross_me"] = False
>>> def filter_edge(n1, n2):
... return G[n1][n2].get("cross_me", True)
...
>>> view = nx.subgraph_view(G, filter_edge=filter_edge)
>>> view.edges()
EdgeView([(0, 1), (1, 2), (2, 3), (4, 5)])
>>> view = nx.subgraph_view(G, filter_node=filter_node, filter_edge=filter_edge,)
>>> view.nodes()
NodeView((0, 1, 2, 3, 4))
>>> view.edges()
EdgeView([(0, 1), (1, 2), (2, 3)])
|
2,762 | def _deprecate_ci(errorbar, ci):
if ci != "deprecated":
if ci is None:
errorbar = None
elif ci == "sd":
errorbar = "sd"
else:
errorbar = ("ci", ci)
msg = (
"\n\nThe `ci` parameter is deprecated. "
f"Use `errorbar={repr(errorbar)}` for the same effect.\n"
)
warnings.warn(msg, FutureWarning, stacklevel=3)
return errorbar
|
Warn on usage of ci= and convert to appropriate errorbar= arg.
ci was deprecated when errorbar was added in 0.12. It should not be removed
completely for some time, but it can be moved out of function definitions
(and extracted from kwargs) after one cycle.
| 45 | 47 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def _deprecate_ci(errorbar, ci):
if ci != "deprecated":
if ci is None:
errorbar = None
elif ci == "sd":
errorbar = "sd"
else:
errorbar = ("ci", ci)
msg = (
"\n\nThe `ci` parameter is deprecated. "
f"Use `errorbar={repr(errorbar)}` for the same effect.\n"
)
warnings.warn(msg, FutureWarning, stacklevel=3)
return errorbar
```
###Assistant :
Warn on usage of ci= and convert to appropriate errorbar= arg.
ci was deprecated when errorbar was added in 0.12. It should not be removed
completely for some time, but it can be moved out of function definitions
(and extracted from kwargs) after one cycle.
|
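As a rough sketch of the mapping above (assuming the `_deprecate_ci` shown in this row is in scope), the deprecated `ci` values translate as follows; the warning is silenced only to keep the check quiet:
```Python
import warnings

# Hypothetical check of the ci -> errorbar translation; the first argument is the
# caller-supplied errorbar default, the second is the deprecated ci value.
with warnings.catch_warnings():
    warnings.simplefilter("ignore", FutureWarning)
    assert _deprecate_ci(("ci", 95), "deprecated") == ("ci", 95)  # ci unused, errorbar kept
    assert _deprecate_ci(("ci", 95), None) is None                # ci=None -> no error bars
    assert _deprecate_ci(("ci", 95), "sd") == "sd"                # ci="sd" -> errorbar="sd"
    assert _deprecate_ci(("ci", 95), 68) == ("ci", 68)            # numeric ci -> ("ci", level)
```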
2,763 | def __sub__(self, other):
if self._delegate_binop(other):
return NotImplemented
return np.subtract(self, other)
|
Subtract other from self, and return a new masked array.
| 10 | 10 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def __sub__(self, other):
if self._delegate_binop(other):
return NotImplemented
return np.subtract(self, other)
```
###Assistant :
Subtract other from self, and return a new masked array.
|
2,764 | def mock_smile_adam_2() -> Generator[None, MagicMock, None]:
chosen_env = "m_adam_heating"
with patch(
"homeassistant.components.plugwise.gateway.Smile", autospec=True
) as smile_mock:
smile = smile_mock.return_value
smile.gateway_id = "da224107914542988a88561b4452b0f6"
smile.heater_id = "056ee145a816487eaa69243c3280f8bf"
smile.smile_version = "3.6.4"
smile.smile_type = "thermostat"
smile.smile_hostname = "smile98765"
smile.smile_name = "Adam"
smile.connect.return_value = True
smile.notifications = _read_json(chosen_env, "notifications")
smile.async_update.return_value = _read_json(chosen_env, "all_data")
yield smile
@pytest.fixture | Create a 2nd Mock Adam environment for testing exceptions. | 9 | 51 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def mock_smile_adam_2() -> Generator[None, MagicMock, None]:
chosen_env = "m_adam_heating"
with patch(
"homeassistant.components.plugwise.gateway.Smile", autospec=True
) as smile_mock:
smile = smile_mock.return_value
smile.gateway_id = "da224107914542988a88561b4452b0f6"
smile.heater_id = "056ee145a816487eaa69243c3280f8bf"
smile.smile_version = "3.6.4"
smile.smile_type = "thermostat"
smile.smile_hostname = "smile98765"
smile.smile_name = "Adam"
smile.connect.return_value = True
smile.notifications = _read_json(chosen_env, "notifications")
smile.async_update.return_value = _read_json(chosen_env, "all_data")
yield smile
@pytest.fixture
```
###Assistant : Create a 2nd Mock Adam environment for testing exceptions.
|
2,765 | def plot_wireframe(self, X, Y, Z, **kwargs):
had_data = self.has_data()
if Z.ndim != 2:
raise ValueError("Argument Z must be 2-dimensional.")
# FIXME: Support masked arrays
X, Y, Z = np.broadcast_arrays(X, Y, Z)
rows, cols = Z.shape
has_stride = 'rstride' in kwargs or 'cstride' in kwargs
has_count = 'rcount' in kwargs or 'ccount' in kwargs
if has_stride and has_count:
raise ValueError("Cannot specify both stride and count arguments")
rstride = kwargs.pop('rstride', 1)
cstride = kwargs.pop('cstride', 1)
rcount = kwargs.pop('rcount', 50)
ccount = kwargs.pop('ccount', 50)
if rcParams['_internal.classic_mode']:
# Strides have priority over counts in classic mode.
# So, only compute strides from counts
# if counts were explicitly given
if has_count:
rstride = int(max(np.ceil(rows / rcount), 1)) if rcount else 0
cstride = int(max(np.ceil(cols / ccount), 1)) if ccount else 0
else:
# If the strides are provided then it has priority.
# Otherwise, compute the strides from the counts.
if not has_stride:
rstride = int(max(np.ceil(rows / rcount), 1)) if rcount else 0
cstride = int(max(np.ceil(cols / ccount), 1)) if ccount else 0
# We want two sets of lines, one running along the "rows" of
# Z and another set of lines running along the "columns" of Z.
# This transpose will make it easy to obtain the columns.
tX, tY, tZ = np.transpose(X), np.transpose(Y), np.transpose(Z)
if rstride:
rii = list(range(0, rows, rstride))
# Add the last index only if needed
if rows > 0 and rii[-1] != (rows - 1):
rii += [rows-1]
else:
rii = []
if cstride:
cii = list(range(0, cols, cstride))
# Add the last index only if needed
if cols > 0 and cii[-1] != (cols - 1):
cii += [cols-1]
else:
cii = []
if rstride == 0 and cstride == 0:
raise ValueError("Either rstride or cstride must be non zero")
# If the inputs were empty, then just
# reset everything.
if Z.size == 0:
rii = []
cii = []
xlines = [X[i] for i in rii]
ylines = [Y[i] for i in rii]
zlines = [Z[i] for i in rii]
txlines = [tX[i] for i in cii]
tylines = [tY[i] for i in cii]
tzlines = [tZ[i] for i in cii]
lines = ([list(zip(xl, yl, zl))
for xl, yl, zl in zip(xlines, ylines, zlines)]
+ [list(zip(xl, yl, zl))
for xl, yl, zl in zip(txlines, tylines, tzlines)])
linec = art3d.Line3DCollection(lines, **kwargs)
self.add_collection(linec)
self.auto_scale_xyz(X, Y, Z, had_data)
return linec
|
Plot a 3D wireframe.
.. note::
The *rcount* and *ccount* kwargs, which both default to 50,
determine the maximum number of samples used in each direction. If
the input data is larger, it will be downsampled (by slicing) to
these numbers of points.
Parameters
----------
X, Y, Z : 2D arrays
Data values.
rcount, ccount : int
Maximum number of samples used in each direction. If the input
data is larger, it will be downsampled (by slicing) to these
numbers of points. Setting a count to zero causes the data to be
not sampled in the corresponding direction, producing a 3D line
plot rather than a wireframe plot. Defaults to 50.
rstride, cstride : int
Downsampling stride in each direction. These arguments are
mutually exclusive with *rcount* and *ccount*. If only one of
*rstride* or *cstride* is set, the other defaults to 1. Setting a
stride to zero causes the data to be not sampled in the
corresponding direction, producing a 3D line plot rather than a
wireframe plot.
'classic' mode uses a default of ``rstride = cstride = 1`` instead
of the new default of ``rcount = ccount = 50``.
**kwargs
Other arguments are forwarded to `.Line3DCollection`.
| 198 | 393 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def plot_wireframe(self, X, Y, Z, **kwargs):
had_data = self.has_data()
if Z.ndim != 2:
raise ValueError("Argument Z must be 2-dimensional.")
# FIXME: Support masked arrays
X, Y, Z = np.broadcast_arrays(X, Y, Z)
rows, cols = Z.shape
has_stride = 'rstride' in kwargs or 'cstride' in kwargs
has_count = 'rcount' in kwargs or 'ccount' in kwargs
if has_stride and has_count:
raise ValueError("Cannot specify both stride and count arguments")
rstride = kwargs.pop('rstride', 1)
cstride = kwargs.pop('cstride', 1)
rcount = kwargs.pop('rcount', 50)
ccount = kwargs.pop('ccount', 50)
if rcParams['_internal.classic_mode']:
# Strides have priority over counts in classic mode.
# So, only compute strides from counts
# if counts were explicitly given
if has_count:
rstride = int(max(np.ceil(rows / rcount), 1)) if rcount else 0
cstride = int(max(np.ceil(cols / ccount), 1)) if ccount else 0
else:
# If the strides are provided then it has priority.
# Otherwise, compute the strides from the counts.
if not has_stride:
rstride = int(max(np.ceil(rows / rcount), 1)) if rcount else 0
cstride = int(max(np.ceil(cols / ccount), 1)) if ccount else 0
# We want two sets of lines, one running along the "rows" of
# Z and another set of lines running along the "columns" of Z.
# This transpose will make it easy to obtain the columns.
tX, tY, tZ = np.transpose(X), np.transpose(Y), np.transpose(Z)
if rstride:
rii = list(range(0, rows, rstride))
# Add the last index only if needed
if rows > 0 and rii[-1] != (rows - 1):
rii += [rows-1]
else:
rii = []
if cstride:
cii = list(range(0, cols, cstride))
# Add the last index only if needed
if cols > 0 and cii[-1] != (cols - 1):
cii += [cols-1]
else:
cii = []
if rstride == 0 and cstride == 0:
raise ValueError("Either rstride or cstride must be non zero")
# If the inputs were empty, then just
# reset everything.
if Z.size == 0:
rii = []
cii = []
xlines = [X[i] for i in rii]
ylines = [Y[i] for i in rii]
zlines = [Z[i] for i in rii]
txlines = [tX[i] for i in cii]
tylines = [tY[i] for i in cii]
tzlines = [tZ[i] for i in cii]
lines = ([list(zip(xl, yl, zl))
for xl, yl, zl in zip(xlines, ylines, zlines)]
+ [list(zip(xl, yl, zl))
for xl, yl, zl in zip(txlines, tylines, tzlines)])
linec = art3d.Line3DCollection(lines, **kwargs)
self.add_collection(linec)
self.auto_scale_xyz(X, Y, Z, had_data)
return linec
```
###Assistant :
Plot a 3D wireframe.
.. note::
The *rcount* and *ccount* kwargs, which both default to 50,
determine the maximum number of samples used in each direction. If
the input data is larger, it will be downsampled (by slicing) to
these numbers of points.
Parameters
----------
X, Y, Z : 2D arrays
Data values.
rcount, ccount : int
Maximum number of samples used in each direction. If the input
data is larger, it will be downsampled (by slicing) to these
numbers of points. Setting a count to zero causes the data to be
not sampled in the corresponding direction, producing a 3D line
plot rather than a wireframe plot. Defaults to 50.
rstride, cstride : int
Downsampling stride in each direction. These arguments are
mutually exclusive with *rcount* and *ccount*. If only one of
*rstride* or *cstride* is set, the other defaults to 1. Setting a
stride to zero causes the data to be not sampled in the
corresponding direction, producing a 3D line plot rather than a
wireframe plot.
'classic' mode uses a default of ``rstride = cstride = 1`` instead
of the new default of ``rcount = ccount = 50``.
**kwargs
Other arguments are forwarded to `.Line3DCollection`.
|
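A minimal, hedged usage sketch of the count arguments described above; the surface and the `rcount`/`ccount` values are arbitrary illustrations.
```Python
import numpy as np
import matplotlib.pyplot as plt

# Sample surface on a 200x200 grid, drawn with at most 20 wire lines per direction.
x = np.linspace(-3, 3, 200)
X, Y = np.meshgrid(x, x)
Z = np.sin(np.sqrt(X**2 + Y**2))

fig = plt.figure()
ax = fig.add_subplot(projection="3d")
ax.plot_wireframe(X, Y, Z, rcount=20, ccount=20)  # downsampled by slicing to ~20x20
plt.show()
```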
2,766 | def check_response(self, response, callback, name=None):
if not (response is None or asyncio.iscoroutine(response)):
return
if not name:
if isinstance(callback, types.FunctionType): # FBV
name = "The view %s.%s" % (callback.__module__, callback.__name__)
else: # CBV
name = "The view %s.%s.__call__" % (
callback.__module__,
callback.__class__.__name__,
)
if response is None:
raise ValueError(
"%s didn't return an HttpResponse object. It returned None "
"instead." % name
)
elif asyncio.iscoroutine(response):
raise ValueError(
"%s didn't return an HttpResponse object. It returned an "
"unawaited coroutine instead. You may need to add an 'await' "
"into your view." % name
)
# Other utility methods.
|
Raise an error if the view returned None or an uncalled coroutine.
| 12 | 97 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def check_response(self, response, callback, name=None):
if not (response is None or asyncio.iscoroutine(response)):
return
if not name:
if isinstance(callback, types.FunctionType): # FBV
name = "The view %s.%s" % (callback.__module__, callback.__name__)
else: # CBV
name = "The view %s.%s.__call__" % (
callback.__module__,
callback.__class__.__name__,
)
if response is None:
raise ValueError(
"%s didn't return an HttpResponse object. It returned None "
"instead." % name
)
elif asyncio.iscoroutine(response):
raise ValueError(
"%s didn't return an HttpResponse object. It returned an "
"unawaited coroutine instead. You may need to add an 'await' "
"into your view." % name
)
# Other utility methods.
```
###Assistant :
Raise an error if the view returned None or an uncalled coroutine.
|
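To make the first failure mode concrete, a hypothetical function-based view that forgets its `return` would trip the `None` branch above (module and view names are made up):
```Python
from django.http import HttpResponse

def forgetful_view(request):
    # Missing `return`: the view returns None, so check_response() raises
    # "The view app.views.forgetful_view didn't return an HttpResponse object.
    #  It returned None instead."
    HttpResponse("hello")
```
The coroutine branch fires analogously when a synchronous code path receives an async view's coroutine that was never awaited.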
2,767 | def _meta_from_array(x, columns=None, index=None, meta=None):
if x.ndim > 2:
raise ValueError(
"from_array does not input more than 2D array, got"
" array with shape %r" % (x.shape,)
)
if index is not None:
if not isinstance(index, Index):
raise ValueError("'index' must be an instance of dask.dataframe.Index")
index = index._meta
if meta is None:
meta = meta_lib_from_array(x).DataFrame()
if getattr(x.dtype, "names", None) is not None:
# record array has named columns
if columns is None:
columns = list(x.dtype.names)
elif np.isscalar(columns):
raise ValueError("For a struct dtype, columns must be a list.")
elif not all(i in x.dtype.names for i in columns):
extra = sorted(set(columns).difference(x.dtype.names))
raise ValueError(f"dtype {x.dtype} doesn't have fields {extra}")
fields = x.dtype.fields
dtypes = [fields[n][0] if n in fields else "f8" for n in columns]
elif x.ndim == 1:
if np.isscalar(columns) or columns is None:
return meta._constructor_sliced(
[], name=columns, dtype=x.dtype, index=index
)
elif len(columns) == 1:
return meta._constructor(
np.array([], dtype=x.dtype), columns=columns, index=index
)
raise ValueError(
"For a 1d array, columns must be a scalar or single element list"
)
else:
if np.isnan(x.shape[1]):
raise ValueError("Shape along axis 1 must be known")
if columns is None:
columns = list(range(x.shape[1])) if x.ndim == 2 else [0]
elif len(columns) != x.shape[1]:
raise ValueError(
"Number of column names must match width of the array. "
f"Got {len(columns)} names for {x.shape[1]} columns"
)
dtypes = [x.dtype] * len(columns)
data = {c: np.array([], dtype=dt) for (c, dt) in zip(columns, dtypes)}
return meta._constructor(data, columns=columns, index=index)
| Create empty DataFrame or Series which has correct dtype | 9 | 234 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def _meta_from_array(x, columns=None, index=None, meta=None):
if x.ndim > 2:
raise ValueError(
"from_array does not input more than 2D array, got"
" array with shape %r" % (x.shape,)
)
if index is not None:
if not isinstance(index, Index):
raise ValueError("'index' must be an instance of dask.dataframe.Index")
index = index._meta
if meta is None:
meta = meta_lib_from_array(x).DataFrame()
if getattr(x.dtype, "names", None) is not None:
# record array has named columns
if columns is None:
columns = list(x.dtype.names)
elif np.isscalar(columns):
raise ValueError("For a struct dtype, columns must be a list.")
elif not all(i in x.dtype.names for i in columns):
extra = sorted(set(columns).difference(x.dtype.names))
raise ValueError(f"dtype {x.dtype} doesn't have fields {extra}")
fields = x.dtype.fields
dtypes = [fields[n][0] if n in fields else "f8" for n in columns]
elif x.ndim == 1:
if np.isscalar(columns) or columns is None:
return meta._constructor_sliced(
[], name=columns, dtype=x.dtype, index=index
)
elif len(columns) == 1:
return meta._constructor(
np.array([], dtype=x.dtype), columns=columns, index=index
)
raise ValueError(
"For a 1d array, columns must be a scalar or single element list"
)
else:
if np.isnan(x.shape[1]):
raise ValueError("Shape along axis 1 must be known")
if columns is None:
columns = list(range(x.shape[1])) if x.ndim == 2 else [0]
elif len(columns) != x.shape[1]:
raise ValueError(
"Number of column names must match width of the array. "
f"Got {len(columns)} names for {x.shape[1]} columns"
)
dtypes = [x.dtype] * len(columns)
data = {c: np.array([], dtype=dt) for (c, dt) in zip(columns, dtypes)}
return meta._constructor(data, columns=columns, index=index)
```
###Assistant : Create empty DataFrame or Series which has correct dtype
|
2,768 | def rows(self):
for row in self.row_data:
yield [
column["block"].bind(value)
for column, value in zip(self.columns, row["values"])
]
|
Iterate over the rows of the table, with each row returned as a list of BoundBlocks
| 16 | 16 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def rows(self):
for row in self.row_data:
yield [
column["block"].bind(value)
for column, value in zip(self.columns, row["values"])
]
```
###Assistant :
Iterate over the rows of the table, with each row returned as a list of BoundBlocks
|
2,769 | def Uniform(name, left, right):
r
return rv(name, UniformDistribution, (left, right))
#-------------------------------------------------------------------------------
# UniformSum distribution ------------------------------------------------------
|
Create a continuous random variable with a uniform distribution.
Explanation
===========
The density of the uniform distribution is given by
.. math::
f(x) := \begin{cases}
\frac{1}{b - a} & \text{for } x \in [a,b] \\
0 & \text{otherwise}
\end{cases}
with :math:`x \in [a,b]`.
Parameters
==========
a : Real number, :math:`-\infty < a`, the left boundary
b : Real number, :math:`a < b < \infty`, the right boundary
Returns
=======
RandomSymbol
Examples
========
>>> from sympy.stats import Uniform, density, cdf, E, variance
>>> from sympy import Symbol, simplify
>>> a = Symbol("a", negative=True)
>>> b = Symbol("b", positive=True)
>>> z = Symbol("z")
>>> X = Uniform("x", a, b)
>>> density(X)(z)
Piecewise((1/(-a + b), (b >= z) & (a <= z)), (0, True))
>>> cdf(X)(z)
Piecewise((0, a > z), ((-a + z)/(-a + b), b >= z), (1, True))
>>> E(X)
a/2 + b/2
>>> simplify(variance(X))
a**2/12 - a*b/6 + b**2/12
References
==========
.. [1] https://en.wikipedia.org/wiki/Uniform_distribution_%28continuous%29
.. [2] http://mathworld.wolfram.com/UniformDistribution.html
| 157 | 15 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def Uniform(name, left, right):
return rv(name, UniformDistribution, (left, right))
#-------------------------------------------------------------------------------
# UniformSum distribution ------------------------------------------------------
```
###Assistant :
Create a continuous random variable with a uniform distribution.
Explanation
===========
The density of the uniform distribution is given by
.. math::
f(x) := \begin{cases}
\frac{1}{b - a} & \text{for } x \in [a,b] \\
0 & \text{otherwise}
\end{cases}
with :math:`x \in [a,b]`.
Parameters
==========
a : Real number, :math:`-\infty < a`, the left boundary
b : Real number, :math:`a < b < \infty`, the right boundary
Returns
=======
RandomSymbol
Examples
========
>>> from sympy.stats import Uniform, density, cdf, E, variance
>>> from sympy import Symbol, simplify
>>> a = Symbol("a", negative=True)
>>> b = Symbol("b", positive=True)
>>> z = Symbol("z")
>>> X = Uniform("x", a, b)
>>> density(X)(z)
Piecewise((1/(-a + b), (b >= z) & (a <= z)), (0, True))
>>> cdf(X)(z)
Piecewise((0, a > z), ((-a + z)/(-a + b), b >= z), (1, True))
>>> E(X)
a/2 + b/2
>>> simplify(variance(X))
a**2/12 - a*b/6 + b**2/12
References
==========
.. [1] https://en.wikipedia.org/wiki/Uniform_distribution_%28continuous%29
.. [2] http://mathworld.wolfram.com/UniformDistribution.html
|
2,770 | def get_ttext(value):
m = _non_token_end_matcher(value)
if not m:
raise errors.HeaderParseError(
"expected ttext but found '{}'".format(value))
ttext = m.group()
value = value[len(ttext):]
ttext = ValueTerminal(ttext, 'ttext')
_validate_xtext(ttext)
return ttext, value
| ttext = <matches _ttext_matcher>
We allow any non-TOKEN_ENDS in ttext, but add defects to the token's
defects list if we find non-ttext characters. We also register defects for
*any* non-printables even though the RFC doesn't exclude all of them,
because we follow the spirit of RFC 5322.
| 47 | 29 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def get_ttext(value):
m = _non_token_end_matcher(value)
if not m:
raise errors.HeaderParseError(
"expected ttext but found '{}'".format(value))
ttext = m.group()
value = value[len(ttext):]
ttext = ValueTerminal(ttext, 'ttext')
_validate_xtext(ttext)
return ttext, value
```
###Assistant : ttext = <matches _ttext_matcher>
We allow any non-TOKEN_ENDS in ttext, but add defects to the token's
defects list if we find non-ttext characters. We also register defects for
*any* non-printables even though the RFC doesn't exclude all of them,
because we follow the spirit of RFC 5322.
|
2,771 | def kwargs(self, exclude=(), apply=None):
kwargs = {k: getattr(self, k) for k in self._fields if k not in exclude}
if apply is not None:
return {k: apply(v) for k, v in kwargs.items()}
else:
return kwargs
| Get instance's attributes as dict of keyword arguments.
Parameters
==========
exclude : collection of str
Collection of keywords to exclude.
apply : callable, optional
Function to apply to all values.
| 30 | 34 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def kwargs(self, exclude=(), apply=None):
kwargs = {k: getattr(self, k) for k in self._fields if k not in exclude}
if apply is not None:
return {k: apply(v) for k, v in kwargs.items()}
else:
return kwargs
```
###Assistant : Get instance's attributes as dict of keyword arguments.
Parameters
==========
exclude : collection of str
Collection of keywords to exclude.
apply : callable, optional
Function to apply to all values.
|
2,772 | def _execute_impl(self, *args, **kwargs) -> RayServeHandle:
return self._deployment_handle
 | Does not call into anything or produce a new value, because by the time
this function gets called, all child nodes are already resolved to
ObjectRefs.
| 25 | 8 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def _execute_impl(self, *args, **kwargs) -> RayServeHandle:
return self._deployment_handle
```
###Assistant : Does not call into anything or produce a new value, because by the time
this function gets called, all child nodes are already resolved to
ObjectRefs.
|
2,773 | def test_settings_use_default_site(self):
context = {}
# This should use the default site
template = '{{ settings("tests.testsetting", use_default_site=True).title}}'
self.assertEqual(
self.render(template, context, request_context=False),
self.default_site_settings.title,
)
|
Check that the {{ settings(use_default_site=True) }} option works with
no site in the context
| 14 | 23 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def test_settings_use_default_site(self):
context = {}
# This should use the default site
template = '{{ settings("tests.testsetting", use_default_site=True).title}}'
self.assertEqual(
self.render(template, context, request_context=False),
self.default_site_settings.title,
)
```
###Assistant :
Check that the {{ settings(use_default_site=True) }} option works with
no site in the context
|
2,774 | def test_stream_square_brackets_and_language():
infos =
d = FFmpegInfosParser(infos, "clip.mp4").parse()
assert d
assert len(d["inputs"][0]["streams"]) == 2
assert d["inputs"][0]["streams"][0]["language"] == "eng"
assert d["inputs"][0]["streams"][1]["language"] is None
|
Input #0, mpeg, from 'clip.mp4':
Duration: 00:02:15.00, start: 52874.498178, bitrate: 266 kb/s
Stream #0:0[0x1e0](eng): Video: ..., 25 tbr, 90k tbn, 50 tbc
Stream #0:1[0x1c0](und): Audio: mp2, 0 channels, s16p
At least one output file must be specified | 37 | 22 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def test_stream_square_brackets_and_language():
infos =
d = FFmpegInfosParser(infos, "clip.mp4").parse()
assert d
assert len(d["inputs"][0]["streams"]) == 2
assert d["inputs"][0]["streams"][0]["language"] == "eng"
assert d["inputs"][0]["streams"][1]["language"] is None
```
###Assistant :
Input #0, mpeg, from 'clip.mp4':
Duration: 00:02:15.00, start: 52874.498178, bitrate: 266 kb/s
Stream #0:0[0x1e0](eng): Video: ..., 25 tbr, 90k tbn, 50 tbc
Stream #0:1[0x1c0](und): Audio: mp2, 0 channels, s16p
At least one output file must be specified
|
2,775 | async def wait(self) -> None:
if self._is_set:
return
if not self._loop:
self._loop = get_running_loop()
self._event = asyncio.Event()
await self._event.wait()
|
Wait until the flag has been set.
If the flag has already been set when this method is called, it returns immediately.
| 22 | 19 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
async def wait(self) -> None:
if self._is_set:
return
if not self._loop:
self._loop = get_running_loop()
self._event = asyncio.Event()
await self._event.wait()
```
###Assistant :
Wait until the flag has been set.
If the flag has already been set when this method is called, it returns immediately.
|
2,776 | async def get_and_submit_flow_runs(self) -> List[FlowRun]:
if not self.started:
raise RuntimeError("Agent is not started. Use `async with OrionAgent()...`")
self.logger.debug("Checking for flow runs...")
before = pendulum.now("utc").add(
seconds=self.prefetch_seconds or PREFECT_AGENT_PREFETCH_SECONDS.value()
)
# Use the work queue id or load one from the name
work_queue_id = self.work_queue_id or await self.work_queue_id_from_name()
if not work_queue_id:
return []
try:
submittable_runs = await self.client.get_runs_in_work_queue(
id=work_queue_id, limit=10, scheduled_before=before
)
except httpx.HTTPStatusError as exc:
if exc.response.status_code == status.HTTP_404_NOT_FOUND:
raise ValueError(
f"No work queue found with id '{work_queue_id}'"
) from None
else:
raise
# Check for a paused work queue for display purposes
if not submittable_runs:
work_queue = await self.client.read_work_queue(work_queue_id)
if work_queue.is_paused:
self.logger.info(
f"Work queue {work_queue.name!r} ({work_queue.id}) is paused."
)
for flow_run in submittable_runs:
self.logger.info(f"Submitting flow run '{flow_run.id}'")
# don't resubmit a run
if flow_run.id in self.submitting_flow_run_ids:
continue
self.submitting_flow_run_ids.add(flow_run.id)
self.task_group.start_soon(
self.submit_run,
flow_run,
)
return submittable_runs
|
The principal method on agents. Queries for scheduled flow runs and submits
them for execution in parallel.
| 17 | 134 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
async def get_and_submit_flow_runs(self) -> List[FlowRun]:
if not self.started:
raise RuntimeError("Agent is not started. Use `async with OrionAgent()...`")
self.logger.debug("Checking for flow runs...")
before = pendulum.now("utc").add(
seconds=self.prefetch_seconds or PREFECT_AGENT_PREFETCH_SECONDS.value()
)
# Use the work queue id or load one from the name
work_queue_id = self.work_queue_id or await self.work_queue_id_from_name()
if not work_queue_id:
return []
try:
submittable_runs = await self.client.get_runs_in_work_queue(
id=work_queue_id, limit=10, scheduled_before=before
)
except httpx.HTTPStatusError as exc:
if exc.response.status_code == status.HTTP_404_NOT_FOUND:
raise ValueError(
f"No work queue found with id '{work_queue_id}'"
) from None
else:
raise
# Check for a paused work queue for display purposes
if not submittable_runs:
work_queue = await self.client.read_work_queue(work_queue_id)
if work_queue.is_paused:
self.logger.info(
f"Work queue {work_queue.name!r} ({work_queue.id}) is paused."
)
for flow_run in submittable_runs:
self.logger.info(f"Submitting flow run '{flow_run.id}'")
# don't resubmit a run
if flow_run.id in self.submitting_flow_run_ids:
continue
self.submitting_flow_run_ids.add(flow_run.id)
self.task_group.start_soon(
self.submit_run,
flow_run,
)
return submittable_runs
```
###Assistant :
The principal method on agents. Queries for scheduled flow runs and submits
them for execution in parallel.
|
2,777 | def conv_output_length(input_length, filter_size, padding, stride, dilation=1):
if input_length is None:
return None
assert padding in {"same", "valid", "full", "causal"}
dilated_filter_size = filter_size + (filter_size - 1) * (dilation - 1)
if padding in ["same", "causal"]:
output_length = input_length
elif padding == "valid":
output_length = input_length - dilated_filter_size + 1
elif padding == "full":
output_length = input_length + dilated_filter_size - 1
return (output_length + stride - 1) // stride
| Determines output length of a convolution given input length.
Args:
input_length: integer.
filter_size: integer.
padding: one of "same", "valid", "full", "causal"
stride: integer.
dilation: dilation rate, integer.
Returns:
The output length (integer).
| 32 | 68 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def conv_output_length(input_length, filter_size, padding, stride, dilation=1):
if input_length is None:
return None
assert padding in {"same", "valid", "full", "causal"}
dilated_filter_size = filter_size + (filter_size - 1) * (dilation - 1)
if padding in ["same", "causal"]:
output_length = input_length
elif padding == "valid":
output_length = input_length - dilated_filter_size + 1
elif padding == "full":
output_length = input_length + dilated_filter_size - 1
return (output_length + stride - 1) // stride
```
###Assistant : Determines output length of a convolution given input length.
Args:
input_length: integer.
filter_size: integer.
padding: one of "same", "valid", "full", "causal"
stride: integer.
dilation: dilation rate, integer.
Returns:
The output length (integer).
|
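A small worked check of the arithmetic above (values chosen arbitrarily, assuming the function is importable as shown): with `input_length=10` and `filter_size=3`, "valid" padding leaves 10 - 3 + 1 = 8 positions, and a stride of 2 keeps ceil(8 / 2) = 4 of them.
```Python
assert conv_output_length(10, 3, "valid", stride=2) == 4
assert conv_output_length(10, 3, "same", stride=1) == 10              # "same" keeps the length
assert conv_output_length(10, 3, "full", stride=1) == 12              # 10 + 3 - 1
assert conv_output_length(10, 3, "valid", stride=1, dilation=2) == 6  # dilated filter size is 5
```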
2,778 | def get_split_nodes(self):
rearport = path_node_to_object(self._nodes[-1])
return FrontPort.objects.filter(rear_port=rearport)
|
Return all available next segments in a split cable path.
| 10 | 7 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def get_split_nodes(self):
rearport = path_node_to_object(self._nodes[-1])
return FrontPort.objects.filter(rear_port=rearport)
```
###Assistant :
Return all available next segments in a split cable path.
|
2,779 | def get_template_names(self):
try:
names = super().get_template_names()
except ImproperlyConfigured:
# If template_name isn't specified, it's not a problem --
# we just start with an empty list.
names = []
# If the list is a queryset, we'll invent a template name based on the
# app and model name. This name gets put at the end of the template
# name list so that user-supplied names override the automatically-
# generated ones.
if hasattr(self.object_list, "model"):
opts = self.object_list.model._meta
names.append(
"%s/%s%s.html"
% (opts.app_label, opts.model_name, self.template_name_suffix)
)
elif not names:
raise ImproperlyConfigured(
"%(cls)s requires either a 'template_name' attribute "
"or a get_queryset() method that returns a QuerySet."
% {
"cls": self.__class__.__name__,
}
)
return names
|
Return a list of template names to be used for the request. Must return
a list. May not be called if render_to_response is overridden.
| 24 | 113 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def get_template_names(self):
try:
names = super().get_template_names()
except ImproperlyConfigured:
# If template_name isn't specified, it's not a problem --
# we just start with an empty list.
names = []
# If the list is a queryset, we'll invent a template name based on the
# app and model name. This name gets put at the end of the template
# name list so that user-supplied names override the automatically-
# generated ones.
if hasattr(self.object_list, "model"):
opts = self.object_list.model._meta
names.append(
"%s/%s%s.html"
% (opts.app_label, opts.model_name, self.template_name_suffix)
)
elif not names:
raise ImproperlyConfigured(
"%(cls)s requires either a 'template_name' attribute "
"or a get_queryset() method that returns a QuerySet."
% {
"cls": self.__class__.__name__,
}
)
return names
```
###Assistant :
Return a list of template names to be used for the request. Must return
a list. May not be called if render_to_response is overridden.
|
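For a concrete sense of the generated fallback name, a hypothetical `ListView` over an `Article` model in an app called `blog` (with the default `template_name_suffix = "_list"`) would end up looking for `"blog/article_list.html"`:
```Python
from django.views.generic import ListView
from blog.models import Article  # hypothetical app and model

class ArticleListView(ListView):
    model = Article
    # No template_name set: get_template_names() appends
    # "blog/article_list.html" built from app_label, model_name and the suffix.
```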
2,780 | def remove_lines(fname, entries):
to_remove = []
for entry in entries:
msg, issue, lineno, pos, descr = entry
# 'module imported but not used'
if issue == 'F401' and handle_f401(fname, lineno):
to_remove.append(lineno)
# 'blank line(s) at end of file'
elif issue == 'W391':
lines = read_lines(fname)
i = len(lines) - 1
while lines[i] == '\n':
to_remove.append(i + 1)
i -= 1
# 'too many blank lines'
elif issue == 'E303':
howmany = descr.replace('(', '').replace(')', '')
howmany = int(howmany[-1])
for x in range(lineno - howmany, lineno):
to_remove.append(x)
if to_remove:
newlines = []
for i, line in enumerate(read_lines(fname), 1):
if i not in to_remove:
newlines.append(line)
print("removing line(s) from %s" % fname)
write_file(fname, newlines)
return len(to_remove)
| Check if we should remove lines, then do it.
Return the number of lines removed.
| 15 | 112 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def remove_lines(fname, entries):
to_remove = []
for entry in entries:
msg, issue, lineno, pos, descr = entry
# 'module imported but not used'
if issue == 'F401' and handle_f401(fname, lineno):
to_remove.append(lineno)
# 'blank line(s) at end of file'
elif issue == 'W391':
lines = read_lines(fname)
i = len(lines) - 1
while lines[i] == '\n':
to_remove.append(i + 1)
i -= 1
# 'too many blank lines'
elif issue == 'E303':
howmany = descr.replace('(', '').replace(')', '')
howmany = int(howmany[-1])
for x in range(lineno - howmany, lineno):
to_remove.append(x)
if to_remove:
newlines = []
for i, line in enumerate(read_lines(fname), 1):
if i not in to_remove:
newlines.append(line)
print("removing line(s) from %s" % fname)
write_file(fname, newlines)
return len(to_remove)
```
###Assistant : Check if we should remove lines, then do it.
Return the number of lines removed.
|
2,781 | def _get_one_trial_job(self):
if not self.generated_hyper_configs:
ret = {
'parameter_id': '-1_0_0',
'parameter_source': 'algorithm',
'parameters': ''
}
self.send(CommandType.NoMoreTrialJobs, nni.dump(ret))
return None
assert self.generated_hyper_configs
params = self.generated_hyper_configs.pop(0)
ret = {
'parameter_id': params[0],
'parameter_source': 'algorithm',
'parameters': params[1]
}
self.parameters[params[0]] = params[1]
return ret
| get one trial job, i.e., one hyperparameter configuration.
If this function is called, Command will be sent by BOHB:
a. If there is a parameter that needs to run, it will return "NewTrialJob" with a dict:
{
'parameter_id': id of new hyperparameter
'parameter_source': 'algorithm'
'parameters': value of new hyperparameter
}
b. If BOHB doesn't have a parameter waiting, it will return "NoMoreTrialJobs" with
{
'parameter_id': '-1_0_0',
'parameter_source': 'algorithm',
'parameters': ''
}
| 67 | 39 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def _get_one_trial_job(self):
if not self.generated_hyper_configs:
ret = {
'parameter_id': '-1_0_0',
'parameter_source': 'algorithm',
'parameters': ''
}
self.send(CommandType.NoMoreTrialJobs, nni.dump(ret))
return None
assert self.generated_hyper_configs
params = self.generated_hyper_configs.pop(0)
ret = {
'parameter_id': params[0],
'parameter_source': 'algorithm',
'parameters': params[1]
}
self.parameters[params[0]] = params[1]
return ret
```
###Assistant : get one trial job, i.e., one hyperparameter configuration.
If this function is called, Command will be sent by BOHB:
a. If there is a parameter that needs to run, it will return "NewTrialJob" with a dict:
{
'parameter_id': id of new hyperparameter
'parameter_source': 'algorithm'
'parameters': value of new hyperparameter
}
b. If BOHB doesn't have a parameter waiting, it will return "NoMoreTrialJobs" with
{
'parameter_id': '-1_0_0',
'parameter_source': 'algorithm',
'parameters': ''
}
|
2,782 | def global_array_to_host_local_array(global_inputs, global_mesh, pspecs):
def _convert(arr, pspec):
local_aval = global_mesh._global_to_local(
pxla._get_array_mapping(pspec), arr.aval)
return array.ArrayImpl(
local_aval, MeshPspecSharding(global_mesh.local_mesh, pspec),
arr._arrays, committed=True)
flattened_inps, out_tree = tree_flatten(global_inputs)
out_pspecs = flatten_axis_resources(
'output pspecs', out_tree, pspecs, tupled_args=True)
out = tree_map(_convert, tuple(flattened_inps), out_pspecs)
return tree_unflatten(out_tree, out)
| Converts a global `jax.Array` to a host local `jax.Array`.
You can use this function to transition to `jax.Array`. Using `jax.Array` with
`pjit` has the same semantics of using GDA with pjit i.e. all `jax.Array`
inputs to pjit should be globally shaped and the output from `pjit` will also
be globally shaped `jax.Array`s
You can use this function to convert the globally shaped `jax.Array` output
from pjit to host local values again so that the transition to jax.Array can
be a mechanical change.
Example usage:
```
global_inputs = jax.experimental.pjit.host_local_array_to_global_array(
host_local_inputs, global_mesh, in_pspecs)
with mesh:
global_out = pjitted_fun(global_inputs)
host_local_output = jax.experimental.pjit.global_array_to_host_local_array(
global_out, mesh, out_pspecs)
```
Args:
global_inputs: A Pytree of global `jax.Array`s.
global_mesh: The global mesh.
pspecs: A Pytree of PartitionSpecs.
| 119 | 39 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def global_array_to_host_local_array(global_inputs, global_mesh, pspecs):
def _convert(arr, pspec):
local_aval = global_mesh._global_to_local(
pxla._get_array_mapping(pspec), arr.aval)
return array.ArrayImpl(
local_aval, MeshPspecSharding(global_mesh.local_mesh, pspec),
arr._arrays, committed=True)
flattened_inps, out_tree = tree_flatten(global_inputs)
out_pspecs = flatten_axis_resources(
'output pspecs', out_tree, pspecs, tupled_args=True)
out = tree_map(_convert, tuple(flattened_inps), out_pspecs)
return tree_unflatten(out_tree, out)
```
###Assistant : Converts a global `jax.Array` to a host local `jax.Array`.
You can use this function to transition to `jax.Array`. Using `jax.Array` with
`pjit` has the same semantics of using GDA with pjit i.e. all `jax.Array`
inputs to pjit should be globally shaped and the output from `pjit` will also
be globally shaped `jax.Array`s
You can use this function to convert the globally shaped `jax.Array` output
from pjit to host local values again so that the transition to jax.Array can
be a mechanical change.
Example usage:
```
global_inputs = jax.experimental.pjit.host_local_array_to_global_array(
host_local_inputs, global_mesh, in_pspecs)
with mesh:
global_out = pjitted_fun(global_inputs)
host_local_output = jax.experimental.pjit.global_array_to_host_local_array(
global_out, mesh, out_pspecs)
```
Args:
global_inputs: A Pytree of global `jax.Array`s.
global_mesh: The global mesh.
pspecs: A Pytree of PartitionSpecs.
|
2,783 | def __setitem__(self, key, value):
super(Py27Dict, self).__setitem__(key, value)
self.keylist.add(key)
|
Override of __setitem__ to track keys and simulate Python2.7 dict
Parameters
----------
key: hashable
value: Any
| 16 | 8 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def __setitem__(self, key, value):
super(Py27Dict, self).__setitem__(key, value)
self.keylist.add(key)
```
###Assistant :
Override of __setitem__ to track keys and simulate Python2.7 dict
Parameters
----------
key: hashable
value: Any
|
2,784 | def virtualenv_no_global() -> bool:
# PEP 405 compliance needs to be checked first since virtualenv >=20 would
# return True for both checks, but is only able to use the PEP 405 config.
if _running_under_venv():
return _no_global_under_venv()
if _running_under_regular_virtualenv():
return _no_global_under_regular_virtualenv()
return False
| Returns a boolean, whether running in venv with no system site-packages. | 11 | 43 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def virtualenv_no_global() -> bool:
# PEP 405 compliance needs to be checked first since virtualenv >=20 would
# return True for both checks, but is only able to use the PEP 405 config.
if _running_under_venv():
return _no_global_under_venv()
if _running_under_regular_virtualenv():
return _no_global_under_regular_virtualenv()
return False
```
###Assistant : Returns a boolean, whether running in venv with no system site-packages.
|
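The two private helpers are not shown here, but the PEP 405 part of the check can be sketched from the standard library alone; pip's actual implementation may differ in detail.
```Python
import sys

def running_under_venv() -> bool:
    # Inside a PEP 405 venv, sys.prefix points at the venv while
    # sys.base_prefix still points at the base interpreter.
    return sys.prefix != getattr(sys, "base_prefix", sys.prefix)
```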
2,785 | def autocomplete(self):
texts = []
for field in self.search_fields:
for current_field, value in self.prepare_field(self.obj, field):
if isinstance(current_field, AutocompleteField):
texts.append((value))
return " ".join(texts)
|
Returns all values to index as "autocomplete". This is the value of all AutocompleteFields
| 14 | 22 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def autocomplete(self):
texts = []
for field in self.search_fields:
for current_field, value in self.prepare_field(self.obj, field):
if isinstance(current_field, AutocompleteField):
texts.append((value))
return " ".join(texts)
```
###Assistant :
Returns all values to index as "autocomplete". This is the value of all AutocompleteFields
|
2,786 | def list_to_tuple(maybe_list):
if isinstance(maybe_list, list):
return tuple(maybe_list)
return maybe_list
 | Datasets will stack the list of tensors, so switch them to tuples. | 12 | 9 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def list_to_tuple(maybe_list):
if isinstance(maybe_list, list):
return tuple(maybe_list)
return maybe_list
```
###Assistant : Datasets will stack the list of tensors, so switch them to tuples.
|
2,787 | def test_not_recorded_for_unused(self, dag_maker, xcom_value):
with dag_maker(dag_id="test_not_recorded_for_unused") as dag:
| A value not used for task-mapping should not be recorded. | 10 | 8 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def test_not_recorded_for_unused(self, dag_maker, xcom_value):
with dag_maker(dag_id="test_not_recorded_for_unused") as dag:
```
###Assistant : A value not used for task-mapping should not be recorded.
|
2,788 | def _ragged_tensor_mse(y_true, y_pred):
return _ragged_tensor_apply_loss(mean_squared_error, y_true, y_pred)
@keras_export(
"keras.metrics.mean_absolute_error",
"keras.metrics.mae",
"keras.metrics.MAE",
"keras.losses.mean_absolute_error",
"keras.losses.mae",
"keras.losses.MAE",
)
@tf.__internal__.dispatch.add_dispatch_support | Implements support for handling RaggedTensors.
Args:
y_true: RaggedTensor truth values. shape = `[batch_size, d0, .. dN]`.
y_pred: RaggedTensor predicted values. shape = `[batch_size, d0, .. dN]`.
Returns:
Mean squared error values. shape = `[batch_size, d0, .. dN-1]`.
When the number of dimensions of the batch feature vector [d0, .. dN] is
greater than one the return value is a RaggedTensor. Otherwise a Dense
tensor with dimensions [batch_size] is returned.
| 69 | 16 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def _ragged_tensor_mse(y_true, y_pred):
return _ragged_tensor_apply_loss(mean_squared_error, y_true, y_pred)
@keras_export(
"keras.metrics.mean_absolute_error",
"keras.metrics.mae",
"keras.metrics.MAE",
"keras.losses.mean_absolute_error",
"keras.losses.mae",
"keras.losses.MAE",
)
@tf.__internal__.dispatch.add_dispatch_support
```
###Assistant : Implements support for handling RaggedTensors.
Args:
y_true: RaggedTensor truth values. shape = `[batch_size, d0, .. dN]`.
y_pred: RaggedTensor predicted values. shape = `[batch_size, d0, .. dN]`.
Returns:
Mean squared error values. shape = `[batch_size, d0, .. dN-1]`.
When the number of dimensions of the batch feature vector [d0, .. dN] is
greater than one the return value is a RaggedTensor. Otherwise a Dense
tensor with dimensions [batch_size] is returned.
|
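A hedged usage sketch of the ragged dispatch described above, via the public Keras loss function rather than the private helper; the sample values are arbitrary.
```Python
import tensorflow as tf

y_true = tf.ragged.constant([[0.0, 1.0, 1.0], [1.0, 0.0]])
y_pred = tf.ragged.constant([[0.1, 0.9, 0.8], [0.9, 0.2]])

# One MSE value per (variable-length) row; shape [batch_size].
loss = tf.keras.losses.mean_squared_error(y_true, y_pred)
print(loss)
```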
2,789 | def test_does_not_allow_extra_kwargs() -> None:
template = "This is a {foo} test."
with pytest.raises(KeyError):
formatter.format(template, foo="good", bar="oops")
| Test formatting does not allow extra key word arguments. | 9 | 16 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def test_does_not_allow_extra_kwargs() -> None:
template = "This is a {foo} test."
with pytest.raises(KeyError):
formatter.format(template, foo="good", bar="oops")
```
###Assistant : Test formatting does not allow extra key word arguments.
|
2,790 | def score_samples(self, X):
check_is_fitted(self)
X = check_array(X, accept_sparse="csr")
distances_X, neighbors_indices_X = self.kneighbors(
X, n_neighbors=self.n_neighbors_
)
X_lrd = self._local_reachability_density(distances_X, neighbors_indices_X)
lrd_ratios_array = self._lrd[neighbors_indices_X] / X_lrd[:, np.newaxis]
# as bigger is better:
return -np.mean(lrd_ratios_array, axis=1)
| Opposite of the Local Outlier Factor of X.
It is the opposite as bigger is better, i.e. large values correspond
to inliers.
**Only available for novelty detection (when novelty is set to True).**
The argument X is supposed to contain *new data*: if X contains a
point from training, it considers the latter in its own neighborhood.
Also, the samples in X are not considered in the neighborhood of any
point. Because of this, the scores obtained via ``score_samples`` may
differ from the standard LOF scores.
The standard LOF scores for the training data is available via the
``negative_outlier_factor_`` attribute.
Parameters
----------
X : array-like of shape (n_samples, n_features)
The query sample or samples to compute the Local Outlier Factor
w.r.t. the training samples.
Returns
-------
opposite_lof_scores : ndarray of shape (n_samples,)
The opposite of the Local Outlier Factor of each input samples.
The lower, the more abnormal.
| 148 | 33 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def score_samples(self, X):
check_is_fitted(self)
X = check_array(X, accept_sparse="csr")
distances_X, neighbors_indices_X = self.kneighbors(
X, n_neighbors=self.n_neighbors_
)
X_lrd = self._local_reachability_density(distances_X, neighbors_indices_X)
lrd_ratios_array = self._lrd[neighbors_indices_X] / X_lrd[:, np.newaxis]
# as bigger is better:
return -np.mean(lrd_ratios_array, axis=1)
```
###Assistant : Opposite of the Local Outlier Factor of X.
It is the opposite as bigger is better, i.e. large values correspond
to inliers.
**Only available for novelty detection (when novelty is set to True).**
The argument X is supposed to contain *new data*: if X contains a
point from training, it considers the latter in its own neighborhood.
Also, the samples in X are not considered in the neighborhood of any
point. Because of this, the scores obtained via ``score_samples`` may
differ from the standard LOF scores.
The standard LOF scores for the training data is available via the
``negative_outlier_factor_`` attribute.
Parameters
----------
X : array-like of shape (n_samples, n_features)
The query sample or samples to compute the Local Outlier Factor
w.r.t. the training samples.
Returns
-------
opposite_lof_scores : ndarray of shape (n_samples,)
The opposite of the Local Outlier Factor of each input samples.
The lower, the more abnormal.
|
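A short sketch of the novelty workflow the docstring refers to (data values are arbitrary): the estimator has to be fitted with `novelty=True` before `score_samples` can be called on new points.
```Python
import numpy as np
from sklearn.neighbors import LocalOutlierFactor

X_train = np.array([[0.0], [0.1], [0.2], [0.3], [10.0]])
lof = LocalOutlierFactor(n_neighbors=2, novelty=True).fit(X_train)

scores = lof.score_samples(np.array([[0.15], [8.0]]))
# Values near -1 look like inliers; strongly negative values look like outliers.
print(scores)
```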
2,791 | def get_payroll_period_days(start_date, end_date, employee, company=None):
if not company:
company = frappe.db.get_value("Employee", employee, "company")
payroll_period = frappe.db.sql(
,
{"company": company, "start_date": start_date, "end_date": end_date},
)
if len(payroll_period) > 0:
actual_no_of_days = date_diff(getdate(payroll_period[0][2]), getdate(payroll_period[0][1])) + 1
working_days = actual_no_of_days
if not cint(
frappe.db.get_value("Payroll Settings", None, "include_holidays_in_total_working_days")
):
holidays = get_holiday_dates_for_employee(
employee, getdate(payroll_period[0][1]), getdate(payroll_period[0][2])
)
working_days -= len(holidays)
return payroll_period[0][0], working_days, actual_no_of_days
return False, False, False
|
select name, start_date, end_date
from `tabPayroll Period`
where
company=%(company)s
and %(start_date)s between start_date and end_date
and %(end_date)s between start_date and end_date
| 21 | 63 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def get_payroll_period_days(start_date, end_date, employee, company=None):
if not company:
company = frappe.db.get_value("Employee", employee, "company")
payroll_period = frappe.db.sql(
,
{"company": company, "start_date": start_date, "end_date": end_date},
)
if len(payroll_period) > 0:
actual_no_of_days = date_diff(getdate(payroll_period[0][2]), getdate(payroll_period[0][1])) + 1
working_days = actual_no_of_days
if not cint(
frappe.db.get_value("Payroll Settings", None, "include_holidays_in_total_working_days")
):
holidays = get_holiday_dates_for_employee(
employee, getdate(payroll_period[0][1]), getdate(payroll_period[0][2])
)
working_days -= len(holidays)
return payroll_period[0][0], working_days, actual_no_of_days
return False, False, False
```
###Assistant :
select name, start_date, end_date
from `tabPayroll Period`
where
company=%(company)s
and %(start_date)s between start_date and end_date
and %(end_date)s between start_date and end_date
|
2,792 | def density(B, nodes):
n = len(B)
m = nx.number_of_edges(B)
nb = len(nodes)
nt = n - nb
if m == 0: # includes cases n==0 and n==1
d = 0.0
else:
if B.is_directed():
d = m / (2 * nb * nt)
else:
d = m / (nb * nt)
return d
| Returns density of bipartite graph B.
Parameters
----------
B : NetworkX graph
nodes: list or container
Nodes in one node set of the bipartite graph.
Returns
-------
d : float
The bipartite density
Examples
--------
>>> from networkx.algorithms import bipartite
>>> G = nx.complete_bipartite_graph(3, 2)
>>> X = set([0, 1, 2])
>>> bipartite.density(G, X)
1.0
>>> Y = set([3, 4])
>>> bipartite.density(G, Y)
1.0
Notes
-----
The container of nodes passed as argument must contain all nodes
in one of the two bipartite node sets to avoid ambiguity in the
case of disconnected graphs.
See :mod:`bipartite documentation <networkx.algorithms.bipartite>`
for further details on how bipartite graphs are handled in NetworkX.
See Also
--------
color
| 113 | 52 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def density(B, nodes):
n = len(B)
m = nx.number_of_edges(B)
nb = len(nodes)
nt = n - nb
if m == 0: # includes cases n==0 and n==1
d = 0.0
else:
if B.is_directed():
d = m / (2 * nb * nt)
else:
d = m / (nb * nt)
return d
```
###Assistant : Returns density of bipartite graph B.
Parameters
----------
B : NetworkX graph
nodes: list or container
Nodes in one node set of the bipartite graph.
Returns
-------
d : float
The bipartite density
Examples
--------
>>> from networkx.algorithms import bipartite
>>> G = nx.complete_bipartite_graph(3, 2)
>>> X = set([0, 1, 2])
>>> bipartite.density(G, X)
1.0
>>> Y = set([3, 4])
>>> bipartite.density(G, Y)
1.0
Notes
-----
The container of nodes passed as argument must contain all nodes
in one of the two bipartite node sets to avoid ambiguity in the
case of disconnected graphs.
See :mod:`bipartite documentation <networkx.algorithms.bipartite>`
for further details on how bipartite graphs are handled in NetworkX.
See Also
--------
color
|
2,793 | def redirect_or_json(origin, msg, status=""):
if request.headers.get('Accept') == 'application/json':
return {'status': status, 'message': msg}
else:
if status:
flash(msg, status)
else:
flash(msg)
return redirect(origin)
######################################################################################
# Error handlers
######################################################################################
|
Some endpoints are called by javascript,
returning json will allow us to more elegantly handle side-effects in-page
| 17 | 27 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def redirect_or_json(origin, msg, status=""):
if request.headers.get('Accept') == 'application/json':
return {'status': status, 'message': msg}
else:
if status:
flash(msg, status)
else:
flash(msg)
return redirect(origin)
######################################################################################
# Error handlers
######################################################################################
```
###Assistant :
Some endpoints are called by javascript,
returning json will allow us to more elegantly handle side-effects in-page
|
2,794 | def _map_drop_idx_to_infrequent(self, feature_idx, drop_idx):
if not self._infrequent_enabled:
return drop_idx
default_to_infrequent = self._default_to_infrequent_mappings[feature_idx]
if default_to_infrequent is None:
return drop_idx
# Raise error when explicitly dropping a category that is infrequent
infrequent_indices = self._infrequent_indices[feature_idx]
if infrequent_indices is not None and drop_idx in infrequent_indices:
categories = self.categories_[feature_idx]
raise ValueError(
f"Unable to drop category {categories[drop_idx]!r} from feature"
f" {feature_idx} because it is infrequent"
)
return default_to_infrequent[drop_idx]
| Convert `drop_idx` into the index for infrequent categories.
If there are no infrequent categories, then `drop_idx` is
returned. This method is called in `_compute_drop_idx` when the `drop`
parameter is an array-like.
| 31 | 62 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def _map_drop_idx_to_infrequent(self, feature_idx, drop_idx):
if not self._infrequent_enabled:
return drop_idx
default_to_infrequent = self._default_to_infrequent_mappings[feature_idx]
if default_to_infrequent is None:
return drop_idx
# Raise error when explicitly dropping a category that is infrequent
infrequent_indices = self._infrequent_indices[feature_idx]
if infrequent_indices is not None and drop_idx in infrequent_indices:
categories = self.categories_[feature_idx]
raise ValueError(
f"Unable to drop category {categories[drop_idx]!r} from feature"
f" {feature_idx} because it is infrequent"
)
return default_to_infrequent[drop_idx]
```
###Assistant : Convert `drop_idx` into the index for infrequent categories.
If there are no infrequent categories, then `drop_idx` is
returned. This method is called in `_compute_drop_idx` when the `drop`
parameter is an array-like.
|
2,795 | def test_dashboard_module_decorator(enable_test_module):
head_cls_list = dashboard_utils.get_all_modules(dashboard_utils.DashboardHeadModule)
agent_cls_list = dashboard_utils.get_all_modules(
dashboard_utils.DashboardAgentModule
)
assert any(cls.__name__ == "TestHead" for cls in head_cls_list)
assert any(cls.__name__ == "TestAgent" for cls in agent_cls_list)
test_code =
run_string_as_driver(test_code)
|
import os
import ray.dashboard.utils as dashboard_utils
os.environ.pop("RAY_DASHBOARD_MODULE_TEST")
head_cls_list = dashboard_utils.get_all_modules(
dashboard_utils.DashboardHeadModule)
agent_cls_list = dashboard_utils.get_all_modules(
dashboard_utils.DashboardAgentModule)
print(head_cls_list)
print(agent_cls_list)
assert all(cls.__name__ != "TestHead" for cls in head_cls_list)
assert all(cls.__name__ != "TestAgent" for cls in agent_cls_list)
print("success")
| 34 | 29 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def test_dashboard_module_decorator(enable_test_module):
head_cls_list = dashboard_utils.get_all_modules(dashboard_utils.DashboardHeadModule)
agent_cls_list = dashboard_utils.get_all_modules(
dashboard_utils.DashboardAgentModule
)
assert any(cls.__name__ == "TestHead" for cls in head_cls_list)
assert any(cls.__name__ == "TestAgent" for cls in agent_cls_list)
test_code =
run_string_as_driver(test_code)
```
###Assistant :
import os
import ray.dashboard.utils as dashboard_utils
os.environ.pop("RAY_DASHBOARD_MODULE_TEST")
head_cls_list = dashboard_utils.get_all_modules(
dashboard_utils.DashboardHeadModule)
agent_cls_list = dashboard_utils.get_all_modules(
dashboard_utils.DashboardAgentModule)
print(head_cls_list)
print(agent_cls_list)
assert all(cls.__name__ != "TestHead" for cls in head_cls_list)
assert all(cls.__name__ != "TestAgent" for cls in agent_cls_list)
print("success")
|
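The test registers dashboard modules behind an environment variable and then checks, via `run_string_as_driver`, that a fresh driver process no longer sees them once the variable is removed. A hedged sketch of the general "run a code string in a fresh interpreter" pattern that such a helper presumably wraps; the function name and behaviour here are assumptions, not Ray's actual test utility:
```Python
import subprocess
import sys

def run_snippet_in_fresh_interpreter(code, env=None):
    # Run the snippet in a separate Python process so that module-level
    # registration (e.g. env-var-gated dashboard modules) starts from scratch.
    result = subprocess.run(
        [sys.executable, "-c", code],
        capture_output=True,
        text=True,
        env=env,
        check=True,
    )
    return result.stdout

print(run_snippet_in_fresh_interpreter("print('success')"), end="")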
2,796 | def parsestr(self, text, headersonly=False):
return self.parse(StringIO(text), headersonly=headersonly) | Create a message structure from a string.
Returns the root of the message structure. Optional headersonly is a
flag specifying whether to stop parsing after reading the headers or
not. The default is False, meaning it parses the entire contents of
the file.
| 43 | 7 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def parsestr(self, text, headersonly=False):
return self.parse(StringIO(text), headersonly=headersonly)
```
###Assistant : Create a message structure from a string.
Returns the root of the message structure. Optional headersonly is a
flag specifying whether to stop parsing after reading the headers or
not. The default is False, meaning it parses the entire contents of
the file.
|
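This matches the standard library's `email.parser.Parser.parsestr`, which simply wraps `parse` around a `StringIO`. A small usage sketch with the stdlib parser:
```Python
from email.parser import Parser

raw = "From: alice@example.com\nSubject: hello\n\nbody text\n"

msg = Parser().parsestr(raw)
print(msg["Subject"])      # hello
print(msg.get_payload())   # body text

# headersonly=True stops parsing after the headers, leaving the body untouched.
headers = Parser().parsestr(raw, headersonly=True)
print(headers["From"])     # alice@example.com
```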
2,797 | def get_prerequisite_model(queryset):
if not queryset.exists():
for prereq in getattr(queryset.model, 'prerequisite_models', []):
model = apps.get_model(prereq)
if not model.objects.exists():
return model
|
Return any prerequisite model that must be created prior to creating
an instance of the current model.
| 17 | 19 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def get_prerequisite_model(queryset):
if not queryset.exists():
for prereq in getattr(queryset.model, 'prerequisite_models', []):
model = apps.get_model(prereq)
if not model.objects.exists():
return model
```
###Assistant :
Return any prerequisite model that must be created prior to creating
an instance of the current model.
|
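The helper walks a model's declared `prerequisite_models` and returns the first one with no instances, so a Django-style view can warn the user to create that dependency first. A runnable pure-Python analogue of the same check; the registry, model names, and prerequisite mapping are illustrative stand-ins for Django models and querysets:
```Python
# Stand-ins for "what instances exist" and "which models must exist first".
EXISTING = {
    "dcim.Site": [],            # no sites created yet
    "dcim.Rack": [],
}
PREREQUISITES = {"dcim.Rack": ["dcim.Site"]}

def first_missing_prerequisite(model_name):
    if EXISTING[model_name]:                     # queryset.exists() analogue
        return None
    for prereq in PREREQUISITES.get(model_name, []):
        if not EXISTING[prereq]:                 # prerequisite has no instances
            return prereq
    return None

print(first_missing_prerequisite("dcim.Rack"))   # dcim.Site
```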
2,798 | def filter_empty_gradients(grads_and_vars):
grads_and_vars = tuple(grads_and_vars)
if not grads_and_vars:
return grads_and_vars
filtered = []
vars_with_empty_grads = []
for grad, var in grads_and_vars:
if grad is None:
vars_with_empty_grads.append(var)
else:
filtered.append((grad, var))
filtered = tuple(filtered)
if not filtered:
variable = ([v.name for _, v in grads_and_vars],)
raise ValueError(
f"No gradients provided for any variable: {variable}. "
f"Provided `grads_and_vars` is {grads_and_vars}."
)
if vars_with_empty_grads:
logging.warning(
(
"Gradients do not exist for variables %s when minimizing the loss. "
"If you're using `model.compile()`, did you forget to provide a `loss`"
"argument?"
),
([v.name for v in vars_with_empty_grads]),
)
return filtered
| Filter out `(grad, var)` pairs that have a gradient equal to `None`. | 12 | 95 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def filter_empty_gradients(grads_and_vars):
grads_and_vars = tuple(grads_and_vars)
if not grads_and_vars:
return grads_and_vars
filtered = []
vars_with_empty_grads = []
for grad, var in grads_and_vars:
if grad is None:
vars_with_empty_grads.append(var)
else:
filtered.append((grad, var))
filtered = tuple(filtered)
if not filtered:
variable = ([v.name for _, v in grads_and_vars],)
raise ValueError(
f"No gradients provided for any variable: {variable}. "
f"Provided `grads_and_vars` is {grads_and_vars}."
)
if vars_with_empty_grads:
logging.warning(
(
"Gradients do not exist for variables %s when minimizing the loss. "
"If you're using `model.compile()`, did you forget to provide a `loss`"
"argument?"
),
([v.name for v in vars_with_empty_grads]),
)
return filtered
```
###Assistant : Filter out `(grad, var)` pairs that have a gradient equal to `None`.
|
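The function drops `(grad, var)` pairs whose gradient is `None`, which is what a Keras-style optimizer sees when a variable is not connected to the loss. A self-contained demonstration of the same filtering using lightweight stand-ins instead of real TensorFlow variables and tensors:
```Python
# FakeVar mimics the .name attribute the helper reads from real variables.
class FakeVar:
    def __init__(self, name):
        self.name = name

grads_and_vars = [
    (0.5, FakeVar("dense/kernel")),
    (None, FakeVar("unused/bias")),      # disconnected from the loss
    (1.25, FakeVar("dense/bias")),
]

# Keep only pairs with a real gradient; collect the names of skipped variables.
filtered = tuple((g, v) for g, v in grads_and_vars if g is not None)
skipped = [v.name for g, v in grads_and_vars if g is None]

print([v.name for _, v in filtered])  # ['dense/kernel', 'dense/bias']
print(skipped)                        # ['unused/bias']
```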
2,799 | def real_gaunt(l_1, l_2, l_3, m_1, m_2, m_3, prec=None):
r
l_1, l_2, l_3, m_1, m_2, m_3 = [
as_int(i) for i in (l_1, l_2, l_3, m_1, m_2, m_3)]
# check for quick exits
if sum(1 for i in (m_1, m_2, m_3) if i < 0) % 2:
return S.Zero # odd number of negative m
if (l_1 + l_2 + l_3) % 2:
return S.Zero # sum of l is odd
lmax = l_2 + l_3
lmin = max(abs(l_2 - l_3), min(abs(m_2 + m_3), abs(m_2 - m_3)))
if (lmin + lmax) % 2:
lmin += 1
if lmin not in range(lmax, lmin - 2, -2):
return S.Zero
kron_del = lambda i, j: 1 if i == j else 0
s = lambda e: -1 if e % 2 else 1 # (-1)**e to give +/-1, avoiding float when e<0
A = lambda a, b: (-kron_del(a, b)*s(a-b) + kron_del(a, -b)*
s(b)) if b < 0 else 0
B = lambda a, b: (kron_del(a, b) + kron_del(a, -b)*s(a)) if b > 0 else 0
C = lambda a, b: kron_del(abs(a), abs(b))*(kron_del(a, 0)*kron_del(b, 0) +
(B(a, b) + I*A(a, b))/sqrt(2))
ugnt = 0
for i in range(-l_1, l_1+1):
U1 = C(i, m_1)
for j in range(-l_2, l_2+1):
U2 = C(j, m_2)
U3 = C(-i-j, m_3)
ugnt = ugnt + re(U1*U2*U3)*gaunt(l_1, l_2, l_3, i, j, -i-j)
if prec is not None:
ugnt = ugnt.n(prec)
return ugnt
|
Calculate the real Gaunt coefficient.
Explanation
===========
The real Gaunt coefficient is defined as the integral over three
real spherical harmonics:
.. math::
\begin{aligned}
\operatorname{RealGaunt}(l_1,l_2,l_3,m_1,m_2,m_3)
&=\int Z^{m_1}_{l_1}(\Omega)
Z^{m_2}_{l_2}(\Omega) Z^{m_3}_{l_3}(\Omega) \,d\Omega \\
\end{aligned}
Alternatively, it can be defined in terms of the standard Gaunt
coefficient by relating the real spherical harmonics to the standard
spherical harmonics via a unitary transformation `U`, i.e.
`Z^{m}_{l}(\Omega)=\sum_{m'}U^{m}_{m'}Y^{m'}_{l}(\Omega)` [Homeier96]_.
The real Gaunt coefficient is then defined as
.. math::
\begin{aligned}
\operatorname{RealGaunt}(l_1,l_2,l_3,m_1,m_2,m_3)
&=\int Z^{m_1}_{l_1}(\Omega)
Z^{m_2}_{l_2}(\Omega) Z^{m_3}_{l_3}(\Omega) \,d\Omega \\
&=\sum_{m'_1 m'_2 m'_3} U^{m_1}_{m'_1}U^{m_2}_{m'_2}U^{m_3}_{m'_3}
\operatorname{Gaunt}(l_1,l_2,l_3,m'_1,m'_2,m'_3)
\end{aligned}
The unitary matrix `U` has components
.. math::
\begin{aligned}
U^m_{m'} = \delta_{|m||m'|}*(\delta_{m'0}\delta_{m0} + \frac{1}{\sqrt{2}}\big[\Theta(m)
\big(\delta_{m'm}+(-1)^{m'}\delta_{m'-m}\big)+i\Theta(-m)\big((-1)^{-m}
\delta_{m'-m}-\delta_{m'm}*(-1)^{m'-m}\big)\big])
\end{aligned}
where `\delta_{ij}` is the Kronecker delta symbol and `\Theta` is a step
function defined as
.. math::
\begin{aligned}
\Theta(x) = \begin{cases} 1 \,\text{for}\, x > 0 \\ 0 \,\text{for}\, x \leq 0 \end{cases}
\end{aligned}
Parameters
==========
l_1, l_2, l_3, m_1, m_2, m_3 :
Integer.
prec - precision, default: ``None``.
Providing a precision can
drastically speed up the calculation.
Returns
=======
Rational number times the square root of a rational number.
Examples
========
>>> from sympy.physics.wigner import real_gaunt
>>> real_gaunt(2,2,4,-1,-1,0)
-2/(7*sqrt(pi))
>>> real_gaunt(10,10,20,-9,-9,0).n(64)
-0.00002480019791932209313156167...
It is an error to use non-integer values for `l` and `m`::
real_gaunt(2.8,0.5,1.3,0,0,0)
Traceback (most recent call last):
...
ValueError: l values must be integer
real_gaunt(2,2,4,0.7,1,-3.4)
Traceback (most recent call last):
...
ValueError: m values must be integer
Notes
=====
The real Gaunt coefficient inherits from the standard Gaunt coefficient,
the invariance under any permutation of the pairs `(l_i, m_i)` and the
requirement that the sum of the `l_i` be even to yield a non-zero value.
It also obeys the following symmetry rules:
- zero for `l_1`, `l_2`, `l_3` not fulfilling the condition
`l_1 \in \{l_{\text{max}}, l_{\text{max}}-2, \ldots, l_{\text{min}}\}`,
where `l_{\text{max}} = l_2+l_3`,
.. math::
\begin{aligned}
l_{\text{min}} = \begin{cases} \kappa(l_2, l_3, m_2, m_3) & \text{if}\,
\kappa(l_2, l_3, m_2, m_3) + l_{\text{max}}\, \text{is even} \\
\kappa(l_2, l_3, m_2, m_3)+1 & \text{if}\, \kappa(l_2, l_3, m_2, m_3) +
l_{\text{max}}\, \text{is odd}\end{cases}
\end{aligned}
and `\kappa(l_2, l_3, m_2, m_3) = \max{\big(|l_2-l_3|, \min{\big(|m_2+m_3|,
|m_2-m_3|\big)}\big)}`
- zero for an odd number of negative `m_i`
Algorithms
==========
This function uses the algorithms of [Homeier96]_ and [Rasch03]_ to
calculate the value of the real Gaunt coefficient exactly. Note that
the formula used in [Rasch03]_ contains alternating sums over large
factorials and is therefore unsuitable for finite precision arithmetic
and only useful for a computer algebra system [Rasch03]_. However, this
function can in principle use any algorithm that computes the Gaunt
coefficient, so it is suitable for finite precision arithmetic in so far
as the algorithm which computes the Gaunt coefficient is.
| 429 | 231 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def real_gaunt(l_1, l_2, l_3, m_1, m_2, m_3, prec=None):
r
l_1, l_2, l_3, m_1, m_2, m_3 = [
as_int(i) for i in (l_1, l_2, l_3, m_1, m_2, m_3)]
# check for quick exits
if sum(1 for i in (m_1, m_2, m_3) if i < 0) % 2:
return S.Zero # odd number of negative m
if (l_1 + l_2 + l_3) % 2:
return S.Zero # sum of l is odd
lmax = l_2 + l_3
lmin = max(abs(l_2 - l_3), min(abs(m_2 + m_3), abs(m_2 - m_3)))
if (lmin + lmax) % 2:
lmin += 1
if lmin not in range(lmax, lmin - 2, -2):
return S.Zero
kron_del = lambda i, j: 1 if i == j else 0
s = lambda e: -1 if e % 2 else 1 # (-1)**e to give +/-1, avoiding float when e<0
A = lambda a, b: (-kron_del(a, b)*s(a-b) + kron_del(a, -b)*
s(b)) if b < 0 else 0
B = lambda a, b: (kron_del(a, b) + kron_del(a, -b)*s(a)) if b > 0 else 0
C = lambda a, b: kron_del(abs(a), abs(b))*(kron_del(a, 0)*kron_del(b, 0) +
(B(a, b) + I*A(a, b))/sqrt(2))
ugnt = 0
for i in range(-l_1, l_1+1):
U1 = C(i, m_1)
for j in range(-l_2, l_2+1):
U2 = C(j, m_2)
U3 = C(-i-j, m_3)
ugnt = ugnt + re(U1*U2*U3)*gaunt(l_1, l_2, l_3, i, j, -i-j)
if prec is not None:
ugnt = ugnt.n(prec)
return ugnt
```
###Assistant :
Calculate the real Gaunt coefficient.
Explanation
===========
The real Gaunt coefficient is defined as the integral over three
real spherical harmonics:
.. math::
\begin{aligned}
\operatorname{RealGaunt}(l_1,l_2,l_3,m_1,m_2,m_3)
&=\int Z^{m_1}_{l_1}(\Omega)
Z^{m_2}_{l_2}(\Omega) Z^{m_3}_{l_3}(\Omega) \,d\Omega \\
\end{aligned}
Alternatively, it can be defined in terms of the standard Gaunt
coefficient by relating the real spherical harmonics to the standard
spherical harmonics via a unitary transformation `U`, i.e.
`Z^{m}_{l}(\Omega)=\sum_{m'}U^{m}_{m'}Y^{m'}_{l}(\Omega)` [Homeier96]_.
The real Gaunt coefficient is then defined as
.. math::
\begin{aligned}
\operatorname{RealGaunt}(l_1,l_2,l_3,m_1,m_2,m_3)
&=\int Z^{m_1}_{l_1}(\Omega)
Z^{m_2}_{l_2}(\Omega) Z^{m_3}_{l_3}(\Omega) \,d\Omega \\
&=\sum_{m'_1 m'_2 m'_3} U^{m_1}_{m'_1}U^{m_2}_{m'_2}U^{m_3}_{m'_3}
\operatorname{Gaunt}(l_1,l_2,l_3,m'_1,m'_2,m'_3)
\end{aligned}
The unitary matrix `U` has components
.. math::
\begin{aligned}
U^m_{m'} = \delta_{|m||m'|}*(\delta_{m'0}\delta_{m0} + \frac{1}{\sqrt{2}}\big[\Theta(m)
\big(\delta_{m'm}+(-1)^{m'}\delta_{m'-m}\big)+i\Theta(-m)\big((-1)^{-m}
\delta_{m'-m}-\delta_{m'm}*(-1)^{m'-m}\big)\big])
\end{aligned}
where `\delta_{ij}` is the Kronecker delta symbol and `\Theta` is a step
function defined as
.. math::
\begin{aligned}
\Theta(x) = \begin{cases} 1 \,\text{for}\, x > 0 \\ 0 \,\text{for}\, x \leq 0 \end{cases}
\end{aligned}
Parameters
==========
l_1, l_2, l_3, m_1, m_2, m_3 :
Integer.
prec - precision, default: ``None``.
Providing a precision can
drastically speed up the calculation.
Returns
=======
Rational number times the square root of a rational number.
Examples
========
>>> from sympy.physics.wigner import real_gaunt
>>> real_gaunt(2,2,4,-1,-1,0)
-2/(7*sqrt(pi))
>>> real_gaunt(10,10,20,-9,-9,0).n(64)
-0.00002480019791932209313156167...
It is an error to use non-integer values for `l` and `m`::
real_gaunt(2.8,0.5,1.3,0,0,0)
Traceback (most recent call last):
...
ValueError: l values must be integer
real_gaunt(2,2,4,0.7,1,-3.4)
Traceback (most recent call last):
...
ValueError: m values must be integer
Notes
=====
The real Gaunt coefficient inherits from the standard Gaunt coefficient,
the invariance under any permutation of the pairs `(l_i, m_i)` and the
requirement that the sum of the `l_i` be even to yield a non-zero value.
It also obeys the following symmetry rules:
- zero for `l_1`, `l_2`, `l_3` not fulfilling the condition
`l_1 \in \{l_{\text{max}}, l_{\text{max}}-2, \ldots, l_{\text{min}}\}`,
where `l_{\text{max}} = l_2+l_3`,
.. math::
\begin{aligned}
l_{\text{min}} = \begin{cases} \kappa(l_2, l_3, m_2, m_3) & \text{if}\,
\kappa(l_2, l_3, m_2, m_3) + l_{\text{max}}\, \text{is even} \\
\kappa(l_2, l_3, m_2, m_3)+1 & \text{if}\, \kappa(l_2, l_3, m_2, m_3) +
l_{\text{max}}\, \text{is odd}\end{cases}
\end{aligned}
and `\kappa(l_2, l_3, m_2, m_3) = \max{\big(|l_2-l_3|, \min{\big(|m_2+m_3|,
|m_2-m_3|\big)}\big)}`
- zero for an odd number of negative `m_i`
Algorithms
==========
This function uses the algorithms of [Homeier96]_ and [Rasch03]_ to
calculate the value of the real Gaunt coefficient exactly. Note that
the formula used in [Rasch03]_ contains alternating sums over large
factorials and is therefore unsuitable for finite precision arithmetic
and only useful for a computer algebra system [Rasch03]_. However, this
function can in principle use any algorithm that computes the Gaunt
coefficient, so it is suitable for finite precision arithmetic in so far
as the algorithm which computes the Gaunt coefficient is.
|
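The selection rules stated in the docstring can be spot-checked directly. A short sketch assuming SymPy is installed and exposes `real_gaunt` in `sympy.physics.wigner`, as the docstring indicates; the numeric value in the last line is just the docstring's exact result evaluated to a few digits:
```Python
from sympy import S
from sympy.physics.wigner import real_gaunt

# Odd number of negative m values -> zero by the symmetry rule above.
assert real_gaunt(2, 2, 4, -1, 1, 0) == S.Zero

# Odd sum of l values -> zero.
assert real_gaunt(1, 2, 4, 0, 0, 0) == S.Zero

# Docstring example, exact and then evaluated numerically.
print(real_gaunt(2, 2, 4, -1, -1, 0))        # -2/(7*sqrt(pi))
print(real_gaunt(2, 2, 4, -1, -1, 0).n(20))  # approx -0.1612
```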