Dataset columns (name: type, value or length range):

ast_errors: string (length 0 to 3.2k)
d_id: int64 (44 to 121k)
id: int64 (70 to 338k)
n_whitespaces: int64 (3 to 14k)
path: string (length 8 to 134)
n_words: int64 (4 to 4.82k)
n_identifiers: int64 (1 to 131)
random_cut: string (length 16 to 15.8k)
commit_message: string (length 2 to 15.3k)
fun_name: string (length 1 to 84)
commit_id: string (length 40 to 40)
repo: string (length 3 to 28)
file_name: string (length 5 to 79)
ast_levels: int64 (6 to 31)
nloc: int64 (1 to 548)
url: string (length 31 to 59)
complexity: int64 (1 to 66)
token_counts: int64 (6 to 2.13k)
n_ast_errors: int64 (0 to 28)
vocab_size: int64 (4 to 1.11k)
n_ast_nodes: int64 (15 to 19.2k)
language: string (1 distinct value)
documentation: dict
code: string (length 101 to 62.2k)
d_id: 18,905
id: 92,382
n_whitespaces: 121
path: src/sentry/sentry_metrics/indexer/base.py
n_words: 33
n_identifiers: 13
def get_mapped_key_strings_to_ints(self) -> MutableMapping[str, int]: cache_
feat(metrics_indexer): Add rate limits functionality to indexer [INGEST-1380] (#36263) * feat(metrics_indexer): Add rate limits functionality to indexer [INGEST-1380] The postgres string indexer now is able to rate limit writes using four sentry options. If that happens, `None` is returned in place of an integer, and the FetchType is RATE_LIMITED. The kafka consumer/message processor explicitly checks for those `None` values and throws away every message that references a rate-limited string. It logs a Sentry error for every dropped message just because that's already what we do for other kinds of dropped messages. Rate limiting and quota management currently creates a ton of dataclasses and that probably wastes time. There are a ton of low-hanging fruits: * the return value of _construct_quotas could be globally cached, as long as the cache is wiped when the sentry options change. * the same Quota object (for global limits) is referenced from multiple RequestedQuota instances (one for each org). `sentry.ratelimits.sliding_windows` could check the `id()` of the quota (if there is no prefix override) to avoid computing and checking the same quota multiple times. An even lower hanging fruit is that we're fetching the same keys from Redis multiple times, because multiple organizations (and therefore multiple RequestedQuota instances) adhere to the global quota. So that's been fixed, but as for the rest let's wait for timings from prod. * fix typo * fix typing * apply review feedback * fix typing, add test * fix tests * apply review feedback about logging too many msgs * fix leaking option in test * sike, more test failures
fun_name: get_mapped_key_strings_to_ints
commit_id: c4cc0467974bcfb2b3c95120bd19c337aa977183
repo: sentry
file_name: base.py
ast_levels: 13
nloc: 18
url: https://github.com/getsentry/sentry.git
complexity: 4
token_counts: 66
n_ast_errors: 0
vocab_size: 26
n_ast_nodes: 111
language: Python
{ "docstring": "\n Return the results, but formatted as the following:\n {\n \"1:a\": 10,\n \"1:b\": 11,\n \"1:c\", 12,\n \"2:e\": 13\n }\n This is for when we use indexer_cache.set_many()\n ", "language": "en", "n_whitespaces": 129, "n_words": 25, "vocab_size": 24 }
def get_mapped_key_strings_to_ints(self) -> MutableMapping[str, int]: cache_key_results: MutableMapping[str, int] = {} for org_id, result_dict in self.results.items(): for string, id in result_dict.items(): key = f"{org_id}:{string}" if id is not None: cache_key_results[key] = id return cache_key_results
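A stand-alone sketch of the key flattening that the docstring above describes; the nested `results` mapping is invented here and stands in for `self.results`.

from typing import MutableMapping

results = {1: {"a": 10, "b": 11, "c": 12}, 2: {"e": 13}}  # stand-in for self.results

cache_key_results: MutableMapping[str, int] = {}
for org_id, result_dict in results.items():
    for string, id_ in result_dict.items():
        if id_ is not None:  # rate-limited strings arrive as None and are skipped
            cache_key_results[f"{org_id}:{string}"] = id_

print(cache_key_results)  # {'1:a': 10, '1:b': 11, '1:c': 12, '2:e': 13}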
d_id: 56,992
id: 223,585
n_whitespaces: 71
path: python3.10.4/Lib/email/_header_value_parser.py
n_words: 29
n_identifiers: 12
def get_atext(value): m = _non_atom_end_matcher(value) if not m: raise errors.HeaderParseError( "expected atext but found '{}'".format(value)) atext = m.group() value = value[len(atext):] atext = ValueTerminal(atext, 'atext') _validate_xtext(atext) return atext,
add python 3.10.4 for windows
fun_name: get_atext
commit_id: 8198943edd73a363c266633e1aa5b2a9e9c9f526
repo: XX-Net
file_name: _header_value_parser.py
ast_levels: 12
nloc: 10
url: https://github.com/XX-net/XX-Net.git
complexity: 2
token_counts: 61
n_ast_errors: 0
vocab_size: 23
n_ast_nodes: 106
language: Python
{ "docstring": "atext = <matches _atext_matcher>\n\n We allow any non-ATOM_ENDS in atext, but add an InvalidATextDefect to\n the token's defects list if we find non-atext characters.\n ", "language": "en", "n_whitespaces": 33, "n_words": 24, "vocab_size": 24 }
def get_atext(value): m = _non_atom_end_matcher(value) if not m: raise errors.HeaderParseError( "expected atext but found '{}'".format(value)) atext = m.group() value = value[len(atext):] atext = ValueTerminal(atext, 'atext') _validate_xtext(atext) return atext, value
d_id: 3,328
id: 20,336
n_whitespaces: 20
path: pipenv/patched/notpip/_vendor/pygments/formatters/img.py
n_words: 6
n_identifiers: 5
def _get_linenumber_pos(self, lineno): retur
check point progress on only bringing in pip==22.0.4 (#4966) * vendor in pip==22.0.4 * updating vendor packaging version * update pipdeptree to fix pipenv graph with new version of pip. * Vendoring of pip-shims 0.7.0 * Vendoring of requirementslib 1.6.3 * Update pip index safety restrictions patch for pip==22.0.4 * Update patches * exclude pyptoject.toml from black to see if that helps. * Move this part of the hash collection back to the top (like prior implementation) because it affects the outcome of this test now in pip 22.0.4
fun_name: _get_linenumber_pos
commit_id: f3166e673fe8d40277b804d35d77dcdb760fc3b3
repo: pipenv
file_name: img.py
ast_levels: 8
nloc: 2
url: https://github.com/pypa/pipenv.git
complexity: 1
token_counts: 21
n_ast_errors: 0
vocab_size: 6
n_ast_nodes: 34
language: Python
{ "docstring": "\n Get the actual position for the start of a line number.\n ", "language": "en", "n_whitespaces": 26, "n_words": 11, "vocab_size": 10 }
def _get_linenumber_pos(self, lineno): return (self.image_pad, self._get_line_y(lineno))
d_id: 15,780
id: 71,919
n_whitespaces: 1,056
path: wagtail/admin/tests/test_contentstate.py
n_words: 111
n_identifiers: 10
def test_image_inside_paragraph(self):
Reformat with black
fun_name: test_image_inside_paragraph
commit_id: d10f15e55806c6944827d801cd9c2d53f5da4186
repo: wagtail
file_name: test_contentstate.py
ast_levels: 16
nloc: 52
url: https://github.com/wagtail/wagtail.git
complexity: 1
token_counts: 181
n_ast_errors: 0
vocab_size: 72
n_ast_nodes: 347
language: Python
{ "docstring": "\n <p>before <embed embedtype=\"image\" alt=\"an image\" id=\"1\" format=\"left\" /> after</p>\n ", "language": "en", "n_whitespaces": 32, "n_words": 9, "vocab_size": 9 }
def test_image_inside_paragraph(self): # In Draftail's data model, images are block-level elements and therefore # split up preceding / following text into their own paragraphs converter = ContentstateConverter(features=["image"]) result = json.loads( converter.from_database_format( ) ) self.assertContentStateEqual( result, { "blocks": [ { "key": "00000", "inlineStyleRanges": [], "entityRanges": [], "depth": 0, "text": "before", "type": "unstyled", }, { "key": "00000", "inlineStyleRanges": [], "entityRanges": [{"key": 0, "offset": 0, "length": 1}], "depth": 0, "text": " ", "type": "atomic", }, { "key": "00000", "inlineStyleRanges": [], "entityRanges": [], "depth": 0, "text": "after", "type": "unstyled", }, ], "entityMap": { "0": { "data": { "format": "left", "alt": "an image", "id": "1", "src": "/media/not-found", }, "mutability": "IMMUTABLE", "type": "IMAGE", } }, }, )
d_id: 76,515
id: 260,816
n_whitespaces: 216
path: sklearn/svm/_bounds.py
n_words: 93
n_identifiers: 26
def l1_min_c(X, y, *, loss="squared_hinge", fit_intercept=True, intercept_scaling=1.0): if loss not in ("squared_hinge", "log"): raise ValueError('loss type not in ("squared_hinge", "log")') X = check_array(X, accept_sparse="csc") check_consistent_length(X, y) Y = LabelBinarizer(neg_label=-1).fit_transform(y).T # maximum absolute value over classes and features den = np.max(np.abs(safe_sparse_dot(Y, X))) if fit_intercept: bias = np.full( (np.size(y), 1), intercept_scaling, dtype=np.array(intercept_scaling).dtype ) den = max(den, abs(np.dot(Y, bias)).max()) if den == 0.0: raise ValueError( "Ill-posed l1_min_c calculation: l1 will always " "select zero coefficients for this data" ) if loss == "squared_hinge": return 0.5 / den else: # loss ==
DOC Ensures that l1_min_c passes numpydoc validation (#24134)
fun_name: l1_min_c
commit_id: 6d16698dd8ba4407e5c3c588d7b5e6a5257eddc9
repo: scikit-learn
file_name: _bounds.py
ast_levels: 16
nloc: 21
url: https://github.com/scikit-learn/scikit-learn.git
complexity: 5
token_counts: 176
n_ast_errors: 0
vocab_size: 70
n_ast_nodes: 276
language: Python
{ "docstring": "Return the lowest bound for C.\n\n The lower bound for C is computed such that for C in (l1_min_C, infinity)\n the model is guaranteed not to be empty. This applies to l1 penalized\n classifiers, such as LinearSVC with penalty='l1' and\n linear_model.LogisticRegression with penalty='l1'.\n\n This value is valid if class_weight parameter in fit() is not set.\n\n Parameters\n ----------\n X : {array-like, sparse matrix} of shape (n_samples, n_features)\n Training vector, where `n_samples` is the number of samples and\n `n_features` is the number of features.\n\n y : array-like of shape (n_samples,)\n Target vector relative to X.\n\n loss : {'squared_hinge', 'log'}, default='squared_hinge'\n Specifies the loss function.\n With 'squared_hinge' it is the squared hinge loss (a.k.a. L2 loss).\n With 'log' it is the loss of logistic regression models.\n\n fit_intercept : bool, default=True\n Specifies if the intercept should be fitted by the model.\n It must match the fit() method parameter.\n\n intercept_scaling : float, default=1.0\n When fit_intercept is True, instance vector x becomes\n [x, intercept_scaling],\n i.e. a \"synthetic\" feature with constant value equals to\n intercept_scaling is appended to the instance vector.\n It must match the fit() method parameter.\n\n Returns\n -------\n l1_min_c : float\n Minimum value for C.\n ", "language": "en", "n_whitespaces": 336, "n_words": 190, "vocab_size": 121 }
def l1_min_c(X, y, *, loss="squared_hinge", fit_intercept=True, intercept_scaling=1.0): if loss not in ("squared_hinge", "log"): raise ValueError('loss type not in ("squared_hinge", "log")') X = check_array(X, accept_sparse="csc") check_consistent_length(X, y) Y = LabelBinarizer(neg_label=-1).fit_transform(y).T # maximum absolute value over classes and features den = np.max(np.abs(safe_sparse_dot(Y, X))) if fit_intercept: bias = np.full( (np.size(y), 1), intercept_scaling, dtype=np.array(intercept_scaling).dtype ) den = max(den, abs(np.dot(Y, bias)).max()) if den == 0.0: raise ValueError( "Ill-posed l1_min_c calculation: l1 will always " "select zero coefficients for this data" ) if loss == "squared_hinge": return 0.5 / den else: # loss == 'log': return 2.0 / den
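A minimal usage sketch for the function above, assuming scikit-learn is installed; the toy data is invented purely for illustration.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.svm import l1_min_c

X = np.array([[0.0, 1.0], [1.0, 0.0], [1.0, 1.0], [0.0, 0.0]])
y = np.array([0, 1, 1, 0])

# Smallest C for which an l1-penalized model is guaranteed not to be empty.
c_min = l1_min_c(X, y, loss="log")

# Any C above c_min is a sensible starting point for an l1-penalized classifier.
clf = LogisticRegression(penalty="l1", solver="liblinear", C=10 * c_min).fit(X, y)
print(c_min, clf.coef_)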
d_id: 7,856
id: 43,184
n_whitespaces: 89
path: airflow/migrations/versions/0111_2_3_3_add_indexes_for_cascade_deletes.py
n_words: 32
n_identifiers: 10
def _mysql_tables_where_indexes_already_present(conn): to_check = [ ('xcom', 'idx_xcom_task_instance'), ('task_reschedule', 'idx_task_reschedule_dag_run'),
Add indexes for CASCADE deletes for task_instance (#24488) When we add foreign keys with ON DELETE CASCADE, and we delete rows in the foreign table, the database needs to join back to the referencing table. If there's no suitable index, then it can be slow to perform the deletes.
fun_name: _mysql_tables_where_indexes_already_present
commit_id: 677c42227c08f705142f298ab88915f133cd94e5
repo: airflow
file_name: 0111_2_3_3_add_indexes_for_cascade_deletes.py
ast_levels: 13
nloc: 11
url: https://github.com/apache/airflow.git
complexity: 3
token_counts: 61
n_ast_errors: 0
vocab_size: 29
n_ast_nodes: 115
language: Python
{ "docstring": "\n If user downgraded and is upgrading again, we have to check for existing\n indexes on mysql because we can't (and don't) drop them as part of the\n downgrade.\n ", "language": "en", "n_whitespaces": 41, "n_words": 28, "vocab_size": 27 }
def _mysql_tables_where_indexes_already_present(conn): to_check = [ ('xcom', 'idx_xcom_task_instance'), ('task_reschedule', 'idx_task_reschedule_dag_run'), ('task_fail', 'idx_task_fail_task_instance'), ] tables = set() for tbl, idx in to_check: if conn.execute(f"show indexes from {tbl} where Key_name = '{idx}'").first(): tables.add(tbl) return tables
d_id: 70,986
id: 246,075
n_whitespaces: 548
path: tests/http/test_webclient.py
n_words: 103
n_identifiers: 36
def test_webclient_resolves_with_client_resource(self): for resource_name_order_list in [ ["webclient", "client"], ["client", "webclient"], ]: # Create a dictionary from path regex -> resource resource_dict: Dict[str, Resource] = {}
Add a regression test for using both webclient and client resources simultaneously (#11765)
fun_name: test_webclient_resolves_with_client_resource
commit_id: 121b9e2475f4d7b3bca50d81732f07db80b2264f
repo: synapse
file_name: test_webclient.py
ast_levels: 18
nloc: 30
url: https://github.com/matrix-org/synapse.git
complexity: 3
token_counts: 150
n_ast_errors: 0
vocab_size: 79
n_ast_nodes: 241
language: Python
{ "docstring": "\n Tests that both client and webclient resources can be accessed simultaneously.\n\n This is a regression test created in response to https://github.com/matrix-org/synapse/issues/11763.\n ", "language": "en", "n_whitespaces": 43, "n_words": 21, "vocab_size": 21 }
def test_webclient_resolves_with_client_resource(self): for resource_name_order_list in [ ["webclient", "client"], ["client", "webclient"], ]: # Create a dictionary from path regex -> resource resource_dict: Dict[str, Resource] = {} for resource_name in resource_name_order_list: resource_dict.update( SynapseHomeServer._configure_named_resource(self.hs, resource_name) ) # Create a root resource which ties the above resources together into one root_resource = Resource() create_resource_tree(resource_dict, root_resource) # Create a site configured with this resource to make HTTP requests against listener_config = ListenerConfig( port=8008, bind_addresses=["127.0.0.1"], type="http", http_options=HttpListenerConfig( resources=[HttpResourceConfig(names=resource_name_order_list)] ), ) test_site = SynapseSite( logger_name="synapse.access.http.fake", site_tag=self.hs.config.server.server_name, config=listener_config, resource=root_resource, server_version_string="1", max_request_body_size=1234, reactor=self.reactor, ) # Attempt to make requests to endpoints on both the webclient and client resources # on test_site. self._request_client_and_webclient_resources(test_site)
d_id: 33,921
id: 147,365
n_whitespaces: 65
path: python/ray/cloudpickle/cloudpickle.py
n_words: 32
n_identifiers: 10
def unregister_pickle_by_value(module): if not isinstance(module, types.ModuleType): raise ValueError(f"Input should be a module object, got {str(module)} instead") if module.__name__ not in _PICKLE_BY_VALUE_MODULES:
[docs] fix doctests and activate CI (#23418)
fun_name: unregister_pickle_by_value
commit_id: 60054995e65304fb14e6d0ab69bdec07aa9389fe
repo: ray
file_name: cloudpickle.py
ast_levels: 13
nloc: 7
url: https://github.com/ray-project/ray.git
complexity: 3
token_counts: 47
n_ast_errors: 0
vocab_size: 28
n_ast_nodes: 92
language: Python
{ "docstring": "Unregister that the input module should be pickled by value.", "language": "en", "n_whitespaces": 9, "n_words": 10, "vocab_size": 10 }
def unregister_pickle_by_value(module): if not isinstance(module, types.ModuleType): raise ValueError(f"Input should be a module object, got {str(module)} instead") if module.__name__ not in _PICKLE_BY_VALUE_MODULES: raise ValueError(f"{module} is not registered for pickle by value") else: _PICKLE_BY_VALUE_MODULES.remove(module.__name__)
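A hedged sketch of the register/unregister pair this helper belongs to, assuming the standalone cloudpickle package (Ray vendors the same implementation); the choice of module is arbitrary.

import json  # arbitrary example module

import cloudpickle

cloudpickle.register_pickle_by_value(json)
# ... pickle objects that should carry the module along by value ...
cloudpickle.unregister_pickle_by_value(json)

# Unregistering a module that was never registered raises ValueError,
# which is the branch shown in the snippet above.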
d_id: 16,362
id: 75,130
n_whitespaces: 155
path: wagtail/images/tests/test_admin_views.py
n_words: 30
n_identifiers: 19
def test_get_bad_permissions(self): # Remove privileges from user self.user.is_superuser = False self.user.user_permissions.a
Reformat with black
fun_name: test_get_bad_permissions
commit_id: d10f15e55806c6944827d801cd9c2d53f5da4186
repo: wagtail
file_name: test_admin_views.py
ast_levels: 14
nloc: 12
url: https://github.com/wagtail/wagtail.git
complexity: 1
token_counts: 78
n_ast_errors: 0
vocab_size: 24
n_ast_nodes: 135
language: Python
{ "docstring": "\n This tests that the view returns a \"permission denied\" redirect if a user without correct\n permissions attempts to access it\n ", "language": "en", "n_whitespaces": 42, "n_words": 20, "vocab_size": 19 }
def test_get_bad_permissions(self): # Remove privileges from user self.user.is_superuser = False self.user.user_permissions.add( Permission.objects.get( content_type__app_label="wagtailadmin", codename="access_admin" ) ) self.user.save() # Get response = self.client.get( reverse("wagtailimages:url_generator", args=(self.image.id,)) ) # Check response self.assertRedirects(response, reverse("wagtailadmin_home"))
d_id: 3,772
id: 21,342
n_whitespaces: 40
path: pipenv/patched/notpip/_vendor/distlib/_backport/shutil.py
n_words: 14
n_identifiers: 7
def get_archive_formats(): formats = [(name, registry[2]) for name, registry in _ARCHIVE_FORMATS.items
Vendor in pip 22.1.2
fun_name: get_archive_formats
commit_id: c69d55f7c82d5ae2cce542bcfb98d043ca4836a0
repo: pipenv
file_name: shutil.py
ast_levels: 10
nloc: 5
url: https://github.com/pypa/pipenv.git
complexity: 2
token_counts: 34
n_ast_errors: 0
vocab_size: 13
n_ast_nodes: 56
language: Python
{ "docstring": "Returns a list of supported formats for archiving and unarchiving.\n\n Each element of the returned sequence is a tuple (name, description)\n ", "language": "en", "n_whitespaces": 27, "n_words": 21, "vocab_size": 19 }
def get_archive_formats(): formats = [(name, registry[2]) for name, registry in _ARCHIVE_FORMATS.items()] formats.sort() return formats
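The vendored _backport/shutil.py mirrors the standard library, so the behaviour can be sketched with stdlib shutil; the exact list depends on which compression modules are available.

import shutil

for name, description in shutil.get_archive_formats():
    print(f"{name}: {description}")
# e.g. gztar: gzip'ed tar-file, tar: uncompressed tar file, zip: ZIP file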
d_id: 46,070
id: 189,462
n_whitespaces: 1,007
path: manim/mobject/svg/svg_mobject.py
n_words: 245
n_identifiers: 48
def _handle_transforms(self, element, mobject): if element.hasAttribute("x") and element.hasAttribute("y"): x = self._attribute_to_float(element.getAttribute("x")) # Flip y y = -self._attribute_to_float(element.getAttribute("y")) mobject.shift(x * RIGHT + y * UP) transform_attr_value = element.getAttribute("transform") # parse the various transforms in the attribute value transform_names = ["matrix", "translate", "scale", "rotate", "skewX", "skewY"] # Borrowed/Inspired from: # https://github.com/cjlano/svg/blob/3ea3384457c9780fa7d67837c9c5fd4ebc42cb3b/svg/svg.py#L75 # match any SVG transformation with its parameter (until final parenthesis) # [^)]* == anything but a closing parenthesis # '|'.join == OR-list of SVG transformations transform_regex = "|".join([x + r"[^)]*\)" for x in transform_names]) transforms = re.findall(transform_regex, transform_attr_value) number_regex = r"[-+]?(?:\d+(?:\.\d*)?|\.\d+)(?:[eE][-+]?\d+)?" for t in transforms: op
Hide more private methods from the docs. (#2468) * hide privs from text_mobject.py * hide privs from tex_mobject.py * hide privs from code_mobject.py * hide privs from svg_mobject.py * remove SVGPath and utils from __init__.py * don't import string_to_numbers * hide privs from geometry.py * hide privs from matrix.py * hide privs from numbers.py * hide privs from three_dimensions.py * forgot underscore under set_stroke_width_from_length * there were more i missed * unhidea method that was used in docs * forgot other text2hash * remove svg_path from docs
fun_name: handle_transforms
commit_id: 902e7eb4f0147b5882a613b67467e38a1d47f01e
repo: manim
file_name: svg_mobject.py
ast_levels: 18
nloc: 48
url: https://github.com/ManimCommunity/manim.git
complexity: 14
token_counts: 429
n_ast_errors: 0
vocab_size: 143
n_ast_nodes: 706
language: Python
{ "docstring": "Applies the SVG transform to the specified mobject. Transforms include:\n ``matrix``, ``translate``, and ``scale``.\n\n Parameters\n ----------\n element : :class:`minidom.Element`\n The transform command to perform\n\n mobject : :class:`Mobject`\n The Mobject to transform.\n ", "language": "en", "n_whitespaces": 95, "n_words": 31, "vocab_size": 25 }
def _handle_transforms(self, element, mobject): if element.hasAttribute("x") and element.hasAttribute("y"): x = self._attribute_to_float(element.getAttribute("x")) # Flip y y = -self._attribute_to_float(element.getAttribute("y")) mobject.shift(x * RIGHT + y * UP) transform_attr_value = element.getAttribute("transform") # parse the various transforms in the attribute value transform_names = ["matrix", "translate", "scale", "rotate", "skewX", "skewY"] # Borrowed/Inspired from: # https://github.com/cjlano/svg/blob/3ea3384457c9780fa7d67837c9c5fd4ebc42cb3b/svg/svg.py#L75 # match any SVG transformation with its parameter (until final parenthesis) # [^)]* == anything but a closing parenthesis # '|'.join == OR-list of SVG transformations transform_regex = "|".join([x + r"[^)]*\)" for x in transform_names]) transforms = re.findall(transform_regex, transform_attr_value) number_regex = r"[-+]?(?:\d+(?:\.\d*)?|\.\d+)(?:[eE][-+]?\d+)?" for t in transforms: op_name, op_args = t.split("(") op_name = op_name.strip() op_args = [float(x) for x in re.findall(number_regex, op_args)] if op_name == "matrix": transform_args = np.array(op_args).reshape([3, 2]) x = transform_args[2][0] y = -transform_args[2][1] matrix = np.identity(self.dim) matrix[:2, :2] = transform_args[:2, :] matrix[1] *= -1 matrix[:, 1] *= -1 for mob in mobject.family_members_with_points(): if config["renderer"] == "opengl": mob.points = np.dot(mob.points, matrix) else: mob.points = np.dot(mob.points, matrix) mobject.shift(x * RIGHT + y * UP) elif op_name == "scale": scale_values = op_args if len(scale_values) == 2: scale_x, scale_y = scale_values mobject.scale(np.array([scale_x, scale_y, 1]), about_point=ORIGIN) elif len(scale_values) == 1: scale = scale_values[0] mobject.scale(np.array([scale, scale, 1]), about_point=ORIGIN) elif op_name == "translate": if len(op_args) == 2: x, y = op_args else: x = op_args y = 0 mobject.shift(x * RIGHT + y * DOWN) else: # TODO: handle rotate, skewX and skewY # for now adding a warning message logger.warning( "Handling of %s transform is not supported yet!", op_name, )
ast_errors: @pytest.fixture(name="pro")
d_id: 97,233
id: 298,288
n_whitespaces: 11
path: tests/components/airvisual/conftest.py
n_words: 6
n_identifiers: 7
def pro_data_fixture(): return json.loads(load_fixture("data.json", "airvisual_pro")) @pytest.fixture(
Ensure AirVisual Pro migration includes device and entity customizations (#84798) * Ensure AirVisual Pro migration includes device and entity customizations * Update homeassistant/components/airvisual/__init__.py Co-authored-by: Martin Hjelmare <marhje52@gmail.com> * Code review * Fix tests * Fix tests FOR REAL Co-authored-by: Martin Hjelmare <marhje52@gmail.com>
fun_name: pro_data_fixture
commit_id: 34dc47ad1037c6bf569f8cb2199f5933c2a0a079
repo: core
file_name: conftest.py
ast_levels: 10
nloc: 2
url: https://github.com/home-assistant/core.git
complexity: 1
token_counts: 17
n_ast_errors: 1
vocab_size: 6
n_ast_nodes: 51
language: Python
{ "docstring": "Define an update coordinator data example for the Pro.", "language": "en", "n_whitespaces": 8, "n_words": 9, "vocab_size": 9 }
def pro_data_fixture(): return json.loads(load_fixture("data.json", "airvisual_pro")) @pytest.fixture(name="pro")
d_id: 76,408
id: 260,671
n_whitespaces: 549
path: sklearn/datasets/_species_distributions.py
n_words: 179
n_identifiers: 50
def fetch_species_distributions(*, data_home=None, download_if_missing=True): data_home = get_data_home(data_home) if not exists(data_home): makedirs(data_home) # Define parameters for the data files. These should not be changed # unless the data model changes. They will be saved in the npz file # with the downloaded data. extra_params = dict( x_left_lower_corner=-94.8, Nx=1212, y_left_lower_corner=-56.05, Ny=1592, grid_size=0.05, ) dtype = np.int16 archive_path = _pkl_filepath(data_home, DATA_ARCHIVE_NAME) if not exists(archive_path): if not download_if_missing: raise IOError("Data not found and `download_if_missing` is False") logger.info("Downloading species data from %s to %s" % (SAMPLES.url, data_home)) samples_path = _fetch_remote(SAMPLES, dirname=data_home) with np.load(samples_path) as X: # samples.zip is a valid npz for f in X.files: fhandle = BytesIO(X[f]) if "train" in f: train = _load_csv(fhandle) if "test" in f: test = _load_csv(fhandle) remove(samples_path) logger.info( "Downloading coverage data from %s to %s" % (COVERAGES.url, data_home) ) coverages_path = _fetch_remote(COVERAGES, dirname=data_home) with np.load(coverages_path) as X: # coverages.zip is a valid npz coverages = [] for f in X.files: fhandle = BytesIO(X[f]) logger.debug(" - converting {}".format(f)
DOC Ensures that fetch_species_distributions passes numpydoc validation (#24162) Co-authored-by: Franck Charras <franck.charras@inria.fr>
fun_name: fetch_species_distributions
commit_id: fc656c2189d64a43089f514dcdedb0fae70dfe56
repo: scikit-learn
file_name: _species_distributions.py
ast_levels: 16
nloc: 43
url: https://github.com/scikit-learn/scikit-learn.git
complexity: 8
token_counts: 302
n_ast_errors: 0
vocab_size: 115
n_ast_nodes: 485
language: Python
{ "docstring": "Loader for species distribution dataset from Phillips et. al. (2006).\n\n Read more in the :ref:`User Guide <datasets>`.\n\n Parameters\n ----------\n data_home : str, default=None\n Specify another download and cache folder for the datasets. By default\n all scikit-learn data is stored in '~/scikit_learn_data' subfolders.\n\n download_if_missing : bool, default=True\n If False, raise a IOError if the data is not locally available\n instead of trying to download the data from the source site.\n\n Returns\n -------\n data : :class:`~sklearn.utils.Bunch`\n Dictionary-like object, with the following attributes.\n\n coverages : array, shape = [14, 1592, 1212]\n These represent the 14 features measured\n at each point of the map grid.\n The latitude/longitude values for the grid are discussed below.\n Missing data is represented by the value -9999.\n train : record array, shape = (1624,)\n The training points for the data. Each point has three fields:\n\n - train['species'] is the species name\n - train['dd long'] is the longitude, in degrees\n - train['dd lat'] is the latitude, in degrees\n test : record array, shape = (620,)\n The test points for the data. Same format as the training data.\n Nx, Ny : integers\n The number of longitudes (x) and latitudes (y) in the grid\n x_left_lower_corner, y_left_lower_corner : floats\n The (x,y) position of the lower-left corner, in degrees\n grid_size : float\n The spacing between points of the grid, in degrees\n\n Notes\n -----\n\n This dataset represents the geographic distribution of species.\n The dataset is provided by Phillips et. al. (2006).\n\n The two species are:\n\n - `\"Bradypus variegatus\"\n <http://www.iucnredlist.org/details/3038/0>`_ ,\n the Brown-throated Sloth.\n\n - `\"Microryzomys minutus\"\n <http://www.iucnredlist.org/details/13408/0>`_ ,\n also known as the Forest Small Rice Rat, a rodent that lives in Peru,\n Colombia, Ecuador, Peru, and Venezuela.\n\n - For an example of using this dataset with scikit-learn, see\n :ref:`examples/applications/plot_species_distribution_modeling.py\n <sphx_glr_auto_examples_applications_plot_species_distribution_modeling.py>`.\n\n References\n ----------\n\n * `\"Maximum entropy modeling of species geographic distributions\"\n <http://rob.schapire.net/papers/ecolmod.pdf>`_\n S. J. Phillips, R. P. Anderson, R. E. Schapire - Ecological Modelling,\n 190:231-259, 2006.\n ", "language": "en", "n_whitespaces": 631, "n_words": 310, "vocab_size": 197 }
def fetch_species_distributions(*, data_home=None, download_if_missing=True): data_home = get_data_home(data_home) if not exists(data_home): makedirs(data_home) # Define parameters for the data files. These should not be changed # unless the data model changes. They will be saved in the npz file # with the downloaded data. extra_params = dict( x_left_lower_corner=-94.8, Nx=1212, y_left_lower_corner=-56.05, Ny=1592, grid_size=0.05, ) dtype = np.int16 archive_path = _pkl_filepath(data_home, DATA_ARCHIVE_NAME) if not exists(archive_path): if not download_if_missing: raise IOError("Data not found and `download_if_missing` is False") logger.info("Downloading species data from %s to %s" % (SAMPLES.url, data_home)) samples_path = _fetch_remote(SAMPLES, dirname=data_home) with np.load(samples_path) as X: # samples.zip is a valid npz for f in X.files: fhandle = BytesIO(X[f]) if "train" in f: train = _load_csv(fhandle) if "test" in f: test = _load_csv(fhandle) remove(samples_path) logger.info( "Downloading coverage data from %s to %s" % (COVERAGES.url, data_home) ) coverages_path = _fetch_remote(COVERAGES, dirname=data_home) with np.load(coverages_path) as X: # coverages.zip is a valid npz coverages = [] for f in X.files: fhandle = BytesIO(X[f]) logger.debug(" - converting {}".format(f)) coverages.append(_load_coverage(fhandle)) coverages = np.asarray(coverages, dtype=dtype) remove(coverages_path) bunch = Bunch(coverages=coverages, test=test, train=train, **extra_params) joblib.dump(bunch, archive_path, compress=9) else: bunch = joblib.load(archive_path) return bunch
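A minimal usage sketch for the loader documented above, assuming scikit-learn is installed; the first call downloads the data into the scikit-learn data home.

from sklearn.datasets import fetch_species_distributions

data = fetch_species_distributions()
print(data.coverages.shape)               # (14, 1592, 1212) per the docstring
print(data.train.shape, data.test.shape)  # (1624,) and (620,) record arrays
print(data.Nx, data.Ny, data.grid_size)   # grid metadata stored alongside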
d_id: 53,677
id: 213,613
n_whitespaces: 33
path: ivy/core/device.py
n_words: 19
n_identifiers: 7
def set_split_factor(factor, dev=None): assert 0 <= factor global split_factors dev = ivy.default(dev, default_device()) split_f
renamed dev_str arg to dev for all methods.
fun_name: set_split_factor
commit_id: d743336b1f3654cd0315f380f43eed4116997c1d
repo: ivy
file_name: device.py
ast_levels: 10
nloc: 5
url: https://github.com/unifyai/ivy.git
complexity: 1
token_counts: 34
n_ast_errors: 0
vocab_size: 17
n_ast_nodes: 56
language: Python
{ "docstring": "\n Set the global split factor for a given device, which can be used to scale batch splitting chunk sizes for the\n device across the codebase.\n\n :param factor: The factor to set the device-specific split factor to.\n :type factor: float\n :param dev: The device to set the split factor for. Sets the default device by default.\n :type dev: str, optional\n ", "language": "en", "n_whitespaces": 81, "n_words": 59, "vocab_size": 38 }
def set_split_factor(factor, dev=None): assert 0 <= factor global split_factors dev = ivy.default(dev, default_device()) split_factors[dev] = factor # noinspection PyShadowingNames
d_id: 48,271
id: 196,977
n_whitespaces: 539
path: sympy/testing/runtests.py
n_words: 197
n_identifiers: 40
def run(self, test, compileflags=None, out=None, clear_globs=True): self.test = test # Remove ``` from the end of example, which may appear in Markdown # files for example in test.examples: example.want = example.want.replace('```\n', '') example.exc_msg = example.exc_msg and example.exc_msg.replace('```\n', '') if compileflags is None: compileflags = pdoctest._extract_future_flags(test.globs) save_stdout = sys.stdout if out is None: out = save_stdout.write sys.stdout = self._fakeout # Patch pdb.set_trace to restore sys.stdout during interactive # debugging (so it's not still redirected to self._fakeout). # Note that the interactive output will go to *our* # save_stdout, even if that's not the real sys.stdout; this # allows us to write test cases for the set_trace behavior. save_set_trace = pdb.set_trace self.debugger = pdoctest._OutputRedirectingPdb(
Enable doctests in Markdown files
fun_name: run
commit_id: 3ebd6862a0c33fcf357d9f4ac5c2a8fd80a98675
repo: sympy
file_name: runtests.py
ast_levels: 15
nloc: 26
url: https://github.com/sympy/sympy.git
complexity: 7
token_counts: 195
n_ast_errors: 0
vocab_size: 129
n_ast_nodes: 392
language: Python
{ "docstring": "\n Run the examples in ``test``, and display the results using the\n writer function ``out``.\n\n The examples are run in the namespace ``test.globs``. If\n ``clear_globs`` is true (the default), then this namespace will\n be cleared after the test runs, to help with garbage\n collection. If you would like to examine the namespace after\n the test completes, then use ``clear_globs=False``.\n\n ``compileflags`` gives the set of flags that should be used by\n the Python compiler when running the examples. If not\n specified, then it will default to the set of future-import\n flags that apply to ``globs``.\n\n The output of each example is checked using\n ``SymPyDocTestRunner.check_output``, and the results are\n formatted by the ``SymPyDocTestRunner.report_*`` methods.\n ", "language": "en", "n_whitespaces": 220, "n_words": 111, "vocab_size": 72 }
def run(self, test, compileflags=None, out=None, clear_globs=True): self.test = test # Remove ``` from the end of example, which may appear in Markdown # files for example in test.examples: example.want = example.want.replace('```\n', '') example.exc_msg = example.exc_msg and example.exc_msg.replace('```\n', '') if compileflags is None: compileflags = pdoctest._extract_future_flags(test.globs) save_stdout = sys.stdout if out is None: out = save_stdout.write sys.stdout = self._fakeout # Patch pdb.set_trace to restore sys.stdout during interactive # debugging (so it's not still redirected to self._fakeout). # Note that the interactive output will go to *our* # save_stdout, even if that's not the real sys.stdout; this # allows us to write test cases for the set_trace behavior. save_set_trace = pdb.set_trace self.debugger = pdoctest._OutputRedirectingPdb(save_stdout) self.debugger.reset() pdb.set_trace = self.debugger.set_trace # Patch linecache.getlines, so we can see the example's source # when we're inside the debugger. self.save_linecache_getlines = pdoctest.linecache.getlines linecache.getlines = self.__patched_linecache_getlines # Fail for deprecation warnings with raise_on_deprecated(): try: return self.__run(test, compileflags, out) finally: sys.stdout = save_stdout pdb.set_trace = save_set_trace linecache.getlines = self.save_linecache_getlines if clear_globs: test.globs.clear() # We have to override the name mangled methods. monkeypatched_methods = [ 'patched_linecache_getlines', 'run', 'record_outcome' ] for method in monkeypatched_methods: oldname = '_DocTestRunner__' + method newname = '_SymPyDocTestRunner__' + method setattr(SymPyDocTestRunner, newname, getattr(DocTestRunner, oldname))
d_id: 5,217
id: 29,303
n_whitespaces: 41
path: saleor/graphql/product/tests/queries/test_product_variants_query.py
n_words: 19
n_identifiers: 10
def _fetch_all_variants(client, variables={}, permissions=None): query = response = client.post_graphql( query, variables, permissions=permissions, check_no_permissions=False ) content = get_graphql_content(response) return content["data"]["productVariants"]
Split test_product.py and test_variant.py into multiple files (#11173) * Split test_product.py into multiple files * Split test_variant.py into multiple files
fun_name: _fetch_all_variants
commit_id: d90be220d6b687d08153934a51354011a3cb5ca1
repo: saleor
file_name: test_product_variants_query.py
ast_levels: 9
nloc: 18
url: https://github.com/saleor/saleor.git
complexity: 1
token_counts: 49
n_ast_errors: 0
vocab_size: 17
n_ast_nodes: 78
language: Python
{ "docstring": "\n query fetchAllVariants($channel: String) {\n productVariants(first: 10, channel: $channel) {\n totalCount\n edges {\n node {\n id\n }\n }\n }\n }\n ", "language": "en", "n_whitespaces": 165, "n_words": 19, "vocab_size": 13 }
def _fetch_all_variants(client, variables={}, permissions=None): query = response = client.post_graphql( query, variables, permissions=permissions, check_no_permissions=False ) content = get_graphql_content(response) return content["data"]["productVariants"]
d_id: 2,940
id: 19,350
n_whitespaces: 552
path: ArmNavigation/arm_obstacle_navigation/arm_obstacle_navigation.py
n_words: 192
n_identifiers: 49
def astar_torus(grid, start_node, goal_node): colors = ['white', 'black', 'red', 'pink', 'yellow', 'green', 'orange'] levels = [0, 1, 2, 3, 4, 5, 6, 7] cmap, norm = from_levels_and_colors(levels, colors) grid[start_node] = 4 grid[goal_node] = 5 parent_map = [[() for _ in range(M)] for _ in range(M)] heuristic_map = calc_heuristic_map(M, goal_node) explored_heuristic_map = np.full((M, M), np.inf) distance_map = np.full((M, M), np.inf) explored_heuristic_map[start_node] = heuristic_map[start_node] distance_map[start_node] = 0 while True: grid[start_node] = 4 grid[goal_node] = 5 current_node = np.unravel_index( np.argmin(explored_heuristic_map, axis=None), explored_heuristic_map.shape) min_distance = np.min(explored_heuristic_map) if (current_node == goal_node) or np.isinf(min_distance): break grid[current_node] = 2 explored_heuristic_map[current_node] = np.inf i, j = current_node[0], current_node[1] neighbors = find_neighbors(i, j) for neighbor in neighbors: if grid[neighbor] == 0 or grid[neighbor] == 5: distance_map[neighbor] = distance_map[current_node] + 1 explored_heuristic_map[neighbor] = heuristic_map[neighbor] parent_map[neighbor[0]][neighbo
docs: Fix a few typos (#695) There are small typos in: - ArmNavigation/arm_obstacle_navigation/arm_obstacle_navigation.py - ArmNavigation/arm_obstacle_navigation/arm_obstacle_navigation_2.py - docs/modules/slam/FastSLAM1/FastSLAM1_main.rst - docs/modules/slam/ekf_slam/ekf_slam_main.rst Fixes: - Should read `configuration` rather than `configuation`. - Should read `trajectory` rather than `tracjectory`. - Should read `prediction` rather than `prediciton`. Signed-off-by: Tim Gates <tim.gates@iress.com>
fun_name: astar_torus
commit_id: c6bdd48715adcbe17c4146b7cae3b0fc569f7bde
repo: PythonRobotics
file_name: arm_obstacle_navigation.py
ast_levels: 17
nloc: 47
url: https://github.com/AtsushiSakai/PythonRobotics.git
complexity: 13
token_counts: 475
n_ast_errors: 0
vocab_size: 134
n_ast_nodes: 721
language: Python
{ "docstring": "\n Finds a path between an initial and goal joint configuration using\n the A* Algorithm on a tororiadal grid.\n\n Args:\n grid: An occupancy grid (ndarray)\n start_node: Initial joint configuration (tuple)\n goal_node: Goal joint configuration (tuple)\n\n Returns:\n Obstacle-free route in joint space from start_node to goal_node\n ", "language": "en", "n_whitespaces": 88, "n_words": 44, "vocab_size": 37 }
def astar_torus(grid, start_node, goal_node): colors = ['white', 'black', 'red', 'pink', 'yellow', 'green', 'orange'] levels = [0, 1, 2, 3, 4, 5, 6, 7] cmap, norm = from_levels_and_colors(levels, colors) grid[start_node] = 4 grid[goal_node] = 5 parent_map = [[() for _ in range(M)] for _ in range(M)] heuristic_map = calc_heuristic_map(M, goal_node) explored_heuristic_map = np.full((M, M), np.inf) distance_map = np.full((M, M), np.inf) explored_heuristic_map[start_node] = heuristic_map[start_node] distance_map[start_node] = 0 while True: grid[start_node] = 4 grid[goal_node] = 5 current_node = np.unravel_index( np.argmin(explored_heuristic_map, axis=None), explored_heuristic_map.shape) min_distance = np.min(explored_heuristic_map) if (current_node == goal_node) or np.isinf(min_distance): break grid[current_node] = 2 explored_heuristic_map[current_node] = np.inf i, j = current_node[0], current_node[1] neighbors = find_neighbors(i, j) for neighbor in neighbors: if grid[neighbor] == 0 or grid[neighbor] == 5: distance_map[neighbor] = distance_map[current_node] + 1 explored_heuristic_map[neighbor] = heuristic_map[neighbor] parent_map[neighbor[0]][neighbor[1]] = current_node grid[neighbor] = 3 if np.isinf(explored_heuristic_map[goal_node]): route = [] print("No route found.") else: route = [goal_node] while parent_map[route[0][0]][route[0][1]] != (): route.insert(0, parent_map[route[0][0]][route[0][1]]) print("The route found covers %d grid cells." % len(route)) for i in range(1, len(route)): grid[route[i]] = 6 plt.cla() # for stopping simulation with the esc key. plt.gcf().canvas.mpl_connect('key_release_event', lambda event: [exit(0) if event.key == 'escape' else None]) plt.imshow(grid, cmap=cmap, norm=norm, interpolation=None) plt.show() plt.pause(1e-2) return route
d_id: 75,049
id: 257,234
n_whitespaces: 53
path: haystack/pipelines/base.py
n_words: 20
n_identifiers: 8
def root_node(self) -> Optional[str]: if len(self.graph.nodes) < 1: retur
Validate YAML files without loading the nodes (#2438) * Remove BasePipeline and make a module for RayPipeline * Can load pipelines from yaml, plenty of issues left * Extract graph validation logic into _add_node_to_pipeline_graph & refactor load_from_config and add_node to use it * Fix pipeline tests * Move some tests out of test_pipeline.py and create MockDenseRetriever * myoy and pylint (silencing too-many-public-methods) * Fix issue found in some yaml files and in schema files * Fix paths to YAML and fix some typos in Ray * Fix eval tests * Simplify MockDenseRetriever * Fix Ray test * Accidentally pushed merge coinflict, fixed * Typo in schemas * Typo in _json_schema.py * Slightly reduce noisyness of version validation warnings * Fix version logs tests * Fix version logs tests again * remove seemingly unused file * Add check and test to avoid adding the same node to the pipeline twice * Update Documentation & Code Style * Revert config to pipeline_config * Remo0ve unused import * Complete reverting to pipeline_config * Some more stray config= * Update Documentation & Code Style * Feedback * Move back other_nodes tests into pipeline tests temporarily * Update Documentation & Code Style * Fixing tests * Update Documentation & Code Style * Fixing ray and standard pipeline tests * Rename colliding load() methods in dense retrievers and faiss * Update Documentation & Code Style * Fix mypy on ray.py as well * Add check for no root node * Fix tests to use load_from_directory and load_index * Try to workaround the disabled add_node of RayPipeline * Update Documentation & Code Style * Fix Ray test * Fix FAISS tests * Relax class check in _add_node_to_pipeline_graph * Update Documentation & Code Style * Try to fix mypy in ray.py * unused import * Try another fix for Ray * Fix connector tests * Update Documentation & Code Style * Fix ray * Update Documentation & Code Style * use BaseComponent.load() in pipelines/base.py * another round of feedback * stray BaseComponent.load() * Update Documentation & Code Style * Fix FAISS tests too Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com> Co-authored-by: tstadel <60758086+tstadel@users.noreply.github.com>
fun_name: root_node
commit_id: f8e02310bf0dfbd1ab79a1c3c73434e0aeba4f4b
repo: haystack
file_name: base.py
ast_levels: 10
nloc: 7
url: https://github.com/deepset-ai/haystack.git
complexity: 2
token_counts: 37
n_ast_errors: 0
vocab_size: 19
n_ast_nodes: 61
language: Python
{ "docstring": "\n Returns the root node of the pipeline's graph.\n ", "language": "en", "n_whitespaces": 23, "n_words": 8, "vocab_size": 7 }
def root_node(self) -> Optional[str]: if len(self.graph.nodes) < 1: return None return list(self.graph.nodes)[0] # List conversion is required, see networkx docs
d_id: 48,760
id: 197,989
n_whitespaces: 352
path: sympy/core/add.py
n_words: 71
n_identifiers: 29
def as_coefficients_dict(self, *syms): i
22531: as_coefficients_dict accepts symbols
fun_name: as_coefficients_dict
commit_id: ea7fed2718f07bac46d4e154bd4e7ec31a4289e7
repo: sympy
file_name: add.py
ast_levels: 16
nloc: 23
url: https://github.com/sympy/sympy.git
complexity: 7
token_counts: 187
n_ast_errors: 0
vocab_size: 47
n_ast_nodes: 297
language: Python
{ "docstring": "Return a dictionary mapping terms to their Rational coefficient.\n Since the dictionary is a defaultdict, inquiries about terms which\n were not present will return a coefficient of 0. If an expression is\n not an Add it is considered to have a single term.\n\n If symbols `syms` are provided, any multiplicative terms\n independent of them will be considered a coefficient and a\n regular dictionary of syms-dependent generators as keys and\n their corresponding coefficients as values will be returned.\n\n Examples\n ========\n\n >>> from sympy import exp\n >>> from sympy.abc import a, x\n >>> (3*x + a*x + 4).as_coefficients_dict()\n {1: 4, x: 3, a*x: 1}\n >>> _[a]\n 0\n >>> (3*a*x).as_coefficients_dict()\n {a*x: 3}\n\n >>> (3*exp(x)*x + a/x + 2).as_coefficients_dict(x)\n {1: 2, 1/x: a, x*exp(x): 3}\n ", "language": "en", "n_whitespaces": 261, "n_words": 121, "vocab_size": 83 }
def as_coefficients_dict(self, *syms): if not syms: d = defaultdict(list) for ai in self.args: c, m = ai.as_coeff_Mul() d[m].append(c) for k, v in d.items(): if len(v) == 1: d[k] = v[0] else: d[k] = Add(*v) di = defaultdict(int) di.update(d) return di else: d = defaultdict(list) ind, dep = self.as_independent(*syms, as_Add=True) for i in Add.make_args(dep): c, x = i.as_independent(*syms, as_Add=False) d[x].append(c) d = {k: Add(*d[k]) for k in d} d.update({S.One: ind}) return d
d_id: 17,632
id: 83,227
n_whitespaces: 272
path: zerver/lib/test_classes.py
n_words: 73
n_identifiers: 23
def verify_emoji_code_foreign_keys(self) -> None: dct = {} for row in RealmEmoji.objects.all(): dct[row.id] = row if not dct: raise AssertionError("test needs RealmEmoji rows") count = 0 for row in Reaction.objects.filter(reaction_type=Reaction.REALM_EMOJI): realm_emoji_id = int(row.emoji_code) assert realm_emoji_id in dct self.assertEqual(dct[realm_emoji_id].name, row.emoji_name) self.assertEqual(dct[realm_emoji_id].realm_id, row.user_profile.realm_id) count += 1 for row in UserStatus.objects.filter(reaction_type=UserStatus.RE
docs: Fix many spelling mistakes. Signed-off-by: Anders Kaseorg <anders@zulip.com>
fun_name: verify_emoji_code_foreign_keys
commit_id: b0ce4f1bce8031881addecb1e86073483517f392
repo: zulip
file_name: test_classes.py
ast_levels: 11
nloc: 29
url: https://github.com/zulip/zulip.git
complexity: 6
token_counts: 179
n_ast_errors: 0
vocab_size: 40
n_ast_nodes: 282
language: Python
{ "docstring": "\n DB tables that refer to RealmEmoji use int(emoji_code) as the\n foreign key. Those tables tend to de-normalize emoji_name due\n to our inheritance-based setup. This helper makes sure those\n invariants are intact, which is particularly tricky during\n the import/export process (or during conversions from things\n like Slack/RocketChat/MatterMost/etc.).\n ", "language": "en", "n_whitespaces": 96, "n_words": 46, "vocab_size": 41 }
def verify_emoji_code_foreign_keys(self) -> None: dct = {} for row in RealmEmoji.objects.all(): dct[row.id] = row if not dct: raise AssertionError("test needs RealmEmoji rows") count = 0 for row in Reaction.objects.filter(reaction_type=Reaction.REALM_EMOJI): realm_emoji_id = int(row.emoji_code) assert realm_emoji_id in dct self.assertEqual(dct[realm_emoji_id].name, row.emoji_name) self.assertEqual(dct[realm_emoji_id].realm_id, row.user_profile.realm_id) count += 1 for row in UserStatus.objects.filter(reaction_type=UserStatus.REALM_EMOJI): realm_emoji_id = int(row.emoji_code) assert realm_emoji_id in dct self.assertEqual(dct[realm_emoji_id].name, row.emoji_name) self.assertEqual(dct[realm_emoji_id].realm_id, row.user_profile.realm_id) count += 1 if count == 0: raise AssertionError("test is meaningless without any pertinent rows")
d_id: 14,080
id: 65,988
n_whitespaces: 16
path: erpnext/erpnext_integrations/doctype/mpesa_settings/mpesa_settings.py
n_words: 28
n_identifiers: 15
def format_string_to_json(balance_info): Working Account|KES|481000.00|481000.00|0.00|0.00 balance_dict = frappe._dict() for account_info in balance_info.split("&"): account_info = account_info.split("|") balance_dict[account_info[0]] = dict( current_balance=fmt_money(account_info[2], currency="KES"), available_balance=fmt_money(account_info[3], currency="KES"), reserved_balance=fmt_money(account_info[4], currency="KES"), uncleared_balance=fmt_money(account_info[5], currency="KES"), ) return dumps(balance_dict)
style: format code with black
fun_name: format_string_to_json
commit_id: 494bd9ef78313436f0424b918f200dab8fc7c20b
repo: erpnext
file_name: mpesa_settings.py
ast_levels: 15
nloc: 11
url: https://github.com/frappe/erpnext.git
complexity: 2
token_counts: 103
n_ast_errors: 0
vocab_size: 22
n_ast_nodes: 166
language: Python
{ "docstring": "\n\tFormat string to json.\n\n\te.g: \n\t=> {'Working Account': {'current_balance': '481000.00',\n\t 'available_balance': '481000.00',\n\t 'reserved_balance': '0.00',\n\t 'uncleared_balance': '0.00'}}\n\t", "language": "en", "n_whitespaces": 35, "n_words": 16, "vocab_size": 15 }
def format_string_to_json(balance_info): Working Account|KES|481000.00|481000.00|0.00|0.00 balance_dict = frappe._dict() for account_info in balance_info.split("&"): account_info = account_info.split("|") balance_dict[account_info[0]] = dict( current_balance=fmt_money(account_info[2], currency="KES"), available_balance=fmt_money(account_info[3], currency="KES"), reserved_balance=fmt_money(account_info[4], currency="KES"), uncleared_balance=fmt_money(account_info[5], currency="KES"), ) return dumps(balance_dict)
d_id: 40,626
id: 170,943
n_whitespaces: 495
path: pandas/io/xml.py
n_words: 148
n_identifiers: 23
def _validate_path(self) -> list[Any]: msg = ( "xpath does not return any nodes or attributes. " "Be sure to specify in `xpath` the parent nodes of
STYLE: fix pylint: no-else-raise (#49520) * fix pylint: no-else-raise * fix possible imbalanced tuple unpacking warning Co-authored-by: carlotta <c.fabian@turbit.de>
fun_name: _validate_path
commit_id: d13c9e034ce8a1d738766c4b1cec80c76f5523be
repo: pandas
file_name: xml.py
ast_levels: 13
nloc: 35
url: https://github.com/pandas-dev/pandas.git
complexity: 14
token_counts: 160
n_ast_errors: 0
vocab_size: 86
n_ast_nodes: 268
language: Python
{ "docstring": "\n Notes\n -----\n `etree` supports limited XPath. If user attempts a more complex\n expression syntax error will raise.\n ", "language": "en", "n_whitespaces": 53, "n_words": 17, "vocab_size": 17 }
def _validate_path(self) -> list[Any]: msg = ( "xpath does not return any nodes or attributes. " "Be sure to specify in `xpath` the parent nodes of " "children and attributes to parse. " "If document uses namespaces denoted with " "xmlns, be sure to define namespaces and " "use them in xpath." ) try: elems = self.xml_doc.findall(self.xpath, namespaces=self.namespaces) children = [ch for el in elems for ch in el.findall("*")] attrs = {k: v for el in elems for k, v in el.attrib.items()} if elems is None: raise ValueError(msg) if elems is not None: if self.elems_only and children == []: raise ValueError(msg) if self.attrs_only and attrs == {}: raise ValueError(msg) if children == [] and attrs == {}: raise ValueError(msg) except (KeyError, SyntaxError): raise SyntaxError( "You have used an incorrect or unsupported XPath " "expression for etree library or you used an " "undeclared namespace prefix." ) return elems
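This validator backs the public pandas.read_xml; a small sketch of the user-facing behaviour it protects (the XML string is invented):

import io

import pandas as pd

xml = "<data><row><a>1</a><b>x</b></row><row><a>2</a><b>y</b></row></data>"

# An xpath that matches nodes parses normally.
df = pd.read_xml(io.StringIO(xml), xpath=".//row")
print(df)

# An xpath that selects nothing raises the ValueError built from `msg` above:
# pd.read_xml(io.StringIO(xml), xpath=".//missing")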
d_id: 35,242
id: 153,058
n_whitespaces: 183
path: modin/core/execution/ray/implementations/pandas_on_ray/partitioning/partition.py
n_words: 38
n_identifiers: 13
def mask(self, row_labels, col_labels): new_obj =
REFACTOR-#2656: Update modin to fit algebra (code only) (#3717) Co-authored-by: Yaroslav Igoshev <Poolliver868@mail.ru> Co-authored-by: Vasily Litvinov <vasilij.n.litvinov@intel.com> Co-authored-by: Alexey Prutskov <alexey.prutskov@intel.com> Co-authored-by: Devin Petersohn <devin-petersohn@users.noreply.github.com> Signed-off-by: Rehan Durrani <rehan@ponder.io>
fun_name: mask
commit_id: 58bbcc37477866d19c8b092a0e1974a4f0baa586
repo: modin
file_name: partition.py
ast_levels: 11
nloc: 15
url: https://github.com/modin-project/modin.git
complexity: 5
token_counts: 86
n_ast_errors: 0
vocab_size: 26
n_ast_nodes: 131
language: Python
{ "docstring": "\n Lazily create a mask that extracts the indices provided.\n\n Parameters\n ----------\n row_labels : list-like, slice or label\n The row labels for the rows to extract.\n col_labels : list-like, slice or label\n The column labels for the columns to extract.\n\n Returns\n -------\n PandasOnRayDataframePartition\n A new ``PandasOnRayDataframePartition`` object.\n ", "language": "en", "n_whitespaces": 143, "n_words": 46, "vocab_size": 34 }
def mask(self, row_labels, col_labels): new_obj = super().mask(row_labels, col_labels) if isinstance(row_labels, slice) and isinstance( self._length_cache, ObjectIDType ): new_obj._length_cache = compute_sliced_len.remote( row_labels, self._length_cache ) if isinstance(col_labels, slice) and isinstance( self._width_cache, ObjectIDType ): new_obj._width_cache = compute_sliced_len.remote( col_labels, self._width_cache ) return new_obj
d_id: 10,347
id: 51,540
n_whitespaces: 102
path: modules/image/classification/efficientnetb0_imagenet/processor.py
n_words: 33
n_identifiers: 17
def postprocess(data_out, label_list, top_k): output = [] for result in data_out: result_i = softmax(result) output_i = {} indexs = np.argsort(result_i)[::-1][0:top_k] for index in indexs: label = label_list[index].split(',')[0] output_i[label] = float(result_i[index]) output.append(output_i) return output
update efficientnetb0_imagenet (#2041) * update efficientnetb0_imagenet * remove unused print
fun_name: postprocess
commit_id: 7cd67aba38c19a835c3229d9b4be21798c5c8673
repo: PaddleHub
file_name: processor.py
ast_levels: 14
nloc: 11
url: https://github.com/PaddlePaddle/PaddleHub.git
complexity: 3
token_counts: 86
n_ast_errors: 0
vocab_size: 25
n_ast_nodes: 138
language: Python
{ "docstring": "\n Postprocess output of network, one image at a time.\n\n Args:\n data_out (numpy.ndarray): output data of network.\n label_list (list): list of label.\n top_k (int): Return top k results.\n ", "language": "en", "n_whitespaces": 58, "n_words": 27, "vocab_size": 24 }
def postprocess(data_out, label_list, top_k): output = [] for result in data_out: result_i = softmax(result) output_i = {} indexs = np.argsort(result_i)[::-1][0:top_k] for index in indexs: label = label_list[index].split(',')[0] output_i[label] = float(result_i[index]) output.append(output_i) return output
d_id: 2,635
id: 13,415
n_whitespaces: 164
path: jina/serve/executors/__init__.py
n_words: 47
n_identifiers: 8
def requests(self): if hasattr(self, '_requests'): return self._requests else: if not hasattr(self, 'requests_by_class'):
fix: fix bug inheritance, requests nested dict (#5380)
fun_name: requests
commit_id: b44d767f22bd862cdb75926ba388c14f5db0323c
repo: jina
file_name: __init__.py
ast_levels: 14
nloc: 10
url: https://github.com/jina-ai/jina.git
complexity: 4
token_counts: 83
n_ast_errors: 0
vocab_size: 34
n_ast_nodes: 137
language: Python
{ "docstring": "\n Get the request dictionary corresponding to this specific class\n\n :return: Returns the requests corresponding to the specific Executor instance class\n ", "language": "en", "n_whitespaces": 42, "n_words": 20, "vocab_size": 14 }
def requests(self): if hasattr(self, '_requests'): return self._requests else: if not hasattr(self, 'requests_by_class'): self.requests_by_class = {} if self.__class__.__name__ not in self.requests_by_class: self.requests_by_class[self.__class__.__name__] = {} # we need to copy so that different instances with different (requests) in input do not disturb one another self._requests = copy.copy(self.requests_by_class[self.__class__.__name__]) return self._requests
d_id: 6,170
id: 33,860
n_whitespaces: 65
path: src/transformers/pipelines/text2text_generation.py
n_words: 27
n_identifiers: 12
def __call__(self, *args, **kwargs): r result = sup
Fixing t2t pipelines lists outputs. (#15008) Backward compatibility broken in https://github.com/huggingface/transformers/pull/14988
fun_name: __call__
commit_id: 8c2618e6aac3473da7757fb230690ffd4aea4c32
repo: transformers
file_name: text2text_generation.py
ast_levels: 10
nloc: 32
url: https://github.com/huggingface/transformers.git
complexity: 5
token_counts: 68
n_ast_errors: 0
vocab_size: 23
n_ast_nodes: 102
language: Python
{ "docstring": "\n Generate the output text(s) using text(s) given as inputs.\n\n Args:\n args (`str` or `List[str]`):\n Input text for the encoder.\n return_tensors (`bool`, *optional*, defaults to `False`):\n Whether or not to include the tensors of predictions (as token indices) in the outputs.\n return_text (`bool`, *optional*, defaults to `True`):\n Whether or not to include the decoded texts in the outputs.\n clean_up_tokenization_spaces (`bool`, *optional*, defaults to `False`):\n Whether or not to clean up the potential extra spaces in the text output.\n truncation (`TruncationStrategy`, *optional*, defaults to `TruncationStrategy.DO_NOT_TRUNCATE`):\n The truncation strategy for the tokenization within the pipeline. `TruncationStrategy.DO_NOT_TRUNCATE`\n (default) will never truncate, but it is sometimes desirable to truncate the input to fit the model's\n max_length instead of throwing an error down the line.\n generate_kwargs:\n Additional keyword arguments to pass along to the generate method of the model (see the generate method\n corresponding to your framework [here](./model#generative-models)).\n\n Return:\n A list or a list of list of `dict`: Each result comes as a dictionary with the following keys:\n\n - **generated_text** (`str`, present when `return_text=True`) -- The generated text.\n - **generated_token_ids** (`torch.Tensor` or `tf.Tensor`, present when `return_tensors=True`) -- The token\n ids of the generated text.\n ", "language": "en", "n_whitespaces": 464, "n_words": 188, "vocab_size": 114 }
def __call__(self, *args, **kwargs): r result = super().__call__(*args, **kwargs) if isinstance(args[0], list) and all(isinstance(el, str) for el in args[0]): return [res[0] for res in result] return result
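A hedged end-to-end sketch of calling a text2text pipeline as the docstring describes; t5-small is only an example checkpoint and is downloaded on first use.

from transformers import pipeline

t2t = pipeline("text2text-generation", model="t5-small")
out = t2t("translate English to German: How old are you?")
print(out)  # e.g. [{'generated_text': 'Wie alt sind Sie?'}]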
d_id: 4,217
id: 22,145
n_whitespaces: 133
path: pipenv/patched/pip/_vendor/requests/utils.py
n_words: 42
n_identifiers: 10
def rewind_body(prepared_request): body_seek = getattr(prepared_request.body, "seek", None) if body_seek is not None and isinstance( prepared_request._body_p
Rename notpip to pip. Vendor in pip-22.2.1 and latest requirementslib and vistir.
fun_name: rewind_body
commit_id: cd5a9683be69c86c8f3adcd13385a9bc5db198ec
repo: pipenv
file_name: utils.py
ast_levels: 13
nloc: 13
url: https://github.com/pypa/pipenv.git
complexity: 4
token_counts: 56
n_ast_errors: 0
vocab_size: 37
n_ast_nodes: 97
language: Python
{ "docstring": "Move file pointer back to its recorded starting position\n so it can be read again on redirect.\n ", "language": "en", "n_whitespaces": 23, "n_words": 17, "vocab_size": 17 }
def rewind_body(prepared_request): body_seek = getattr(prepared_request.body, "seek", None) if body_seek is not None and isinstance( prepared_request._body_position, integer_types ): try: body_seek(prepared_request._body_position) except OSError: raise UnrewindableBodyError( "An error occurred when rewinding request body for redirect." ) else: raise UnrewindableBodyError("Unable to rewind request body for redirect.")
d_id: 118,400
id: 323,181
n_whitespaces: 192
path: paddlenlp/trainer/utils/helper.py
n_words: 64
n_identifiers: 18
def nested_concat(tensors, new_tensors, padding_index=-100): assert type(tensors) == type( new_tensors ), f"Expected `tensors` and `new_tensors` to have the same type but found {type(tensors)} and {type(new_tensors)}." if isinstance(tensors, (list, tuple)): return type(tensors)(nested_concat( t, n, padding_index=padding_index)
[Trainer] Add init version of paddlenlp trainer and apply finetune for ernie-1.0 pretraining. (#1761) * add some datasets for finetune. * support fine tune for all tastks. * add trainer prototype. * init verison for paddlenlp trainer. * refine trainer. * update for some details. * support multi-cards training evaluation. * support load from ckpt. * support for export inference model. * first version of trainer. * seq cls support clue. * trainer support for token classification and question answersing tasks. * fix as reviews. Co-authored-by: Zeyu Chen <chenzeyu01@baidu.com>
fun_name: nested_concat
commit_id: 44a290e94d1becd1f09fddc3d873f9e19c9d6919
repo: PaddleNLP
file_name: helper.py
ast_levels: 14
nloc: 17
url: https://github.com/PaddlePaddle/PaddleNLP.git
complexity: 5
token_counts: 116
n_ast_errors: 0
vocab_size: 50
n_ast_nodes: 200
language: Python
{ "docstring": "\n Concat the `new_tensors` to `tensors` on the first dim and pad them on the second if needed. Works for tensors or\n nested list/tuples of tensors.\n ", "language": "en", "n_whitespaces": 35, "n_words": 25, "vocab_size": 22 }
def nested_concat(tensors, new_tensors, padding_index=-100): assert type(tensors) == type( new_tensors ), f"Expected `tensors` and `new_tensors` to have the same type but found {type(tensors)} and {type(new_tensors)}." if isinstance(tensors, (list, tuple)): return type(tensors)(nested_concat( t, n, padding_index=padding_index) for t, n in zip(tensors, new_tensors)) elif isinstance(tensors, paddle.Tensor): return paddle_pad_and_concatenate( tensors, new_tensors, padding_index=padding_index) elif isinstance(tensors, np.ndarray): return numpy_pad_and_concatenate( tensors, new_tensors, padding_index=padding_index) else: raise TypeError( f"Unsupported type for concatenation: got {type(tensors)}")
d_id: 10,764
id: 53,269
n_whitespaces: 72
path: src/prefect/cli/orion.py
n_words: 22
n_identifiers: 10
def kubernetes_manifest(): tem
Add kubernetes manifest commands
fun_name: kubernetes_manifest
commit_id: 23365cf7727c45f38ad983d610ffec5c15ceca21
repo: prefect
file_name: orion.py
ast_levels: 15
nloc: 10
url: https://github.com/PrefectHQ/prefect.git
complexity: 1
token_counts: 44
n_ast_errors: 0
vocab_size: 18
n_ast_nodes: 83
language: Python
{ "docstring": "\n Generates a kubernetes manifest for to deploy Orion to a cluster.\n\n Example:\n $ prefect orion kubernetes-manifest | kubectl apply -f -\n ", "language": "en", "n_whitespaces": 38, "n_words": 21, "vocab_size": 19 }
def kubernetes_manifest(): template = Template( (prefect.__module_path__ / "cli" / "templates" / "kubernetes.yaml").read_text() ) manifest = template.substitute( { "image_name": get_prefect_image_name(), } ) print(manifest)
d_id: 18,149
id: 86,690
n_whitespaces: 1,253
path: tests/sentry/api/endpoints/test_project_dynamic_sampling.py
n_words: 183
n_identifiers: 43
def test_queries_when_requested_project_is_head_of_trace(self, mock_query, mock_querybuilder): # Case A: Head of trace project self.login_as(self.user) heart = self.create_project( name="Heart", slug="heart", teams=[self.team], fire_project_created=True ) mock_query.side_effect = [ {"data": [{"count()": 1000}]}, ] mock_querybuilder.side_effect = [ { "data": [ { "trace": "6503ee33b7bc43aead1facaa625a5dba", "id": "6ddc83ee612b4e89b95b5278c8fd188f", "random_number() AS random_number": 42
feat(dynamic-sampling): Improve empty transaction breakdown message [TET-338] (#39539) This PR add new attribute parentProjectBreakdown to /api/0/projects/<organization_slug>/<project_slug>/dynamic-sampling/distribution/ api: ``` { "projectBreakdown": null, "sampleSize": 0, "startTimestamp": null, "endTimestamp": null, "parentProjectBreakdown": [ { "projectId": 1, "percentage": 0.9, "project": "sentry" }, { "projectId": 2, "percentage": 0.1, "project": "javascript" } ] } ``` TODO: - [x] Update src/sentry/snuba/referrer.py https://github.com/getsentry/sentry/blob/0fbbf1626f86399b1ca4a2781d66ef96aac69de7/src/sentry/snuba/referrer.py#L208-L210 - [x] Add missing tests Co-authored-by: Andrii Soldatenko <andrii.soldatenko@gmail.io> Co-authored-by: ahmedetefy <ahmed.etefy12@gmail.com>
test_queries_when_requested_project_is_head_of_trace
ceee9dfd8d6fed70d34546e7b46ebb7bf1d49745
sentry
test_project_dynamic_sampling.py
14
77
https://github.com/getsentry/sentry.git
1
384
0
103
644
Python
{ "docstring": "\n Case A: Requesting for a project (bar) that is root but is a head of distributed traces\n Example of smart query response (DYNAMIC_SAMPLING_DISTRIBUTION_FETCH_PROJECT_STATS):\n |---------+-------+------|\n | project | count | root |\n |---------+-------+------|\n | bar | 100 | 100 |\n | heart | 5 | 0 |\n |---------+-------+------|\n ", "language": "en", "n_whitespaces": 127, "n_words": 47, "vocab_size": 28 }
def test_queries_when_requested_project_is_head_of_trace(self, mock_query, mock_querybuilder): # Case A: Head of trace project self.login_as(self.user) heart = self.create_project( name="Heart", slug="heart", teams=[self.team], fire_project_created=True ) mock_query.side_effect = [ {"data": [{"count()": 1000}]}, ] mock_querybuilder.side_effect = [ { "data": [ { "trace": "6503ee33b7bc43aead1facaa625a5dba", "id": "6ddc83ee612b4e89b95b5278c8fd188f", "random_number() AS random_number": 4255299100, "is_root": 1, }, { "trace": "6503ee33b7bc43aead1facaa625a5dba", "id": "0b127a578f8440c793f9ba1de595229f", "random_number() AS random_number": 3976019453, "is_root": 1, }, ] }, { "data": [ { "project": self.project.slug, "project_id": self.project.id, "count": 2, "root_count": 2, }, { "project": heart.slug, "project_id": heart.id, "count": 1, "root_count": 0, }, ] }, ] end_time = timezone.now() start_time = end_time - timedelta(hours=1) query = "environment:dev" requested_sample_size = 2 calls = self.generate_fetch_transactions_count_query( query, start_time, end_time, requested_sample_size ) snuba_query_random_transactions = random_transactions_snuba_query( query, requested_sample_size, start_time, end_time, self.project ) snuba_query_project_stats = project_stats_snuba_query( query, start_time, end_time, self.project, trace_ids=["6503ee33b7bc43aead1facaa625a5dba"] * 2, ) with Feature({"organizations:server-side-sampling": True}): response = self.client.get( f"{self.endpoint}?sampleSize={requested_sample_size}&query={query}" ) assert response.status_code == 200 assert mock_query.mock_calls == calls assert len(mock_querybuilder.call_args_list) == 2 self.assert_mocked_query_calls( snuba_query_random_transactions, snuba_query_project_stats, mock_querybuilder ) response_data = response.json() assert response_data["projectBreakdown"] == [ {"project_id": self.project.id, "project": self.project.slug, "count()": 2}, {"project_id": heart.id, "project": heart.slug, "count()": 1}, ] assert response_data["parentProjectBreakdown"] == [ {"project": self.project.slug, "projectId": self.project.id, "percentage": 1.0} ]
14,129
66,180
18
erpnext/hr/doctype/leave_ledger_entry/leave_ledger_entry.py
29
15
def validate_leave_allocation_against_leave_application(ledger): leave_app
style: format code with black
validate_leave_allocation_against_leave_application
494bd9ef78313436f0424b918f200dab8fc7c20b
erpnext
leave_ledger_entry.py
14
20
https://github.com/frappe/erpnext.git
2
61
0
27
100
Python
{ "docstring": "Checks that leave allocation has no leave application against it\n\t\tSELECT transaction_name\n\t\tFROM `tabLeave Ledger Entry`\n\t\tWHERE\n\t\t\temployee=%s\n\t\t\tAND leave_type=%s\n\t\t\tAND transaction_type='Leave Application'\n\t\t\tAND from_date>=%s\n\t\t\tAND to_date<=%s\n\t", "language": "en", "n_whitespaces": 18, "n_words": 27, "vocab_size": 23 }
def validate_leave_allocation_against_leave_application(ledger): leave_application_records = frappe.db.sql_list( , (ledger.employee, ledger.leave_type, ledger.from_date, ledger.to_date), ) if leave_application_records: frappe.throw( _("Leave allocation {0} is linked with the Leave Application {1}").format( ledger.transaction_name, ", ".join(leave_application_records) ) )
52,657
209,346
129
scapy/contrib/pnio_rpc.py
53
6
def dce_rpc_endianess(pkt): try: endianness = pkt.underlayer.endian except AttributeError: # handle the case where a PN
MS-RPCE support (#3674) * Add DCE/RPC * Add tests to DCERPC5 / PNIO_RPC fixes * Support for NDR fields in DCERPC * Fully implement KRB5_GSS * Support also RFC4121
dce_rpc_endianess
a738a0b375a5599187626c9a9b081f7c25392f69
scapy
pnio_rpc.py
10
11
https://github.com/secdev/scapy.git
4
38
0
39
78
Python
{ "docstring": "determine the symbol for the endianness of a the DCE/RPC", "language": "en", "n_whitespaces": 9, "n_words": 10, "vocab_size": 8 }
def dce_rpc_endianess(pkt): try: endianness = pkt.underlayer.endian except AttributeError: # handle the case where a PNIO class is # built without its DCE-RPC under-layer # i.e there is no endianness indication return "!" if endianness == 0: # big endian return ">" elif endianness == 1: # little endian return "<" else: return "!"
10,853
53,590
424
src/prefect/flow_runners.py
108
17
def _get_extra_hosts(self, docker_client) -> Dict[str, str]: if sys.platform == "linux" and ( # Do not
Add pattern for loading CLI defaults from settings Also, renames ORION_HOST to API_URL and adds utils to `Settings` to retrieve things by the envar key
_get_extra_hosts
b25d9d283b714c719f363176d49892188c50dffd
prefect
flow_runners.py
14
25
https://github.com/PrefectHQ/prefect.git
5
99
0
87
188
Python
{ "docstring": "\n A host.docker.internal -> host-gateway mapping is necessary for communicating\n with the API on Linux machines. Docker Desktop on macOS will automatically\n already have this mapping.\n ", "language": "en", "n_whitespaces": 54, "n_words": 25, "vocab_size": 24 }
def _get_extra_hosts(self, docker_client) -> Dict[str, str]: if sys.platform == "linux" and ( # Do not warn if the user has specified a host manually that does not use # a local address "PREFECT_API_URL" not in self.env or re.search( ".*(localhost)|(127.0.0.1)|(host.docker.internal).*", self.env["PREFECT_API_URL"], ) ): user_version = packaging.version.parse(docker_client.version()["Version"]) required_version = packaging.version.parse("20.10.0") if user_version < required_version: warnings.warn( "`host.docker.internal` could not be automatically resolved to your " "local ip address. This feature is not supported on Docker Engine " f"v{user_version}, upgrade to v{required_version}+ if you " "encounter issues." ) return {} else: # Compatibility for linux -- https://github.com/docker/cli/issues/2290 # Only supported by Docker v20.10.0+ which is our minimum recommend version return {"host.docker.internal": "host-gateway"}
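A small sketch of the version gate described above, written directly against packaging.version; the helper name extra_hosts_for is hypothetical (it is not Prefect's API) and the example assumes the packaging library is installed.

from packaging import version

def extra_hosts_for(docker_engine_version):
    # `host-gateway` is only understood by Docker Engine 20.10.0 and newer,
    # which is why the code above warns on older daemons.
    if version.parse(docker_engine_version) < version.parse("20.10.0"):
        return {}
    return {"host.docker.internal": "host-gateway"}

print(extra_hosts_for("19.03.8"))   # {}
print(extra_hosts_for("20.10.17"))  # {'host.docker.internal': 'host-gateway'}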
54,254
215,927
247
tests/pytests/unit/modules/test_win_certutil.py
47
12
def test_del_store(): with patch("salt.modules.win_certutil.get_cert_serial") as cert_serial_mock: cmd_mock = MagicMock( return_value=( "CertInfo\r\n" "================ Certificate 0 ================\r\n" "Serial Number: 180720d39cd2db3244ba037417241e90\r\n" "OtherStuff" ) ) cache_mock = MagicMock(return_value="/tmp/cert.cer") cert_serial_mock.return_value = "ABCDEF" with patch.dict( certutil.__salt__, {"cmd.run": cmd_mock, "cp.cache_file": cache_mock} ), patch("os.path.exists", MagicMock(return_value=True)): certutil.del_store("salt://path/to/file", "TrustedPublisher") cmd_mock.assert_called_once_with( 'certutil.exe -delstore TrustedPublisher "ABCDEF"'
Add tests, fix state module
test_del_store
a8d2d1e1397cdc79b2c5f1ad7f6e3b729dcf8857
salt
test_win_certutil.py
14
20
https://github.com/saltstack/salt.git
1
93
0
42
188
Python
{ "docstring": "\n Test removing a certificate to a specific store\n ", "language": "en", "n_whitespaces": 15, "n_words": 8, "vocab_size": 7 }
def test_del_store(): with patch("salt.modules.win_certutil.get_cert_serial") as cert_serial_mock: cmd_mock = MagicMock( return_value=( "CertInfo\r\n" "================ Certificate 0 ================\r\n" "Serial Number: 180720d39cd2db3244ba037417241e90\r\n" "OtherStuff" ) ) cache_mock = MagicMock(return_value="/tmp/cert.cer") cert_serial_mock.return_value = "ABCDEF" with patch.dict( certutil.__salt__, {"cmd.run": cmd_mock, "cp.cache_file": cache_mock} ), patch("os.path.exists", MagicMock(return_value=True)): certutil.del_store("salt://path/to/file", "TrustedPublisher") cmd_mock.assert_called_once_with( 'certutil.exe -delstore TrustedPublisher "ABCDEF"' ) cache_mock.assert_called_once_with("salt://path/to/file", "base")
117,233
320,622
95
tests/conftest.py
44
15
def _select_backend(config): backend_arg = config.getoption('--qute-backend') backend_env = os.
tests: Remove some unused imports
_select_backend
9c4169c7b7d96a10012a72c70fc38c6154f7481f
qutebrowser
conftest.py
10
11
https://github.com/qutebrowser/qutebrowser.git
5
62
0
31
113
Python
{ "docstring": "Select the backend for running tests.\n\n The backend is auto-selected in the following manner:\n 1. Use QtWebKit if available\n 2. Otherwise use QtWebEngine as a fallback\n\n Auto-selection is overridden by either passing a backend via\n `--qute-backend=<backend>` or setting the environment variable\n `QUTE_TESTS_BACKEND=<backend>`.\n\n Args:\n config: pytest config\n\n Raises:\n ImportError if the selected backend is not available.\n\n Returns:\n The selected backend as a string (e.g. 'webkit').\n ", "language": "en", "n_whitespaces": 115, "n_words": 64, "vocab_size": 49 }
def _select_backend(config): backend_arg = config.getoption('--qute-backend') backend_env = os.environ.get('QUTE_TESTS_BACKEND') backend = backend_arg or backend_env or _auto_select_backend() # Fail early if selected backend is not available # pylint: disable=unused-import if backend == 'webkit': import PyQt5.QtWebKitWidgets elif backend == 'webengine': import PyQt5.QtWebEngineWidgets else: raise utils.Unreachable(backend) return backend
47,842
196,342
85
sympy/logic/boolalg.py
28
15
def equals(self, other): from sympy.logic.inference import satisfiable from sympy.core.relational import Relational if self.has(Relational) or other.has(Relational): raise NotImplementedError('handling of relationals') return self.atoms() == other.atoms() and \ not satisfiable(No
Updated import locations
equals
498015021131af4dbb07eb110e5badaba8250c7b
sympy
boolalg.py
13
7
https://github.com/sympy/sympy.git
4
71
0
26
113
Python
{ "docstring": "\n Returns True if the given formulas have the same truth table.\n For two formulas to be equal they must have the same literals.\n\n Examples\n ========\n\n >>> from sympy.abc import A, B, C\n >>> from sympy import And, Or, Not\n >>> (A >> B).equals(~B >> ~A)\n True\n >>> Not(And(A, B, C)).equals(And(Not(A), Not(B), Not(C)))\n False\n >>> Not(And(A, Not(A))).equals(Or(B, Not(B)))\n False\n\n ", "language": "en", "n_whitespaces": 150, "n_words": 58, "vocab_size": 42 }
def equals(self, other): from sympy.logic.inference import satisfiable from sympy.core.relational import Relational if self.has(Relational) or other.has(Relational): raise NotImplementedError('handling of relationals') return self.atoms() == other.atoms() and \ not satisfiable(Not(Equivalent(self, other)))
14,703
67,999
45
erpnext/stock/utils.py
64
16
def get_latest_stock_qty(item_code, warehouse=None): values, condition = [item_code], "" if warehouse: lft, rgt, is_group = frappe.db.get_value("Warehouse", warehouse, ["lft", "rgt", "is_group"]) if is_group: values.extend([lft, rgt]) condition += "and exists (\ select
style: format code with black
get_latest_stock_qty
494bd9ef78313436f0424b918f200dab8fc7c20b
erpnext
utils.py
13
20
https://github.com/frappe/erpnext.git
3
98
0
52
165
Python
{ "docstring": "select sum(actual_qty) from tabBin\n\t\twhere item_code=%s {0}", "language": "en", "n_whitespaces": 5, "n_words": 7, "vocab_size": 7 }
def get_latest_stock_qty(item_code, warehouse=None): values, condition = [item_code], "" if warehouse: lft, rgt, is_group = frappe.db.get_value("Warehouse", warehouse, ["lft", "rgt", "is_group"]) if is_group: values.extend([lft, rgt]) condition += "and exists (\ select name from `tabWarehouse` wh where wh.name = tabBin.warehouse\ and wh.lft >= %s and wh.rgt <= %s)" else: values.append(warehouse) condition += " AND warehouse = %s" actual_qty = frappe.db.sql( .format( condition ), values, )[0][0] return actual_qty
75,584
259,125
138
sklearn/kernel_approximation.py
45
21
def get_feature_names_out(self, input_features=None): input_features = _check_feature_names_in( self, input_features, generate_names=True ) est_name = self.__class__.__nam
ENH Adds get_feature_names_out for AdditiveChi2Sampler (#22137) Co-authored-by: Olivier Grisel <olivier.grisel@gmail.com> Co-authored-by: Jérémie du Boisberranger <34657725+jeremiedbb@users.noreply.github.com>
get_feature_names_out
67a3feed2fe4e82c1cc129c34b9e223b94a8d531
scikit-learn
kernel_approximation.py
11
11
https://github.com/scikit-learn/scikit-learn.git
5
94
0
31
176
Python
{ "docstring": "Get output feature names for transformation.\n\n Parameters\n ----------\n input_features : array-like of str or None, default=None\n Only used to validate feature names with the names seen in :meth:`fit`.\n\n Returns\n -------\n feature_names_out : ndarray of str objects\n Transformed feature names.\n ", "language": "en", "n_whitespaces": 110, "n_words": 39, "vocab_size": 32 }
def get_feature_names_out(self, input_features=None): input_features = _check_feature_names_in( self, input_features, generate_names=True ) est_name = self.__class__.__name__.lower() names_list = [f"{est_name}_{name}_sqrt" for name in input_features] for j in range(1, self.sample_steps): cos_names = [f"{est_name}_{name}_cos{j}" for name in input_features] sin_names = [f"{est_name}_{name}_sin{j}" for name in input_features] names_list.extend(cos_names + sin_names) return np.asarray(names_list, dtype=object)
16,391
75,312
63
wagtail/images/tests/test_templatetags.py
21
12
def test_render_valid_image_as_context_variable(self): context = {"image": self.image, "image_node": "fake value"} node = ImageNode(Variable("image"), "original", "image_node") rendered = node.render(context)
Reformat with black
test_render_valid_image_as_context_variable
d10f15e55806c6944827d801cd9c2d53f5da4186
wagtail
test_templatetags.py
11
6
https://github.com/wagtail/wagtail.git
1
59
0
19
108
Python
{ "docstring": "\n Tests that an ImageNode with a valid image and a context variable name\n renders an empty string and puts a rendition in the context variable\n ", "language": "en", "n_whitespaces": 47, "n_words": 25, "vocab_size": 19 }
def test_render_valid_image_as_context_variable(self): context = {"image": self.image, "image_node": "fake value"} node = ImageNode(Variable("image"), "original", "image_node") rendered = node.render(context) self.assertEqual(rendered, "") self.assertIsInstance(context["image_node"], Rendition)
47,479
195,934
61
sympy/polys/rootisolation.py
29
12
def dup_cauchy_lower_bound(f, K): g = dup_reverse(f) if len(g) < 2: raise PolynomialError('Polynomial has no non-zero roots.') if K.is_ZZ: K = K.get_field() b = dup_cauchy_upper_bound(g, K) return
Add `dup_...` funcs for Cauchy bounds.
dup_cauchy_lower_bound
4f34fcc3406452ace4a70d541064f2dfdcee9205
sympy
rootisolation.py
10
8
https://github.com/sympy/sympy.git
3
53
0
25
89
Python
{ "docstring": "Compute the Cauchy lower bound on the absolute value of all non-zero\n roots of f, real or complex.", "language": "en", "n_whitespaces": 23, "n_words": 18, "vocab_size": 16 }
def dup_cauchy_lower_bound(f, K): g = dup_reverse(f) if len(g) < 2: raise PolynomialError('Polynomial has no non-zero roots.') if K.is_ZZ: K = K.get_field() b = dup_cauchy_upper_bound(g, K) return K.one / b
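A worked sketch of the underlying bound on plain coefficient lists (highest power first), independent of SymPy's internal dup representation; the helper names here are hypothetical and assume a polynomial of degree at least one.

from fractions import Fraction

def cauchy_upper_bound(coeffs):
    # coeffs = [a_n, ..., a_1, a_0] with a_n != 0; every root z of the
    # polynomial satisfies |z| <= 1 + max|a_i| / |a_n|.
    lead, *rest = coeffs
    return 1 + max(Fraction(abs(c), abs(lead)) for c in rest)

def cauchy_lower_bound(coeffs):
    # Reverse the coefficients (i.e. look at x**n * f(1/x)) and invert the
    # upper bound; this bounds every *non-zero* root from below.
    reversed_coeffs = list(coeffs[::-1])
    while reversed_coeffs[0] == 0:
        reversed_coeffs.pop(0)
    return 1 / cauchy_upper_bound(reversed_coeffs)

# f(x) = x**2 - 3*x + 2 has roots 1 and 2.
print(float(cauchy_upper_bound([1, -3, 2])))  # 4.0  -> both roots are <= 4
print(float(cauchy_lower_bound([1, -3, 2])))  # 0.4  -> both roots are >= 0.4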
35,807
154,142
1,287
modin/core/io/column_stores/parquet_dispatcher.py
327
50
def call_deploy(cls, fname, col_partitions, storage_options, **kwargs): from pyarrow.parquet import ParquetFile from modin.core.storage_formats.pandas.parsers import ParquetFileToRead # If we don't have any columns to read, we should just return an empty # set of references. if len(col_partitions) == 0: return [] filesystem, parquet_files = cls.get_fsspec_files(fname, storage_options) row_groups_per_file = [] num_row_groups = 0 # Count up the total number of row groups across all files and # keep track of row groups per file to use later. for file
FIX-#4756: Correctly propagate `storage_options` in `read_parquet` (#4764) Co-authored-by: Yaroslav Igoshev <Poolliver868@mail.ru> Co-authored-by: Alexey Prutskov <lehaprutskov@gmail.com> Signed-off-by: Karthik Velayutham <vkarthik@ponder.io>
call_deploy
4548012a6372b8ce79d7e07c9ae13fd7444a91c8
modin
parquet_dispatcher.py
13
69
https://github.com/modin-project/modin.git
9
287
0
182
460
Python
{ "docstring": "\n Deploy remote tasks to the workers with passed parameters.\n\n Parameters\n ----------\n fname : str, path object or file-like object\n Name of the file to read.\n col_partitions : list\n List of arrays with columns names that should be read\n by each partition.\n storage_options : dict\n Parameters for specific storage engine.\n **kwargs : dict\n Parameters of deploying read_* function.\n\n Returns\n -------\n List\n Array with references to the task deploy result for each partition.\n ", "language": "en", "n_whitespaces": 215, "n_words": 71, "vocab_size": 52 }
def call_deploy(cls, fname, col_partitions, storage_options, **kwargs): from pyarrow.parquet import ParquetFile from modin.core.storage_formats.pandas.parsers import ParquetFileToRead # If we don't have any columns to read, we should just return an empty # set of references. if len(col_partitions) == 0: return [] filesystem, parquet_files = cls.get_fsspec_files(fname, storage_options) row_groups_per_file = [] num_row_groups = 0 # Count up the total number of row groups across all files and # keep track of row groups per file to use later. for file in parquet_files: with filesystem.open(file) as f: row_groups = ParquetFile(f).num_row_groups row_groups_per_file.append(row_groups) num_row_groups += row_groups # step determines how many row groups are going to be in a partition step = compute_chunksize( num_row_groups, NPartitions.get(), min_block_size=1, ) current_partition_size = 0 file_index = 0 partition_files = [] # 2D array - each element contains list of chunks to read row_groups_used_in_current_file = 0 total_row_groups_added = 0 # On each iteration, we add a chunk of one file. That will # take us either to the end of a partition, or to the end # of a file. while total_row_groups_added < num_row_groups: if current_partition_size == 0: partition_files.append([]) partition_file = partition_files[-1] file_path = parquet_files[file_index] row_group_start = row_groups_used_in_current_file row_groups_left_in_file = ( row_groups_per_file[file_index] - row_groups_used_in_current_file ) row_groups_left_for_this_partition = step - current_partition_size if row_groups_left_for_this_partition <= row_groups_left_in_file: # File has at least what we need to finish partition # So finish this partition and start a new one. num_row_groups_to_add = row_groups_left_for_this_partition current_partition_size = 0 else: # File doesn't have enough to complete this partition. Add # it into current partition and go to next file. num_row_groups_to_add = row_groups_left_in_file current_partition_size += num_row_groups_to_add if num_row_groups_to_add == row_groups_left_in_file: file_index += 1 row_groups_used_in_current_file = 0 else: row_groups_used_in_current_file += num_row_groups_to_add partition_file.append( ParquetFileToRead( file_path, row_group_start, row_group_start + num_row_groups_to_add ) ) total_row_groups_added += num_row_groups_to_add assert ( total_row_groups_added == num_row_groups ), "row groups added does not match total num of row groups across parquet files" all_partitions = [] for files_to_read in partition_files: all_partitions.append( [ cls.deploy( cls.parse, files_for_parser=files_to_read, columns=cols, num_returns=3, storage_options=storage_options, **kwargs, ) for cols in col_partitions ] ) return all_partitions
35,407
153,459
98
modin/db_conn.py
26
12
def get_connection(self): if self.lib == _PSYCOPG_LIB_NAME: import psycopg2 return psycopg2.connect(*self.args, **self.kwargs) if self.lib == _SQLALCHEMY_LIB_NAME: from sqlalchemy import create_engine
FEAT-#979: Enable reading from SQL server. (#4279) Co-authored-by: eavidan <eran.avidan@intel.com> Co-authored-by: Devin Petersohn <devin-petersohn@users.noreply.github.com> Signed-off-by: mvashishtha <mahesh@ponder.io>
get_connection
2d40797b2b700d81d4db4a4cd023d563edf6431f
modin
db_conn.py
13
8
https://github.com/modin-project/modin.git
3
63
0
21
106
Python
{ "docstring": "\n Make the database connection and get it.\n\n For psycopg2, pass all arguments to psycopg2.connect() and return the\n result of psycopg2.connect(). For sqlalchemy, pass all arguments to\n sqlalchemy.create_engine() and return the result of calling connect()\n on the engine.\n\n Returns\n -------\n Any\n The open database connection.\n ", "language": "en", "n_whitespaces": 119, "n_words": 44, "vocab_size": 30 }
def get_connection(self): if self.lib == _PSYCOPG_LIB_NAME: import psycopg2 return psycopg2.connect(*self.args, **self.kwargs) if self.lib == _SQLALCHEMY_LIB_NAME: from sqlalchemy import create_engine return create_engine(*self.args, **self.kwargs).connect() raise UnsupportedDatabaseException("Unsupported database library")
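A minimal usage sketch of the SQLAlchemy branch above, assuming SQLAlchemy is installed and using an in-memory SQLite URL so it stays self-contained; it illustrates create_engine(...).connect(), not Modin's wrapper class.

from sqlalchemy import create_engine, text

# An in-memory SQLite database keeps the example self-contained;
# any SQLAlchemy connection URL would work the same way.
engine = create_engine("sqlite:///:memory:")
with engine.connect() as conn:
    print(conn.execute(text("SELECT 1")).scalar())  # 1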
@keras_export("keras.backend.argmin") @tf.__internal__.dispatch.add_dispatch_support @doc_controls.do_not_generate_docs
80,095
269,459
12
keras/backend.py
9
10
def argmax(x, axis=-1): return tf.argmax(x, axis) @keras_export("keras.backend.argmin") @tf.__internal__.dispatch.add_dispatch_support @doc_controls.do_not_generate_docs
Reformatting the codebase with black. PiperOrigin-RevId: 450093126
argmax
84afc5193d38057e2e2badf9c889ea87d80d8fbf
keras
backend.py
7
2
https://github.com/keras-team/keras.git
1
20
1
9
62
Python
{ "docstring": "Returns the index of the maximum value along an axis.\n\n Args:\n x: Tensor or variable.\n axis: axis along which to perform the reduction.\n\n Returns:\n A tensor.\n ", "language": "en", "n_whitespaces": 56, "n_words": 26, "vocab_size": 23 }
def argmax(x, axis=-1): return tf.argmax(x, axis) @keras_export("keras.backend.argmin") @tf.__internal__.dispatch.add_dispatch_support @doc_controls.do_not_generate_docs
49,448
199,955
184
sympy/core/facts.py
51
8
def print_rules(self) -> Iterator[str]: yield from self._defined_facts_lines() yield '' yield '' yield from self._full_implications_lines() yield '' yield '' yield from self._prereq_lines() yield '' yield '' yield from self._beta_rules_lines() yield '' yield '' yield "generated_assumptions = {'defined_facts': defined_facts, 'full
refactor
print_rules
f68e8de4252200cfc74b9433d00f77c4510ac68d
sympy
facts.py
8
18
https://github.com/sympy/sympy.git
1
63
0
24
140
Python
{ "docstring": " Returns a generator with lines to represent the facts and rules ", "language": "en", "n_whitespaces": 12, "n_words": 11, "vocab_size": 11 }
def print_rules(self) -> Iterator[str]: yield from self._defined_facts_lines() yield '' yield '' yield from self._full_implications_lines() yield '' yield '' yield from self._prereq_lines() yield '' yield '' yield from self._beta_rules_lines() yield '' yield '' yield "generated_assumptions = {'defined_facts': defined_facts, 'full_implications': full_implications," yield " 'prereq': prereq, 'beta_rules': beta_rules, 'beta_triggers': beta_triggers}" yield '' yield ''
56,673
222,610
21
python3.10.4/Lib/distutils/cmd.py
7
5
def ensure_string(self, option, default=None): self._ensure_stringlike(option, "strin
add python 3.10.4 for windows
ensure_string
8198943edd73a363c266633e1aa5b2a9e9c9f526
XX-Net
cmd.py
8
2
https://github.com/XX-net/XX-Net.git
1
22
0
7
36
Python
{ "docstring": "Ensure that 'option' is a string; if not defined, set it to\n 'default'.\n ", "language": "en", "n_whitespaces": 27, "n_words": 13, "vocab_size": 13 }
def ensure_string(self, option, default=None): self._ensure_stringlike(option, "string", default)
9,926
49,815
27
modules/image/text_to_image/disco_diffusion_cnclip_vitb16/reverse_diffusion/model/nn.py
14
11
def update_ema(target_params, source_params, rate=0.99): for targ, src in zip(target_params, source_params): targ.detach().mul_(rate).add_(src, alpha=1
add disco_diffusion_cnclip_vitb16 module
update_ema
f4d6e64cdc132ae868699a0ba442f4ab1d304a14
PaddleHub
nn.py
13
3
https://github.com/PaddlePaddle/PaddleHub.git
2
47
0
14
70
Python
{ "docstring": "\n Update target parameters to be closer to those of source parameters using\n an exponential moving average.\n\n :param target_params: the target parameter sequence.\n :param source_params: the source parameter sequence.\n :param rate: the EMA rate (closer to 1 means slower).\n ", "language": "en", "n_whitespaces": 57, "n_words": 38, "vocab_size": 27 }
def update_ema(target_params, source_params, rate=0.99): for targ, src in zip(target_params, source_params): targ.detach().mul_(rate).add_(src, alpha=1 - rate)
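The same exponential-moving-average update, sketched on plain Python floats so the formula behind targ.mul_(rate).add_(src, alpha=1 - rate) is easy to check by hand; the rate of 0.9 and the constant input stream are illustrative choices, not values from the module.

def update_ema_scalar(target, source, rate=0.99):
    # target <- rate * target + (1 - rate) * source, which is exactly what
    # targ.detach().mul_(rate).add_(src, alpha=1 - rate) does in place.
    return rate * target + (1 - rate) * source

ema = 0.0
for step in range(4):
    ema = update_ema_scalar(ema, 1.0, rate=0.9)
    print(step, round(ema, 4))
# 0 0.1
# 1 0.19
# 2 0.271
# 3 0.3439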
@lru_cache(maxsize=1)
3,904
21,526
188
pipenv/patched/notpip/_vendor/platformdirs/android.py
68
19
def _android_folder() -> str | None: try: # First try to get path to android app via pyjnius from jnius import autoclass
Vendor in pip 22.1.2
_android_folder
c69d55f7c82d5ae2cce542bcfb98d043ca4836a0
pipenv
android.py
17
15
https://github.com/pypa/pipenv.git
4
86
1
52
164
Python
{ "docstring": ":return: base folder for the Android OS or None if cannot be found", "language": "en", "n_whitespaces": 12, "n_words": 13, "vocab_size": 13 }
def _android_folder() -> str | None: try: # First try to get path to android app via pyjnius from jnius import autoclass Context = autoclass("android.content.Context") # noqa: N806 result: str | None = Context.getFilesDir().getParentFile().getAbsolutePath() except Exception: # if fails find an android folder looking path on the sys.path pattern = re.compile(r"/data/(data|user/\d+)/(.+)/files") for path in sys.path: if pattern.match(path): result = path.split("/files")[0] break else: result = None return result @lru_cache(maxsize=1)
52,587
209,060
63
scapy/volatile.py
27
8
def de_bruijn(charset, n, maxlen): # type: (str, int, int) -> str k = len(char
Add CyclicPattern class for generation of payload data (#3508) * Add CyclicPattern class for generation of payload data * minor enhancment * fix python2 * fix python2 * use six * fix flake
de_bruijn
e2fc7dddb40a7b80f2e65ad6593c0b10080019d0
scapy
volatile.py
9
7
https://github.com/secdev/scapy.git
1
44
0
21
50
Python
{ "docstring": "\n Generate the De Bruijn Sequence up to `maxlen` characters\n for the charset `charset` and subsequences of length `n`.\n Algorithm modified from wikipedia\n https://en.wikipedia.org/wiki/De_Bruijn_sequence\n ", "language": "en", "n_whitespaces": 59, "n_words": 23, "vocab_size": 22 }
def de_bruijn(charset, n, maxlen): # type: (str, int, int) -> str k = len(charset) a = [0] * k * n sequence = [] # type: List[str]
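The code field above is cut off after the setup; for reference, this is the textbook recursive construction of a De Bruijn sequence from the Wikipedia article the docstring cites. It is a sketch of the standard algorithm, not necessarily byte-for-byte what Scapy ships.

def de_bruijn_reference(charset, n, maxlen):
    # Textbook recursive (Lyndon-word based) construction, truncated to
    # `maxlen` characters of the cyclic sequence.
    k = len(charset)
    a = [0] * k * n
    sequence = []

    def db(t, p):
        if t > n:
            if n % p == 0:
                sequence.extend(a[1:p + 1])
        else:
            a[t] = a[t - p]
            db(t + 1, p)
            for j in range(a[t - p] + 1, k):
                a[t] = j
                db(t + 1, t)

    db(1, 1)
    return "".join(charset[i] for i in sequence)[:maxlen]

# Every length-3 string over "ab" occurs (cyclically) in the result:
print(de_bruijn_reference("ab", 3, 8))  # aaababbb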
21,283
101,901
168
lib/gui/display_command.py
33
24
def _add_option_refresh(self) -> None: logger.debug("Adding refresh option") btnrefresh = ttk.Button(self.optsframe, image=get_images().icons["reload"], command=lambda x="update": preview_trigger().set(x)) # type:ignore btnrefresh.pack(padx=2, side=tk.RIGHT) Tooltip(btnrefresh, text=_("Preview updates at every model save. Click to refresh now."), wrap_length=200) logger.debug("Added refresh option")
Typing - lib.gui.display_command
_add_option_refresh
dab823a3eb7a5257cb1e0818ee10ed234d3de97f
faceswap
display_command.py
14
11
https://github.com/deepfakes/faceswap.git
1
86
0
30
147
Python
{ "docstring": " Add refresh button to refresh preview immediately ", "language": "en", "n_whitespaces": 8, "n_words": 7, "vocab_size": 6 }
def _add_option_refresh(self) -> None: logger.debug("Adding refresh option") btnrefresh = ttk.Button(self.optsframe, image=get_images().icons["reload"], command=lambda x="update": preview_trigger().set(x)) # type:ignore btnrefresh.pack(padx=2, side=tk.RIGHT) Tooltip(btnrefresh, text=_("Preview updates at every model save. Click to refresh now."), wrap_length=200) logger.debug("Added refresh option")
69,929
242,808
201
src/PIL/Image.py
60
14
def close(self): try: if hasattr(self, "_close__fp"): self._close__fp() if self.fp: self.fp.close() self.fp = None except Exception as msg: logger.debug("Error closing: %s", msg) if getat
[Private] class names should be CamelCase
close
7fa92c67b1471a66739c4768cdef616c27675981
Pillow
Image.py
12
12
https://github.com/python-pillow/Pillow.git
5
77
0
51
138
Python
{ "docstring": "\n Closes the file pointer, if possible.\n\n This operation will destroy the image core and release its memory.\n The image data will be unusable afterward.\n\n This function is required to close images that have multiple frames or\n have not had their file read and closed by the\n :py:meth:`~PIL.Image.Image.load` method. See :ref:`file-handling` for\n more information.\n ", "language": "en", "n_whitespaces": 110, "n_words": 53, "vocab_size": 45 }
def close(self): try: if hasattr(self, "_close__fp"): self._close__fp() if self.fp: self.fp.close() self.fp = None except Exception as msg: logger.debug("Error closing: %s", msg) if getattr(self, "map", None): self.map = None # Instead of simply setting to None, we're setting up a # deferred error that will better explain that the core image # object is gone. self.im = DeferredError(ValueError("Operation on closed image"))
47,458
195,871
31
sympy/solvers/diophantine/diophantine.py
16
12
def diop_general_sum_of_squares(eq, limit=1): r var, coeff, diop_type = classify_diop(eq, _dict=False) if diop_type == GeneralSumOfSquares.name: return set(GeneralSumOfSquares(eq).solve(limit=limit))
Improved documentation formatting
diop_general_sum_of_squares
cda8dfe6f45dc5ed394c2f5cda706cd6c729f713
sympy
diophantine.py
13
37
https://github.com/sympy/sympy.git
2
47
0
15
73
Python
{ "docstring": "\n Solves the equation `x_{1}^2 + x_{2}^2 + . . . + x_{n}^2 - k = 0`.\n\n Returns at most ``limit`` number of solutions.\n\n Usage\n =====\n\n ``general_sum_of_squares(eq, limit)`` : Here ``eq`` is an expression which\n is assumed to be zero. Also, ``eq`` should be in the form,\n `x_{1}^2 + x_{2}^2 + . . . + x_{n}^2 - k = 0`.\n\n Details\n =======\n\n When `n = 3` if `k = 4^a(8m + 7)` for some `a, m \\in Z` then there will be\n no solutions. Refer to [1]_ for more details.\n\n Examples\n ========\n\n >>> from sympy.solvers.diophantine.diophantine import diop_general_sum_of_squares\n >>> from sympy.abc import a, b, c, d, e\n >>> diop_general_sum_of_squares(a**2 + b**2 + c**2 + d**2 + e**2 - 2345)\n {(15, 22, 22, 24, 24)}\n\n Reference\n =========\n\n .. [1] Representing an integer as a sum of three squares, [online],\n Available:\n http://www.proofwiki.org/wiki/Integer_as_Sum_of_Three_Squares\n ", "language": "en", "n_whitespaces": 216, "n_words": 138, "vocab_size": 98 }
def diop_general_sum_of_squares(eq, limit=1): r var, coeff, diop_type = classify_diop(eq, _dict=False) if diop_type == GeneralSumOfSquares.name: return set(GeneralSumOfSquares(eq).solve(limit=limit))
@Directory.register
45,631
186,806
97
acme/acme/messages.py
34
14
def resolved_combinations(self) -> Tuple[Tuple[ChallengeBody, ...], ...]:
deprecate more attributes in acme (#9369) * deprecate more attributes in acme * Deprecate .Authorization.combinations by renaming the field and deprecating in getters/setters * Silence deprecation warnings from our own imports of acme.mixins Co-authored-by: Brad Warren <bmw@users.noreply.github.com>
resolved_combinations
f7e61edcb2ea3195c9889c407a08e6dffb7f60dc
certbot
messages.py
11
11
https://github.com/certbot/certbot.git
3
50
1
31
87
Python
{ "docstring": "Combinations with challenges instead of indices.\n\n .. deprecated: 1.30.0\n\n ", "language": "en", "n_whitespaces": 23, "n_words": 9, "vocab_size": 9 }
def resolved_combinations(self) -> Tuple[Tuple[ChallengeBody, ...], ...]: warnings.warn( "acme.messages.Authorization.resolved_combinations is deprecated and will be " "removed in a future release.", DeprecationWarning) return tuple(tuple(self.challenges[idx] for idx in combo) for combo in self.combinations) # pylint: disable=not-an-iterable @Directory.register
75,768
259,434
383
sklearn/_loss/tests/test_loss.py
174
25
def test_tweedie_log_identity_consistency(p): half_tweedie_log = HalfTweedieLoss(power=p) half_tweedie_identity = HalfTweedieLossIdentity(power=p) n_samples = 10 y_true, raw_prediction = random_y_true_raw_prediction( loss=half_tweedie_log, n_samples=n_samples, seed=42 ) y_pred = half_tweedie_log.link.inverse(raw_prediction) # exp(raw_prediction) # Let's compare the loss values, up to some constant term that is dropped # in HalfTweedieLoss but not in HalfTweedieLossIdentity. loss_log = half_tweedie_log.loss( y_true=y_true, raw_prediction=raw_prediction ) + half_tweedie_log.constant_to_optimal_zero(y_true) loss_identity = half_tweedie_identity.loss( y_true=y_true, raw_prediction=y_pred ) + half_tweedie_identity.constant_to_optimal_zero(y_true) # Note that HalfTweedieLoss ignores different constant terms than # HalfTweedieLos
ENH migrate GLMs / TweedieRegressor to linear loss (#22548) Co-authored-by: Olivier Grisel <olivier.grisel@ensta.org> Co-authored-by: Thomas J. Fan <thomasjpfan@gmail.com>
test_tweedie_log_identity_consistency
75a94f518f7bd7d0bf581ffb67d9f961e3c4efbc
scikit-learn
test_loss.py
10
25
https://github.com/scikit-learn/scikit-learn.git
1
155
0
109
255
Python
{ "docstring": "Test for identical losses when only the link function is different.", "language": "en", "n_whitespaces": 10, "n_words": 11, "vocab_size": 11 }
def test_tweedie_log_identity_consistency(p): half_tweedie_log = HalfTweedieLoss(power=p) half_tweedie_identity = HalfTweedieLossIdentity(power=p) n_samples = 10 y_true, raw_prediction = random_y_true_raw_prediction( loss=half_tweedie_log, n_samples=n_samples, seed=42 ) y_pred = half_tweedie_log.link.inverse(raw_prediction) # exp(raw_prediction) # Let's compare the loss values, up to some constant term that is dropped # in HalfTweedieLoss but not in HalfTweedieLossIdentity. loss_log = half_tweedie_log.loss( y_true=y_true, raw_prediction=raw_prediction ) + half_tweedie_log.constant_to_optimal_zero(y_true) loss_identity = half_tweedie_identity.loss( y_true=y_true, raw_prediction=y_pred ) + half_tweedie_identity.constant_to_optimal_zero(y_true) # Note that HalfTweedieLoss ignores different constant terms than # HalfTweedieLossIdentity. Constant terms means terms not depending on # raw_prediction. By adding these terms, `constant_to_optimal_zero`, both losses # give the same values. assert_allclose(loss_log, loss_identity) # For gradients and hessians, the constant terms do not matter. We have, however, # to account for the chain rule, i.e. with x=raw_prediction # gradient_log(x) = d/dx loss_log(x) # = d/dx loss_identity(exp(x)) # = exp(x) * gradient_identity(exp(x)) # Similarly, # hessian_log(x) = exp(x) * gradient_identity(exp(x)) # + exp(x)**2 * hessian_identity(x) gradient_log, hessian_log = half_tweedie_log.gradient_hessian( y_true=y_true, raw_prediction=raw_prediction ) gradient_identity, hessian_identity = half_tweedie_identity.gradient_hessian( y_true=y_true, raw_prediction=y_pred ) assert_allclose(gradient_log, y_pred * gradient_identity) assert_allclose( hessian_log, y_pred * gradient_identity + y_pred**2 * hessian_identity )
@_noconds_(True)
48,218
196,851
306
sympy/integrals/transforms.py
89
33
def laplace_transform(f, t, s, legacy_matrix=True, **hints): r debug('\n***** laplace_transform(%s, %s, %s)'%(f, t, s)) if isinstance(f, MatrixBase) and hasattr(f, 'applyfunc'): conds = not hints.get('noconds', False) if conds and legacy_matrix: SymPyDeprecationWarning( feature="laplace_transform of a Matrix with noconds=False (default)", useinstead="the option legacy_matrix=False to get the new behaviour", issue=21504, deprecated_since_version="1.9" ).warn() return f.applyfunc(lambda fij: laplace_transform(fij, t, s, **hints)) else: elements_trans = [laplace_transform(fij, t, s, **hints) for fij in f] if c
Fix a few docstring formatting issues
laplace_transform
1eeb01e15f06c6692a5bfd6fd2d2a3002d864a07
sympy
transforms.py
17
85
https://github.com/sympy/sympy.git
7
196
1
71
315
Python
{ "docstring": "\n Compute the Laplace Transform `F(s)` of `f(t)`,\n\n .. math :: F(s) = \\int_{0^{-}}^\\infty e^{-st} f(t) \\mathrm{d}t.\n\n Explanation\n ===========\n\n For all sensible functions, this converges absolutely in a\n half-plane\n\n .. math :: a < \\operatorname{Re}(s)\n\n This function returns ``(F, a, cond)`` where ``F`` is the Laplace\n transform of ``f``, `a` is the half-plane of convergence, and `cond` are\n auxiliary convergence conditions.\n\n The implementation is rule-based, and if you are interested in which\n rules are applied, and whether integration is attemped, you can switch\n debug information on by setting ``sympy.SYMPY_DEBUG=True``.\n\n The lower bound is `0-`, meaning that this bound should be approached\n from the lower side. This is only necessary if distributions are involved.\n At present, it is only done if `f(t)` contains ``DiracDelta``, in which\n case the Laplace transform is computed implicitly as\n\n .. math :: F(s) = \\lim_{\\tau\\to 0^{-}} \\int_{\\tau}^\\infty e^{-st} f(t) \\mathrm{d}t\n\n by applying rules.\n\n If the integral cannot be fully computed in closed form, this function\n returns an unevaluated :class:`LaplaceTransform` object.\n\n For a description of possible hints, refer to the docstring of\n :func:`sympy.integrals.transforms.IntegralTransform.doit`. If ``noconds=True``,\n only `F` will be returned (i.e. not ``cond``, and also not the plane ``a``).\n\n .. deprecated:: 1.9\n Legacy behavior for matrices where ``laplace_transform`` with\n ``noconds=False`` (the default) returns a Matrix whose elements are\n tuples. The behavior of ``laplace_transform`` for matrices will change\n in a future release of SymPy to return a tuple of the transformed\n Matrix and the convergence conditions for the matrix as a whole. Use\n ``legacy_matrix=False`` to enable the new behavior.\n\n Examples\n ========\n\n >>> from sympy import DiracDelta, exp, laplace_transform\n >>> from sympy.abc import t, s, a\n >>> laplace_transform(t**4, t, s)\n (24/s**5, 0, True)\n >>> laplace_transform(t**a, t, s)\n (gamma(a + 1)/(s*s**a), 0, re(a) > -1)\n >>> laplace_transform(DiracDelta(t)-a*exp(-a*t),t,s)\n (s/(a + s), Max(0, -a), True)\n\n See Also\n ========\n\n inverse_laplace_transform, mellin_transform, fourier_transform\n hankel_transform, inverse_hankel_transform\n\n ", "language": "en", "n_whitespaces": 463, "n_words": 300, "vocab_size": 192 }
def laplace_transform(f, t, s, legacy_matrix=True, **hints): r debug('\n***** laplace_transform(%s, %s, %s)'%(f, t, s)) if isinstance(f, MatrixBase) and hasattr(f, 'applyfunc'): conds = not hints.get('noconds', False) if conds and legacy_matrix: SymPyDeprecationWarning( feature="laplace_transform of a Matrix with noconds=False (default)", useinstead="the option legacy_matrix=False to get the new behaviour", issue=21504, deprecated_since_version="1.9" ).warn() return f.applyfunc(lambda fij: laplace_transform(fij, t, s, **hints)) else: elements_trans = [laplace_transform(fij, t, s, **hints) for fij in f] if conds: elements, avals, conditions = zip(*elements_trans) f_laplace = type(f)(*f.shape, elements) return f_laplace, Max(*avals), And(*conditions) else: return type(f)(*f.shape, elements_trans) return LaplaceTransform(f, t, s).doit(**hints) @_noconds_(True)
74,867
256,315
171
test/benchmarks/nq_to_squad.py
71
14
def reduce_annotations(anno_types, answers): for at in set(anno_types): assert at in ("no_answer", "short_answer") if anno_types.count("short_answer") >= anno_types.count("no_answer"): majority = "short_answer" is_impossible = False else: majority = "no_answer" is_impossible = True answers = [a for at, a in zip(anno_types, answers) if at == majority] reduction = len(anno_types) -
Apply black formatting (#2115) * Testing black on ui/ * Applying black on docstores * Add latest docstring and tutorial changes * Create a single GH action for Black and docs to reduce commit noise to the minimum, slightly refactor the OpenAPI action too * Remove comments * Relax constraints on pydoc-markdown * Split temporary black from the docs. Pydoc-markdown was obsolete and needs a separate PR to upgrade * Fix a couple of bugs * Add a type: ignore that was missing somehow * Give path to black * Apply Black * Apply Black * Relocate a couple of type: ignore * Update documentation * Make Linux CI run after applying Black * Triggering Black * Apply Black * Remove dependency, does not work well * Remove manually double trailing commas * Update documentation Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
reduce_annotations
a59bca366174d9c692fa19750c24d65f47660ef7
haystack
nq_to_squad.py
10
20
https://github.com/deepset-ai/haystack.git
6
112
0
45
191
Python
{ "docstring": "\n In cases where there is annotator disagreement, this fn picks either only the short_answers or only the no_answers,\n depending on which is more numerous, with a bias towards picking short_answers.\n\n Note: By this stage, all long_answer annotations and all samples with yes/no answer have been removed.\n This leaves just no_answer and short_answers", "language": "en", "n_whitespaces": 64, "n_words": 52, "vocab_size": 44 }
def reduce_annotations(anno_types, answers): for at in set(anno_types): assert at in ("no_answer", "short_answer") if anno_types.count("short_answer") >= anno_types.count("no_answer"): majority = "short_answer" is_impossible = False else: majority = "no_answer" is_impossible = True answers = [a for at, a in zip(anno_types, answers) if at == majority] reduction = len(anno_types) - len(answers) assert reduction < 3 if not is_impossible: global n_no_ans n_no_ans += reduction else: global n_short n_short += reduction answers = [] return answers, is_impossible
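A small sketch isolating just the majority-vote rule above (ties go to "short_answer"), leaving out the global n_no_ans / n_short bookkeeping; the helper name pick_majority is made up for illustration.

from collections import Counter

def pick_majority(anno_types):
    # Ties go to "short_answer", mirroring the `>=` comparison above.
    counts = Counter(anno_types)
    if counts["short_answer"] >= counts["no_answer"]:
        return "short_answer", False  # is_impossible = False
    return "no_answer", True          # is_impossible = True

print(pick_majority(["short_answer", "no_answer", "short_answer"]))  # ('short_answer', False)
print(pick_majority(["no_answer", "no_answer", "short_answer"]))     # ('no_answer', True)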
107,162
308,405
222
homeassistant/components/mqtt/cover.py
35
22
async def async_open_cover(self, **kwargs): await mqtt.async_publish( self.ha
Add mqtt encoding support for publishing (#62739) * encoding support for mqtt publishing - todo tests * signature allows None values for qos and retain * common test for mqtt publishing encoding * better test with command templates * more tests * fix tests alarm control panel+tests light basic * tests light json and template * add tests vacuum and fix tests light_template
async_open_cover
d0c4f0fec4216e4193da716001b5e13e1e3f2106
core
cover.py
14
16
https://github.com/home-assistant/core.git
3
98
0
32
150
Python
{ "docstring": "Move the cover up.\n\n This method is a coroutine.\n ", "language": "en", "n_whitespaces": 23, "n_words": 9, "vocab_size": 9 }
async def async_open_cover(self, **kwargs): await mqtt.async_publish( self.hass, self._config.get(CONF_COMMAND_TOPIC), self._config[CONF_PAYLOAD_OPEN], self._config[CONF_QOS], self._config[CONF_RETAIN], self._config[CONF_ENCODING], ) if self._optimistic: # Optimistically assume that cover has changed state. self._state = STATE_OPEN if self._config.get(CONF_GET_POSITION_TOPIC): self._position = self.find_percentage_in_range( self._config[CONF_POSITION_OPEN], COVER_PAYLOAD ) self.async_write_ha_state()
3,191
20,042
176
pipenv/patched/notpip/_vendor/distro.py
43
10
def _parse_distro_release_content(line): # type: (str) -> Dict[str, str] matches = _DISTRO_RELEASE_CONTENT_REVERSED_PATTERN.match(line.strip()[::-1]) distro_info = {} if matches: # regexp ensures non-None distro_info["name"] = matches.group(3
check point progress on only bringing in pip==22.0.4 (#4966) * vendor in pip==22.0.4 * updating vendor packaging version * update pipdeptree to fix pipenv graph with new version of pip. * Vendoring of pip-shims 0.7.0 * Vendoring of requirementslib 1.6.3 * Update pip index safety restrictions patch for pip==22.0.4 * Update patches * exclude pyptoject.toml from black to see if that helps. * Move this part of the hash collection back to the top (like prior implementation) because it affects the outcome of this test now in pip 22.0.4
_parse_distro_release_content
f3166e673fe8d40277b804d35d77dcdb760fc3b3
pipenv
distro.py
13
12
https://github.com/pypa/pipenv.git
5
109
0
32
201
Python
{ "docstring": "\n Parse a line from a distro release file.\n\n Parameters:\n * line: Line from the distro release file. Must be a unicode string\n or a UTF-8 encoded byte string.\n\n Returns:\n A dictionary containing all information items.\n ", "language": "en", "n_whitespaces": 97, "n_words": 35, "vocab_size": 28 }
def _parse_distro_release_content(line): # type: (str) -> Dict[str, str] matches = _DISTRO_RELEASE_CONTENT_REVERSED_PATTERN.match(line.strip()[::-1]) distro_info = {} if matches: # regexp ensures non-None distro_info["name"] = matches.group(3)[::-1] if matches.group(2): distro_info["version_id"] = matches.group(2)[::-1] if matches.group(1): distro_info["codename"] = matches.group(1)[::-1] elif line: distro_info["name"] = line.strip() return distro_info _distro = LinuxDistribution()
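A simplified, forward-matching sketch of the same parsing idea. The vendored module matches the reversed line with _DISTRO_RELEASE_CONTENT_REVERSED_PATTERN; the pattern below reads left to right for readability and is not the exact regex used above.

import re

# Left-to-right, simplified equivalent of the reversed-line matching trick.
_RELEASE_LINE = re.compile(
    r"(?P<name>.+?)(?: release)? (?P<version_id>[\d.]+)(?: \((?P<codename>[^)]*)\))?$"
)

def parse_release_line(line):
    m = _RELEASE_LINE.match(line.strip())
    return m.groupdict() if m else {}

print(parse_release_line("CentOS Linux release 7.9.2009 (Core)"))
# {'name': 'CentOS Linux', 'version_id': '7.9.2009', 'codename': 'Core'}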
6,866
37,755
333
src/transformers/modeling_utils.py
167
45
def load_sharded_checkpoint(model, folder, strict=True): # Load the index index_file = os.path.join(folder, WEIGHTS_INDEX_NAME) if not os.path.isfile(index_file): raise ValueError(f"Can't find a checkpoint index ({WEIGHTS_INDEX_NAME}) in {folder}.") with open(index_file, "r", encoding="utf-8") as f: index = json.load(f) shard_files = list(set(index["weight_map"].values())) # If strict=True, error before loading any of the state dicts. loaded_keys = index["weight_map"].keys() model_keys = model.state_dict().keys() missing_keys = [key for key in model_keys if key not in loaded_keys] unexpected_keys = [key for key in loaded_keys if key not in model_keys] if strict and (len(missing_keys) > 0 or len(unexpected_keys) > 0): error_message = f"Error(s) in loading state_dict for {model.__class__.__name__}" if len(missing_keys) > 0: str_missing_keys = ",
Make Trainer compatible with sharded checkpoints (#17053) * Make Trainer compatible with sharded checkpoints * Add doc
load_sharded_checkpoint
a8fa2f91f409a0657937016b983b74f58a07ae72
transformers
modeling_utils.py
15
26
https://github.com/huggingface/transformers.git
14
264
0
104
468
Python
{ "docstring": "\n This is the same as\n [`torch.nn.Module.load_state_dict`](https://pytorch.org/docs/stable/generated/torch.nn.Module.html?highlight=load_state_dict#torch.nn.Module.load_state_dict)\n but for a sharded checkpoint.\n\n This load is performed efficiently: each checkpoint shard is loaded one by one in RAM and deleted after being\n loaded in the model.\n\n Args:\n model (`torch.nn.Module`): The model in which to load the checkpoint.\n folder (`str` or `os.PathLike`): A path to a folder containing the sharded checkpoint.\n strict (`bool`, *optional`, defaults to `True`):\n Whether to strictly enforce that the keys in the model state dict match the keys in the sharded checkpoint.\n\n Returns:\n `NamedTuple`: A named tuple with `missing_keys` and `unexpected_keys` fields\n - `missing_keys` is a list of str containing the missing keys\n - `unexpected_keys` is a list of str containing the unexpected keys\n ", "language": "en", "n_whitespaces": 201, "n_words": 115, "vocab_size": 67 }
def load_sharded_checkpoint(model, folder, strict=True): # Load the index index_file = os.path.join(folder, WEIGHTS_INDEX_NAME) if not os.path.isfile(index_file): raise ValueError(f"Can't find a checkpoint index ({WEIGHTS_INDEX_NAME}) in {folder}.") with open(index_file, "r", encoding="utf-8") as f: index = json.load(f) shard_files = list(set(index["weight_map"].values())) # If strict=True, error before loading any of the state dicts. loaded_keys = index["weight_map"].keys() model_keys = model.state_dict().keys() missing_keys = [key for key in model_keys if key not in loaded_keys] unexpected_keys = [key for key in loaded_keys if key not in model_keys] if strict and (len(missing_keys) > 0 or len(unexpected_keys) > 0): error_message = f"Error(s) in loading state_dict for {model.__class__.__name__}" if len(missing_keys) > 0: str_missing_keys = ",".join([f'"{k}"' for k in missing_keys]) error_message += f"\nMissing key(s): {str_missing_keys}." if len(unexpected_keys) > 0: str_unexpected_keys = ",".join([f'"{k}"' for k in unexpected_keys]) error_message += f"\nMissing key(s): {str_unexpected_keys}." raise RuntimeError(error_message) for shard_file in shard_files: state_dict = torch.load(os.path.join(folder, shard_file)) model.load_state_dict(state_dict, strict=False) # Make sure memory is fred before we load the next state dict. del state_dict gc.collect() # Return the same thing as PyTorch load_state_dict function. return torch.nn.modules.module._IncompatibleKeys(missing_keys, unexpected_keys)
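For orientation, a sketch of what the weight-map index consumed above typically looks like; the file and parameter names are invented, and the exact schema is defined by the transformers sharding utilities rather than by this example.

import json

# Invented example of the index file (WEIGHTS_INDEX_NAME) the loader expects:
# a weight_map from parameter name to the shard file that stores it.
index = {
    "metadata": {"total_size": 28000000},
    "weight_map": {
        "embeddings.word_embeddings.weight": "pytorch_model-00001-of-00002.bin",
        "encoder.layer.0.attention.self.query.weight": "pytorch_model-00001-of-00002.bin",
        "encoder.layer.11.output.dense.weight": "pytorch_model-00002-of-00002.bin",
    },
}

# The loader only needs the set of distinct shard files, loaded one at a time.
shard_files = sorted(set(index["weight_map"].values()))
print(json.dumps(shard_files, indent=2))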
@frappe.whitelist() @frappe.validate_and_sanitize_search_inputs
13,980
65,661
62
erpnext/controllers/queries.py
86
27
def customer_query(doctype, txt, searchfield, start, page_len, filters): conditions = [] cust_master_name = frappe.defaults.get_user_default("cust_master_name") if cust_master_name == "Customer Name": fields = ["name", "customer_group", "territory"] else: fields = ["name", "customer_name", "customer_group", "territory"] fields = get_fields("Customer", fields) searchfields = frappe.get_meta("Customer").get_search_fields() searchfields = " or ".join(field + " like %(txt)s" for field in searchfields) return frappe.db.sql( .format( **{ "fields": ", ".join(fields), "scond": searchfields, "mcond": get_match_cond(doctype), "fc
style: format code with black
customer_query
494bd9ef78313436f0424b918f200dab8fc7c20b
erpnext
queries.py
16
30
https://github.com/frappe/erpnext.git
3
172
1
69
322
Python
{ "docstring": "select {fields} from `tabCustomer`\n\t\twhere docstatus < 2\n\t\t\tand ({scond}) and disabled=0\n\t\t\t{fcond} {mcond}\n\t\torder by\n\t\t\tif(locate(%(_txt)s, name), locate(%(_txt)s, name), 99999),\n\t\t\tif(locate(%(_txt)s, customer_name), locate(%(_txt)s, customer_name), 99999),\n\t\t\tidx desc,\n\t\t\tname, customer_name\n\t\tlimit %(start)s, %(page_len)s", "language": "en", "n_whitespaces": 23, "n_words": 33, "vocab_size": 27 }
def customer_query(doctype, txt, searchfield, start, page_len, filters): conditions = [] cust_master_name = frappe.defaults.get_user_default("cust_master_name") if cust_master_name == "Customer Name": fields = ["name", "customer_group", "territory"] else: fields = ["name", "customer_name", "customer_group", "territory"] fields = get_fields("Customer", fields) searchfields = frappe.get_meta("Customer").get_search_fields() searchfields = " or ".join(field + " like %(txt)s" for field in searchfields) return frappe.db.sql( .format( **{ "fields": ", ".join(fields), "scond": searchfields, "mcond": get_match_cond(doctype), "fcond": get_filters_cond(doctype, filters, conditions).replace("%", "%%"), } ), {"txt": "%%%s%%" % txt, "_txt": txt.replace("%", ""), "start": start, "page_len": page_len}, ) # searches for supplier @frappe.whitelist() @frappe.validate_and_sanitize_search_inputs
18,042
85,773
21
src/sentry/tagstore/base.py
7
6
def get_group_tag_value_count(self, group, environment_id, key): raise No
feat(perf_issues): Fix `GroupTagKeyDetailsEndpoint` to work for performance issues (#38860) This allows this endpoint to return results for performance issues.
get_group_tag_value_count
72e351082168f68cbaa5700a51e8ed577222e887
sentry
base.py
6
2
https://github.com/getsentry/sentry.git
1
14
0
7
22
Python
{ "docstring": "\n >>> get_group_tag_value_count(group, 3, 'key1')\n ", "language": "en", "n_whitespaces": 19, "n_words": 4, "vocab_size": 4 }
def get_group_tag_value_count(self, group, environment_id, key): raise NotImplementedError
9,144
47,522
174
tests/jobs/test_scheduler_job.py
47
35
def test_enqueue_task_instances_sets_ti_state_to_None_if_dagrun_in_finish_state(self, state, dag_maker): dag_i
Replace usage of `DummyOperator` with `EmptyOperator` (#22974) * Replace usage of `DummyOperator` with `EmptyOperator`
test_enqueue_task_instances_sets_ti_state_to_None_if_dagrun_in_finish_state
49e336ae0302b386a2f47269a6d13988382d975f
airflow
test_scheduler_job.py
11
17
https://github.com/apache/airflow.git
1
139
0
38
233
Python
{ "docstring": "This tests that task instances whose dagrun is in finished state are not queued", "language": "en", "n_whitespaces": 13, "n_words": 14, "vocab_size": 14 }
def test_enqueue_task_instances_sets_ti_state_to_None_if_dagrun_in_finish_state(self, state, dag_maker): dag_id = 'SchedulerJobTest.test_enqueue_task_instances_with_queued_state' task_id_1 = 'dummy' session = settings.Session() with dag_maker(dag_id=dag_id, start_date=DEFAULT_DATE, session=session): task1 = EmptyOperator(task_id=task_id_1) self.scheduler_job = SchedulerJob(subdir=os.devnull) dr1 = dag_maker.create_dagrun(state=state) ti = dr1.get_task_instance(task1.task_id, session) ti.state = State.SCHEDULED session.merge(ti) session.commit() with patch.object(BaseExecutor, 'queue_command') as mock_queue_command: self.scheduler_job._enqueue_task_instances_with_queued_state([ti]) ti.refresh_from_db() assert ti.state == State.NONE mock_queue_command.assert_not_called()
12,746
61,907
633
.venv/lib/python3.8/site-packages/pip/_vendor/distlib/compat.py
155
21
def match_hostname(cert, hostname): if not cert: raise ValueError("empty or no certificate, match_hostname needs a " "SSL socket or SSL context with either " "CERT_OPTIONAL or CERT_REQUIRED") dnsnames = [] san = cert.get('subjectAltName', ()) for key, value in san: if key == 'DNS': if _dnsname_match(value, hostname): return dnsnames.append(value) if not dnsnames: # The subject is only checked when there is no dNSName entry # in subjectAltName for sub in cert.get('subject', ()): for key, value in sub: # XXX according to RFC 2818, the most specific Common Name # must be used. if key == 'commonName':
upd; format
match_hostname
f638f5d0e6c8ebed0e69a6584bc7f003ec646580
transferlearning
compat.py
15
30
https://github.com/jindongwang/transferlearning.git
12
166
0
106
314
Python
{ "docstring": "Verify that *cert* (in decoded format as returned by\n SSLSocket.getpeercert()) matches the *hostname*. RFC 2818 and RFC 6125\n rules are followed, but IP addresses are not accepted for *hostname*.\n\n CertificateError is raised on failure. On success, the function\n returns nothing.\n ", "language": "en", "n_whitespaces": 76, "n_words": 40, "vocab_size": 36 }
def match_hostname(cert, hostname): if not cert: raise ValueError("empty or no certificate, match_hostname needs a " "SSL socket or SSL context with either " "CERT_OPTIONAL or CERT_REQUIRED") dnsnames = [] san = cert.get('subjectAltName', ()) for key, value in san: if key == 'DNS': if _dnsname_match(value, hostname): return dnsnames.append(value) if not dnsnames: # The subject is only checked when there is no dNSName entry # in subjectAltName for sub in cert.get('subject', ()): for key, value in sub: # XXX according to RFC 2818, the most specific Common Name # must be used. if key == 'commonName': if _dnsname_match(value, hostname): return dnsnames.append(value) if len(dnsnames) > 1: raise CertificateError("hostname %r " "doesn't match either of %s" % (hostname, ', '.join(map(repr, dnsnames)))) elif len(dnsnames) == 1: raise CertificateError("hostname %r " "doesn't match %r" % (hostname, dnsnames[0])) else: raise CertificateError("no appropriate commonName or " "subjectAltName fields were found") try: from types import SimpleNamespace as Container except ImportError: # pragma: no cover
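An illustrative sketch of the decoded certificate dict that ssl's getpeercert() hands to code like the above, plus a tiny helper showing the subjectAltName-before-commonName preference; the certificate values and the helper name are made up, and no wildcard handling is attempted here.

# Invented example of the decoded certificate dict returned by
# ssl.SSLSocket.getpeercert(); match_hostname() above walks exactly these keys.
cert = {
    "subject": ((("commonName", "example.com"),),),
    "subjectAltName": (("DNS", "example.com"), ("DNS", "www.example.com")),
}

def names_from_cert(cert):
    # DNS entries in subjectAltName win; commonName is only a fallback.
    dns_names = [value for key, value in cert.get("subjectAltName", ()) if key == "DNS"]
    if dns_names:
        return dns_names
    return [value for rdn in cert.get("subject", ())
            for key, value in rdn if key == "commonName"]

print(names_from_cert(cert))  # ['example.com', 'www.example.com']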
22,008
104,860
112
src/datasets/iterable_dataset.py
18
19
def take(self, n) -> "IterableDataset": ex_iterable = TakeExamplesIterable(self._ex_iterable, n) return iterable_dataset( ex_iterable=ex_iterable, info=self._info.copy(), split=self._split, format_type=self._format_type, shuffling=copy.deepcopy(self._shuffling), token_per_repo_id=self._token_per_repo_id,
Stream private zipped images (#4173) * keep track of repo_id and token to decode remote images * add test * fix * docstrings + comments * fix string_to_dict * fix tests
take
f51b6994db27ea69261ef919fb7775928f9ec10b
datasets
iterable_dataset.py
11
29
https://github.com/huggingface/datasets.git
1
67
0
17
106
Python
{ "docstring": "\n Create a new IterableDataset with only the first ``n`` elements.\n\n Args:\n n (:obj:`int`): number of elements to take.\n\n Example:\n\n ```py\n >>> from datasets import load_dataset\n >>> ds = load_dataset(\"rotten_tomatoes\", split=\"train\", streaming=True)\n >>> small_ds = ds.take(2)\n >>> list(small_ds)\n [{'label': 1,\n 'text': 'the rock is destined to be the 21st century\\'s new \" conan \" and that he\\'s going to make a splash even greater than arnold schwarzenegger , jean-claud van damme or steven segal .'},\n {'label': 1,\n 'text': 'the gorgeously elaborate continuation of \" the lord of the rings \" trilogy is so huge that a column of words cannot adequately describe co-writer/director peter jackson\\'s expanded vision of j . r . r . tolkien\\'s middle-earth .'}]\n ```\n ", "language": "en", "n_whitespaces": 230, "n_words": 117, "vocab_size": 90 }
def take(self, n) -> "IterableDataset": ex_iterable = TakeExamplesIterable(self._ex_iterable, n) return iterable_dataset( ex_iterable=ex_iterable, info=self._info.copy(), split=self._split, format_type=self._format_type, shuffling=copy.deepcopy(self._shuffling), token_per_repo_id=self._token_per_repo_id, )
19,068
94,333
615
tests/sentry/event_manager/test_event_manager.py
66
29
def test_category_match_group(self): from sentry.grouping.enhancer import Enhancements enhancement = Enhancements.from_config_string( , ) event = make_event( platform="native", exception={ "values": [ { "type": "Hello", "stacktrace": { "frames": [ { "function": "foo", }, { "function": "bar", },
test(event_manager): Fix incorrect invocations of manager.save (#36615)
test_category_match_group
39cfdcb446e74732c67ce07d7dd8d8d5ace471b1
sentry
test_event_manager.py
20
38
https://github.com/getsentry/sentry.git
1
154
0
47
265
Python
{ "docstring": "\n Regression test to ensure categories are applied consistently and don't\n produce hash mismatches.\n \n function:foo category=foo_like\n category:foo_like -group\n ", "language": "en", "n_whitespaces": 73, "n_words": 17, "vocab_size": 17 }
def test_category_match_group(self): from sentry.grouping.enhancer import Enhancements enhancement = Enhancements.from_config_string( , ) event = make_event( platform="native", exception={ "values": [ { "type": "Hello", "stacktrace": { "frames": [ { "function": "foo", }, { "function": "bar", }, ] }, } ] }, ) manager = EventManager(event) manager.normalize() grouping_config = { "enhancements": enhancement.dumps(), "id": "mobile:2021-02-12", } manager.get_data()["grouping_config"] = grouping_config event1 = manager.save(self.project.id) event2 = Event(event1.project_id, event1.event_id, data=event1.data) assert event1.get_hashes().hashes == event2.get_hashes(grouping_config).hashes
19,326
96,559
366
src/sentry/plugins/bases/notify.py
48
26
def notify(self, notification, raise_exception=False): event = notification.event try: return self.notify_users( event.group, event, triggering_rules=[r.label for r in notification.rules] ) except ( ApiError, HTTPError, InvalidIdentity, PluginError, SSLError, UrllibHTTPError, ) as err: self.logger.info( "notification-plugin.notify-failed", extra={ "error": str(err), "plugin": self.slug, "project_id": event.group.project_id, "organization_id": event.group.project.organization_id, }, ) if raise_exception:
fix(plugins): Silence error (#32042)
notify
542484c0cd71625e62e086f3f7c5aaf85360f724
sentry
notify.py
16
26
https://github.com/getsentry/sentry.git
4
114
0
45
175
Python
{ "docstring": "\n This calls the notify_users method of the plugin.\n Normally this method eats the error and logs it but if we\n set raise_exception=True like we do for the test plugin button,\n the exception is raised\n ", "language": "en", "n_whitespaces": 70, "n_words": 34, "vocab_size": 28 }
def notify(self, notification, raise_exception=False): event = notification.event try: return self.notify_users( event.group, event, triggering_rules=[r.label for r in notification.rules] ) except ( ApiError, HTTPError, InvalidIdentity, PluginError, SSLError, UrllibHTTPError, ) as err: self.logger.info( "notification-plugin.notify-failed", extra={ "error": str(err), "plugin": self.slug, "project_id": event.group.project_id, "organization_id": event.group.project.organization_id, }, ) if raise_exception: raise err return False
17,345
82,298
51
cms/tests/test_rendering.py
16
14
def test_processors(self): from djangocms_text_ckeditor.cms_plugins import TextPlugin from cms.plugin_pool import plugin_pool instance = CMSPlugin.objects.all()[0].get_plugin_instance()[0] load_from_string = self.load_template_from_string
Enabled isort workflow (#7200) * Ran isort * Enabled isort workflow Co-authored-by: Vinit Kumar <mail@vinitkumar.me>
test_processors
a3110e1ff24085373898c7d2a85f628abeb8518d
django-cms
test_rendering.py
13
27
https://github.com/django-cms/django-cms.git
1
169
0
13
69
Python
{ "docstring": "\n Tests that plugin processors and plugin context processors can be defined\n in settings and are working and that extra plugin context processors can be\n passed to PluginContext.\n ", "language": "en", "n_whitespaces": 56, "n_words": 27, "vocab_size": 17 }
def test_processors(self): from djangocms_text_ckeditor.cms_plugins import TextPlugin from cms.plugin_pool import plugin_pool instance = CMSPlugin.objects.all()[0].get_plugin_instance()[0] load_from_string = self.load_template_from_string
115,025
316,447
25
tests/test_config_entries.py
13
9
async def test_unique_id_ignore(hass, manager): async_setup_entry = AsyncMock(return_value=False) mock_integration(hass, MockModule("comp", async_setup_entry=async_setup_entry)) mock_entity_platform(hass, "config_flow.comp", None)
Search/replace RESULT_TYPE_* by FlowResultType enum (#74642)
test_unique_id_ignore
7cd68381f1d4f58930ffd631dfbfc7159d459832
core
test_config_entries.py
10
24
https://github.com/home-assistant/core.git
1
185
0
13
63
Python
{ "docstring": "Test that we can ignore flows that are in progress and have a unique ID.", "language": "en", "n_whitespaces": 14, "n_words": 15, "vocab_size": 14 }
async def test_unique_id_ignore(hass, manager): async_setup_entry = AsyncMock(return_value=False) mock_integration(hass, MockModule("comp", async_setup_entry=async_setup_entry)) mock_entity_platform(hass, "config_flow.comp", None)
53,448
212,840
58
PySimpleGUI.py
19
10
def bind(self, bind_string, key, propagate=True): if not self._is_window_created('tried Window.bind'): return self.TKroot.bind(bind_string, lambda evt: self._user_bind_callback(bind_string, evt, propagate)) self.user_bind_d
Added propagate parameter to the Element.bind and Window.bind methods. Indicates whether tkinter should propagate the event to the corresponding element/window or stop with the user callback
bind
b3680477c755277192715b343e9cd4254de7c45e
PySimpleGUI
PySimpleGUI.py
10
5
https://github.com/PySimpleGUI/PySimpleGUI.git
2
54
0
19
85
Python
{ "docstring": "\n Used to add tkinter events to a Window.\n The tkinter specific data is in the Window's member variable user_bind_event\n :param bind_string: The string tkinter expected in its bind function\n :type bind_string: (str)\n :param key: The event that will be generated when the tkinter event occurs\n :type key: str | int | tuple | object\n :param propagate: If True then tkinter will be told to propagate the event\n :type propagate: (bool)\n ", "language": "en", "n_whitespaces": 157, "n_words": 70, "vocab_size": 46 }
def bind(self, bind_string, key, propagate=True): if not self._is_window_created('tried Window.bind'): return self.TKroot.bind(bind_string, lambda evt: self._user_bind_callback(bind_string, evt, propagate)) self.user_bind_dict[bind_string] = key
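A small usage sketch for Window.bind as captured above; the layout and the '<FocusIn>' bind string are illustrative assumptions, and the window must be finalized before binding.

import PySimpleGUI as sg

window = sg.Window('Demo', [[sg.Input(key='-IN-')], [sg.Button('Ok')]], finalize=True)
# Fire the custom event '+FOCUS+' whenever tkinter reports <FocusIn>;
# propagate defaults to True, so tkinter still handles the event itself.
window.bind('<FocusIn>', '+FOCUS+')

while True:
    event, values = window.read()
    if event in (sg.WIN_CLOSED, 'Ok'):
        break
    if event == '+FOCUS+':
        print('focus event; tkinter details are in window.user_bind_event')
window.close()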
52,997
211,000
759
ppdet/modeling/heads/cascade_head.py
167
52
def forward(self, body_feats=None, rois=None, rois_num=None, inputs=None): targets = [] if self.training: rois, rois_num, targets = self.bbox_assigner(rois, rois_num, inputs) targets_list = [targets] self.assigned_rois = (rois, rois_num) self.assigned_targets = targets pred_bbox = None head_out_list = [] for i in range(self.num_cascade_stages): if i > 0: rois, rois_num = self._get_rois_from_boxes(pred_bbox, inputs['im_shape']) if self.training: rois, rois_num, targets = self.bbox_assigner( rois, rois_num, inputs, i, is_cascade=True) targets_list.append(targets) rois_feat = self.roi_extractor(body_feats, rois, rois_num) bbox_feat = self.head(rois_feat, i) scores = self.bbox_score_list[i](bbox_feat) deltas = self.bbox_delta_list[i](bbox_feat)
upgrade cascade model (#6346) * add reg_class_agnostic * add loss_rpn_bbox
forward
d409ec06779e9de0cdbd76af4dc2c00b4b58ccb0
PaddleDetection
cascade_head.py
17
41
https://github.com/PaddlePaddle/PaddleDetection.git
10
390
0
107
585
Python
{ "docstring": "\n body_feats (list[Tensor]): Feature maps from backbone\n rois (Tensor): RoIs generated from RPN module\n rois_num (Tensor): The number of RoIs in each image\n inputs (dict{Tensor}): The ground-truth of image\n ", "language": "en", "n_whitespaces": 64, "n_words": 28, "vocab_size": 22 }
def forward(self, body_feats=None, rois=None, rois_num=None, inputs=None): targets = [] if self.training: rois, rois_num, targets = self.bbox_assigner(rois, rois_num, inputs) targets_list = [targets] self.assigned_rois = (rois, rois_num) self.assigned_targets = targets pred_bbox = None head_out_list = [] for i in range(self.num_cascade_stages): if i > 0: rois, rois_num = self._get_rois_from_boxes(pred_bbox, inputs['im_shape']) if self.training: rois, rois_num, targets = self.bbox_assigner( rois, rois_num, inputs, i, is_cascade=True) targets_list.append(targets) rois_feat = self.roi_extractor(body_feats, rois, rois_num) bbox_feat = self.head(rois_feat, i) scores = self.bbox_score_list[i](bbox_feat) deltas = self.bbox_delta_list[i](bbox_feat) # TODO (lyuwenyu) Is it correct for only one class ? if not self.reg_class_agnostic and i < self.num_cascade_stages - 1: deltas = deltas.reshape([-1, self.num_classes, 4]) labels = scores[:, :-1].argmax(axis=-1) deltas = deltas[paddle.arange(deltas.shape[0]), labels] head_out_list.append([scores, deltas, rois]) pred_bbox = self._get_pred_bbox(deltas, rois, self.bbox_weight[i]) if self.training: loss = {} for stage, value in enumerate(zip(head_out_list, targets_list)): (scores, deltas, rois), targets = value loss_stage = self.get_loss(scores, deltas, targets, rois, self.bbox_weight[stage]) for k, v in loss_stage.items(): loss[k + "_stage{}".format( stage)] = v / self.num_cascade_stages return loss, bbox_feat else: scores, deltas, self.refined_rois = self.get_prediction( head_out_list) return (deltas, scores), self.head
56,997
223,601
24
python3.10.4/Lib/email/_header_value_parser.py
12
7
def get_fws(value): newvalue = value.lstrip() fws = WhiteSpaceTerminal(value[:len(value)-len(newvalue)], 'fws') return fws, newvalue
add python 3.10.4 for windows
get_fws
8198943edd73a363c266633e1aa5b2a9e9c9f526
XX-Net
_header_value_parser.py
13
4
https://github.com/XX-net/XX-Net.git
1
37
0
10
64
Python
{ "docstring": "FWS = 1*WSP\n\n This isn't the RFC definition. We're using fws to represent tokens where\n folding can be done, but when we are parsing the *un*folding has already\n been done so we don't need to watch out for CRLF.\n\n ", "language": "en", "n_whitespaces": 52, "n_words": 39, "vocab_size": 36 }
def get_fws(value): newvalue = value.lstrip() fws = WhiteSpaceTerminal(value[:len(value)-len(newvalue)], 'fws') return fws, newvalue
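A quick interactive sketch of get_fws from the record above; the header fragment is an arbitrary example and the import targets the private stdlib module the record comes from.

from email._header_value_parser import get_fws   # private module, shown for illustration

fws, rest = get_fws('   text/plain; charset=utf-8')
print(repr(str(fws)))   # '   '  (a WhiteSpaceTerminal token)
print(repr(rest))       # 'text/plain; charset=utf-8'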
37,386
158,218
147
d2l/mxnet.py
42
21
def load_data_snli(batch_size, num_steps=50): num_workers = d2l.get_dataloader_workers() data_dir = d2l.download_extract('SNLI') train_data = read_snli(data_dir, True) test_data = read_snli(data_dir, False) train_set = SNLIDataset(train_data,
[PaddlePaddle] Merge master into Paddle branch (#1186) * change 15.2 title in chinese version (#1109) change title ’15.2. 情感分析:使用递归神经网络‘ to ’15.2. 情感分析:使用循环神经网络‘ * 修改部分语义表述 (#1105) * Update r0.17.5 (#1120) * Bump versions in installation * 94行typo: (“bert.mall”)->(“bert.small”) (#1129) * line 313: "bert.mall" -> "bert.small" (#1130) * fix: update language as native reader (#1114) * Fix the translation of "stride" (#1115) * Update index.md (#1118) 修改部分语义表述 * Update self-attention-and-positional-encoding.md (#1133) 依照本书的翻译习惯,将pooling翻译成汇聚 * maybe a comment false (#1149) * maybe a little false * maybe a little false * A minor bug in the rcnn section (Chinese edition) (#1148) * Update bert.md (#1137) 一个笔误 # 假设batch_size=2,num_pred_positions=3 # 那么batch_idx应该是np.repeat( [0,1], 3 ) = [0,0,0,1,1,1] * Update calculus.md (#1135) * fix typo in git documentation (#1106) * fix: Update the Chinese translation in lr-scheduler.md (#1136) * Update lr-scheduler.md * Update chapter_optimization/lr-scheduler.md Co-authored-by: goldmermaid <goldpiggy@berkeley.edu> Co-authored-by: goldmermaid <goldpiggy@berkeley.edu> * fix translation for kaggle-house-price.md (#1107) * fix translation for kaggle-house-price.md * fix translation for kaggle-house-price.md Signed-off-by: sunhaizhou <haizhou.sun@smartmore.com> * Update weight-decay.md (#1150) * Update weight-decay.md 关于“k多选d”这一部分,中文读者使用排列组合的方式可能更容易理解 关于“给定k个变量,阶数的个数为...”这句话是有歧义的,不是很像中国话,应该是说“阶数为d的项的个数为...”。 并增加了一句对“因此即使是阶数上的微小变化,比如从$2$到$3$,也会显著增加我们模型的复杂性。”的解释 解释为何会增加复杂性以及为何需要细粒度工具。 * Update chapter_multilayer-perceptrons/weight-decay.md yep Co-authored-by: goldmermaid <goldpiggy@berkeley.edu> * Update chapter_multilayer-perceptrons/weight-decay.md yep Co-authored-by: goldmermaid <goldpiggy@berkeley.edu> Co-authored-by: goldmermaid <goldpiggy@berkeley.edu> * Fix a spelling error (#1161) * Update gru.md (#1152) The key distinction between vanilla RNNs and GRUs is that the latter support gating of the hidden state. 翻译错误 * Unify the function naming (#1113) Unify naming of the function 'init_xavier()'. * Update mlp-concise.md (#1166) * Update mlp-concise.md 语句不通顺 * Update environment.md 语序异常 * Update config.ini * fix the imprecise description (#1168) Co-authored-by: yuande <yuande> * fix typo in chapter_natural-language-processing-pretraining/glove.md (#1175) * Fix some typos. (#1163) * Update batch-norm.md (#1170) fixing typos u->x in article * Update linear-regression.md (#1090) We invoke Stuart Russell and Peter Norvig who, in their classic AI text book Artificial Intelligence: A Modern Approach :cite:Russell.Norvig.2016, pointed out that 原译文把who也直接翻译出来了。 * Update mlp.md (#1117) * Update mlp.md 修改部分语义表述 * Update chapter_multilayer-perceptrons/mlp.md Co-authored-by: goldmermaid <goldpiggy@berkeley.edu> * Update chapter_multilayer-perceptrons/mlp.md Co-authored-by: Aston Zhang <22279212+astonzhang@users.noreply.github.com> Co-authored-by: goldmermaid <goldpiggy@berkeley.edu> * Correct a translation error. (#1091) * Correct a translation error. 
* Update chapter_computer-vision/image-augmentation.md Co-authored-by: Aston Zhang <22279212+astonzhang@users.noreply.github.com> * Update aws.md (#1121) * Update aws.md * Update chapter_appendix-tools-for-deep-learning/aws.md Co-authored-by: Aston Zhang <22279212+astonzhang@users.noreply.github.com> * Update image-augmentation.md (#1093) * Update anchor.md (#1088) fix a minor issue in code * Update anchor.md * Update image-augmentation.md * fix typo and improve translation in chapter_linear-networks\softmax-regression.md (#1087) * Avoid `torch.meshgrid` user warning (#1174) Avoids the following user warning: ```python ~/anaconda3/envs/torch/lib/python3.10/site-packages/torch/functional.py:568: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at ../aten/src/ATen/native/TensorShape.cpp:2228.) return _VF.meshgrid(tensors, **kwargs) # type: ignore[attr-defined] ``` * bump to 2.0.0-beta1 * Update sequence.md * bump beta1 on readme * Add latex code block background to config * BLD: Bump python support version 3.9 (#1183) * BLD: Bump python support version 3.9 * Remove clear and manually downgrade protobuf 4.21.4 to 3.19.4 * BLD: Bump torch and tensorflow * Update Jenkinsfile * Update chapter_installation/index.md * Update chapter_installation/index.md Co-authored-by: Aston Zhang <22279212+astonzhang@users.noreply.github.com> * Update config.ini * Update INFO.md * Update INFO.md * Drop mint to show code in pdf, use Inconsolata font, apply code cell color (#1187) * resolve the conflicts * revise from publisher (#1089) * revise from publisher * d2l api * post_latex * revise from publisher * revise ch11 * Delete d2l-Copy1.bib * clear cache * rm d2lbook clear * debug anchor * keep original d2l doc Co-authored-by: Ubuntu <ubuntu@ip-172-31-12-66.us-west-2.compute.internal> Co-authored-by: Aston Zhang <22279212+astonzhang@users.noreply.github.com> Co-authored-by: Aston Zhang <asv325@gmail.com> * 重复语句 (#1188) Co-authored-by: Aston Zhang <22279212+astonzhang@users.noreply.github.com> * Improve expression for chapter_preliminaries/pandas.md (#1184) * Update pandas.md * Improve expression * Improve expression * Update chapter_preliminaries/pandas.md Co-authored-by: Aston Zhang <22279212+astonzhang@users.noreply.github.com> * Improce expression for chapter_preliminaries/linear-algebra.md (#1185) * Improce expression * Improve code comments * Update chapter_preliminaries/linear-algebra.md * Update chapter_preliminaries/linear-algebra.md * Update chapter_preliminaries/linear-algebra.md * Update chapter_preliminaries/linear-algebra.md Co-authored-by: Aston Zhang <22279212+astonzhang@users.noreply.github.com> * Fix multibox_detection bugs * Update d2l to 0.17.5 version * restore older version * Upgrade pandas * change to python3.8 * Test warning log * relocate warning log * test logs filtering * Update gru.md * Add DeprecationWarning filter * Test warning log * Update attention mechanisms & computational performance * Update multilayer perceptron& linear & convolution networks & computer vision * Update recurrent&optimition&nlp pretraining & nlp applications * ignore warnings * Update index.md * Update linear networks * Update multilayer perceptrons&deep learning computation * Update preliminaries * Check and Add warning filter * Update kaggle-cifar10.md * Update object-detection-dataset.md * Update ssd.md fcn.md * Update hybridize.md * Update hybridize.md Signed-off-by: sunhaizhou <haizhou.sun@smartmore.com> Co-authored-by: 
zhou201505013 <39976863+zhou201505013@users.noreply.github.com> Co-authored-by: Xinwei Liu <xinzone@outlook.com> Co-authored-by: Anirudh Dagar <anirudhdagar6@gmail.com> Co-authored-by: Aston Zhang <22279212+astonzhang@users.noreply.github.com> Co-authored-by: hugo_han <57249629+HugoHann@users.noreply.github.com> Co-authored-by: gyro永不抽风 <1247006353@qq.com> Co-authored-by: CanChengZheng <zcc550169544@163.com> Co-authored-by: linlin <jajupmochi@gmail.com> Co-authored-by: iuk <liukun0104@gmail.com> Co-authored-by: yoos <49556860+liyunlongaaa@users.noreply.github.com> Co-authored-by: Mr. Justice Lawrence John Wargrave <65226618+RUCWargrave@users.noreply.github.com> Co-authored-by: Chiyuan Fu <fuchiyuan2019@outlook.com> Co-authored-by: Sunhuashan <48636870+Sunhuashan@users.noreply.github.com> Co-authored-by: Haiker Sun <haizhou.uestc2011@gmail.com> Co-authored-by: Ming Liu <akira.liu@njnu.edu.cn> Co-authored-by: goldmermaid <goldpiggy@berkeley.edu> Co-authored-by: silenceZheng66 <13754430639@163.com> Co-authored-by: Wenchao Yan <56541797+YWonchall@users.noreply.github.com> Co-authored-by: Kiki2049 <55939997+Kiki2049@users.noreply.github.com> Co-authored-by: Krahets <krahets@163.com> Co-authored-by: friedmainfunction <73703265+friedmainfunction@users.noreply.github.com> Co-authored-by: Jameson <miraclecome@gmail.com> Co-authored-by: P. Yao <12227516+YaoPengCN@users.noreply.github.com> Co-authored-by: Yulv-git <34329208+Yulv-git@users.noreply.github.com> Co-authored-by: Liu,Xiao <45966993+liuxiao916@users.noreply.github.com> Co-authored-by: YIN, Gang <1246410+yingang@users.noreply.github.com> Co-authored-by: Joe-HZ <58297431+Joe-HZ@users.noreply.github.com> Co-authored-by: lybloveyou <102609904+lybloveyou@users.noreply.github.com> Co-authored-by: VigourJiang <jiangfuqiang154@163.com> Co-authored-by: zxhd863943427 <74853597+zxhd863943427@users.noreply.github.com> Co-authored-by: LYF <27893441+liyufan@users.noreply.github.com> Co-authored-by: Aston Zhang <asv325@gmail.com> Co-authored-by: xiaotinghe <xiaotih@amazon.com> Co-authored-by: Ubuntu <ubuntu@ip-172-31-12-66.us-west-2.compute.internal> Co-authored-by: Holly-Max <60691735+Holly-Max@users.noreply.github.com> Co-authored-by: HinGwenWoong <peterhuang0323@qq.com> Co-authored-by: Shuai Zhang <cheungdaven@gmail.com>
load_data_snli
b64b41d8c1ac23c43f7a4e3f9f6339d6f0012ab2
d2l-zh
mxnet.py
9
12
https://github.com/d2l-ai/d2l-zh.git
1
109
0
32
165
Python
{ "docstring": "Download the SNLI dataset and return data iterators and vocabulary.\n\n Defined in :numref:`sec_natural-language-inference-and-dataset`", "language": "en", "n_whitespaces": 15, "n_words": 13, "vocab_size": 12 }
def load_data_snli(batch_size, num_steps=50): num_workers = d2l.get_dataloader_workers() data_dir = d2l.download_extract('SNLI') train_data = read_snli(data_dir, True) test_data = read_snli(data_dir, False) train_set = SNLIDataset(train_data, num_steps) test_set = SNLIDataset(test_data, num_steps, train_set.vocab) train_iter = gluon.data.DataLoader(train_set, batch_size, shuffle=True, num_workers=num_workers) test_iter = gluon.data.DataLoader(test_set, batch_size, shuffle=False, num_workers=num_workers) return train_iter, test_iter, train_set.vocab
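A usage sketch in the d2l style, assuming the SNLI download succeeds and mxnet/gluon are installed; the batch size and step count below simply echo the defaults in the record.

train_iter, test_iter, vocab = load_data_snli(batch_size=128, num_steps=50)
print(len(vocab))                              # vocabulary size
for X, Y in train_iter:
    # X is a (premises, hypotheses) pair of token-id minibatches, Y the labels
    print(X[0].shape, X[1].shape, Y.shape)
    break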
34,972
151,197
189
freqtrade/freqai/utils.py
84
34
def plot_feature_importance(model, feature_names, pair, train_dir, count_max=50) -> None: try: import plotly.graph_objects as go from plotly.subplots import make_subplots except ImportError: logger.exception("Module plotly not found \n Please install using `pip3 install plotly`") exit(1) from freqtrade.plot.plotting import store_plot_file # Gather feature importance from model if "c
plot features as html instead of png
plot_feature_importance
86aa875bc9d5edeba04f908fe45b011e52045c83
freqtrade
utils.py
13
37
https://github.com/freqtrade/freqtrade.git
4
229
0
67
261
Python
{ "docstring": "\n Plot Best and Worst Features by importance for CatBoost model.\n Called once per sub-train.\n Usage: plot_feature_importance(\n model=model,\n feature_names=dk.training_features_list,\n pair=pair,\n train_dir=dk.data_path)\n ", "language": "en", "n_whitespaces": 89, "n_words": 20, "vocab_size": 20 }
def plot_feature_importance(model, feature_names, pair, train_dir, count_max=50) -> None: try: import plotly.graph_objects as go from plotly.subplots import make_subplots except ImportError: logger.exception("Module plotly not found \n Please install using `pip3 install plotly`") exit(1) from freqtrade.plot.plotting import store_plot_file # Gather feature importance from model if "catboost.core" in str(model.__class__): feature_importance = model.get_feature_importance() elif "lightgbm.sklearn" in str(model.__class__): feature_importance = model.feature_importances_ else: raise NotImplementedError(f"Cannot extract feature importance for {model.__class__}") # Data preparation fi_df = pd.DataFrame({ "feature_names": np.array(feature_names), "feature_importance": np.array(feature_importance) }) fi_df_top = fi_df.nlargest(count_max, "feature_importance")[::-1] fi_df_worst = fi_df.nsmallest(count_max, "feature_importance")[::-1] # Plotting
51,034
205,205
95
django/db/backends/sqlite3/introspection.py
23
12
def get_primary_key_column(self, cursor, table_name): cursor.execute( "PRAGMA table_info(%s)" % self.connection.ops.quote_name(table_name) ) for _, name, *_, pk in cursor.fetchall(): if pk: return name return
Refs #33476 -- Reformatted code with Black.
get_primary_key_column
9c19aff7c7561e3a82978a272ecdaad40dda5c00
django
introspection.py
12
8
https://github.com/django/django.git
3
50
0
22
80
Python
{ "docstring": "Return the column name of the primary key for the given table.", "language": "en", "n_whitespaces": 11, "n_words": 12, "vocab_size": 10 }
def get_primary_key_column(self, cursor, table_name): cursor.execute( "PRAGMA table_info(%s)" % self.connection.ops.quote_name(table_name) ) for _, name, *_, pk in cursor.fetchall(): if pk: return name return None
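A hedged sketch of reaching this introspection helper through Django's public connection API inside a configured project; the table name 'auth_user' is only an example.

from django.db import connection

with connection.cursor() as cursor:
    pk = connection.introspection.get_primary_key_column(cursor, 'auth_user')
print(pk)   # e.g. 'id', or None when the table has no primary key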
17,455
82,603
301
cms/utils/setup.py
95
14
def validate_settings(): try: django_backend = [x for x in settings.TEMPLATES if x['BACKEND'] == 'django.template.backends.django.DjangoTemplates'][0] except IndexError: raise ImproperlyConfigured( "django CMS requires django.template.context_processors.request in " "'django.template.backends.django.DjangoTemplates' context processors."
fix: Adds a deprecation warning for SEND_BROKEN_LINK_EMAILS (#7420) * Fix: toolbar bug 3.10.rc1 * Feat: Dark mode support, including input from @marksweb, bugfix for tooltips * Upstream change to be able to merge * Feat: Dark mode support, including input from @marksweb, bugfix for tooltips * Revert "Fix: toolbar bug 3.10.rc1" This reverts commit 592a2b604e8f72b8e9c948e83163394cc6e8fe3d. * Fix: Recommit toolbar fix (??) * Fix: After lint failure: Remove spaces added by PyCharm * Fix: Wizzard button color * Fix: Correct toolbar according to cms_path Fix: Avoid unnecessary toolbar loading * TASK: use isort to sort imports * Fix: Move CMS.API.Toolbar.get_color_scheme to CMS.API.Helpers.getColorScheme and CMS.API.Toolbar.set_color_scheme to CMS.API.Helpers.setColorScheme * Fix: Typo in comment * Fix: Typos in comments * Fix: Typos in comments * Add: Changelog entry * Fix: base unit test for js frontend * Add: Basic set/get color scheme test * fix: deprecate SEND_BROKEN_LINK_EMAILS setting * fix: flake8 w504 Co-authored-by: Vinit Kumar <mail@vinitkumar.me> Co-authored-by: Simon Krull <krull@punkt.de> Co-authored-by: Mark Walker <theshow@gmail.com>
validate_settings
d38f4a1cc7fc6b9e06a01622dd584329b73b410d
django-cms
setup.py
14
21
https://github.com/django-cms/django-cms.git
8
108
0
68
201
Python
{ "docstring": "\n Check project settings file for required options\n ", "language": "en", "n_whitespaces": 14, "n_words": 7, "vocab_size": 7 }
def validate_settings(): try: django_backend = [x for x in settings.TEMPLATES if x['BACKEND'] == 'django.template.backends.django.DjangoTemplates'][0] except IndexError: raise ImproperlyConfigured( "django CMS requires django.template.context_processors.request in " "'django.template.backends.django.DjangoTemplates' context processors." ) context_processors = django_backend.get('OPTIONS', {}).get('context_processors', []) if ('django.core.context_processors.request' not in context_processors and # noqa: W504 'django.template.context_processors.request' not in context_processors): raise ImproperlyConfigured("django CMS requires django.template.context_processors.request in " "'django.template.backends.django.DjangoTemplates' context processors.") if ( hasattr(settings, "SEND_BROKEN_LINK_EMAILS") and # noqa: W504 "django.middleware.common.BrokenLinkEmailsMiddleware" not in getattr(settings, "MIDDLEWARE", []) ): warnings.warn('The setting "SEND_BROKEN_LINK_EMAILS" will not be honored by django CMS as of version 4.1. ' 'Add "django.middleware.common.BrokenLinkEmailsMiddleware" to your MIDDLEWARE settings ' 'instead.', DeprecationWarning)
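A minimal settings sketch that would satisfy the checks above; this is an assumed project configuration, not part of the record.

# settings.py (illustrative)
TEMPLATES = [{
    'BACKEND': 'django.template.backends.django.DjangoTemplates',
    'DIRS': [],
    'APP_DIRS': True,
    'OPTIONS': {
        'context_processors': [
            'django.template.context_processors.request',   # required by django CMS
        ],
    },
}]
# Preferred over the deprecated SEND_BROKEN_LINK_EMAILS setting:
MIDDLEWARE = [
    'django.middleware.common.BrokenLinkEmailsMiddleware',
]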
35,209
152,965
44
modin/config/envvars.py
16
4
def get(cls): min_partition_size = super().get() assert min_partition_size > 0, "`min_partition_size` should be > 0" return min_partition_size
REFACTOR-#3768: change 'compute_chunksize' signature (#3769) Co-authored-by: Yaroslav Igoshev <Poolliver868@mail.ru> Signed-off-by: Anatoly Myachev <anatoly.myachev@intel.com>
get
0bdc482d6f1682e103b4c4d7ee7c4d505d2d3b1c
modin
envvars.py
10
4
https://github.com/modin-project/modin.git
1
23
0
13
42
Python
{ "docstring": "\n Get ``MinPartitionSize`` with extra checks.\n\n Returns\n -------\n int\n ", "language": "en", "n_whitespaces": 44, "n_words": 8, "vocab_size": 8 }
def get(cls): min_partition_size = super().get() assert min_partition_size > 0, "`min_partition_size` should be > 0" return min_partition_size
27,510
124,089
531
python/ray/tune/examples/pbt_function.py
207
25
def pbt_function(config): lr = config["lr"] accuracy = 0.0 # end = 1000 start = 0 if session.get_checkpoint(): state = session.get_checkpoint().to_dict() accuracy = state["acc"] start = state["step"] midpoint = 100 # lr starts decreasing after acc > midpoint q_tolerance = 3 # penalize exceeding lr by more than this multiple noise_level = 2 # add gaussian noise to the acc increase # triangle wave: # - start at 0.001 @ t=0, # - peak at 0.01 @ t=midpoint, # - end at 0.001 @ t=midpoint * 2, for step in range(start, 100): if accuracy < midpoint: optimal_lr = 0.01 * accuracy / midpoint else: optimal_lr = 0.01 - 0.01 * (accuracy - midpoint) / midpoint optimal_lr = min(0.01, max(0.001, optimal_lr)) # compute accuracy increase q_err = max(lr, optimal_lr) / min(lr, optimal_lr) if q_err < q_tolerance: accuracy
[air] update documentation to use `session.report` (#26051) Update documentation to use `session.report`. Next steps: 1. Update our internal caller to use `session.report`. Most importantly, CheckpointManager and DataParallelTrainer. 2. Update `get_trial_resources` to use PGF notions to incorporate the requirement of ResourceChangingScheduler. @Yard1 3. After 2 is done, change all `tune.get_trial_resources` to `session.get_trial_resources` 4. [internal implementation] remove special checkpoint handling logic from huggingface trainer. Optimize the flow for checkpoint conversion with `session.report`. Co-authored-by: Antoni Baum <antoni.baum@protonmail.com>
pbt_function
ac831fded416381ad3c7fe2ba135eaa4aaab5879
ray
pbt_function.py
16
37
https://github.com/ray-project/ray.git
7
253
0
114
407
Python
{ "docstring": "Toy PBT problem for benchmarking adaptive learning rate.\n\n The goal is to optimize this trainable's accuracy. The accuracy increases\n fastest at the optimal lr, which is a function of the current accuracy.\n\n The optimal lr schedule for this problem is the triangle wave as follows.\n Note that many lr schedules for real models also follow this shape:\n\n best lr\n ^\n | /\\\n | / \\\n | / \\\n | / \\\n ------------> accuracy\n\n In this problem, using PBT with a population of 2-4 is sufficient to\n roughly approximate this lr schedule. Higher population sizes will yield\n faster convergence. Training will not converge without PBT.\n ", "language": "en", "n_whitespaces": 177, "n_words": 104, "vocab_size": 71 }
def pbt_function(config): lr = config["lr"] accuracy = 0.0 # end = 1000 start = 0 if session.get_checkpoint(): state = session.get_checkpoint().to_dict() accuracy = state["acc"] start = state["step"] midpoint = 100 # lr starts decreasing after acc > midpoint q_tolerance = 3 # penalize exceeding lr by more than this multiple noise_level = 2 # add gaussian noise to the acc increase # triangle wave: # - start at 0.001 @ t=0, # - peak at 0.01 @ t=midpoint, # - end at 0.001 @ t=midpoint * 2, for step in range(start, 100): if accuracy < midpoint: optimal_lr = 0.01 * accuracy / midpoint else: optimal_lr = 0.01 - 0.01 * (accuracy - midpoint) / midpoint optimal_lr = min(0.01, max(0.001, optimal_lr)) # compute accuracy increase q_err = max(lr, optimal_lr) / min(lr, optimal_lr) if q_err < q_tolerance: accuracy += (1.0 / q_err) * random.random() elif lr > optimal_lr: accuracy -= (q_err - q_tolerance) * random.random() accuracy += noise_level * np.random.normal() accuracy = max(0, accuracy) checkpoint = None if step % 3 == 0: checkpoint = Checkpoint.from_dict({"acc": accuracy, "step": start}) session.report( { "mean_accuracy": accuracy, "cur_lr": lr, "optimal_lr": optimal_lr, # for debugging "q_err": q_err, # for debugging "done": accuracy > midpoint * 2, # this stops the training process }, checkpoint=checkpoint, )
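A hedged sketch of driving this trainable with Population Based Training via the Ray 2.x Tuner API; the scheduler arguments and search space below are illustrative, not taken from the record.

from ray import tune
from ray.tune.schedulers import PopulationBasedTraining

pbt = PopulationBasedTraining(
    time_attr="training_iteration",
    perturbation_interval=4,
    hyperparam_mutations={"lr": tune.uniform(0.0001, 0.02)},
)
tuner = tune.Tuner(
    pbt_function,
    tune_config=tune.TuneConfig(
        scheduler=pbt, metric="mean_accuracy", mode="max", num_samples=4
    ),
    param_space={"lr": 0.0001},
)
results = tuner.fit()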
84,764
284,498
77
openbb_terminal/stocks/options/hedge/hedge_model.py
53
12
def add_hedge_option(price, implied_volatility, strike, days, side): # Determine delta position given the option delta = calc_delta(price, implied_volatility, strike, days, 0, side) # Determine gamma position given the option gamma = calc_gamma(price, implied_volatility, strike, days, 0) # Determine vega position given the option vega = calc_vega(price, implied_volatility, strike, days, 0) return delta, gamma, vega
Feature/hedge (#1768) * [Bug] Incorrect log for reddit keys. #1733 fix * Create new feature-hedge * Significantly improve code of hedge menu * More robust * Robustness * Fix tests * Fix can't multiply sequence by non-int of type 'numpy.float64' error * Temporary fix of singular matrix error. Return first feasible solution * Update Hugo Documentation * Combining menus and cleaning up code * Tidy up call_exp * Update tests Round 1 * Update tests Round 2 * Fix linting error * Fix linting? * Fixed glitch Co-authored-by: JerBouma <jer.bouma@gmail.com> Co-authored-by: James Maslek <jmaslek11@gmail.com> Co-authored-by: Colin Delahunty <72827203+colin99d@users.noreply.github.com> Co-authored-by: colin99d <colin99delahunty@gmail.com> Co-authored-by: didierlopes.eth <dro.lopes@campus.fct.unl.pt>
add_hedge_option
54a1b6f545a0016c576e9e00eef5c003d229dacf
OpenBBTerminal
hedge_model.py
8
5
https://github.com/OpenBB-finance/OpenBBTerminal.git
1
64
0
25
88
Python
{ "docstring": "Determine the delta, gamma and vega value of the portfolio and/or options.\n\n Parameters\n ----------\n price: int\n The price.\n implied_volatility: float\n The implied volatility.\n strike: float\n The strike price.\n days: float\n The amount of days until expiration. Use annual notation thus a month would be 30 / 360.\n sign: int\n Whether you have a long (1) or short (-1) position\n\n Returns\n -------\n delta: float\n gamma: float\n portfolio: float\n ", "language": "en", "n_whitespaces": 141, "n_words": 67, "vocab_size": 54 }
def add_hedge_option(price, implied_volatility, strike, days, side): # Determine delta position given the option delta = calc_delta(price, implied_volatility, strike, days, 0, side) # Determine gamma position given the option gamma = calc_gamma(price, implied_volatility, strike, days, 0) # Determine vega position given the option vega = calc_vega(price, implied_volatility, strike, days, 0) return delta, gamma, vega
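A quick numerical sketch for the hedging helper above; the option parameters (price 100, 20% implied volatility, strike 105, one month in 30/360 notation, long side) are arbitrary.

delta, gamma, vega = add_hedge_option(
    price=100, implied_volatility=0.20, strike=105, days=30 / 360, side=1
)
print(f"delta={delta:.4f} gamma={gamma:.4f} vega={vega:.4f}")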
57,028
223,645
93
python3.10.4/Lib/email/charset.py
33
9
def header_encode(self, string): codec = self.output_codec or 'us-ascii' header_bytes = _encode(string, codec) # 7bit/8bit encodings return the string unchanged (modulo conversions) encoder_module = self._get_encoder(header_bytes) if encoder_module is None: return string return encoder
add python 3.10.4 for windows
header_encode
8198943edd73a363c266633e1aa5b2a9e9c9f526
XX-Net
charset.py
8
7
https://github.com/XX-net/XX-Net.git
3
47
0
26
78
Python
{ "docstring": "Header-encode a string by converting it first to bytes.\n\n The type of encoding (base64 or quoted-printable) will be based on\n this charset's `header_encoding`.\n\n :param string: A unicode string for the header. It must be possible\n to encode this string to bytes using the character set's\n output codec.\n :return: The encoded string, with RFC 2047 chrome.\n ", "language": "en", "n_whitespaces": 113, "n_words": 55, "vocab_size": 47 }
def header_encode(self, string): codec = self.output_codec or 'us-ascii' header_bytes = _encode(string, codec) # 7bit/8bit encodings return the string unchanged (modulo conversions) encoder_module = self._get_encoder(header_bytes) if encoder_module is None: return string return encoder_module.header_encode(header_bytes, codec)
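A standard-library usage sketch for the method above; Charset('utf-8') picks base64 or quoted-printable for the header, whichever comes out shorter.

from email.charset import Charset

cs = Charset('utf-8')
encoded = cs.header_encode('Grüße aus Köln')
print(encoded)   # an RFC 2047 encoded word, e.g. '=?utf-8?b?...?=' or '=?utf-8?q?...?='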
7,639
42,583
720
nltk/corpus/reader/bcp47.py
137
20
def data_dict(self, records): self.version = records[0].replace("File-Date:", "").strip() dic = {} dic["deprecated"] = {} for label in [ "language", "extlang", "script", "region", "variant", "redundant", "grandfathered",
Support both iso639-3 codes and BCP-47 language tags (#3060) * Add support for iso639-3 language codes * Add support for retired language codes * Move langnames.py to the top-level * Add langcode() function * Add iso639retired dictionary * Improve wrapper functions * Add module docstring with doctest * Add 2-letter language codes * Add regular expression check * Improve inverse lookup of retired codes * Support BCP-47 * Avoid deprecated langcodes * Set stack level for warnings to warn on the langname call Now it throws e.g. ``` ...\nltk_3060.py:9: UserWarning: Shortening 'smo' to 'sm' print(f"{lang}: {langname(code)}") ``` Rather than ``` ...\nltk\langnames.py:64: UserWarning: Shortening zha to za warn(f"Shortening {code} to {code2}") ``` * Dict key membership is equivalent to dict membership * Resolve bug: subtag -> tag * Capitalize BCP47 in CorpusReader name * Reimplement removed type hint changes from #3081 Co-authored-by: Tom Aarsen <Cubiegamedev@gmail.com>
data_dict
f019fbedb3d2b6a2e6b58ec1b38db612b106568b
nltk
bcp47.py
17
44
https://github.com/nltk/nltk.git
14
294
0
75
484
Python
{ "docstring": "Convert the BCP-47 language subtag registry to a dictionary", "language": "en", "n_whitespaces": 8, "n_words": 9, "vocab_size": 9 }
def data_dict(self, records): self.version = records[0].replace("File-Date:", "").strip() dic = {} dic["deprecated"] = {} for label in [ "language", "extlang", "script", "region", "variant", "redundant", "grandfathered", ]: dic["deprecated"][label] = {} for record in records[1:]: fields = [field.split(": ") for field in record.strip().split("\n")] typ = fields[0][1] tag = fields[1][1] if typ not in dic: dic[typ] = {} subfields = {} for field in fields[2:]: if len(field) == 2: [key, val] = field if key not in subfields: subfields[key] = [val] else: # multiple value subfields[key].append(val) else: # multiline field subfields[key][-1] += " " + field[0].strip() if ( "Deprecated" not in record and typ == "language" and key == "Description" ): self.langcode[subfields[key][-1]] = tag for key in subfields: if len(subfields[key]) == 1: # single value subfields[key] = subfields[key][0] if "Deprecated" in record: dic["deprecated"][typ][tag] = subfields else: dic[typ][tag] = subfields return dic
90,844
291,740
11
tests/test_core.py
5
6
def test_async_add_hass_job_schedule_partial_coroutinefunction(event_loop):
Upgrade pytest-aiohttp (#82475) * Upgrade pytest-aiohttp * Make sure executors, tasks and timers are closed Some test will trigger warnings on garbage collect, these warnings spills over into next test. Some test trigger tasks that raise errors on shutdown, these spill over into next test. This is to mimic older pytest-aiohttp and it's behaviour on test cleanup. Discussions on similar changes for pytest-aiohttp are here: https://github.com/pytest-dev/pytest-asyncio/pull/309 * Replace loop with event_loop * Make sure time is frozen for tests * Make sure the ConditionType is not async /home-assistant/homeassistant/helpers/template.py:2082: RuntimeWarning: coroutine 'AsyncMockMixin._execute_mock_call' was never awaited def wrapper(*args, **kwargs): Enable tracemalloc to get traceback where the object was allocated. See https://docs.pytest.org/en/stable/how-to/capture-warnings.html#resource-warnings for more info. * Increase litejet press tests with a factor 10 The times are simulated anyway, and we can't stop the normal event from occuring. * Use async handlers for aiohttp tests/components/motioneye/test_camera.py::test_get_still_image_from_camera tests/components/motioneye/test_camera.py::test_get_still_image_from_camera tests/components/motioneye/test_camera.py::test_get_stream_from_camera tests/components/motioneye/test_camera.py::test_get_stream_from_camera tests/components/motioneye/test_camera.py::test_camera_option_stream_url_template tests/components/motioneye/test_camera.py::test_camera_option_stream_url_template /Users/joakim/src/hass/home-assistant/venv/lib/python3.9/site-packages/aiohttp/web_urldispatcher.py:189: DeprecationWarning: Bare functions are deprecated, use async ones warnings.warn( * Switch to freezegun in modbus tests The tests allowed clock to tick in between steps * Make sure skybell object are fully mocked Old tests would trigger attempts to post to could services: ``` DEBUG:aioskybell:HTTP post https://cloud.myskybell.com/api/v3/login/ Request with headers: {'content-type': 'application/json', 'accept': '*/*', 'x-skybell-app-id': 'd2b542c7-a7e4-4e1e-b77d-2b76911c7c46', 'x-skybell-client-id': '1f36a3c0-6dee-4997-a6db-4e1c67338e57'} ``` * Fix sorting that broke after rebase
test_async_add_hass_job_schedule_partial_coroutinefunction
c576a68d336bc91fd82c299d9b3e5dfdc1c14960
core
test_core.py
12
8
https://github.com/home-assistant/core.git
1
82
0
5
34
Python
{ "docstring": "Test that we schedule partial coros and add jobs to the job pool.", "language": "en", "n_whitespaces": 12, "n_words": 13, "vocab_size": 13 }
def test_async_add_hass_job_schedule_partial_coroutinefunction(event_loop): hass = MagicMock(loop=MagicMock(wraps=event_loop))
75,208
258,256
216
haystack/utils/squad_data.py
36
22
def to_label_objs(self, answer_type="generative"): df_labels = self.df[["id", "question", "answer_text", "answer_start", "context", "document_id"]] record_dicts = df_labels.to_dict("records") labels = [ Label( query=record["question"], answer=Answer(answer=record["answer_text"], answer_type=answer_type),
refactor: update Squad data (#3513) * refractor the to_squad data class * fix the validation label * refractor the to_squad data class * fix the validation label * add the test for the to_label object function * fix the tests for to_label_objects * move all the test related to squad data to one file * remove unused imports * revert tiny_augmented.json Co-authored-by: ZanSara <sarazanzo94@gmail.com>
to_label_objs
d114a994f1af71d3721cecd14da6f6b4592043b8
haystack
squad_data.py
17
16
https://github.com/deepset-ai/haystack.git
2
124
0
32
206
Python
{ "docstring": "Export all labels stored in this object to haystack.Label objects", "language": "en", "n_whitespaces": 9, "n_words": 10, "vocab_size": 10 }
def to_label_objs(self, answer_type="generative"): df_labels = self.df[["id", "question", "answer_text", "answer_start", "context", "document_id"]] record_dicts = df_labels.to_dict("records") labels = [ Label( query=record["question"], answer=Answer(answer=record["answer_text"], answer_type=answer_type), is_correct_answer=True, is_correct_document=True, id=record["id"], origin=record.get("origin", "gold-label"), document=Document(content=record.get("context"), id=str(record["document_id"])), ) for record in record_dicts ] return labels
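A hedged sketch assuming the companion SquadData loader from the same module; the file path and the answer_type value are hypothetical placeholders.

from haystack.utils.squad_data import SquadData

squad = SquadData.from_file("data/squad20/dev-v2.0.json")   # hypothetical path
labels = squad.to_label_objs(answer_type="extractive")
print(len(labels), labels[0].query, labels[0].answer.answer)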
116,994
319,781
75
src/documents/tests/test_api.py
19
13
def test_api_get_storage_path(self): response = self.client.get("/api/storage_paths/", format="json") self.assertEqual(response.status_code, 200) self.assertEqual(response.status_code, 200) self.assertEqual(response.data["count"], 1) resp_storage_path = response.data["results"][0] self.assertEqual(resp_storage_path["id"], self.sp1.i
Increases test coverage of storage paths
test_api_get_storage_path
53baed03895f28f24113d376b089e3ef281b34ed
paperless-ngx
test_api.py
10
8
https://github.com/paperless-ngx/paperless-ngx.git
1
94
0
16
155
Python
{ "docstring": "\n GIVEN:\n - API request to get all storage paths\n WHEN:\n - API is called\n THEN:\n - Existing storage paths are returned\n ", "language": "en", "n_whitespaces": 83, "n_words": 21, "vocab_size": 16 }
def test_api_get_storage_path(self): response = self.client.get("/api/storage_paths/", format="json") self.assertEqual(response.status_code, 200) self.assertEqual(response.status_code, 200) self.assertEqual(response.data["count"], 1) resp_storage_path = response.data["results"][0] self.assertEqual(resp_storage_path["id"], self.sp1.id) self.assertEqual(resp_storage_path["path"], self.sp1.path)
76,249
260,439
52
sklearn/manifold/tests/test_mds.py
33
13
def test_normalize_metric_warning(): msg = "Normalized stress is not supported" sim = np.array([[0, 5, 3, 4],
ENH Calculate normed stress (Stress-1) in `manifold.MDS` (#22562) Co-authored-by: Chiara Marmo <cmarmo@users.noreply.github.com> Co-authored-by: Roth E Conrad <rotheconrad@gatech.edu> Co-authored-by: Guillaume Lemaitre <g.lemaitre58@gmail.com> Co-authored-by: Thomas J. Fan <thomasjpfan@gmail.com>
test_normalize_metric_warning
ae51c13af76af206e6815d0ca0d0052f73167caa
scikit-learn
test_mds.py
10
5
https://github.com/scikit-learn/scikit-learn.git
1
82
0
29
117
Python
{ "docstring": "\n Test that a UserWarning is emitted when using normalized stress with\n metric-MDS.\n ", "language": "en", "n_whitespaces": 22, "n_words": 12, "vocab_size": 12 }
def test_normalize_metric_warning(): msg = "Normalized stress is not supported" sim = np.array([[0, 5, 3, 4], [5, 0, 2, 2], [3, 2, 0, 1], [4, 2, 1, 0]]) with pytest.raises(ValueError, match=msg): mds.smacof(sim, metric=True, normalized_stress=True)
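For contrast with the error asserted above, a sketch of the supported combination (non-metric MDS with normalized stress), assuming a scikit-learn version that ships the normalized_stress option; the dissimilarity matrix is the same toy example.

import numpy as np
from sklearn.manifold import smacof

sim = np.array([[0, 5, 3, 4],
                [5, 0, 2, 2],
                [3, 2, 0, 1],
                [4, 2, 1, 0]])
embedding, stress = smacof(sim, metric=False, normalized_stress=True, random_state=0)
print(stress)   # Stress-1 in [0, 1]; lower is better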
13,576
64,188
32
erpnext/patches/v13_0/add_bin_unique_constraint.py
54
25
def delete_and_patch_duplicate_bins(): duplicate_bins = frappe.db.sql(, as_dict=1) for duplicate_bin in duplicate_bins: existing_bins = frappe.get_list("Bin", filters={ "item_code": duplicate_bin.item_code, "warehouse": duplicate_bin.warehouse }, fields=["name"], order_by="creation",) # keep last one existing_bins.pop() for broken_bin in existing_bins: frappe.delete_doc("Bin", broken_bin.name) qty_dict = { "reserved_qty": get_reserved_qty(duplicate_bin.item_code, duplicate_bin.warehouse), "indented_qty": get_indented_qty(duplicate_bin.item_code, duplicate_bin.warehouse), "ordered_qty": get_ordered_qty(duplicate_bin.item_code, duplicate_bin.warehouse), "planned_qty": get_planned_qty(duplicate_bin.item_code, duplicate_bin.warehouse), "actual_qty": get_bal
refactor: patch for fixing broken bins fix(patch): delete fully broken bins if bin doesn't have item_code or warehouse then it's not recoverable.
delete_and_patch_duplicate_bins
c2ecc7a2d1da839423fd768821b1f77ddcf7f53d
erpnext
add_bin_unique_constraint.py
14
30
https://github.com/frappe/erpnext.git
3
158
0
47
254
Python
{ "docstring": "\n\t\tSELECT\n\t\t\titem_code, warehouse, count(*) as bin_count\n\t\tFROM\n\t\t\ttabBin\n\t\tGROUP BY\n\t\t\titem_code, warehouse\n\t\tHAVING\n\t\t\tbin_count > 1\n\t", "language": "en", "n_whitespaces": 8, "n_words": 16, "vocab_size": 14 }
def delete_and_patch_duplicate_bins(): duplicate_bins = frappe.db.sql(, as_dict=1) for duplicate_bin in duplicate_bins: existing_bins = frappe.get_list("Bin", filters={ "item_code": duplicate_bin.item_code, "warehouse": duplicate_bin.warehouse }, fields=["name"], order_by="creation",) # keep last one existing_bins.pop() for broken_bin in existing_bins: frappe.delete_doc("Bin", broken_bin.name) qty_dict = { "reserved_qty": get_reserved_qty(duplicate_bin.item_code, duplicate_bin.warehouse), "indented_qty": get_indented_qty(duplicate_bin.item_code, duplicate_bin.warehouse), "ordered_qty": get_ordered_qty(duplicate_bin.item_code, duplicate_bin.warehouse), "planned_qty": get_planned_qty(duplicate_bin.item_code, duplicate_bin.warehouse), "actual_qty": get_balance_qty_from_sle(duplicate_bin.item_code, duplicate_bin.warehouse) } update_bin_qty(duplicate_bin.item_code, duplicate_bin.warehouse, qty_dict)
@pytest.mark.asyncio
28,504
127,689
269
dashboard/modules/job/tests/test_job_agent.py
74
40
async def test_stop_long_running_job(job_sdk_client): agent_client, head_client = job_sdk_client with tempfile.TemporaryDirectory() as tmp_dir: path = Path(tmp_dir) driver_script = test_script_file = path / "test_script.py" with open(test_script_file, "w+") as file: file.write(driver_script) runtime_env = {"working_dir": tmp_dir} runtime_env = upload_working_dir_if_needed(runtime_env, tmp_dir, logg
[Job Submission][refactor 4/N] Complete the remaining interfaces on JobAgent (#28533) Signed-off-by: Catch-Bull <burglarralgrub@gmail.com> just need to implement stop_job, and I remove get_job_info because we can access JobInfoStorage without call `ray.init`.
test_stop_long_running_job
8840be1942a69b2595a05c5c5556b0daec7abbcd
ray
test_job_agent.py
13
30
https://github.com/ray-project/ray.git
1
152
1
57
269
Python
{ "docstring": "\n Submit a job that runs for a while and stop it in the middle.\n \nprint('Hello !')\nimport time\ntime.sleep(300) # This should never finish\nraise RuntimeError('Intentionally failed.')\n ", "language": "en", "n_whitespaces": 38, "n_words": 27, "vocab_size": 26 }
async def test_stop_long_running_job(job_sdk_client): agent_client, head_client = job_sdk_client with tempfile.TemporaryDirectory() as tmp_dir: path = Path(tmp_dir) driver_script = test_script_file = path / "test_script.py" with open(test_script_file, "w+") as file: file.write(driver_script) runtime_env = {"working_dir": tmp_dir} runtime_env = upload_working_dir_if_needed(runtime_env, tmp_dir, logger=logger) runtime_env = RuntimeEnv(**runtime_env).to_dict() request = validate_request_type( {"runtime_env": runtime_env, "entrypoint": "python test_script.py"}, JobSubmitRequest, ) submit_result = await agent_client.submit_job_internal(request) job_id = submit_result.submission_id resp = await agent_client.stop_job_internal(job_id) assert resp.stopped is True wait_for_condition( partial( _check_job, client=head_client, job_id=job_id, status=JobStatus.STOPPED ), timeout=10, ) @pytest.mark.asyncio
83,776
281,459
48
gamestonk_terminal/cryptocurrency/due_diligence/dd_controller.py
20
12
def print_help(self): source_txt = CRYPTO_SOURCES.get(self.source, "?") if self.source != "" else "" help_text = f console.print(text=help_text, menu="Stocks - Due Dil
Terminal Wide Rich (#1161) * My idea for how we handle Rich moving forward * remove independent consoles * FIxed pylint issues * add a few vars * Switched print to console * More transitions * Changed more prints * Replaced all prints * Fixing tabulate * Finished replace tabulate * Finished removing rich from Tabulate * add Panel around menu * add GST watermark under feature flag * Fixed 46 tests * Delete test_screener[False].yaml * Delete test_screener[True].yaml * Fixed the rest of the tests * add help and source color vars and use rgb * rich on stocks/options * update rich on disc, dps, sia * rich in gov, ins and scr menus * ba and ca menus with rich * Fixed import issue * Fixed some tests * removed termcolor * Removed prettytable * add rich to remaining stocks menus * FIxed linting issue * Added James' changes * Updated dependencies * Add rich to cryptocurrency menu * refactor economy and forex * refactor etf with rich * refactor mfunds * refactor rich rest * not specify style so default color works well on any background * Fixing mypy issues * Updated tests * More test fixes * James' test fixes * Updating tests : stocks/screener - fix cassettes using BR * Updating tests : crypto * Updating tests : disable DEBUG_MODE * Updating tests : stocks/fa/yfinance * minor fixes that escape * Improve the rich table function (that replaces tabulate :D ) * Fixed bad code * delete rogue file + dcf fix + NoConsole * sia mypy * fuck you linter * fuck you linter pt 2 * skip hehe * i hate the black linter * ubuntu mypy attempt * Update : rich_config + gtff * Updating tests : conftest * Updating tests : stocks * Update : rich_config * Updating : rich_config * make panel configurable for Theodore :b * colors update * Merged * Updating : rich_config + feature_flags * Updating : rich_config * Updating tests : stocks * Updating : feature_flags Co-authored-by: DidierRLopes <dro.lopes@campus.fct.unl.pt> Co-authored-by: Chavithra PARANA <chavithra@gmail.com> Co-authored-by: james <jmaslek11@gmail.com> Co-authored-by: jose-donato <zmcdonato@gmail.com>
print_help
82747072c511beb1b2672846ae2ee4aec53eb562
OpenBBTerminal
dd_controller.py
10
41
https://github.com/OpenBB-finance/OpenBBTerminal.git
2
42
0
18
86
Python
{ "docstring": "Print help[cmds]\n load load a specific cryptocurrency for analysis\n\n[param]Coin: [/param]{self.current_coin}\n[param]Source: [/param]{source_txt}\n\n[src]Glassnode[/src]\n active active addresses\n nonzero addresses with non-zero balances\n change 30d change of supply held on exchange wallets\n eb total balance held on exchanges (in percentage and units)\n[src]Coinglass[/src]\n oi open interest per exchange\n[src]CoinPaprika[/src]\n basic basic information about loaded coin\n ps price and supply related metrics for loaded coin\n mkt all markets for loaded coin\n ex all exchanges where loaded coin is listed\n twitter tweets for loaded coin\n events events related to loaded coin\n[src]CoinGecko[/src]\n info basic information about loaded coin\n market market stats about loaded coin\n ath all time high related stats for loaded coin\n atl all time low related stats for loaded coin\n web found websites for loaded coin e.g forum, homepage\n social social portals urls for loaded coin, e.g reddit, twitter\n score different kind of scores for loaded coin, e.g developer score, sentiment score\n dev github, bitbucket coin development statistics\n bc links to blockchain explorers for loaded coin\n[src]Binance[/src]\n binbook show order book\n balance show coin balance\n[src]Coinbase[/src]\n cbbook show order book\n trades show last trades\n stats show coin stats[/cmds]\n", "language": "en", "n_whitespaces": 499, "n_words": 187, "vocab_size": 107 }
def print_help(self): source_txt = CRYPTO_SOURCES.get(self.source, "?") if self.source != "" else "" help_text = f console.print(text=help_text, menu="Stocks - Due Diligence")
18,522
89,255
225
tests/sentry/integrations/github/test_client.py
38
20
def test_get_cached_repo_files_with_all_files(self): responses.add( method=responses.GET,
feat(derive-code-mappings): Add caching support for fetching files (#41777) This improves the readability of the code and separates caching logic to their respective functions. This allows getting files for a repo with caching support without having to call `get_trees_for_org`. There will be a follow up PR to improve the caching logic. Co-authored-by: Mark Story <mark@mark-story.com>
test_get_cached_repo_files_with_all_files
07558e31bd672fab58cff55cf4e9cf0e02b36654
sentry
test_client.py
14
17
https://github.com/getsentry/sentry.git
1
103
0
32
201
Python
{ "docstring": "Fetch files for repo. All files rather than just source code files", "language": "en", "n_whitespaces": 11, "n_words": 12, "vocab_size": 10 }
def test_get_cached_repo_files_with_all_files(self): responses.add( method=responses.GET, url=f"https://api.github.com/repos/{self.repo.name}/git/trees/master?recursive=1", status=200, json={ "tree": [ {"type": "blob", "path": "src/foo.py"}, {"type": "blob", "path": "README"}, ] }, ) repo_key = f"github:repo:{self.repo.name}:all" assert cache.get(repo_key) is None with mock.patch("sentry.integrations.github.client.get_jwt", return_value=b"jwt_token_1"): files = self.client.get_cached_repo_files(self.repo.name, "master") assert files == ["src/foo.py"]
50,810
204,604
421
django/core/management/base.py
79
26
def check_migrations(self): from django.db.migrations.executor import MigrationExecutor try: executor = MigrationExecutor(connections[DEFAULT_DB_ALIAS]) except ImproperlyConfigured: # No databases are configured (or the dummy one) return plan = executor.migration_plan(executor.loader.graph.leaf_nodes()) if plan: apps_waiting_migration = sorted( {migration.app_label for migration, backwards in plan} ) self.stdout.write( self.style.NOTICE( "\nYou have %(unapplied_migration_count)s unapplied migration(s). " "Your project may not work properly until you apply the " "migrations for app(s): %(apps_waiting_migration)s." % { "unapplied_migration_count": len(plan), "apps_waiting_migration": ", ".join(app
Refs #33476 -- Reformatted code with Black.
check_migrations
9c19aff7c7561e3a82978a272ecdaad40dda5c00
django
base.py
17
25
https://github.com/django/django.git
4
117
0
69
201
Python
{ "docstring": "\n Print a warning if the set of migrations on disk don't match the\n migrations in the database.\n ", "language": "en", "n_whitespaces": 39, "n_words": 17, "vocab_size": 14 }
def check_migrations(self): from django.db.migrations.executor import MigrationExecutor try: executor = MigrationExecutor(connections[DEFAULT_DB_ALIAS]) except ImproperlyConfigured: # No databases are configured (or the dummy one) return plan = executor.migration_plan(executor.loader.graph.leaf_nodes()) if plan: apps_waiting_migration = sorted( {migration.app_label for migration, backwards in plan} ) self.stdout.write( self.style.NOTICE( "\nYou have %(unapplied_migration_count)s unapplied migration(s). " "Your project may not work properly until you apply the " "migrations for app(s): %(apps_waiting_migration)s." % { "unapplied_migration_count": len(plan), "apps_waiting_migration": ", ".join(apps_waiting_migration), } ) ) self.stdout.write( self.style.NOTICE("Run 'python manage.py migrate' to apply them.") )
41,910
176,449
232
networkx/algorithms/chordal.py
82
24
def find_induced_nodes(G, s, t, treewidth_bound=sys.maxsize): if not is_chordal(G): raise nx.NetworkXError("Input graph is not chordal.") H = nx.Graph(G) H.add_edge(s, t) induced_nodes = set() triplet = _find_chordality_breaker(H, s, treewidth_bound) while triplet: (u, v, w) = triplet induced_nodes.update(triplet) for n in triplet: if n != s: H.add_edge(s, n) triplet = _find_chordality_breaker(H, s, tre
Minor improvements from general code readthrough (#5414) * Add deprecated directive to reversed docstring. * Add missing dep directives to shpfiles. * Remove defn of INF sentinel. * typo. * str -> comment in forloop. * STY: appropriate casing for var name.
find_induced_nodes
cc1db275efc709cb964ce88abbfa877798d58c10
networkx
chordal.py
16
21
https://github.com/networkx/networkx.git
8
149
0
60
233
Python
{ "docstring": "Returns the set of induced nodes in the path from s to t.\n\n Parameters\n ----------\n G : graph\n A chordal NetworkX graph\n s : node\n Source node to look for induced nodes\n t : node\n Destination node to look for induced nodes\n treewidth_bound: float\n Maximum treewidth acceptable for the graph H. The search\n for induced nodes will end as soon as the treewidth_bound is exceeded.\n\n Returns\n -------\n induced_nodes : Set of nodes\n The set of induced nodes in the path from s to t in G\n\n Raises\n ------\n NetworkXError\n The algorithm does not support DiGraph, MultiGraph and MultiDiGraph.\n If the input graph is an instance of one of these classes, a\n :exc:`NetworkXError` is raised.\n The algorithm can only be applied to chordal graphs. If the input\n graph is found to be non-chordal, a :exc:`NetworkXError` is raised.\n\n Examples\n --------\n >>> G = nx.Graph()\n >>> G = nx.generators.classic.path_graph(10)\n >>> induced_nodes = nx.find_induced_nodes(G, 1, 9, 2)\n >>> sorted(induced_nodes)\n [1, 2, 3, 4, 5, 6, 7, 8, 9]\n\n Notes\n -----\n G must be a chordal graph and (s,t) an edge that is not in G.\n\n If a treewidth_bound is provided, the search for induced nodes will end\n as soon as the treewidth_bound is exceeded.\n\n The algorithm is inspired by Algorithm 4 in [1]_.\n A formal definition of induced node can also be found on that reference.\n\n References\n ----------\n .. [1] Learning Bounded Treewidth Bayesian Networks.\n Gal Elidan, Stephen Gould; JMLR, 9(Dec):2699--2731, 2008.\n http://jmlr.csail.mit.edu/papers/volume9/elidan08a/elidan08a.pdf\n ", "language": "en", "n_whitespaces": 416, "n_words": 239, "vocab_size": 126 }
def find_induced_nodes(G, s, t, treewidth_bound=sys.maxsize):
    if not is_chordal(G):
        raise nx.NetworkXError("Input graph is not chordal.")

    H = nx.Graph(G)
    H.add_edge(s, t)
    induced_nodes = set()
    triplet = _find_chordality_breaker(H, s, treewidth_bound)
    while triplet:
        (u, v, w) = triplet
        induced_nodes.update(triplet)
        for n in triplet:
            if n != s:
                H.add_edge(s, n)
        triplet = _find_chordality_breaker(H, s, treewidth_bound)
    if induced_nodes:
        # Add t and the second node in the induced path from s to t.
        induced_nodes.add(t)
        for u in G[s]:
            if len(induced_nodes & set(G[u])) == 2:
                induced_nodes.add(u)
                break
    return induced_nodes
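A runnable usage sketch based on the example in this record's docstring (assumes networkx is installed):

import networkx as nx

G = nx.path_graph(10)  # a path graph is chordal; (1, 9) is not an edge of G
induced = nx.find_induced_nodes(G, 1, 9, 2)
print(sorted(induced))  # [1, 2, 3, 4, 5, 6, 7, 8, 9], per the docstring example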
@pytest.fixture
40,573
170,548
45
pandas/conftest.py
33
11
def any_skipna_inferred_dtype(request): inferred_dtype, values = request.param values = np.array(values, dtype=object) #
STYLE fix: pylint "consider-using-from" (#49335) * use from import * delete empty file Co-authored-by: carlotta <c.fabian@turbit.de> Co-authored-by: cfabian <cfabian@student.42wolfsburg.de>
any_skipna_inferred_dtype
f9ff3796329e4bedb4a5477739f5eb8d2e40761d
pandas
conftest.py
9
4
https://github.com/pandas-dev/pandas.git
1
29
1
24
60
Python
{ "docstring": "\n Fixture for all inferred dtypes from _libs.lib.infer_dtype\n\n The covered (inferred) types are:\n * 'string'\n * 'empty'\n * 'bytes'\n * 'mixed'\n * 'mixed-integer'\n * 'mixed-integer-float'\n * 'floating'\n * 'integer'\n * 'decimal'\n * 'boolean'\n * 'datetime64'\n * 'datetime'\n * 'date'\n * 'timedelta'\n * 'time'\n * 'period'\n * 'interval'\n\n Returns\n -------\n inferred_dtype : str\n The string for the inferred dtype from _libs.lib.infer_dtype\n values : np.ndarray\n An array of object dtype that will be inferred to have\n `inferred_dtype`\n\n Examples\n --------\n >>> from pandas._libs import lib\n >>>\n >>> def test_something(any_skipna_inferred_dtype):\n ... inferred_dtype, values = any_skipna_inferred_dtype\n ... # will pass\n ... assert lib.infer_dtype(values, skipna=True) == inferred_dtype\n ", "language": "en", "n_whitespaces": 230, "n_words": 100, "vocab_size": 68 }
def any_skipna_inferred_dtype(request):
    inferred_dtype, values = request.param
    values = np.array(values, dtype=object)  # object dtype to avoid casting

    # correctness of inference tested in tests/dtypes/test_inference.py
    return inferred_dtype, values


# ----------------------------------------------------------------
# Misc
# ----------------------------------------------------------------
@pytest.fixture
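A sketch of how a test would consume this fixture, taken almost verbatim from the record's docstring (assumes it runs inside the pandas test suite, where the fixture is registered):

from pandas._libs import lib

def test_something(any_skipna_inferred_dtype):
    inferred_dtype, values = any_skipna_inferred_dtype
    # values is an object-dtype ndarray whose inferred dtype matches the label
    assert lib.infer_dtype(values, skipna=True) == inferred_dtype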
1,609
9,409
66
reconstruction/ostec/external/stylegan2/dnnlib/tflib/ops/upfirdn_2d.py
43
16
def downsample_2d(x, k=None, factor=2, gain=1, data_format='NCHW', impl='cuda'): r assert isinstance(factor, int)
initialize ostec
downsample_2d
7375ee364e0df2a417f92593e09557f1b2a3575a
insightface
upfirdn_2d.py
11
28
https://github.com/deepinsight/insightface.git
3
87
0
36
153
Python
{ "docstring": "Downsample a batch of 2D images with the given filter.\n\n Accepts a batch of 2D images of the shape `[N, C, H, W]` or `[N, H, W, C]`\n and downsamples each image with the given filter. The filter is normalized so that\n if the input pixels are constant, they will be scaled by the specified `gain`.\n Pixels outside the image are assumed to be zero, and the filter is padded with\n zeros so that its shape is a multiple of the downsampling factor.\n\n Args:\n x: Input tensor of the shape `[N, C, H, W]` or `[N, H, W, C]`.\n k: FIR filter of the shape `[firH, firW]` or `[firN]` (separable).\n The default is `[1] * factor`, which corresponds to average pooling.\n factor: Integer downsampling factor (default: 2).\n gain: Scaling factor for signal magnitude (default: 1.0).\n data_format: `'NCHW'` or `'NHWC'` (default: `'NCHW'`).\n impl: Name of the implementation to use. Can be `\"ref\"` or `\"cuda\"` (default).\n\n Returns:\n Tensor of the shape `[N, C, H // factor, W // factor]` or\n `[N, H // factor, W // factor, C]`, and same datatype as `x`.\n ", "language": "en", "n_whitespaces": 327, "n_words": 181, "vocab_size": 106 }
def downsample_2d(x, k=None, factor=2, gain=1, data_format='NCHW', impl='cuda'):
    assert isinstance(factor, int) and factor >= 1
    if k is None:
        k = [1] * factor
    k = _setup_kernel(k) * gain
    p = k.shape[0] - factor
    return _simple_upfirdn_2d(x, k, down=factor, pad0=(p+1)//2, pad1=p//2, data_format=data_format, impl=impl)

#----------------------------------------------------------------------------
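A hedged usage sketch (assumes TensorFlow 1.x and that the StyleGAN2 dnnlib package is importable from this path; shapes follow the record's docstring):

import tensorflow as tf
from dnnlib.tflib.ops.upfirdn_2d import downsample_2d

x = tf.placeholder(tf.float32, [None, 3, 256, 256])   # NCHW batch of images
y = downsample_2d(x, factor=2, impl="ref")            # "ref" avoids the custom CUDA op
# y has shape [None, 3, 128, 128]; the default k=[1, 1] corresponds to average pooling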
40,239
168,224
373
pandas/core/groupby/grouper.py
136
11
def _check_deprecated_resample_kwargs(kwargs, origin): # Deprecation warning of `base` and `loffset` since v1.1.0: # we are raising the warning here to be able to set the `stacklevel` # properly since we need to raise the `base` and `loffset` deprecation # warning from three different cases: # core/generic.py::NDFrame.resample # core/groupby/groupby.py::GroupBy.resample # core/groupby/grouper.py::Grouper # raising these warnings from TimeGrouper directly would fail the test: # tests/resample/test_deprecated.py::test_deprecating_on_loffset_and_base if kwargs.get("base", None) is not None: warnings.warn( "'base' in .resample() and in Grouper() is deprecated.\n" "The new arguments that you should use are 'offset' or 'origin'.\n" '\n>>> df.resample(freq="3s", base=2)\n' "\nbecomes:\n" '\n>>> df.resample(freq="3s", offset="2s")\n', FutureWarning, stacklevel=find_stack_level(inspect.currentframe()),
PERF cache find_stack_level (#48023) cache stacklevel
_check_deprecated_resample_kwargs
2f8d0a36703e81e4dca52ca9fe4f58c910c1b304
pandas
grouper.py
14
22
https://github.com/pandas-dev/pandas.git
3
83
0
85
176
Python
{ "docstring": "\n Check for use of deprecated parameters in ``resample`` and related functions.\n\n Raises the appropriate warnings if these parameters are detected.\n Only sets an approximate ``stacklevel`` for the warnings (see #37603, #36629).\n\n Parameters\n ----------\n kwargs : dict\n Dictionary of keyword arguments to check for deprecated parameters.\n origin : object\n From where this function is being called; either Grouper or TimeGrouper. Used\n to determine an approximate stacklevel.\n ", "language": "en", "n_whitespaces": 111, "n_words": 65, "vocab_size": 54 }
def _check_deprecated_resample_kwargs(kwargs, origin):
    # Deprecation warning of `base` and `loffset` since v1.1.0:
    # we are raising the warning here to be able to set the `stacklevel`
    # properly since we need to raise the `base` and `loffset` deprecation
    # warning from three different cases:
    #     core/generic.py::NDFrame.resample
    #     core/groupby/groupby.py::GroupBy.resample
    #     core/groupby/grouper.py::Grouper
    # raising these warnings from TimeGrouper directly would fail the test:
    #     tests/resample/test_deprecated.py::test_deprecating_on_loffset_and_base
    if kwargs.get("base", None) is not None:
        warnings.warn(
            "'base' in .resample() and in Grouper() is deprecated.\n"
            "The new arguments that you should use are 'offset' or 'origin'.\n"
            '\n>>> df.resample(freq="3s", base=2)\n'
            "\nbecomes:\n"
            '\n>>> df.resample(freq="3s", offset="2s")\n',
            FutureWarning,
            stacklevel=find_stack_level(inspect.currentframe()),
        )

    if kwargs.get("loffset", None) is not None:
        warnings.warn(
            "'loffset' in .resample() and in Grouper() is deprecated.\n"
            '\n>>> df.resample(freq="3s", loffset="8H")\n'
            "\nbecomes:\n"
            "\n>>> from pandas.tseries.frequencies import to_offset"
            '\n>>> df = df.resample(freq="3s").mean()'
            '\n>>> df.index = df.index.to_timestamp() + to_offset("8H")\n',
            FutureWarning,
            stacklevel=find_stack_level(inspect.currentframe()),
        )
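A sketch of the user-facing effect: passing the deprecated base argument to resample emits a FutureWarning (assumes a pandas version that still accepts base, roughly 1.1 through 1.5):

import warnings
import pandas as pd

df = pd.DataFrame(
    {"v": range(6)},
    index=pd.date_range("2020-01-01", periods=6, freq="1s"),
)
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    df.resample("3s", base=2).mean()
assert any(issubclass(w.category, FutureWarning) for w in caught)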
8,400
44,887
45
airflow/providers/google/cloud/hooks/datacatalog.py
13
8
def get_conn(self) -> DataCatalogClient:
Extract ClientInfo to module level (#21554)
get_conn
1b568d73e1dfb838a3a0446e3a6063b9f27f04b8
airflow
datacatalog.py
13
5
https://github.com/apache/airflow.git
2
36
0
12
60
Python
{ "docstring": "Retrieves client library object that allow access to Cloud Data Catalog service.", "language": "en", "n_whitespaces": 11, "n_words": 12, "vocab_size": 12 }
def get_conn(self) -> DataCatalogClient:
    if not self._client:
        self._client = DataCatalogClient(credentials=self._get_credentials(), client_info=CLIENT_INFO)
    return self._client
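A hedged usage sketch (assumes the apache-airflow-providers-google package and a configured GCP connection; the hook class name CloudDataCatalogHook comes from this module but is not shown in the record itself):

from airflow.providers.google.cloud.hooks.datacatalog import CloudDataCatalogHook

hook = CloudDataCatalogHook(gcp_conn_id="google_cloud_default")
client = hook.get_conn()            # DataCatalogClient, created lazily on first call
assert hook.get_conn() is client    # cached on the hook, so the same object is returned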
11,591
56,932
82
src/prefect/blocks/kubernetes.py
13
8
def get_api_client(self) -> ApiClient: try: return new_client_from_config_dict( config_dict=self.config, context=self.context )
organizational changes for the KubernetesClusterConfig and add from_environment classmethod
get_api_client
574d10ff7612661b37801c811862f18998521d58
prefect
kubernetes.py
11
10
https://github.com/PrefectHQ/prefect.git
2
29
0
13
49
Python
{ "docstring": "\n Returns an instance of the kubernetes api client with a specific context\n ", "language": "en", "n_whitespaces": 27, "n_words": 12, "vocab_size": 12 }
def get_api_client(self) -> ApiClient:
    try:
        return new_client_from_config_dict(
            config_dict=self.config, context=self.context
        )
    except ConfigException:
        raise
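A hedged usage sketch (assumes Prefect 2.x and the kubernetes client library; the block is built directly from a parsed kubeconfig dict, with the config and context field names taken from the snippet above, and the kubeconfig path and context name are hypothetical):

import yaml
from prefect.blocks.kubernetes import KubernetesClusterConfig

with open("/path/to/kubeconfig") as f:    # hypothetical path
    config_dict = yaml.safe_load(f)

block = KubernetesClusterConfig(config=config_dict, context="docker-desktop")
api_client = block.get_api_client()       # kubernetes.client.ApiClient for that context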
43,822
182,433
258
src/textual/_arrangement.py
75
25
def cuts(self) -> list[list[int]]: if self._cuts is not None: return self._cuts width = self.width height = self.height screen_region = Region(0, 0, width, height) cuts_sets = [{0, width} for
ws
cuts
57a05c7bbd14728f0dbde8b8e55d6f086362c35e
textual
_arrangement.py
16
23
https://github.com/Textualize/textual.git
9
143
0
51
218
Python
{ "docstring": "Get vertical cuts.\n\n A cut is every point on a line where a widget starts or ends.\n\n Returns:\n list[list[int]]: A list of cuts for every line.\n ", "language": "en", "n_whitespaces": 58, "n_words": 26, "vocab_size": 23 }
def cuts(self) -> list[list[int]]:
    if self._cuts is not None:
        return self._cuts

    width = self.width
    height = self.height
    screen_region = Region(0, 0, width, height)
    cuts_sets = [{0, width} for _ in range(height)]

    if self.map is not None:
        for region, order, clip in self.map.values():
            region = region.intersection(clip)
            if region and (region in screen_region):
                region_cuts = region.x_extents
                for y in region.y_range:
                    cuts_sets[y].update(region_cuts)

    # Sort the cuts for each line
    self._cuts = [sorted(cut_set) for cut_set in cuts_sets]
    return self._cuts
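The method depends on Textual internals, so here is a simplified, framework-free illustration of the same idea: a "cut" is every x offset where some widget region begins or ends on a given line (the region placements below are hypothetical):

width, height = 10, 2
regions = [(2, 0, 3, 1), (5, 1, 4, 1)]  # hypothetical (x, y, w, h) widget placements

cuts_sets = [{0, width} for _ in range(height)]
for x, y, w, h in regions:
    for line in range(y, y + h):
        cuts_sets[line].update((x, x + w))

print([sorted(s) for s in cuts_sets])  # [[0, 2, 5, 10], [0, 5, 9, 10]]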
80,576
270,861
35
keras/engine/base_layer_utils.py
12
4
def is_subclassed(layer): return ( la
Reformatting the codebase with black. PiperOrigin-RevId: 450093126
is_subclassed
84afc5193d38057e2e2badf9c889ea87d80d8fbf
keras
base_layer_utils.py
11
5
https://github.com/keras-team/keras.git
2
32
0
10
58
Python
{ "docstring": "Returns True if the object is a subclassed layer or subclassed model.", "language": "en", "n_whitespaces": 11, "n_words": 12, "vocab_size": 11 }
def is_subclassed(layer):
    return (
        layer.__module__.find("keras.engine") == -1
        and layer.__module__.find("keras.layers") == -1
    )
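A hedged usage sketch (assumes TensorFlow 2.x with the separately installed keras package; the internal import path is an assumption and may differ between Keras versions):

import tensorflow as tf
from keras.engine import base_layer_utils  # internal module; path may vary

class MyLayer(tf.keras.layers.Layer):
    def call(self, inputs):
        return inputs

print(base_layer_utils.is_subclassed(MyLayer()))                 # True: defined outside keras.*
print(base_layer_utils.is_subclassed(tf.keras.layers.Dense(4)))  # False: lives in keras.layers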
48,249
196,925
62
sympy/matrices/dense.py
9
6
def _mat(self): sympy_deprecation_warning( , deprecated_since_version="1.9", active_deprecations_target="deprecated-private-matrix-attributes" ) return
Update the deprecation of the _mat and _smat Matrix properties
_mat
0b4d5fa57d64b1102e51e03ed80013e16053bf96
sympy
dense.py
9
10
https://github.com/sympy/sympy.git
1
23
0
9
42
Python
{ "docstring": "\n The private _mat attribute of Matrix is deprecated. Use the\n .flat() method instead.\n ", "language": "en", "n_whitespaces": 47, "n_words": 13, "vocab_size": 13 }
def _mat(self):
    sympy_deprecation_warning(
        """
        The private _mat attribute of Matrix is deprecated. Use the
        .flat() method instead.
        """,
        deprecated_since_version="1.9",
        active_deprecations_target="deprecated-private-matrix-attributes",
    )

    return self.flat()
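A sketch of the deprecation in action (assumes SymPy 1.9 or later, where _mat is exposed as a deprecated property on dense matrices):

import warnings
from sympy import Matrix

M = Matrix([[1, 2], [3, 4]])
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    flat = M._mat        # deprecated private attribute, triggers the warning above
print(flat)              # [1, 2, 3, 4]
print(M.flat())          # preferred replacement, same result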
78,274
266,037
115
netbox/extras/tests/test_customfields.py
34
22
def test_missing_required_field(self): cf3 = CustomField(type=CustomFieldTypeChoices.TYPE_TEXT, name='baz', required=True) cf3.save() cf3.conte
Closes #10052: The cf attribute now returns deserialized custom field data
test_missing_required_field
ea6d86e6c4bb6037465410db6205a7471bc81a6c
netbox
test_customfields.py
11
10
https://github.com/netbox-community/netbox.git
1
92
0
28
165
Python
{ "docstring": "\n Check that a ValidationError is raised if any required custom fields are not present.\n ", "language": "en", "n_whitespaces": 29, "n_words": 14, "vocab_size": 14 }
def test_missing_required_field(self):
    cf3 = CustomField(type=CustomFieldTypeChoices.TYPE_TEXT, name='baz', required=True)
    cf3.save()
    cf3.content_types.set([ContentType.objects.get_for_model(Site)])

    site = Site(name='Test Site', slug='test-site')

    # Set custom field data with a required field omitted
    site.custom_field_data['foo'] = 'abc'
    with self.assertRaises(ValidationError):
        site.clean()

    site.custom_field_data['baz'] = 'def'
    site.clean()
16,736
78,230
40
wagtail/admin/tests/test_templatetags.py
11
9
def test_basic(self): context = Context({}) template = expected = self.assertHTMLEqual(expected, Template(
Introduce new template fragment composition tags
test_basic
524cab82e33b43463b746c3df1a80657b3ae874a
wagtail
test_templatetags.py
11
15
https://github.com/wagtail/wagtail.git
1
34
0
9
60
Python
{ "docstring": "\n {% load wagtailadmin_tags %}\n {% fragment as my_fragment %}\n <p>Hello, World</p>\n {% endfragment %}\n Text coming after:\n {{ my_fragment }}\n \n Text coming after:\n <p>Hello, World</p>\n ", "language": "en", "n_whitespaces": 129, "n_words": 25, "vocab_size": 15 }
def test_basic(self):
    context = Context({})

    template =

    expected =

    self.assertHTMLEqual(expected, Template(template).render(context))
40,094
167,732
75
pandas/core/arrays/sparse/accessor.py
14
8
def to_dense(self) -> Series: from pandas import Series return Series( self._parent.array.to_dense
TYP: more return annotations in core/ (#47618) * TYP: more return annotations in core/ * from __future__ import annotations * more __future__
to_dense
f65417656ba8c59438d832b6e2a431f78d40c21c
pandas
accessor.py
11
32
https://github.com/pandas-dev/pandas.git
1
42
0
14
67
Python
{ "docstring": "\n Convert a Series from sparse values to dense.\n\n .. versionadded:: 0.25.0\n\n Returns\n -------\n Series:\n A Series with the same values, stored as a dense array.\n\n Examples\n --------\n >>> series = pd.Series(pd.arrays.SparseArray([0, 1, 0]))\n >>> series\n 0 0\n 1 1\n 2 0\n dtype: Sparse[int64, 0]\n\n >>> series.sparse.to_dense()\n 0 0\n 1 1\n 2 0\n dtype: int64\n ", "language": "en", "n_whitespaces": 217, "n_words": 54, "vocab_size": 39 }
def to_dense(self) -> Series:
    from pandas import Series

    return Series(
        self._parent.array.to_dense(),
        index=self._parent.index,
        name=self._parent.name,
    )
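A runnable sketch based on the example in this record's docstring (assumes a pandas build with sparse array support):

import pandas as pd

series = pd.Series(pd.arrays.SparseArray([0, 1, 0]))
dense = series.sparse.to_dense()
print(dense.dtype)     # int64
print(dense.tolist())  # [0, 1, 0]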