content (stringlengths 21-24.2k) | avg_line_length (float64 10.4-231) | max_line_length (int64 20-8.17k) | alphanum_fraction (float64 0.25-0.82) | licenses (sequence) | repository_name (stringlengths 11-51) | path (stringlengths 7-121) | size (int64 21-24.2k) | lang (stringclasses, 1 value) | nl_text (stringlengths 19-20.6k) | nl_size (int64 19-20.6k) | nl_ratio (float64 0.8-1.07) |
---|---|---|---|---|---|---|---|---|---|---|---|
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License.
"""spaCy ANN Linker, a pipeline component for generating spaCy KnowledgeBase Alias Candidates for Entity Linking."""
__version__ = '0.1.10'
from .ann_linker import AnnLinker
from .remote_ann_linker import RemoteAnnLinker
# TODO: Uncomment (and probably fix a bit) once this PR is merged upstream
# https://github.com/explosion/spaCy/pull/4988 to enable kb registry with
# customizable `get_candidates` function
#
# from spacy.kb import KnowledgeBase
# from spacy.tokens import Span
# from spacy.util import registry
# @registry.kb.register("get_candidates")
# def get_candidates(kb: KnowledgeBase, ent: Span):
# alias = ent._.alias_candidates[0] if ent._.alias_candidates else ent.text
# return kb.get_candidates(alias)
| 34.583333 | 116 | 0.772289 | ["MIT"] | jjjamie/spacy-ann-linker | spacy_ann/__init__.py | 830 | Python | 689 | 0.83012 |
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# Copyright 2011-2014, Nigel Small
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from py2neo.error.client import *
from py2neo.error.server import *
| 33.238095 | 74 | 0.747851 | ["Apache-2.0"] | alaalqadi/py2neo | py2neo/error/__init__.py | 698 | Python | 597 | 0.855301 |
# Copyright (c) 2013-2014 Will Thames <will@thames.id.au>
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
# THE SOFTWARE.
"""Main ansible-lint package."""
from ansiblelint.rules import AnsibleLintRule
from ansiblelint.version import __version__
__all__ = (
"__version__",
"AnsibleLintRule" # deprecated, import it directly from rules
)
| 44.7 | 79 | 0.774049 | ["MIT"] | ragne/ansible-lint | lib/ansiblelint/__init__.py | 1,341 | Python | 1,147 | 0.855332 |
# Everything we've seen to this point has been a problem known as regression in
# which we're trying to predict an actual numeric value for each observation of
# N input numeric values. A more common problem is that of classification -
# predicting a single binary occurrence, class or label for each input. The
# example we'll explore now is attempting to predict for every passenger aboard
# the Titanic, whether they survived or not. Clearly, this is not a numeric value,
# but a boolean one: True (survived) or False (didn't survive)
#
# A different way to think about classification is in terms closer to regression
# where instead of approximating an output value for each input, we're
# learning a threshold line in the function where values below this threshold
# don't belong to a class, and values above it do.
#
# The weights of an output unit determine the logical expression for the
# corresponding input, while the bias acts as the threshold (axon hillock) that
# must be surpassed in order for the unit to activate. So the bias basically
# describes the excitability of the unit, or how likely it is to fire, while the
# weights are the effect of the individual inputs. Mathematically:
#
# y = w * x + b >= 0 => w * x >= -b
#
# That means that in order for the unit to activate (output 1) we
# need w * x to be greater than the negative of the bias. Remember that in
# classification the input x is a binary 0 or 1, so we have two cases:
#
# x = 0: w * 0 >= -b = 0 >= -b
# x = 1: w * 1 >= -b = w >= -b
#
# So basically, the bias describes two properties: (a) the default activation of
# the unit, whether it should fire or not on zero input (x = 0). And (b) how big
# should the weights be to excite or inhibit that default activation for a non-
# zero input (x = 1). A positive bias (1) will fire unless there are enough
# negative weights (where the input is 1) to inhibit it, while a negative bias
# (-1) will not fire unless there are enough positive weights to excite it. With
# these two variables, we can describe any single-argument boolean function:
#
#    w    b      y >= -b      f
# =================================
#    0    1    0 * x >= -1    T
#    0   -1    0 * x >=  1    F
#    1   -1    1 * x >=  1    x     F (when x=F) or T (x=T)  # identity
#   -1    0   -1 * x >=  0    !x    F (when x=T) or T (x=F)  # negation
#
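# A quick sanity check of the table above (an illustrative addition, not part of
# the original write-up): a unit "fires" exactly when w * x + b >= 0.
assert all(0 * x + 1 >= 0 for x in (0, 1))               # w=0,  b=1  -> always T
assert not any(0 * x - 1 >= 0 for x in (0, 1))           # w=0,  b=-1 -> always F
assert [int(1 * x - 1 >= 0) for x in (0, 1)] == [0, 1]   # w=1,  b=-1 -> identity
assert [int(-1 * x + 0 >= 0) for x in (0, 1)] == [1, 0]  # w=-1, b=0  -> negation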
# When we add arguments, we can support more boolean operations like AND and OR.
# Let's start with AND: we will need the sum of a subgroup of the weights to
# exceed the negative bias:
#
#   w1   w2    b      y >= -b           f
# ==================================
#    1    1   -2     x1 + x2 >= 2     x1 AND  x2
#   -1    1   -1    -x1 + x2 >= 1    !x1 AND  x2
#    1   -1   -1     x1 - x2 >= 1     x1 AND !x2
#   -1   -1    0    -x1 - x2 >= 0    !x1 AND !x2
#
# It's possible to have other weights, but there's a subgroup of the weights
# where each isn't big enough to exceed -b by itself, but their sum does. All
# of these weights need to be activated (by an input of 1) in order for the sum
# to be greater than -b.
#
# Now for the OR. Because we might have several such subgroups that satisfy the
# relationship above, each subgroup can, by itself, exceed -b. Thus there's an
# OR operator between these subgroups:
#
#   w1   w2   w3    b         y >= -b               f
# ==============================================
#    1    1    2   -2    x1 + x2 + 2*x3 >= 2    ( x1 AND x2) OR ( x3)
#   -1    1   -2   -1   -x1 + x2 - 2*x3 >= 1    (!x1 AND x2) OR (!x3)
#
# We end up with function structures like:
#
#   f = (x1 AND  x2 ...) OR ( x2 AND x3 ...) ...
#   f = (x1 AND !x2 ...) OR (!x2 AND x3 ...) ...
#   f = (x1 AND  x2 ...) OR ( x3 AND x4 ...) ...
#        ^^^^^^^^^^^^^^      ^^^^^^^^^^^^^^
#          subgroup 1          subgroup 2
#
# Where the OR separates all subgroups of the weights that have a sum greater
# than -b, while the AND separates the individual weights within each such
# group.
#
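# A small illustrative check of the two-argument tables above (added example,
# not from the original text): w1 = w2 = 1 with b = -2 computes x1 AND x2, and
# adding a third weight w3 = 2 turns it into (x1 AND x2) OR x3.
assert [int(x1 + x2 - 2 >= 0) for x1 in (0, 1) for x2 in (0, 1)] == [0, 0, 0, 1]
assert [int(x1 + x2 + 2 * x3 - 2 >= 0)
        for x1 in (0, 1) for x2 in (0, 1) for x3 in (0, 1)] == [0, 1, 0, 1, 0, 1, 1, 1]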
# NOTE that each input is always used with the same sign across all subgroups,
# either identity or negation - never both. Our model can only approximate
# linear boolean functions which are ones where each input always contributes
# the same amount towards the same output: T or F. If one argument is more
# likely to make the output true, it must be the case that regardless of all
# other arguments, it will continue to make the output similarly likely to be
# true (or false). It cannot be the case where one of the inputs is sometimes
# used as an identity and other times is negated. For example, these boolean
# functions aren't linear and thus cannot be approximated by this model:
#
# (x1 AND !x2) OR (!x1 AND x2) # exclusive-or (XOR)
# (x1 AND x2) OR (!x1 AND !x2) # Equivalence
#
# This is because it's impossible to choose a weight for the input that's both
# negative and positive. We need to pick one. So either that input makes the
# output bigger, or smaller, or neither - but not conditionally both. NOTE that
# this is a weak definition of linearity in boolean functions, and is possibly
# wrong. I couldn't easily wrap my head around it, so perhaps the wikipedia
# entry[1] on it will help.
#
# [1] https://en.wikipedia.org/wiki/Linearity#Boolean_functions
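#
# Illustrative brute-force check (an added example, not from the original): no
# single linear unit (w1, w2, b) over a small integer grid reproduces XOR on all
# four inputs, in line with the non-linearity argument above.
_xor = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}
assert not any(
    all((w1 * x1 + w2 * x2 + b >= 0) == bool(t) for (x1, x2), t in _xor.items())
    for w1 in range(-3, 4) for w2 in range(-3, 4) for b in range(-3, 4)
)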
import numpy as np
np.random.seed(1)
EPOCHS = 300
ALPHA = 0.01
# Our 1-dimensional input is the sex of the passenger: m (male) or f (female)
# Our output is a number, either 1 (survived) or 0 (didn't survive)
X = ["f", "m", "f", "m", "f", "m", "f", "m", "f", "m", "f", "m", "f", "m"]
T = [ 1, 0, 1, 1, 1, 0, 0, 0, 1, 0, 1, 1, 1, 0 ]
# One of the main issues to take care of is encoding: how do we transform these
# textual categories into numeric inputs that we can estimate. One naive
# approach might be to use a single input feature, say a value of 0 represents a
# male, and 1 represents a female. That wouldn't work, because any kind of
# weight we'll use will end up increasing for females. Thus we have no way to
# find different weights for the different categories. This is not necessarily
# correct for ordinal values like age or fare cost, but it's still common to
# learn these weights independently by grouping multiple numeric values into a
# discrete set of categories ("young", "old" for age; "cheap", "expensive" for
# fare cost). The same limitation obviously applies if we use more values with
# binary encoding.
#
# The best known approach currently is one-hot (or one-of-k) in which each value
# is assigned a completely different input. If we have k values, we'll use
# k input neurons (one for male and the other for female) in which only one
# neuron can be lit (value of 1) for any given training case. If we have
# multiple categories we can concatenate multiple such one-of-k's as needed as
# that maintains the fact that each value is assigned a separate input and weight.
N = len(set(X)) # 1 per unique value
# encode the input data strings into a list of one-of-k's. We want to return a
# list of numbers, where all are set to zero and only one is set to one. That
# should be applied to each feature - one position per value. More features would
# require a concatenation of such one-of-k's
def one_of_k(v):
    x = np.zeros(N)
    idx = ["m", "f"].index(v)
    x[idx] = 1.
    return x
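
# A quick illustration of the encoding (added example, not in the original):
# each value lights exactly one of the two input units.
assert list(one_of_k("m")) == [1.0, 0.0]
assert list(one_of_k("f")) == [0.0, 1.0]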
X = np.array([one_of_k(x) for x in X])
w = np.random.randn(N + 1) * 0.01 # start with small random weights
data = list(zip(X, T))
for i in range(EPOCHS):
    np.random.shuffle(data)
    e = 0

    # we will now also compute the accuracy as a count of how many instances in
    # the data were predicted correctly. This is a more quantitative way of
    # representing the correctness of the prediction as opposed to an arbitrary
    # error function
    accuracy = 0

    # mini-batches
    for x, t in data:
        # predict
        x = np.append(x, 1.)  # add the fixed bias.
        y = sum(w * x)

        # error & derivatives
        e += (y - t) ** 2 / 2
        dy = (y - t)
        dw = dy * x

        # update
        w += ALPHA * -dw  # mini-batch update

        # did we predict correctly? We need to transform the output number
        # into a boolean prediction: whether the label should be turned on
        # or off. For this example, we'll simply see if the prediction is
        # closer to 0 or 1, by first clipping to the [0, 1] range in order
        # to trim values outside of this range, and then rounding.
        accuracy += 1 if round(np.clip(y, 0, 1)) == t else 0

    e /= len(data)
    print("%s: ERROR = %f ; ACCURACY = %d of %d" % (i, e, accuracy, len(data)))

print()
print("W = %s" % w)
| 47.311475 | 80 | 0.64784 | ["MIT"] | avinoamr/ai-neural | 07_classification.py | 8,658 | Python | 7,422 | 0.857242 |
# Copyright 2018-2021 Xanadu Quantum Technologies Inc.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Quantum gradient transforms are strategies for computing the gradient of a quantum
circuit that work by **transforming** the quantum circuit into one or more gradient circuits.
These gradient circuits, once executed and post-processed, return the gradient
of the original circuit.
Examples of quantum gradient transforms include finite-differences and parameter-shift
rules.
This module provides a selection of device-independent, differentiable quantum
gradient transforms. As such, these quantum gradient transforms can be used to
compute the gradients of quantum circuits on both simulators and hardware.
In addition, it also includes an API for writing your own quantum gradient
transforms.
These quantum gradient transforms can be used in two ways:
- Transforming quantum circuits directly
- Registering a quantum gradient strategy for use when performing autodifferentiation
with a :class:`QNode <pennylane.QNode>`.
Overview
--------
Gradient transforms
^^^^^^^^^^^^^^^^^^^
.. autosummary::
:toctree: api
finite_diff
param_shift
param_shift_cv
Custom gradients
^^^^^^^^^^^^^^^^
.. autosummary::
:toctree: api
gradient_transform
Utility functions
^^^^^^^^^^^^^^^^^
.. autosummary::
:toctree: api
finite_diff_coeffs
generate_shifted_tapes
generate_shift_rule
generate_multi_shift_rule
eigvals_to_frequencies
compute_vjp
batch_vjp
vjp
Registering autodifferentiation gradients
-----------------------------------------
All PennyLane QNodes are automatically differentiable, and can be included
seamlessly within an autodiff pipeline. When creating a :class:`QNode <pennylane.QNode>`, the
strategy for determining the optimal differentiation strategy is *automated*,
and takes into account the circuit, device, autodiff framework, and metadata
(such as whether a finite number of shots are used).
.. code-block:: python
dev = qml.device("default.qubit", wires=2, shots=1000)
@qml.qnode(dev, interface="tf")
def circuit(weights):
...
In particular:
- When using a simulator device with exact measurement statistics, backpropagation
is preferred due to performance and memory improvements.
- When using a hardware device, or a simulator with a finite number of shots,
a quantum gradient transform---such as the parameter-shift rule---is preferred.
If you would like to specify a particular quantum gradient transform to use
when differentiating your quantum circuit, this can be passed when
creating the QNode:
.. code-block:: python
@qml.qnode(dev, gradient_fn=qml.gradients.param_shift)
def circuit(weights):
...
When using your preferred autodiff framework to compute the gradient of your
hybrid quantum-classical cost function, the specified gradient transform
for each QNode will be used.
.. note::
A single cost function may include multiple QNodes, each with their
own quantum gradient transform registered.
Transforming QNodes
-------------------
Alternatively, quantum gradient transforms can be applied manually to QNodes.
.. code-block:: python
dev = qml.device("default.qubit", wires=2)
@qml.qnode(dev)
def circuit(weights):
qml.RX(weights[0], wires=0)
qml.RY(weights[1], wires=1)
qml.CNOT(wires=[0, 1])
qml.RX(weights[2], wires=1)
return qml.probs(wires=1)
>>> weights = np.array([0.1, 0.2, 0.3], requires_grad=True)
>>> circuit(weights)
tensor([0.9658079, 0.0341921], requires_grad=True)
>>> qml.gradients.param_shift(circuit)(weights)
tensor([[-0.04673668, -0.09442394, -0.14409127],
[ 0.04673668, 0.09442394, 0.14409127]], requires_grad=True)
Comparing this to autodifferentiation:
>>> qml.grad(circuit)(weights)
array([[-0.04673668, -0.09442394, -0.14409127],
[ 0.04673668, 0.09442394, 0.14409127]])
Quantum gradient transforms can also be applied as decorators to QNodes,
if *only* gradient information is needed. Evaluating the QNode will then
automatically return the gradient:
.. code-block:: python
dev = qml.device("default.qubit", wires=2)
@qml.gradients.param_shift
@qml.qnode(dev)
def decorated_circuit(weights):
qml.RX(weights[0], wires=0)
qml.RY(weights[1], wires=1)
qml.CNOT(wires=[0, 1])
qml.RX(weights[2], wires=1)
return qml.probs(wires=1)
>>> decorated_circuit(weights)
tensor([[-0.04673668, -0.09442394, -0.14409127],
[ 0.04673668, 0.09442394, 0.14409127]], requires_grad=True)
.. note::
If your circuit contains any operations not supported by the gradient
transform, the transform will attempt to automatically decompose the
circuit into only operations that support gradients.
.. note::
If you wish to only return the purely **quantum** component of the
gradient---that is, the gradient of the output with respect to
**gate** arguments, not QNode arguments---pass ``hybrid=False``
when applying the transform:
>>> qml.gradients.param_shift(circuit, hybrid=False)(weights)
Differentiating gradient transforms
-----------------------------------
Gradient transforms are themselves differentiable, allowing higher-order
gradients to be computed:
.. code-block:: python
dev = qml.device("default.qubit", wires=2)
@qml.qnode(dev)
def circuit(weights):
qml.RX(weights[0], wires=0)
qml.RY(weights[1], wires=1)
qml.CNOT(wires=[0, 1])
qml.RX(weights[2], wires=1)
return qml.expval(qml.PauliZ(1))
>>> weights = np.array([0.1, 0.2, 0.3], requires_grad=True)
>>> circuit(weights)
tensor(0.9316158, requires_grad=True)
>>> qml.gradients.param_shift(circuit)(weights) # gradient
array([[-0.09347337, -0.18884787, -0.28818254]])
>>> qml.jacobian(qml.gradients.param_shift(circuit))(weights) # hessian
array([[[-0.9316158 , 0.01894799, 0.0289147 ],
[ 0.01894799, -0.9316158 , 0.05841749],
[ 0.0289147 , 0.05841749, -0.9316158 ]]])
Transforming tapes
------------------
Gradient transforms can be applied to low-level :class:`~.QuantumTape` objects,
a datastructure representing variational quantum algorithms:
.. code-block:: python
weights = np.array([0.1, 0.2, 0.3], requires_grad=True)
with qml.tape.JacobianTape() as tape:
qml.RX(weights[0], wires=0)
qml.RY(weights[1], wires=1)
qml.CNOT(wires=[0, 1])
qml.RX(weights[2], wires=1)
qml.expval(qml.PauliZ(1))
Unlike when transforming a QNode, transforming a tape directly
will perform no implicit quantum device evaluation. Instead, it returns
the processed tapes, and a post-processing function, which together
define the gradient:
>>> gradient_tapes, fn = qml.gradients.param_shift(tape)
>>> gradient_tapes
[<JacobianTape: wires=[0, 1], params=3>,
<JacobianTape: wires=[0, 1], params=3>,
<JacobianTape: wires=[0, 1], params=3>,
<JacobianTape: wires=[0, 1], params=3>,
<JacobianTape: wires=[0, 1], params=3>,
<JacobianTape: wires=[0, 1], params=3>]
This can be useful if the underlying circuits representing the gradient
computation need to be analyzed.
The output tapes can then be evaluated and post-processed to retrieve
the gradient:
>>> dev = qml.device("default.qubit", wires=2)
>>> fn(qml.execute(gradient_tapes, dev, None))
[[-0.09347337 -0.18884787 -0.28818254]]
Note that the post-processing function ``fn`` returned by the
gradient transform is applied to the flat list of results returned
from executing the gradient tapes.
Custom gradient transforms
--------------------------
Using the :class:`~.gradient_transform` decorator, custom gradient transforms
can be created:
.. code-block:: python
@gradient_transform
def my_custom_gradient(tape, **kwargs):
...
return gradient_tapes, processing_fn
Once created, a custom gradient transform can be applied directly
to QNodes, or registered as the quantum gradient transform to use
during autodifferentiation.
For more details, please see the :class:`~.gradient_transform`
documentation.
"""
import pennylane as qml
from . import finite_difference
from . import parameter_shift
from . import parameter_shift_cv
from .gradient_transform import gradient_transform
from .finite_difference import finite_diff, finite_diff_coeffs, generate_shifted_tapes
from .parameter_shift import param_shift
from .parameter_shift_cv import param_shift_cv
from .vjp import compute_vjp, batch_vjp, vjp
from .hamiltonian_grad import hamiltonian_grad
from .general_shift_rules import (
    eigvals_to_frequencies,
    generate_shift_rule,
    generate_multi_shift_rule,
)
| 31.838926 | 94 | 0.697407 | ["Apache-2.0"] | AkashNarayanan/pennylane | pennylane/gradients/__init__.py | 9,488 | Python | 8,601 | 0.906513 |
import dynamo as dyn
import numpy as np
import scipy.io
from scipy import optimize
# def VecFnc(
# input,
# n=4,
# a1=10.0,
# a2=10.0,
# Kdxx=4,
# Kdyx=4,
# Kdyy=4,
# Kdxy=4,
# b1=10.0,
# b2=10.0,
# k1=1.0,
# k2=1.0,
# c1=0,
# ):
# x, y = input
# dxdt = (
# c1
# + a1 * (x ** n) / (Kdxx ** n + (x ** n))
# + (b1 * (Kdyx ** n)) / (Kdyx ** n + (y ** n))
# - (x * k1)
# )
# dydt = (
# c1
# + a2 * (y ** n) / (Kdyy ** n + (y ** n))
# + (b2 * (Kdxy ** n)) / (Kdxy ** n + (x ** n))
# - (y * k2)
# )
#
# return [dxdt, dydt]
#
#
# def test_Bhattacharya(adata=None):
# """ Test the test_Bhattacharya method for mapping quasi-potential landscape.
# The original system (VecFnc) from the Bhattacharya paper and the reconstructed vector field function in the neuron
# datasets are used for testing.
#
# Reference: A deterministic map of Waddington’s epigenetic landscape for cell fate specification
# Sudin Bhattacharya, Qiang Zhang and Melvin E. Andersen
#
# Returns
# -------
# a matplotlib plot
# """
#
# # simulation model from the original study
# (
# attractors_num_X_Y,
# sepx_old_new_pathNum,
# numPaths_att,
# num_attractors,
# numPaths,
# numTimeSteps,
# pot_path,
# path_tag,
# attractors_pot,
# x_path,
# y_path,
# ) = dyn.tl.path_integral(
# VecFnc,
# x_lim=[0, 40],
# y_lim=[0, 40],
# xyGridSpacing=2,
# dt=1e-2,
# tol=1e-2,
# numTimeSteps=1400,
# )
# Xgrid, Ygrid, Zgrid = dyn.tl.alignment(
# numPaths, numTimeSteps, pot_path, path_tag, attractors_pot, x_path, y_path
# )
#
# dyn.pl.show_landscape(adata, Xgrid, Ygrid, Zgrid) ### update
#
# # neuron model
# VecFld = scipy.io.loadmat(
# "/Volumes/xqiu/proj/dynamo/data/VecFld.mat"
# ) # file is downloadable here: https://www.dropbox.com/s/02xwwfo5v33tj70/VecFld.mat?dl=1
#
# def vector_field_function(x, VecFld=VecFld):
# """Learn an analytical function of vector field from sparse single cell samples on the entire space robustly.
#
# Reference: Regularized vector field learning with sparse approximation for mismatch removal, Ma, Jiayi, etc. al, Pattern Recognition
# """
#
# x = np.array(x).reshape((1, -1))
# if np.size(x) == 1:
# x = x[None, :]
# K = dyn.tl.con_K(x, VecFld["X"], VecFld["beta"])
# K = K.dot(VecFld["C"])
# return K.T
#
# (
# attractors_num_X_Y,
# sepx_old_new_pathNum,
# numPaths_att,
# num_attractors,
# numPaths,
# numTimeSteps,
# pot_path,
# path_tag,
# attractors_pot,
# x_path,
# y_path,
# ) = dyn.tl.path_integral(
# vector_field_function,
# x_lim=[-30, 30],
# y_lim=[-30, 30],
# xyGridSpacing=0.5,
# dt=1e-2,
# tol=1e-2,
# numTimeSteps=2000,
# )
# Xgrid, Ygrid, Zgrid = dyn.tl.alignment(
# numPaths, numTimeSteps, pot_path, path_tag, attractors_pot, x_path, y_path
# )
#
# dyn.pl.show_landscape(Xgrid, Ygrid, Zgrid)
#
#
# # test Wang's LAP method
# def F(X, a_s=1.5, n=4, S=0.5, b=1, k=1):
# x1, x2 = X
#
# F_1 = (
# (a_s * (x1 ** n) / ((S ** n) + (x1 ** n)))
# + (b * (S ** n) / ((S ** n) + (x2 ** n)))
# - (k * x1)
# )
# F_2 = (
# (a_s * (x2 ** n) / ((S ** n) + (x2 ** n)))
# + (b * (S ** n) / ((S ** n) + (x1 ** n)))
# - (k * x2)
# )
#
# return np.r_[F_1, F_2]
#
#
# def test_Wang_LAP():
# """Test the least action path method from Jin Wang and colleagues (http://www.pnas.org/cgi/doi/10.1073/pnas.1017017108)
#
# Returns
# -------
#
# """
# x1_end = 1
# x2_end = 0
# x2_init = 1.5
# x1_init = 1.5
# N = 20
#
# x1_input = np.arange(
# x1_init, x1_end + (x1_end - x1_init) / N, (x1_end - x1_init) / N
# )
# x2_input = np.arange(
# x2_init, x2_end + (x2_end - x2_init) / N, (x2_end - x2_init) / N
# )
# X_input = np.vstack((x1_input, x2_input))
#
# dyn.tl.Wang_action(X_input, F=F, D=0.1, N=20, dim=2, lamada_=1)
# res = optimize.basinhopping(
# dyn.tl.Wang_action, x0=X_input, minimizer_kwargs={"args": (2, F, 0.1, 20, 1)}
# )
# res
#
#
# def two_gene_model(X, a=1, b=1, k=1, S=0.5, n=4):
# """Two gene network motif used in `From understanding the development landscape of the canonical fate-switch pair to
# constructing a dynamic landscape for two-step neural differentiation`, Xiaojie Qiu, Shanshan Ding, Tieliu Shi, Plos one
# 2011.
#
# Parameters
# ----------
# X: `numpy.array` (dimension: 2 x 1)
# Concentration of two genes.
# a: `float`
# Parameter a in the two gene model.
# b: `float`
# Parameter b in the two gene model.
# k: `float`
# Parameter k in the two gene model.
# S: `float`
# Parameter S in the two gene model.
# n: `float`
# Parameter n in the two gene model.
#
# Returns
# -------
# F: `numpy.ndarray`
# matrix (1 x 2) of velocity values at X.
# """
#
# x1, x2 = X[0], X[1]
# F1 = (
# (a * (x1 ** n) / ((S ** n) + (x1 ** n)))
# + (b * (S ** n) / ((S ** n) + (x2 ** n)))
# - (k * x1)
# )
# F2 = (
# (a * (x2 ** n) / ((S ** n) + (x2 ** n)))
# + (b * (S ** n) / ((S ** n) + (x1 ** n)))
# - (k * x2)
# )
#
# F = np.array([[F1], [F2]]).T
# return F
#
#
# def test_Ao_LAP():
# import sympy as sp
#
# a = 1
# b = 1
# k = 1
# S = 0.5
# n = 4
# D = 0.1 * np.eye(2)
#
# N = 50
# space = 5 / N
#
# x1 = sp.Symbol("x1")
# x2 = sp.Symbol("x2")
# X = sp.Matrix([x1, x2])
# F1 = (
# (a * (x1 ** n) / ((S ** n) + (x1 ** n)))
# + (b * (S ** n) / ((S ** n) + (x2 ** n)))
# - (k * x1)
# )
# F2 = (
# (a * (x2 ** n) / ((S ** n) + (x2 ** n)))
# + (b * (S ** n) / ((S ** n) + (x1 ** n)))
# - (k * x2)
# )
# F = sp.Matrix([F1, F2])
# J = F.jacobian(X)
# U = np.zeros((N, N))
#
# for i in range(N):
# for j in range(N):
# X_s = np.array([i * space, j * space])
# # F = J.subs(X, X_s)
# F = J.subs(x1, X_s[0])
# F = np.array(F.subs(x2, X_s[1]), dtype=float)
# Q, _ = dyn.tl.solveQ(D, F)
# H = np.linalg.inv(D + Q).dot(F)
# U[i, j] = -0.5 * X_s @ H @ X_s
# test calculating jacobian below:
# import dynamo as dyn
# import numpy as np
#
# adata = dyn.sim.Simulator(motif="twogenes")
# adata.obsm['X_umap'], adata.obsm['velocity_umap'] = adata.X, adata.layers['velocity']
# dyn.vf.VectorField(adata, basis='umap')
#
# # plot potential and topography
# dyn.ext.ddhodge(adata, basis='umap')
# dyn.pl.topography(adata, color='umap_ddhodge_potential')
#
# adata.var['use_for_dynamics'] = True
# a = np.zeros((2, 2), int)
# np.fill_diagonal(a, 1)
#
# adata.uns['PCs'] = a
# dyn.vf.jacobian(adata, basis='umap', regulators=['Pu.1', 'Gata.1'],
# effectors=['Pu.1', 'Gata.1'], store_in_adata=True)
#
# # plot the recovered jacobian
# dyn.pl.jacobian(adata)
#
# #plot jacobian kinetics and heatmap
# dyn.pl.jacobian_kinetics(adata, basis='umap', tkey='umap_ddhodge_potential')
# dyn.pl.jacobian_heatmap(adata, cell_idx=[0], basis='umap')
#
# def jacobian(x1, x2):
# J = np.array([[0.25 * x1**3 / (0.0625 + x1**4)**2 - 1, -0.25 * x2**3 / (0.0625 + x2**4)**2],
# [- 0.25 * x1**3 / (0.0625 + x1**4)**2, 0.25 * x2**3 / (0.0625 + x2**4)**2 - 1]])
# return J
# # plot the true jacobian
# J_dict = adata.uns['jacobian_umap'].copy()
#
# J = np.zeros_like(J_dict['jacobian'])
# for ind, i in enumerate(adata.X):
# J[:, :, ind] = dyn.sim.two_genes_motif_jacobian(i[0], i[1])
#
# J_dict['jacobian'] = J
# adata.uns['jacobian_true'] = J_dict
# adata.obsm['X_true'] = adata.obsm['X_umap']
#
# dyn.pl.jacobian(adata, basis='true')
#
| 27.744966 | 136 | 0.497823 | ["BSD-3-Clause"] | aanaseer/dynamo-release | tests/tests.py | 8,270 | Python | 7,589 | 0.917876 |
'''
The Fibonacci sequence is defined by the recurrence relation:
Fn = Fn−1 + Fn−2, where F1 = 1 and F2 = 1.
Hence the first 12 terms will be:
F1 = 1
F2 = 1
F3 = 2
F4 = 3
F5 = 5
F6 = 8
F7 = 13
F8 = 21
F9 = 34
F10 = 55
F11 = 89
F12 = 144
The 12th term, F12, is the first term to contain three digits.
What is the index of the first term in the Fibonacci sequence to contain 1000 digits?
'''
# Initializing values
a = 1
b = 2
c = a + b
ind = 4
# Stopping counting when the first term in the Fibonacci sequence hits 1000 digits
while len(str(c)) < 1000:
    a = b
    b = c
    c = a + b
    ind += 1
print(ind)
| 16.425 | 85 | 0.601218 | ["MIT"] | malienko/projecteuler_python | problem25.py | 661 | Python | 541 | 0.82344 |
"""
Django settings for the Sphinx documentation builder.
All configuration is imported from :mod:`backend.settings` except it sets :attr:`USE_I18N` to ``False`` to make sure
the documentation is not partially translated.
For more information on this file, see :doc:`topics/settings`.
For the full list of settings and their values, see :doc:`ref/settings`.
"""
# pylint: disable=wildcard-import
# pylint: disable=unused-wildcard-import
from .settings import *
#: A boolean that specifies whether Django’s translation system should be enabled
#: (see :setting:`django:USE_I18N` and :doc:`topics/i18n/index`)
USE_I18N = False
# Remove cacheops during documentation build because it changes related names
if "cacheops" in INSTALLED_APPS:
    INSTALLED_APPS.remove("cacheops")
| 38.85 | 116 | 0.773488 | ["Apache-2.0"] | Integreat/integreat-cms | src/backend/sphinx_settings.py | 779 | Python | 646 | 0.831403 |
##
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##
| 42.846154 | 75 | 0.746858 | ["Apache-2.0"] | AnthonyTruchet/cylon | python/pycylon/pycylon/common/__init__.py | 557 | Python | 517 | 0.928187 |
"""
Projection class for the Sunyaev-Zeldovich effect. Requires SZpack (version 1.1.1),
which is included in SZpack.v1.1.1 and will be automatically installed.
Website for the SZpack library: http://www.chluba.de/SZpack/
For details on the computations involved please refer to the following references:
Chluba, Nagai, Sazonov, Nelson, MNRAS, 2012, arXiv:1205.5778
Chluba, Switzer, Nagai, Nelson, MNRAS, 2012, arXiv:1211.3206
Many thanks to John ZuHone, who wrote the yt part of this model.
"""
import numpy as np
from pymsz.SZpacklib import SZpack
# I0 = (2 * (kboltz * Tcmb)**3 / ((hcgs * clight)**2) / units.sr).in_units("MJy/steradian")
class SZpack_model(object):
r""" Theoretical calculation of y and T_sz -map for the thermal SZ effect.
model = TH_model(model_file, npixel, axis)
Parameters
----------
simudata : the simulation data from load_data
freqs : The frequencies (in GHz) at which to compute the SZ spectral distortion. array_like
npixel : number of pixels for your image, int.
Assume that x-y have the same number of pixels
axis : can be 'x', 'y', 'z', or a list of degrees [alpha, beta, gamma],
which will rotate the data points by $\alpha$ around the x-axis,
$\beta$ around the y-axis, and $\gamma$ around the z-axis
neighbours: this parameter only works with simulation data (not yt data).
If this is set, it will force the SPH particles smoothed into nearby N
neighbours, HSML from the simulation will be ignored.
If no HSML provided in the simulation, neighbours = 27
AR : angular resolution in arcsec.
Default : None, which gives npixel = 2 * cluster radius
and ignores the cluster's redshift.
Otherwise, cluster's redshift with AR decides how large the cluster looks.
redshift : The redshift where the cluster is at.
Default : None, we will look it from simulation data.
If redshift = 0, it will be automatically put into 0.02,
unless AR is set to None.
high_order : boolean, optional
Should we calculate high-order moments of velocity and temperature?
Returns
-------
Theoretical projected y-map in a given direction. 2D mesh data right now.
See also
--------
SZ_models for the mock SZ signal at different frequencies.
Notes
-----
Examples
--------
>>> freqs = [90., 180., 240.]
>>> szprj = SZProjection(ds, freqs, high_order=True)
"""
def __init__(self, simudata, freqs, npixel=500, neighbours=None, axis='z', AR=None,
redshift=None):
self.npl = npixel
self.ngb = neighbours
self.ax = axis
self.ar = AR
self.red = redshift
self.pxs = 0
self.ydata = np.array([])
self.freqs = np.asarray(freqs)
if simudata.data_type == "snapshot":
self._cal_ss(simudata)
elif simudata.data_type == "yt_data":
self._cal_yt(simudata)
else:
            raise ValueError("Do not accept this data type %s. "
                             "Please try to use load_data to get the data" % simudata.data_type)
# def _cal_ss(self, simd):
# Kpc = 3.0856775809623245e+21 # cm
# simd.prep_ss_SZ()
#
# def _cal_yt(self, simd):
# from yt.config import ytcfg
# from yt.utilities.physical_constants import sigma_thompson, clight, mh
# # kboltz, Tcmb, hcgs,
# from yt.funcs import fix_axis, get_pbar
# from yt.visualization.volume_rendering.off_axis_projection import \
# off_axis_projection
# from yt.utilities.parallel_tools.parallel_analysis_interface import \
# communication_system, parallel_root_only
# # from yt import units
# from yt.utilities.on_demand_imports import _astropy
#
# def generate_beta_par(L):
# def _beta_par(field, data):
# vpar = data["density"] * (data["velocity_x"] * L[0] +
# data["velocity_y"] * L[1] +
# data["velocity_z"] * L[2])
# return vpar / clight
# return _beta_par
# Ptype = simd.prep_yt_SZ()
#
# # self.ds = ds
# # self.num_freqs = len(freqs)
# # self.high_order = high_order
# # self.freqs = ds.arr(freqs, "GHz")
# # self.mueinv = 1. / mue
# # self.xinit = hcgs * self.freqs.in_units("Hz") / (kboltz * Tcmb)
# # self.freq_fields = ["%d_GHz" % (int(freq)) for freq in freqs]
# # self.data = {}
# #
# # self.display_names = {}
# # self.display_names["TeSZ"] = r"$\mathrm{T_e}$"
# # self.display_names["Tau"] = r"$\mathrm{\tau}$"
# #
# # for f, field in zip(self.freqs, self.freq_fields):
# # self.display_names[field] = r"$\mathrm{\Delta{I}_{%d\ GHz}}$" % int(f)
# #
# # def on_axis(self, axis, center="c", width=(1, "unitary"), nx=800, source=None):
# # r""" Make an on-axis projection of the SZ signal.
# #
# # Parameters
# # ----------
# # axis : integer or string
# # The axis of the simulation domain along which to make the SZprojection.
# # center : A sequence of floats, a string, or a tuple.
# # The coordinate of the center of the image. If set to 'c', 'center' or
# # left blank, the plot is centered on the middle of the domain. If set to
# # 'max' or 'm', the center will be located at the maximum of the
# # ('gas', 'density') field. Centering on the max or min of a specific
# # field is supported by providing a tuple such as ("min","temperature") or
# # ("max","dark_matter_density"). Units can be specified by passing in *center*
# # as a tuple containing a coordinate and string unit name or by passing
# # in a YTArray. If a list or unitless array is supplied, code units are
# # assumed.
# # width : tuple or a float.
# # Width can have four different formats to support windows with variable
# # x and y widths. They are:
# #
# # ================================== =======================
# # format example
# # ================================== =======================
# # (float, string) (10,'kpc')
# # ((float, string), (float, string)) ((10,'kpc'),(15,'kpc'))
# # float 0.2
# # (float, float) (0.2, 0.3)
# # ================================== =======================
# #
# # For example, (10, 'kpc') requests a plot window that is 10 kiloparsecs
# # wide in the x and y directions, ((10,'kpc'),(15,'kpc')) requests a
# # window that is 10 kiloparsecs wide along the x axis and 15
# # kiloparsecs wide along the y axis. In the other two examples, code
# # units are assumed, for example (0.2, 0.3) requests a plot that has an
# # x width of 0.2 and a y width of 0.3 in code units. If units are
# # provided the resulting plot axis labels will use the supplied units.
# # nx : integer, optional
# # The dimensions on a side of the projection image.
# # source : yt.data_objects.data_containers.YTSelectionContainer, optional
# # If specified, this will be the data source used for selecting regions to project.
# #
# # Examples
# # --------
# # >>> szprj.on_axis("y", center="max", width=(1.0, "Mpc"), source=my_sphere)
# # """
#
# axis = fix_axis(axis, self.ds)
# ctr, dctr = self.ds.coordinates.sanitize_center(center, axis)
# width = self.ds.coordinates.sanitize_width(axis, width, None)
#
# L = np.zeros(3)
# L[axis] = 1.0
#
# beta_par = generate_beta_par(L)
# self.ds.add_field(("gas", "beta_par"), function=beta_par, units="g/cm**3")
# setup_sunyaev_zeldovich_fields(self.ds)
# proj = self.ds.proj("density", axis, center=ctr, data_source=source)
# frb = proj.to_frb(width[0], nx, height=width[1])
# dens = frb["density"]
# Te = frb["t_sz"] / dens
# bpar = frb["beta_par"] / dens
# omega1 = frb["t_squared"] / dens / (Te * Te) - 1.
# bperp2 = np.zeros((nx, nx))
# sigma1 = np.zeros((nx, nx))
# kappa1 = np.zeros((nx, nx))
# if self.high_order:
# bperp2 = frb["beta_perp_squared"] / dens
# sigma1 = frb["t_beta_par"] / dens / Te - bpar
# kappa1 = frb["beta_par_squared"] / dens - bpar * bpar
# tau = sigma_thompson * dens * self.mueinv / mh
#
# nx, ny = frb.buff_size
# self.bounds = frb.bounds
# self.dx = (frb.bounds[1] - frb.bounds[0]) / nx
# self.dy = (frb.bounds[3] - frb.bounds[2]) / ny
# self.nx = nx
#
# self._compute_intensity(np.array(tau), np.array(Te), np.array(bpar),
# np.array(omega1), np.array(sigma1),
# np.array(kappa1), np.array(bperp2))
#
# self.ds.field_info.pop(("gas", "beta_par"))
#
# def off_axis(self, L, center="c", width=(1.0, "unitary"), depth=(1.0, "unitary"),
# nx=800, nz=800, north_vector=None, no_ghost=False, source=None):
# r""" Make an off-axis projection of the SZ signal.
#
# Parameters
# ----------
# L : array_like
# The normal vector of the projection.
# center : A sequence of floats, a string, or a tuple.
# The coordinate of the center of the image. If set to 'c', 'center' or
# left blank, the plot is centered on the middle of the domain. If set to
# 'max' or 'm', the center will be located at the maximum of the
# ('gas', 'density') field. Centering on the max or min of a specific
# field is supported by providing a tuple such as ("min","temperature") or
# ("max","dark_matter_density"). Units can be specified by passing in *center*
# as a tuple containing a coordinate and string unit name or by passing
# in a YTArray. If a list or unitless array is supplied, code units are
# assumed.
# width : tuple or a float.
# Width can have four different formats to support windows with variable
# x and y widths. They are:
#
# ================================== =======================
# format example
# ================================== =======================
# (float, string) (10,'kpc')
# ((float, string), (float, string)) ((10,'kpc'),(15,'kpc'))
# float 0.2
# (float, float) (0.2, 0.3)
# ================================== =======================
#
# For example, (10, 'kpc') requests a plot window that is 10 kiloparsecs
# wide in the x and y directions, ((10,'kpc'),(15,'kpc')) requests a
# window that is 10 kiloparsecs wide along the x axis and 15
# kiloparsecs wide along the y axis. In the other two examples, code
# units are assumed, for example (0.2, 0.3) requests a plot that has an
# x width of 0.2 and a y width of 0.3 in code units. If units are
# provided the resulting plot axis labels will use the supplied units.
# depth : A tuple or a float
# A tuple containing the depth to project through and the string
# key of the unit: (width, 'unit'). If set to a float, code units
# are assumed
# nx : integer, optional
# The dimensions on a side of the projection image.
# nz : integer, optional
# Deprecated, this is still in the function signature for API
# compatibility
# north_vector : a sequence of floats
# A vector defining the 'up' direction in the plot. This
# option sets the orientation of the slicing plane. If not
# set, an arbitrary grid-aligned north-vector is chosen.
# no_ghost: bool, optional
# Optimization option for off-axis cases. If True, homogenized bricks will
# extrapolate out from grid instead of interpolating from
# ghost zones that have to first be calculated. This can
# lead to large speed improvements, but at a loss of
# accuracy/smoothness in resulting image. The effects are
# less notable when the transfer function is smooth and
# broad. Default: True
# source : yt.data_objects.data_containers.YTSelectionContainer, optional
# If specified, this will be the data source used for selecting regions
# to project.
#
# Examples
# --------
# >>> L = np.array([0.5, 1.0, 0.75])
# >>> szprj.off_axis(L, center="c", width=(2.0, "Mpc"))
# """
# wd = self.ds.coordinates.sanitize_width(L, width, depth)
# w = tuple(el.in_units('code_length').v for el in wd)
# ctr, dctr = self.ds.coordinates.sanitize_center(center, L)
# res = (nx, nx)
#
# if source is None:
# source = self.ds
#
# beta_par = generate_beta_par(L)
# self.ds.add_field(("gas", "beta_par"), function=beta_par, units="g/cm**3")
# setup_sunyaev_zeldovich_fields(self.ds)
#
# dens = off_axis_projection(source, ctr, L, w, res, "density",
# north_vector=north_vector, no_ghost=no_ghost)
# Te = off_axis_projection(source, ctr, L, w, res, "t_sz",
# north_vector=north_vector, no_ghost=no_ghost) / dens
# bpar = off_axis_projection(source, ctr, L, w, res, "beta_par",
# north_vector=north_vector, no_ghost=no_ghost) / dens
# omega1 = off_axis_projection(source, ctr, L, w, res, "t_squared",
# north_vector=north_vector, no_ghost=no_ghost) / dens
# omega1 = omega1 / (Te * Te) - 1.
# if self.high_order:
# bperp2 = off_axis_projection(source, ctr, L, w, res, "beta_perp_squared",
# north_vector=north_vector, no_ghost=no_ghost) / dens
# sigma1 = off_axis_projection(source, ctr, L, w, res, "t_beta_par",
# north_vector=north_vector, no_ghost=no_ghost) / dens
# sigma1 = sigma1 / Te - bpar
# kappa1 = off_axis_projection(source, ctr, L, w, res, "beta_par_squared",
# north_vector=north_vector, no_ghost=no_ghost) / dens
# kappa1 -= bpar
# else:
# bperp2 = np.zeros((nx, nx))
# sigma1 = np.zeros((nx, nx))
# kappa1 = np.zeros((nx, nx))
# tau = sigma_thompson * dens * self.mueinv / mh
#
# self.bounds = (-0.5 * wd[0], 0.5 * wd[0], -0.5 * wd[1], 0.5 * wd[1])
# self.dx = wd[0] / nx
# self.dy = wd[1] / nx
# self.nx = nx
#
# self._compute_intensity(np.array(tau), np.array(Te), np.array(bpar),
# np.array(omega1), np.array(sigma1),
# np.array(kappa1), np.array(bperp2))
#
# self.ds.field_info.pop(("gas", "beta_par"))
#
# def _compute_intensity(self, tau, Te, bpar, omega1, sigma1, kappa1, bperp2):
#
# # Bad hack, but we get NaNs if we don't do something like this
# small_beta = np.abs(bpar) < 1.0e-20
# bpar[small_beta] = 1.0e-20
#
# comm = communication_system.communicators[-1]
#
# nx, ny = self.nx, self.nx
# signal = np.zeros((self.num_freqs, nx, ny))
# xo = np.zeros(self.num_freqs)
#
# k = int(0)
#
# start_i = comm.rank * nx // comm.size
# end_i = (comm.rank + 1) * nx // comm.size
#
# pbar = get_pbar("Computing SZ signal.", nx * nx)
#
# for i in range(start_i, end_i):
# for j in range(ny):
# xo[:] = self.xinit[:]
# SZpack.compute_combo_means(xo, tau[i, j], Te[i, j],
# bpar[i, j], omega1[i, j],
# sigma1[i, j], kappa1[i, j], bperp2[i, j])
# signal[:, i, j] = xo[:]
# pbar.update(k)
# k += 1
#
# signal = comm.mpi_allreduce(signal)
#
# pbar.finish()
#
# for i, field in enumerate(self.freq_fields):
# self.data[field] = I0 * self.xinit[i]**3 * signal[i, :, :]
# self.data["Tau"] = self.ds.arr(tau, "dimensionless")
# self.data["TeSZ"] = self.ds.arr(Te, "keV")
#
# def write_fits(self, filename, sky_scale=None, sky_center=None, clobber=True):
# r""" Export images to a FITS file. Writes the SZ distortion in all
# specified frequencies as well as the mass-weighted temperature and the
# optical depth. Distance units are in kpc, unless *sky_center*
# and *scale* are specified.
#
# Parameters
# ----------
# filename : string
# The name of the FITS file to be written.
# sky_scale : tuple
# Conversion between an angle unit and a length unit, if sky
# coordinates are desired, e.g. (1.0, "arcsec/kpc")
# sky_center : tuple, optional
# The (RA, Dec) coordinate in degrees of the central pixel. Must
# be specified with *sky_scale*.
# clobber : boolean, optional
# If the file already exists, do we overwrite?
#
# Examples
# --------
# >>> # This example just writes out a FITS file with kpc coords
# >>> szprj.write_fits("SZbullet.fits", clobber=False)
# >>> # This example uses sky coords
# >>> sky_scale = (1., "arcsec/kpc") # One arcsec per kpc
# >>> sky_center = (30., 45., "deg")
# >>> szprj.write_fits("SZbullet.fits", sky_center=sky_center, sky_scale=sky_scale)
# """
# from yt.visualization.fits_image import FITSImageData
#
# dx = self.dx.in_units("kpc")
# dy = dx
#
# w = _astropy.pywcs.WCS(naxis=2)
# w.wcs.crpix = [0.5 * (self.nx + 1)] * 2
# w.wcs.cdelt = [dx.v, dy.v]
# w.wcs.crval = [0.0, 0.0]
# w.wcs.cunit = ["kpc"] * 2
# w.wcs.ctype = ["LINEAR"] * 2
#
# fib = FITSImageData(self.data, fields=self.data.keys(), wcs=w)
# if sky_scale is not None and sky_center is not None:
# fib.create_sky_wcs(sky_center, sky_scale)
# fib.writeto(filename, clobber=clobber)
#
# @parallel_root_only
# def write_png(self, filename_prefix, cmap_name=None,
# axes_units="kpc", log_fields=None):
# r""" Export images to PNG files. Writes the SZ distortion in all
# specified frequencies as well as the mass-weighted temperature and the
# optical depth. Distance units are in kpc.
#
# Parameters
# ----------
# filename_prefix : string
# The prefix of the image filenames.
#
# Examples
# --------
# >>> szprj.write_png("SZsloshing")
# """
# if cmap_name is None:
# cmap_name = ytcfg.get("yt", "default_colormap")
#
# import matplotlib
# matplotlib.use('Agg')
# import matplotlib.pyplot as plt
# if log_fields is None:
# log_fields = {}
# ticks_font = matplotlib.font_manager.FontProperties(family='serif', size=16)
# extent = tuple([bound.in_units(axes_units).value for bound in self.bounds])
# for field, image in self.items():
# data = image.copy()
# vmin, vmax = image.min(), image.max()
# negative = False
# crossover = False
# if vmin < 0 and vmax < 0:
# data *= -1
# negative = True
# if field in log_fields:
# log_field = log_fields[field]
# else:
# log_field = True
# if log_field:
# formatter = matplotlib.ticker.LogFormatterMathtext()
# norm = matplotlib.colors.LogNorm()
# if vmin < 0 and vmax > 0:
# crossover = True
# linthresh = min(vmax, -vmin) / 100.
# norm = matplotlib.colors.SymLogNorm(linthresh,
# vmin=vmin, vmax=vmax)
# else:
# norm = None
# formatter = None
# filename = filename_prefix + "_" + field + ".png"
# cbar_label = self.display_names[field]
# units = self.data[field].units.latex_representation()
# if units is not None and units != "":
# cbar_label += r'$\ \ (' + units + r')$'
# fig = plt.figure(figsize=(10.0, 8.0))
# ax = fig.add_subplot(111)
# cax = ax.imshow(data.d, norm=norm, extent=extent, cmap=cmap_name, origin="lower")
# for label in ax.get_xticklabels():
# label.set_fontproperties(ticks_font)
# for label in ax.get_yticklabels():
# label.set_fontproperties(ticks_font)
# ax.set_xlabel(r"$\mathrm{x\ (%s)}$" % axes_units, fontsize=16)
# ax.set_ylabel(r"$\mathrm{y\ (%s)}$" % axes_units, fontsize=16)
# cbar = fig.colorbar(cax, format=formatter)
# cbar.ax.set_ylabel(cbar_label, fontsize=16)
# if negative:
# cbar.ax.set_yticklabels(["-" + label.get_text()
# for label in cbar.ax.get_yticklabels()])
# if crossover:
# yticks = list(-10**np.arange(np.floor(np.log10(-vmin)),
# np.rint(np.log10(linthresh)) - 1, -1)) + [0] + \
# list(10**np.arange(np.rint(np.log10(linthresh)),
# np.ceil(np.log10(vmax)) + 1))
# cbar.set_ticks(yticks)
# for label in cbar.ax.get_yticklabels():
# label.set_fontproperties(ticks_font)
# fig.tight_layout()
# plt.savefig(filename)
#
# @parallel_root_only
# def write_hdf5(self, filename):
# r"""Export the set of S-Z fields to a set of HDF5 datasets.
#
# Parameters
# ----------
# filename : string
# This file will be opened in "write" mode.
#
# Examples
# --------
# >>> szprj.write_hdf5("SZsloshing.h5")
# """
# for field, data in self.items():
# data.write_hdf5(filename, dataset_name=field)
#
# def keys(self):
# return self.data.keys()
#
# def items(self):
# return self.data.items()
#
# def values(self):
# return self.data.values()
#
# def has_key(self, key):
# return key in self.data.keys()
#
# def __getitem__(self, key):
# return self.data[key]
#
# @property
# def shape(self):
# return (self.nx, self.nx)
| 46.893617 | 101 | 0.520995 | [
"MIT"
] | weiguangcui/pymsz | pymsz/SZpack_models.py | 24,244 | Python | Theoretical calculation of y and T_sz maps for the thermal SZ effect.
 model = SZpack_model(simudata, freqs, npixel, axis)
Parameters
----------
simudata : the simulation data from load_data
freqs : The frequencies (in GHz) at which to compute the SZ spectral distortion. array_like
npixel : number of pixels for your image, int.
Assume that x-y have the same number of pixels
axis : can be 'x', 'y', 'z', or a list of degrees [alpha, beta, gamma],
which will rotate the data points by $\alpha$ around the x-axis,
$\beta$ around the y-axis, and $\gamma$ around the z-axis
neighbours: this parameter only works with simulation data (not yt data).
If this is set, it will force the SPH particles smoothed into nearby N
neighbours, HSML from the simulation will be ignored.
If no HSML provided in the simulation, neighbours = 27
AR : angular resolution in arcsec.
Default : None, which gives npixel = 2 * cluster radius
and ignores the cluster's redshift.
Otherwise, cluster's redshift with AR decides how large the cluster looks.
redshift : The redshift where the cluster is at.
Default : None, we will look it from simulation data.
If redshift = 0, it will be automatically put into 0.02,
unless AR is set to None.
high_order : boolean, optional
Should we calculate high-order moments of velocity and temperature?
Returns
-------
Theoretical projected y-map in a given direction. 2D mesh data right now.
See also
--------
SZ_models for the mock SZ signal at different frequencies.
Notes
-----
Examples
--------
>>> freqs = [90., 180., 240.]
>>> szprj = SZProjection(ds, freqs, high_order=True)
Projection class for the Sunyaev-Zeldovich effect. Requires SZpack (version 1.1.1),
which is included in SZpack.v1.1.1 and will be automatically installed.
Website for the SZpack library: http://www.chluba.de/SZpack/
For details on the computations involved please refer to the following references:
Chluba, Nagai, Sazonov, Nelson, MNRAS, 2012, arXiv:1205.5778
Chluba, Switzer, Nagai, Nelson, MNRAS, 2012, arXiv:1211.3206
Many thanks to John ZuHone, who wrote the yt part of this model.
I0 = (2 * (kboltz * Tcmb)**3 / ((hcgs * clight)**2) / units.sr).in_units("MJy/steradian") def _cal_ss(self, simd): Kpc = 3.0856775809623245e+21 cm simd.prep_ss_SZ() def _cal_yt(self, simd): from yt.config import ytcfg from yt.utilities.physical_constants import sigma_thompson, clight, mh kboltz, Tcmb, hcgs, from yt.funcs import fix_axis, get_pbar from yt.visualization.volume_rendering.off_axis_projection import \ off_axis_projection from yt.utilities.parallel_tools.parallel_analysis_interface import \ communication_system, parallel_root_only from yt import units from yt.utilities.on_demand_imports import _astropy def generate_beta_par(L): def _beta_par(field, data): vpar = data["density"] * (data["velocity_x"] * L[0] + data["velocity_y"] * L[1] + data["velocity_z"] * L[2]) return vpar / clight return _beta_par Ptype = simd.prep_yt_SZ() self.ds = ds self.num_freqs = len(freqs) self.high_order = high_order self.freqs = ds.arr(freqs, "GHz") self.mueinv = 1. / mue self.xinit = hcgs * self.freqs.in_units("Hz") / (kboltz * Tcmb) self.freq_fields = ["%d_GHz" % (int(freq)) for freq in freqs] self.data = {} self.display_names = {} self.display_names["TeSZ"] = r"$\mathrm{T_e}$" self.display_names["Tau"] = r"$\mathrm{\tau}$" for f, field in zip(self.freqs, self.freq_fields): self.display_names[field] = r"$\mathrm{\Delta{I}_{%d\ GHz}}$" % int(f) def on_axis(self, axis, center="c", width=(1, "unitary"), nx=800, source=None): r""" Make an on-axis projection of the SZ signal. Parameters ---------- axis : integer or string The axis of the simulation domain along which to make the SZprojection. center : A sequence of floats, a string, or a tuple. The coordinate of the center of the image. If set to 'c', 'center' or left blank, the plot is centered on the middle of the domain. If set to 'max' or 'm', the center will be located at the maximum of the ('gas', 'density') field. Centering on the max or min of a specific field is supported by providing a tuple such as ("min","temperature") or ("max","dark_matter_density"). Units can be specified by passing in *center* as a tuple containing a coordinate and string unit name or by passing in a YTArray. If a list or unitless array is supplied, code units are assumed. width : tuple or a float. Width can have four different formats to support windows with variable x and y widths. They are: ================================== ======================= format example ================================== ======================= (float, string) (10,'kpc') ((float, string), (float, string)) ((10,'kpc'),(15,'kpc')) float 0.2 (float, float) (0.2, 0.3) ================================== ======================= For example, (10, 'kpc') requests a plot window that is 10 kiloparsecs wide in the x and y directions, ((10,'kpc'),(15,'kpc')) requests a window that is 10 kiloparsecs wide along the x axis and 15 kiloparsecs wide along the y axis. In the other two examples, code units are assumed, for example (0.2, 0.3) requests a plot that has an x width of 0.2 and a y width of 0.3 in code units. If units are provided the resulting plot axis labels will use the supplied units. nx : integer, optional The dimensions on a side of the projection image. source : yt.data_objects.data_containers.YTSelectionContainer, optional If specified, this will be the data source used for selecting regions to project. 
Examples -------- >>> szprj.on_axis("y", center="max", width=(1.0, "Mpc"), source=my_sphere) """ axis = fix_axis(axis, self.ds) ctr, dctr = self.ds.coordinates.sanitize_center(center, axis) width = self.ds.coordinates.sanitize_width(axis, width, None) L = np.zeros(3) L[axis] = 1.0 beta_par = generate_beta_par(L) self.ds.add_field(("gas", "beta_par"), function=beta_par, units="g/cm**3") setup_sunyaev_zeldovich_fields(self.ds) proj = self.ds.proj("density", axis, center=ctr, data_source=source) frb = proj.to_frb(width[0], nx, height=width[1]) dens = frb["density"] Te = frb["t_sz"] / dens bpar = frb["beta_par"] / dens omega1 = frb["t_squared"] / dens / (Te * Te) - 1. bperp2 = np.zeros((nx, nx)) sigma1 = np.zeros((nx, nx)) kappa1 = np.zeros((nx, nx)) if self.high_order: bperp2 = frb["beta_perp_squared"] / dens sigma1 = frb["t_beta_par"] / dens / Te - bpar kappa1 = frb["beta_par_squared"] / dens - bpar * bpar tau = sigma_thompson * dens * self.mueinv / mh nx, ny = frb.buff_size self.bounds = frb.bounds self.dx = (frb.bounds[1] - frb.bounds[0]) / nx self.dy = (frb.bounds[3] - frb.bounds[2]) / ny self.nx = nx self._compute_intensity(np.array(tau), np.array(Te), np.array(bpar), np.array(omega1), np.array(sigma1), np.array(kappa1), np.array(bperp2)) self.ds.field_info.pop(("gas", "beta_par")) def off_axis(self, L, center="c", width=(1.0, "unitary"), depth=(1.0, "unitary"), nx=800, nz=800, north_vector=None, no_ghost=False, source=None): r""" Make an off-axis projection of the SZ signal. Parameters ---------- L : array_like The normal vector of the projection. center : A sequence of floats, a string, or a tuple. The coordinate of the center of the image. If set to 'c', 'center' or left blank, the plot is centered on the middle of the domain. If set to 'max' or 'm', the center will be located at the maximum of the ('gas', 'density') field. Centering on the max or min of a specific field is supported by providing a tuple such as ("min","temperature") or ("max","dark_matter_density"). Units can be specified by passing in *center* as a tuple containing a coordinate and string unit name or by passing in a YTArray. If a list or unitless array is supplied, code units are assumed. width : tuple or a float. Width can have four different formats to support windows with variable x and y widths. They are: ================================== ======================= format example ================================== ======================= (float, string) (10,'kpc') ((float, string), (float, string)) ((10,'kpc'),(15,'kpc')) float 0.2 (float, float) (0.2, 0.3) ================================== ======================= For example, (10, 'kpc') requests a plot window that is 10 kiloparsecs wide in the x and y directions, ((10,'kpc'),(15,'kpc')) requests a window that is 10 kiloparsecs wide along the x axis and 15 kiloparsecs wide along the y axis. In the other two examples, code units are assumed, for example (0.2, 0.3) requests a plot that has an x width of 0.2 and a y width of 0.3 in code units. If units are provided the resulting plot axis labels will use the supplied units. depth : A tuple or a float A tuple containing the depth to project through and the string key of the unit: (width, 'unit'). If set to a float, code units are assumed nx : integer, optional The dimensions on a side of the projection image. nz : integer, optional Deprecated, this is still in the function signature for API compatibility north_vector : a sequence of floats A vector defining the 'up' direction in the plot. 
This option sets the orientation of the slicing plane. If not set, an arbitrary grid-aligned north-vector is chosen. no_ghost: bool, optional Optimization option for off-axis cases. If True, homogenized bricks will extrapolate out from grid instead of interpolating from ghost zones that have to first be calculated. This can lead to large speed improvements, but at a loss of accuracy/smoothness in resulting image. The effects are less notable when the transfer function is smooth and broad. Default: True source : yt.data_objects.data_containers.YTSelectionContainer, optional If specified, this will be the data source used for selecting regions to project. Examples -------- >>> L = np.array([0.5, 1.0, 0.75]) >>> szprj.off_axis(L, center="c", width=(2.0, "Mpc")) """ wd = self.ds.coordinates.sanitize_width(L, width, depth) w = tuple(el.in_units('code_length').v for el in wd) ctr, dctr = self.ds.coordinates.sanitize_center(center, L) res = (nx, nx) if source is None: source = self.ds beta_par = generate_beta_par(L) self.ds.add_field(("gas", "beta_par"), function=beta_par, units="g/cm**3") setup_sunyaev_zeldovich_fields(self.ds) dens = off_axis_projection(source, ctr, L, w, res, "density", north_vector=north_vector, no_ghost=no_ghost) Te = off_axis_projection(source, ctr, L, w, res, "t_sz", north_vector=north_vector, no_ghost=no_ghost) / dens bpar = off_axis_projection(source, ctr, L, w, res, "beta_par", north_vector=north_vector, no_ghost=no_ghost) / dens omega1 = off_axis_projection(source, ctr, L, w, res, "t_squared", north_vector=north_vector, no_ghost=no_ghost) / dens omega1 = omega1 / (Te * Te) - 1. if self.high_order: bperp2 = off_axis_projection(source, ctr, L, w, res, "beta_perp_squared", north_vector=north_vector, no_ghost=no_ghost) / dens sigma1 = off_axis_projection(source, ctr, L, w, res, "t_beta_par", north_vector=north_vector, no_ghost=no_ghost) / dens sigma1 = sigma1 / Te - bpar kappa1 = off_axis_projection(source, ctr, L, w, res, "beta_par_squared", north_vector=north_vector, no_ghost=no_ghost) / dens kappa1 -= bpar else: bperp2 = np.zeros((nx, nx)) sigma1 = np.zeros((nx, nx)) kappa1 = np.zeros((nx, nx)) tau = sigma_thompson * dens * self.mueinv / mh self.bounds = (-0.5 * wd[0], 0.5 * wd[0], -0.5 * wd[1], 0.5 * wd[1]) self.dx = wd[0] / nx self.dy = wd[1] / nx self.nx = nx self._compute_intensity(np.array(tau), np.array(Te), np.array(bpar), np.array(omega1), np.array(sigma1), np.array(kappa1), np.array(bperp2)) self.ds.field_info.pop(("gas", "beta_par")) def _compute_intensity(self, tau, Te, bpar, omega1, sigma1, kappa1, bperp2): Bad hack, but we get NaNs if we don't do something like this small_beta = np.abs(bpar) < 1.0e-20 bpar[small_beta] = 1.0e-20 comm = communication_system.communicators[-1] nx, ny = self.nx, self.nx signal = np.zeros((self.num_freqs, nx, ny)) xo = np.zeros(self.num_freqs) k = int(0) start_i = comm.rank * nx // comm.size end_i = (comm.rank + 1) * nx // comm.size pbar = get_pbar("Computing SZ signal.", nx * nx) for i in range(start_i, end_i): for j in range(ny): xo[:] = self.xinit[:] SZpack.compute_combo_means(xo, tau[i, j], Te[i, j], bpar[i, j], omega1[i, j], sigma1[i, j], kappa1[i, j], bperp2[i, j]) signal[:, i, j] = xo[:] pbar.update(k) k += 1 signal = comm.mpi_allreduce(signal) pbar.finish() for i, field in enumerate(self.freq_fields): self.data[field] = I0 * self.xinit[i]**3 * signal[i, :, :] self.data["Tau"] = self.ds.arr(tau, "dimensionless") self.data["TeSZ"] = self.ds.arr(Te, "keV") def write_fits(self, filename, sky_scale=None, sky_center=None, 
clobber=True): r""" Export images to a FITS file. Writes the SZ distortion in all specified frequencies as well as the mass-weighted temperature and the optical depth. Distance units are in kpc, unless *sky_center* and *scale* are specified. Parameters ---------- filename : string The name of the FITS file to be written. sky_scale : tuple Conversion between an angle unit and a length unit, if sky coordinates are desired, e.g. (1.0, "arcsec/kpc") sky_center : tuple, optional The (RA, Dec) coordinate in degrees of the central pixel. Must be specified with *sky_scale*. clobber : boolean, optional If the file already exists, do we overwrite? Examples -------- >>> This example just writes out a FITS file with kpc coords >>> szprj.write_fits("SZbullet.fits", clobber=False) >>> This example uses sky coords >>> sky_scale = (1., "arcsec/kpc") One arcsec per kpc >>> sky_center = (30., 45., "deg") >>> szprj.write_fits("SZbullet.fits", sky_center=sky_center, sky_scale=sky_scale) """ from yt.visualization.fits_image import FITSImageData dx = self.dx.in_units("kpc") dy = dx w = _astropy.pywcs.WCS(naxis=2) w.wcs.crpix = [0.5 * (self.nx + 1)] * 2 w.wcs.cdelt = [dx.v, dy.v] w.wcs.crval = [0.0, 0.0] w.wcs.cunit = ["kpc"] * 2 w.wcs.ctype = ["LINEAR"] * 2 fib = FITSImageData(self.data, fields=self.data.keys(), wcs=w) if sky_scale is not None and sky_center is not None: fib.create_sky_wcs(sky_center, sky_scale) fib.writeto(filename, clobber=clobber) @parallel_root_only def write_png(self, filename_prefix, cmap_name=None, axes_units="kpc", log_fields=None): r""" Export images to PNG files. Writes the SZ distortion in all specified frequencies as well as the mass-weighted temperature and the optical depth. Distance units are in kpc. Parameters ---------- filename_prefix : string The prefix of the image filenames. Examples -------- >>> szprj.write_png("SZsloshing") """ if cmap_name is None: cmap_name = ytcfg.get("yt", "default_colormap") import matplotlib matplotlib.use('Agg') import matplotlib.pyplot as plt if log_fields is None: log_fields = {} ticks_font = matplotlib.font_manager.FontProperties(family='serif', size=16) extent = tuple([bound.in_units(axes_units).value for bound in self.bounds]) for field, image in self.items(): data = image.copy() vmin, vmax = image.min(), image.max() negative = False crossover = False if vmin < 0 and vmax < 0: data *= -1 negative = True if field in log_fields: log_field = log_fields[field] else: log_field = True if log_field: formatter = matplotlib.ticker.LogFormatterMathtext() norm = matplotlib.colors.LogNorm() if vmin < 0 and vmax > 0: crossover = True linthresh = min(vmax, -vmin) / 100. 
norm = matplotlib.colors.SymLogNorm(linthresh, vmin=vmin, vmax=vmax) else: norm = None formatter = None filename = filename_prefix + "_" + field + ".png" cbar_label = self.display_names[field] units = self.data[field].units.latex_representation() if units is not None and units != "": cbar_label += r'$\ \ (' + units + r')$' fig = plt.figure(figsize=(10.0, 8.0)) ax = fig.add_subplot(111) cax = ax.imshow(data.d, norm=norm, extent=extent, cmap=cmap_name, origin="lower") for label in ax.get_xticklabels(): label.set_fontproperties(ticks_font) for label in ax.get_yticklabels(): label.set_fontproperties(ticks_font) ax.set_xlabel(r"$\mathrm{x\ (%s)}$" % axes_units, fontsize=16) ax.set_ylabel(r"$\mathrm{y\ (%s)}$" % axes_units, fontsize=16) cbar = fig.colorbar(cax, format=formatter) cbar.ax.set_ylabel(cbar_label, fontsize=16) if negative: cbar.ax.set_yticklabels(["-" + label.get_text() for label in cbar.ax.get_yticklabels()]) if crossover: yticks = list(-10**np.arange(np.floor(np.log10(-vmin)), np.rint(np.log10(linthresh)) - 1, -1)) + [0] + \ list(10**np.arange(np.rint(np.log10(linthresh)), np.ceil(np.log10(vmax)) + 1)) cbar.set_ticks(yticks) for label in cbar.ax.get_yticklabels(): label.set_fontproperties(ticks_font) fig.tight_layout() plt.savefig(filename) @parallel_root_only def write_hdf5(self, filename): r"""Export the set of S-Z fields to a set of HDF5 datasets. Parameters ---------- filename : string This file will be opened in "write" mode. Examples -------- >>> szprj.write_hdf5("SZsloshing.h5") """ for field, data in self.items(): data.write_hdf5(filename, dataset_name=field) def keys(self): return self.data.keys() def items(self): return self.data.items() def values(self): return self.data.values() def has_key(self, key): return key in self.data.keys() def __getitem__(self, key): return self.data[key] @property def shape(self): return (self.nx, self.nx) | 20,623 | 0.850643 |
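
A minimal usage sketch for SZpack_model, following its docstring; the load_data call is an assumption based on the docstring's "simudata : the simulation data from load_data", and in this snapshot of the file the internal _cal_ss/_cal_yt bodies are still commented out, so the dispatch to self._cal_ss would fail at runtime.

# Sketch only; the load_data import and its arguments are assumptions, not
# taken from this record.
from pymsz import load_data               # assumed entry point per the docstring
from pymsz.SZpack_models import SZpack_model

simudata = load_data("snap_058")          # hypothetical snapshot file
freqs = [90., 180., 240.]                 # GHz, as in the docstring example
model = SZpack_model(simudata, freqs, npixel=500, axis='z',
                     AR=None, redshift=None)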
# -*- coding: utf-8 -*-
"""
Created on Mon Jan 8 21:45:27 2018
@author: pilla
"""
| 10.625 | 35 | 0.552941 | [
"BSD-2-Clause"
] | psnipiv/LicenseGenerator | LicenseGenerator/accounts/tests/__init__.py | 85 | Python | Created on Mon Jan 8 21:45:27 2018
@author: pilla
-*- coding: utf-8 -*- | 75 | 0.882353 |
# Tests for client code in bin/test
| 18 | 35 | 0.75 | [
"Apache-2.0"
] | EHRI/resync | bin/test/__init__.py | 36 | Python | Tests for client code in bin/test | 33 | 0.916667 |
'''
For this exercise, you'll use what you've learned about the zip() function and combine two lists into a dictionary.
These lists are actually extracted from a bigger dataset file of world development indicators from the World Bank. For pedagogical purposes, we have pre-processed this dataset into the lists that you'll be working with.
The first list feature_names contains header names of the dataset and the second list row_vals contains actual values of a row from the dataset, corresponding to each of the header names.
'''
# Zip lists: zipped_lists
zipped_lists = zip(feature_names, row_vals)
# Create a dictionary: rs_dict
rs_dict = dict(zipped_lists)
# Print the dictionary
print(rs_dict)
| 44 | 219 | 0.795455 | [
"MIT"
] | Baidaly/datacamp-samples | 10 - python-data-science-toolbox-part-2/case_study/1 - dictionaries for data science.py | 704 | Python | For this exercise, you'll use what you've learned about the zip() function and combine two lists into a dictionary.
These lists are actually extracted from a bigger dataset file of world development indicators from the World Bank. For pedagogical purposes, we have pre-processed this dataset into the lists that you'll be working with.
The first list feature_names contains header names of the dataset and the second list row_vals contains actual values of a row from the dataset, corresponding to each of the header names.
Zip lists: zipped_lists Create a dictionary: rs_dict Print the dictionary | 601 | 0.853693 |
# -*- coding: utf-8 -*-
# ------------------------------------------------------------------------------
#
# Copyright 2018-2019 Fetch.AI Limited
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# ------------------------------------------------------------------------------
"""This module contains the implementation of the confirmation_aw3 skill."""
from aea.configurations.base import PublicId
PUBLIC_ID = PublicId.from_str("fetchai/confirmation_aw3:0.3.0")
| 37.884615 | 80 | 0.617259 | [
"Apache-2.0"
] | marcofavorito/agents-aea | packages/fetchai/skills/confirmation_aw3/__init__.py | 985 | Python | This module contains the implementation of the confirmation_aw3 skill.
-*- coding: utf-8 -*- ------------------------------------------------------------------------------ Copyright 2018-2019 Fetch.AI Limited Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ------------------------------------------------------------------------------ | 831 | 0.843655 |
"""
Airflow API (Stable)
# Overview To facilitate management, Apache Airflow supports a range of REST API endpoints across its objects. This section provides an overview of the API design, methods, and supported use cases. Most of the endpoints accept `JSON` as input and return `JSON` responses. This means that you must usually add the following headers to your request: ``` Content-type: application/json Accept: application/json ``` ## Resources The term `resource` refers to a single type of object in the Airflow metadata. An API is broken up by its endpoint's corresponding resource. The name of a resource is typically plural and expressed in camelCase. Example: `dagRuns`. Resource names are used as part of endpoint URLs, as well as in API parameters and responses. ## CRUD Operations The platform supports **C**reate, **R**ead, **U**pdate, and **D**elete operations on most resources. You can review the standards for these operations and their standard parameters below. Some endpoints have special behavior as exceptions. ### Create To create a resource, you typically submit an HTTP `POST` request with the resource's required metadata in the request body. The response returns a `201 Created` response code upon success with the resource's metadata, including its internal `id`, in the response body. ### Read The HTTP `GET` request can be used to read a resource or to list a number of resources. A resource's `id` can be submitted in the request parameters to read a specific resource. The response usually returns a `200 OK` response code upon success, with the resource's metadata in the response body. If a `GET` request does not include a specific resource `id`, it is treated as a list request. The response usually returns a `200 OK` response code upon success, with an object containing a list of resources' metadata in the response body. When reading resources, some common query parameters are usually available. e.g.: ``` v1/connections?limit=25&offset=25 ``` |Query Parameter|Type|Description| |---------------|----|-----------| |limit|integer|Maximum number of objects to fetch. Usually 25 by default| |offset|integer|Offset after which to start returning objects. For use with limit query parameter.| ### Update Updating a resource requires the resource `id`, and is typically done using an HTTP `PATCH` request, with the fields to modify in the request body. The response usually returns a `200 OK` response code upon success, with information about the modified resource in the response body. ### Delete Deleting a resource requires the resource `id` and is typically executing via an HTTP `DELETE` request. The response usually returns a `204 No Content` response code upon success. ## Conventions - Resource names are plural and expressed in camelCase. - Names are consistent between URL parameter name and field name. - Field names are in snake_case. ```json { \"name\": \"string\", \"slots\": 0, \"occupied_slots\": 0, \"used_slots\": 0, \"queued_slots\": 0, \"open_slots\": 0 } ``` ### Update Mask Update mask is available as a query parameter in patch endpoints. It is used to notify the API which fields you want to update. Using `update_mask` makes it easier to update objects by helping the server know which fields to update in an object instead of updating all fields. The update request ignores any fields that aren't specified in the field mask, leaving them with their current values. 
Example: ``` resource = request.get('/resource/my-id').json() resource['my_field'] = 'new-value' request.patch('/resource/my-id?update_mask=my_field', data=json.dumps(resource)) ``` ## Versioning and Endpoint Lifecycle - API versioning is not synchronized to specific releases of the Apache Airflow. - APIs are designed to be backward compatible. - Any changes to the API will first go through a deprecation phase. # Summary of Changes | Airflow version | Description | |-|-| | v2.0 | Initial release | | v2.0.2 | Added /plugins endpoint | | v2.1 | New providers endpoint | # Trying the API You can use a third party airflow_client.client, such as [curl](https://curl.haxx.se/), [HTTPie](https://httpie.org/), [Postman](https://www.postman.com/) or [the Insomnia rest airflow_client.client](https://insomnia.rest/) to test the Apache Airflow API. Note that you will need to pass credentials data. For e.g., here is how to pause a DAG with [curl](https://curl.haxx.se/), when basic authorization is used: ```bash curl -X PATCH 'https://example.com/api/v1/dags/{dag_id}?update_mask=is_paused' \\ -H 'Content-Type: application/json' \\ --user \"username:password\" \\ -d '{ \"is_paused\": true }' ``` Using a graphical tool such as [Postman](https://www.postman.com/) or [Insomnia](https://insomnia.rest/), it is possible to import the API specifications directly: 1. Download the API specification by clicking the **Download** button at top of this document 2. Import the JSON specification in the graphical tool of your choice. - In *Postman*, you can click the **import** button at the top - With *Insomnia*, you can just drag-and-drop the file on the UI Note that with *Postman*, you can also generate code snippets by selecting a request and clicking on the **Code** button. ## Enabling CORS [Cross-origin resource sharing (CORS)](https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS) is a browser security feature that restricts HTTP requests that are initiated from scripts running in the browser. For details on enabling/configuring CORS, see [Enabling CORS](https://airflow.apache.org/docs/apache-airflow/stable/security/api.html). # Authentication To be able to meet the requirements of many organizations, Airflow supports many authentication methods, and it is even possible to add your own method. If you want to check which auth backend is currently set, you can use `airflow config get-value api auth_backend` command as in the example below. ```bash $ airflow config get-value api auth_backend airflow.api.auth.backend.basic_auth ``` The default is to deny all requests. For details on configuring the authentication, see [API Authorization](https://airflow.apache.org/docs/apache-airflow/stable/security/api.html). # Errors We follow the error response format proposed in [RFC 7807](https://tools.ietf.org/html/rfc7807) also known as Problem Details for HTTP APIs. As with our normal API responses, your airflow_client.client must be prepared to gracefully handle additional members of the response. ## Unauthenticated This indicates that the request has not been applied because it lacks valid authentication credentials for the target resource. Please check that you have valid credentials. ## PermissionDenied This response means that the server understood the request but refuses to authorize it because it lacks sufficient rights to the resource. It happens when you do not have the necessary permission to execute the action you performed. You need to get the appropriate permissions in other to resolve this error. 
## BadRequest This response means that the server cannot or will not process the request due to something that is perceived to be a airflow_client.client error (e.g., malformed request syntax, invalid request message framing, or deceptive request routing). To resolve this, please ensure that your syntax is correct. ## NotFound This airflow_client.client error response indicates that the server cannot find the requested resource. ## MethodNotAllowed Indicates that the request method is known by the server but is not supported by the target resource. ## NotAcceptable The target resource does not have a current representation that would be acceptable to the user agent, according to the proactive negotiation header fields received in the request, and the server is unwilling to supply a default representation. ## AlreadyExists The request could not be completed due to a conflict with the current state of the target resource, e.g. the resource it tries to create already exists. ## Unknown This means that the server encountered an unexpected condition that prevented it from fulfilling the request. # noqa: E501
The version of the OpenAPI document: 1.0.0
Contact: dev@airflow.apache.org
Generated by: https://openapi-generator.tech
"""
import sys
import unittest
import airflow_client.client
from airflow_client.client.model.dag import DAG
globals()['DAG'] = DAG
from airflow_client.client.model.dag_collection_all_of import DAGCollectionAllOf
class TestDAGCollectionAllOf(unittest.TestCase):
"""DAGCollectionAllOf unit test stubs"""
def setUp(self):
pass
def tearDown(self):
pass
def testDAGCollectionAllOf(self):
"""Test DAGCollectionAllOf"""
# FIXME: construct object with mandatory attributes with example values
# model = DAGCollectionAllOf() # noqa: E501
pass
if __name__ == '__main__':
unittest.main()
| 230.589744 | 8,173 | 0.759813 | [
"Apache-2.0"
] | sptsakcg/airflow-client-python | airflow_client/test/test_dag_collection_all_of.py | 8,993 | Python | DAGCollectionAllOf unit test stubs
Test DAGCollectionAllOf
Airflow API (Stable)
# Overview To facilitate management, Apache Airflow supports a range of REST API endpoints across its objects. This section provides an overview of the API design, methods, and supported use cases. Most of the endpoints accept `JSON` as input and return `JSON` responses. This means that you must usually add the following headers to your request: ``` Content-type: application/json Accept: application/json ``` ## Resources The term `resource` refers to a single type of object in the Airflow metadata. An API is broken up by its endpoint's corresponding resource. The name of a resource is typically plural and expressed in camelCase. Example: `dagRuns`. Resource names are used as part of endpoint URLs, as well as in API parameters and responses. ## CRUD Operations The platform supports **C**reate, **R**ead, **U**pdate, and **D**elete operations on most resources. You can review the standards for these operations and their standard parameters below. Some endpoints have special behavior as exceptions. ### Create To create a resource, you typically submit an HTTP `POST` request with the resource's required metadata in the request body. The response returns a `201 Created` response code upon success with the resource's metadata, including its internal `id`, in the response body. ### Read The HTTP `GET` request can be used to read a resource or to list a number of resources. A resource's `id` can be submitted in the request parameters to read a specific resource. The response usually returns a `200 OK` response code upon success, with the resource's metadata in the response body. If a `GET` request does not include a specific resource `id`, it is treated as a list request. The response usually returns a `200 OK` response code upon success, with an object containing a list of resources' metadata in the response body. When reading resources, some common query parameters are usually available. e.g.: ``` v1/connections?limit=25&offset=25 ``` |Query Parameter|Type|Description| |---------------|----|-----------| |limit|integer|Maximum number of objects to fetch. Usually 25 by default| |offset|integer|Offset after which to start returning objects. For use with limit query parameter.| ### Update Updating a resource requires the resource `id`, and is typically done using an HTTP `PATCH` request, with the fields to modify in the request body. The response usually returns a `200 OK` response code upon success, with information about the modified resource in the response body. ### Delete Deleting a resource requires the resource `id` and is typically executing via an HTTP `DELETE` request. The response usually returns a `204 No Content` response code upon success. ## Conventions - Resource names are plural and expressed in camelCase. - Names are consistent between URL parameter name and field name. - Field names are in snake_case. ```json { "name": "string", "slots": 0, "occupied_slots": 0, "used_slots": 0, "queued_slots": 0, "open_slots": 0 } ``` ### Update Mask Update mask is available as a query parameter in patch endpoints. It is used to notify the API which fields you want to update. Using `update_mask` makes it easier to update objects by helping the server know which fields to update in an object instead of updating all fields. The update request ignores any fields that aren't specified in the field mask, leaving them with their current values. 
Example: ``` resource = request.get('/resource/my-id').json() resource['my_field'] = 'new-value' request.patch('/resource/my-id?update_mask=my_field', data=json.dumps(resource)) ``` ## Versioning and Endpoint Lifecycle - API versioning is not synchronized to specific releases of the Apache Airflow. - APIs are designed to be backward compatible. - Any changes to the API will first go through a deprecation phase. # Summary of Changes | Airflow version | Description | |-|-| | v2.0 | Initial release | | v2.0.2 | Added /plugins endpoint | | v2.1 | New providers endpoint | # Trying the API You can use a third party airflow_client.client, such as [curl](https://curl.haxx.se/), [HTTPie](https://httpie.org/), [Postman](https://www.postman.com/) or [the Insomnia rest airflow_client.client](https://insomnia.rest/) to test the Apache Airflow API. Note that you will need to pass credentials data. For e.g., here is how to pause a DAG with [curl](https://curl.haxx.se/), when basic authorization is used: ```bash curl -X PATCH 'https://example.com/api/v1/dags/{dag_id}?update_mask=is_paused' \ -H 'Content-Type: application/json' \ --user "username:password" \ -d '{ "is_paused": true }' ``` Using a graphical tool such as [Postman](https://www.postman.com/) or [Insomnia](https://insomnia.rest/), it is possible to import the API specifications directly: 1. Download the API specification by clicking the **Download** button at top of this document 2. Import the JSON specification in the graphical tool of your choice. - In *Postman*, you can click the **import** button at the top - With *Insomnia*, you can just drag-and-drop the file on the UI Note that with *Postman*, you can also generate code snippets by selecting a request and clicking on the **Code** button. ## Enabling CORS [Cross-origin resource sharing (CORS)](https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS) is a browser security feature that restricts HTTP requests that are initiated from scripts running in the browser. For details on enabling/configuring CORS, see [Enabling CORS](https://airflow.apache.org/docs/apache-airflow/stable/security/api.html). # Authentication To be able to meet the requirements of many organizations, Airflow supports many authentication methods, and it is even possible to add your own method. If you want to check which auth backend is currently set, you can use `airflow config get-value api auth_backend` command as in the example below. ```bash $ airflow config get-value api auth_backend airflow.api.auth.backend.basic_auth ``` The default is to deny all requests. For details on configuring the authentication, see [API Authorization](https://airflow.apache.org/docs/apache-airflow/stable/security/api.html). # Errors We follow the error response format proposed in [RFC 7807](https://tools.ietf.org/html/rfc7807) also known as Problem Details for HTTP APIs. As with our normal API responses, your airflow_client.client must be prepared to gracefully handle additional members of the response. ## Unauthenticated This indicates that the request has not been applied because it lacks valid authentication credentials for the target resource. Please check that you have valid credentials. ## PermissionDenied This response means that the server understood the request but refuses to authorize it because it lacks sufficient rights to the resource. It happens when you do not have the necessary permission to execute the action you performed. You need to get the appropriate permissions in other to resolve this error. 
## BadRequest This response means that the server cannot or will not process the request due to something that is perceived to be a airflow_client.client error (e.g., malformed request syntax, invalid request message framing, or deceptive request routing). To resolve this, please ensure that your syntax is correct. ## NotFound This airflow_client.client error response indicates that the server cannot find the requested resource. ## MethodNotAllowed Indicates that the request method is known by the server but is not supported by the target resource. ## NotAcceptable The target resource does not have a current representation that would be acceptable to the user agent, according to the proactive negotiation header fields received in the request, and the server is unwilling to supply a default representation. ## AlreadyExists The request could not be completed due to a conflict with the current state of the target resource, e.g. the resource it tries to create already exists. ## Unknown This means that the server encountered an unexpected condition that prevented it from fulfilling the request. # noqa: E501
The version of the OpenAPI document: 1.0.0
Contact: dev@airflow.apache.org
Generated by: https://openapi-generator.tech
FIXME: construct object with mandatory attributes with example values model = DAGCollectionAllOf() noqa: E501 | 8,464 | 0.941176 |
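
The API overview quoted above gives a curl example for pausing a DAG via update_mask; a rough requests-based equivalent, with host, credentials and DAG id as placeholders and the basic-auth backend assumed to be enabled:

import requests

# Mirrors the curl example from the overview:
# PATCH /dags/{dag_id}?update_mask=is_paused with HTTP basic auth.
# Host, credentials and dag_id are placeholders.
resp = requests.patch(
    "https://example.com/api/v1/dags/example_dag",
    params={"update_mask": "is_paused"},
    auth=("username", "password"),
    json={"is_paused": True},
)
resp.raise_for_status()
print(resp.json())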
# ===============================================================================
# Copyright 2014 Jake Ross
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ===============================================================================
# ============= enthought library imports =======================
from __future__ import absolute_import
from traits.api import List
# ============= standard library imports ========================
# ============= local library imports ==========================
from pychron.loggable import Loggable
class BaseTagModel(Loggable):
items = List
# ============= EOF =============================================
| 37.322581 | 81 | 0.543647 | [
"Apache-2.0"
] | ASUPychron/pychron | pychron/pipeline/tagging/base_tags.py | 1,157 | Python | =============================================================================== Copyright 2014 Jake Ross Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. =============================================================================== ============= enthought library imports ======================= ============= standard library imports ======================== ============= local library imports ========================== ============= EOF ============================================= | 960 | 0.829732 |
# from imports import *
# import random
# class Docs(commands.Cog):
# def __init__(self, bot):
# self.bot = bot
# self.bot.loop.create_task(self.__ainit__())
# async def __ainit__(self):
# await self.bot.wait_until_ready()
# self.scraper = AsyncScraper(session = self.bot.session)
# async def rtfm_lookup(self, program = None, *, args = None):
# rtfm_dictionary = {
# "fusion.py": "https://fusion.senarc.org/en/master/",
# "development" : "https://fusion.senarc.org/en/development/"
# }
# if not args:
# return rtfm_dictionary.get(program)
# else:
# url = rtfm_dictionary.get(program)
# results = await self.scraper.search(args, page=url)
# if not results:
# return f"Could not find anything with {args}."
# else:
# return results
# def reference(self, message):
# reference = message.reference
# if reference and isinstance(reference.resolved, discord.Message):
# return reference.resolved.to_reference()
# return None
# async def rtfm_send(self, ctx, results):
# if isinstance(results, str):
# await ctx.send(results, allowed_mentions = discord.AllowedMentions.none())
# else:
# embed = discord.Embed(color = random.randint(0, 16777215))
# results = results[:10]
# embed.description = "\n".join(f"[`{result}`]({value})" for result, value in results)
# reference = self.reference(ctx.message)
# await ctx.send(embed=embed, reference = reference)
# @commands.group(slash_interaction=True, aliases=["rtd", "rtfs"], brief="Search for attributes from docs.")
# async def rtfm(self, ctx, *, args = None):
# await ctx.trigger_typing()
# results = await self.rtfm_lookup(program = "fusion.py", args = args)
# await self.rtfm_send(ctx, results)
# @rtfm.command(slash_interaction=True, brief = "a command using doc_search to look up at development's docs")
# async def development(self, ctx, *, args = None):
# await ctx.trigger_typing()
# results = await self.rtfm_lookup(program="development", args = args)
# await self.rtfm_send(ctx, results)
# def setup(bot):
# bot.add_cog(Docs(bot))
| 30.082192 | 112 | 0.651184 | [
"MIT"
] | BenitzCoding/Utility-Bot | cogs/docs.py | 2,196 | Python | from imports import * import random class Docs(commands.Cog): def __init__(self, bot): self.bot = bot self.bot.loop.create_task(self.__ainit__()) async def __ainit__(self): await self.bot.wait_until_ready() self.scraper = AsyncScraper(session = self.bot.session) async def rtfm_lookup(self, program = None, *, args = None): rtfm_dictionary = { "fusion.py": "https://fusion.senarc.org/en/master/", "development" : "https://fusion.senarc.org/en/development/" } if not args: return rtfm_dictionary.get(program) else: url = rtfm_dictionary.get(program) results = await self.scraper.search(args, page=url) if not results: return f"Could not find anything with {args}." else: return results def reference(self, message): reference = message.reference if reference and isinstance(reference.resolved, discord.Message): return reference.resolved.to_reference() return None async def rtfm_send(self, ctx, results): if isinstance(results, str): await ctx.send(results, allowed_mentions = discord.AllowedMentions.none()) else: embed = discord.Embed(color = random.randint(0, 16777215)) results = results[:10] embed.description = "\n".join(f"[`{result}`]({value})" for result, value in results) reference = self.reference(ctx.message) await ctx.send(embed=embed, reference = reference) @commands.group(slash_interaction=True, aliases=["rtd", "rtfs"], brief="Search for attributes from docs.") async def rtfm(self, ctx, *, args = None): await ctx.trigger_typing() results = await self.rtfm_lookup(program = "fusion.py", args = args) await self.rtfm_send(ctx, results) @rtfm.command(slash_interaction=True, brief = "a command using doc_search to look up at development's docs") async def development(self, ctx, *, args = None): await ctx.trigger_typing() results = await self.rtfm_lookup(program="development", args = args) await self.rtfm_send(ctx, results) def setup(bot): bot.add_cog(Docs(bot)) | 2,074 | 0.944444 |
# -*- coding: utf-8 -*-
"""
Module entry point.
------------------------------------------------------------------------------
This file is part of grepros - grep for ROS bag files and live topics.
Released under the BSD License.
@author Erki Suurjaak
@created 24.10.2021
@modified 02.11.2021
------------------------------------------------------------------------------
"""
from . import main
if "__main__" == __name__:
main.run()
| 25.111111 | 78 | 0.429204 | [
"BSD-3-Clause"
] | suurjaak/grepros | src/grepros/__main__.py | 452 | Python | Module entry point.
------------------------------------------------------------------------------
This file is part of grepros - grep for ROS bag files and live topics.
Released under the BSD License.
@author Erki Suurjaak
@created 24.10.2021
@modified 02.11.2021
------------------------------------------------------------------------------
-*- coding: utf-8 -*- | 381 | 0.84292 |
# Copyright 2017 Google Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Stub ``ctypes`` module.
Used by ``setuptools.windows_helpers``.
"""
| 35.210526 | 74 | 0.750374 | [
"Apache-2.0"
] | dhermes/google-cloud-python-on-gae | language-app/stubs/ctypes.py | 669 | Python | Stub ``ctypes`` module.
Used by ``setuptools.windows_helpers``.
Copyright 2017 Google Inc. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. | 636 | 0.950673 |
# MIT License
#
# Copyright (C) IBM Corporation 2019
#
# Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated
# documentation files (the "Software"), to deal in the Software without restriction, including without limitation the
# rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit
# persons to whom the Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in all copies or substantial portions of the
# Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE
# WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT,
# TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
"""
Differential Privacy Library for Python
=======================================
The IBM Differential Privacy Library is a library for writing, executing and experimenting with differential privacy.
The Library includes basic differential privacy mechanisms, the building blocks of differential privacy; tools for
basic data analysis with differential privacy; and machine learning models that satisfy differential privacy.
"""
from diffprivlib import mechanisms
from diffprivlib import models
from diffprivlib import tools
__version__ = '0.2.0'
| 51.75 | 120 | 0.778986 | [
"MIT"
] | Jakondak/differential-privacy-library | diffprivlib/__init__.py | 1,656 | Python | Differential Privacy Library for Python
=======================================
The IBM Differential Privacy Library is a library for writing, executing and experimenting with differential privacy.
The Library includes basic differential privacy mechanisms, the building blocks of differential privacy; tools for
basic data analysis with differential privacy; and machine learning models that satisfy differential privacy.
MIT License Copyright (C) IBM Corporation 2019 Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. | 1,495 | 0.902778 |
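The docstring above names mechanisms, models and tools but gives no usage. A minimal sketch of adding Laplace noise to a single query result, written against the builder-style API this 0.2.0-era release appears to expose — the set_epsilon/set_sensitivity/randomise method names are an assumption, not confirmed by this document:

from diffprivlib.mechanisms import Laplace

# Assumed 0.2-era builder-style API: configure the mechanism, then
# randomise one numeric value (e.g. a count or a mean).
mech = Laplace()
mech.set_epsilon(0.5)      # privacy budget (assumed setter name)
mech.set_sensitivity(1.0)  # query sensitivity (assumed setter name)

print(mech.randomise(42.0))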
# -*- coding: utf-8 -*-
"""
# Author : Camey
# DateTime : 2022/3/12 8:49 PM
# Description :
""" | 17.333333 | 33 | 0.5 | [
"Apache-2.0"
] | abcdcamey/Gobigger-Explore | my_work/config/__init__.py | 108 | Python | # Author : Camey
# DateTime : 2022/3/12 8:49 PM
# Description :
-*- coding: utf-8 -*- | 96 | 0.923077 |
#
# @lc app=leetcode.cn id=35 lang=python3
#
# [35] search-insert-position
#
None
# @lc code=end | 13.714286 | 40 | 0.677083 | [
"MIT"
] | smartmark-pro/leetcode_record | codes_auto/35.search-insert-position.py | 96 | Python | @lc app=leetcode.cn id=35 lang=python3 [35] search-insert-position @lc code=end | 79 | 0.822917 |
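The row above records only a stub: the body between the @lc markers is literally None. For reference, a standard lower-bound binary search solves LeetCode 35; the sketch below is an editor-added illustration, not the repository's missing code.

from typing import List


class Solution:
    def searchInsert(self, nums: List[int], target: int) -> int:
        # Lower-bound binary search: first index whose value is >= target,
        # which is exactly the insert position.
        lo, hi = 0, len(nums)
        while lo < hi:
            mid = (lo + hi) // 2
            if nums[mid] < target:
                lo = mid + 1
            else:
                hi = mid
        return lo


# Solution().searchInsert([1, 3, 5, 6], 5) == 2
# Solution().searchInsert([1, 3, 5, 6], 2) == 1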
"""
Plugin architecture, based on decorators
References:
1. https://play.pixelblaster.ro/blog/2017/12/18/a-quick-and-dirty-mini-plugin-system-for-python/
"""
| 23.285714 | 100 | 0.736196 | [
"MIT"
] | huuhoa/colusa | src/colusa/plugins/__init__.py | 163 | Python | Plugin architecture, based on decorators
References:
1. https://play.pixelblaster.ro/blog/2017/12/18/a-quick-and-dirty-mini-plugin-system-for-python/ | 154 | 0.944785 |
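The docstring above points at the technique — a decorator-based plugin registry, per the linked article — without showing it. A generic sketch of that pattern follows; the names are illustrative and are not colusa's actual registration API.

# Minimal decorator-based plugin registry in the spirit of the referenced
# article; register_plugin/get_plugin are hypothetical names.
_REGISTRY = {}


def register_plugin(name):
    """Class decorator that records a plugin class under `name`."""
    def decorator(cls):
        _REGISTRY[name] = cls
        return cls
    return decorator


@register_plugin("markdown")
class MarkdownRenderer:
    def render(self, text):
        return f"rendered: {text}"


def get_plugin(name):
    return _REGISTRY[name]()  # instantiate on lookup


# get_plugin("markdown").render("hello") -> "rendered: hello"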
# TODO:
#
# A handler/formatter that can replace django.utils.log.AdminEmailHandler
#
# Needed:
# * A dump of locals() on each affected line in the stacktrace
| 22.714286 | 73 | 0.742138 | [
"Apache-2.0"
] | Uninett/python-logging-humio | src/humiologging/handlers/django.py | 159 | Python | TODO: A handler/formatter that can replace django.utils.log.AdminEmailHandler Needed: * A dump of locals() on each affected line in the stacktrace | 146 | 0.918239 |
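The TODO above asks for an AdminEmailHandler-style replacement that also dumps locals() for each frame in the stacktrace. One way to get that with only the standard library is a custom Formatter; this is an illustrative sketch, not humiologging's eventual implementation.

import logging
import traceback


class LocalsDumpFormatter(logging.Formatter):
    """Append each traceback frame's local variables to the formatted exception.

    Illustrative sketch for the TODO above; not part of humiologging.
    """

    def formatException(self, ei):
        exc_type, exc_value, tb = ei
        lines = traceback.format_exception(exc_type, exc_value, tb)
        for frame, lineno in traceback.walk_tb(tb):
            code = frame.f_code
            lines.append(f"\nLocals at {code.co_filename}:{lineno} ({code.co_name}):\n")
            for name, value in frame.f_locals.items():
                lines.append(f"    {name} = {value!r}\n")
        return "".join(lines)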
"""pytest style fixtures for use in Virtool Workflows.""" | 57 | 57 | 0.754386 | [
"MIT"
] | igboyes/virtool-workflow | virtool_workflow/fixtures/__init__.py | 57 | Python | pytest style fixtures for use in Virtool Workflows. | 51 | 0.894737 |
# coding: utf-8
"""
Flat API
The Flat API allows you to easily extend the abilities of the [Flat Platform](https://flat.io), with a wide range of use cases including the following: * Creating and importing new music scores using MusicXML, MIDI, Guitar Pro (GP3, GP4, GP5, GPX, GP), PowerTab, TuxGuitar and MuseScore files * Browsing, updating, copying, exporting the user's scores (for example in MP3, WAV or MIDI) * Managing educational resources with Flat for Education: creating & updating the organization accounts, the classes, rosters and assignments. The Flat API is built on HTTP. Our API is RESTful It has predictable resource URLs. It returns HTTP response codes to indicate errors. It also accepts and returns JSON in the HTTP body. The [schema](/swagger.yaml) of this API follows the [OpenAPI Initiative (OAI) specification](https://www.openapis.org/), you can use and work with [compatible Swagger tools](http://swagger.io/open-source-integrations/). This API features Cross-Origin Resource Sharing (CORS) implemented in compliance with [W3C spec](https://www.w3.org/TR/cors/). You can use your favorite HTTP/REST library for your programming language to use Flat's API. This specification and reference is [available on Github](https://github.com/FlatIO/api-reference). Getting Started and learn more: * [API Overview and interoduction](https://flat.io/developers/docs/api/) * [Authentication (Personal Access Tokens or OAuth2)](https://flat.io/developers/docs/api/authentication.html) * [SDKs](https://flat.io/developers/docs/api/sdks.html) * [Rate Limits](https://flat.io/developers/docs/api/rate-limits.html) * [Changelog](https://flat.io/developers/docs/api/changelog.html) # noqa: E501
OpenAPI spec version: 2.7.0
Contact: developers@flat.io
Generated by: https://openapi-generator.tech
"""
from __future__ import absolute_import
import unittest
import flat_api
from flat_api.models.collection import Collection # noqa: E501
from flat_api.rest import ApiException
class TestCollection(unittest.TestCase):
"""Collection unit test stubs"""
def setUp(self):
pass
def tearDown(self):
pass
def testCollection(self):
"""Test Collection"""
# FIXME: construct object with mandatory attributes with example values
# model = flat_api.models.collection.Collection() # noqa: E501
pass
if __name__ == '__main__':
unittest.main()
| 59.658537 | 1,686 | 0.742845 | [
"Apache-2.0"
] | FlatIO/api-client-python | test/test_collection.py | 2,446 | Python | Collection unit test stubs
Test Collection
Flat API
The Flat API allows you to easily extend the abilities of the [Flat Platform](https://flat.io), with a wide range of use cases including the following: * Creating and importing new music scores using MusicXML, MIDI, Guitar Pro (GP3, GP4, GP5, GPX, GP), PowerTab, TuxGuitar and MuseScore files * Browsing, updating, copying, exporting the user's scores (for example in MP3, WAV or MIDI) * Managing educational resources with Flat for Education: creating & updating the organization accounts, the classes, rosters and assignments. The Flat API is built on HTTP. Our API is RESTful It has predictable resource URLs. It returns HTTP response codes to indicate errors. It also accepts and returns JSON in the HTTP body. The [schema](/swagger.yaml) of this API follows the [OpenAPI Initiative (OAI) specification](https://www.openapis.org/), you can use and work with [compatible Swagger tools](http://swagger.io/open-source-integrations/). This API features Cross-Origin Resource Sharing (CORS) implemented in compliance with [W3C spec](https://www.w3.org/TR/cors/). You can use your favorite HTTP/REST library for your programming language to use Flat's API. This specification and reference is [available on Github](https://github.com/FlatIO/api-reference). Getting Started and learn more: * [API Overview and interoduction](https://flat.io/developers/docs/api/) * [Authentication (Personal Access Tokens or OAuth2)](https://flat.io/developers/docs/api/authentication.html) * [SDKs](https://flat.io/developers/docs/api/sdks.html) * [Rate Limits](https://flat.io/developers/docs/api/rate-limits.html) * [Changelog](https://flat.io/developers/docs/api/changelog.html) # noqa: E501
OpenAPI spec version: 2.7.0
Contact: developers@flat.io
Generated by: https://openapi-generator.tech
coding: utf-8 noqa: E501 FIXME: construct object with mandatory attributes with example values model = flat_api.models.collection.Collection() noqa: E501 | 1,995 | 0.815617 |
# File: __init__.py
#
# Copyright (c) 2018-2019 Splunk Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software distributed under
# the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND,
# either express or implied. See the License for the specific language governing permissions
# and limitations under the License.
| 40.266667 | 95 | 0.756623 | [
"Apache-2.0"
] | splunk-soar-connectors/hipchat | __init__.py | 604 | Python | File: __init__.py Copyright (c) 2018-2019 Splunk Inc. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. | 575 | 0.951987 |
# Databricks notebook source
# MAGIC %md-sandbox
# MAGIC
# MAGIC <div style="text-align: center; line-height: 0; padding-top: 9px;">
# MAGIC <img src="https://databricks.com/wp-content/uploads/2018/03/db-academy-rgb-1200px.png" alt="Databricks Learning" style="width: 600px">
# MAGIC </div>
# COMMAND ----------
# MAGIC %md
# MAGIC # Parsing Errors
# MAGIC
# MAGIC This code is going to throw several errors. Click on **`Run All`** above.
# COMMAND ----------
# MAGIC %run ../Includes/module-4/setup-lesson-4.02
# COMMAND ----------
# ANSWER
x = 1
x * 7
# COMMAND ----------
# MAGIC %md
# MAGIC Note that **`Run All`** execution mimics scheduled job execution; the **`Command skipped`** output we see below is the same we'll see in a job result.
# COMMAND ----------
# ANSWER
y = 99.52
y // 1
# COMMAND ----------
# MAGIC %md
# MAGIC The above is what we see when we have Python errors.
# COMMAND ----------
# ANSWER
import pyspark.sql.functions as F
# COMMAND ----------
# MAGIC %md
# MAGIC Let's look at a Spark error.
# MAGIC
# MAGIC While running multiple commands in a single cell, it can sometimes be difficult to parse where an error is coming from.
# COMMAND ----------
# ANSWER
df = (spark.read
.format("csv")
.option("header", True)
.schema("date DATE, temp INTEGER")
.load("/databricks-datasets/weather/low_temps"))
df.createOrReplaceTempView("low_temps")
df.join(df, "date").groupBy("date").count()
# COMMAND ----------
# MAGIC %md
# MAGIC Sometimes an error isn't an error, but doesn't achieve what was intended.
# COMMAND ----------
# MAGIC %sql
# MAGIC -- ANSWER
# MAGIC SELECT dayofmonth(date) FROM low_temps
# COMMAND ----------
# MAGIC %md
# MAGIC Use the below cell to figure out how to fix the code above.
# COMMAND ----------
display(df)
# COMMAND ----------
# MAGIC %md
# MAGIC Column names cause common errors when trying to save tables.
# COMMAND ----------
# MAGIC %sql
# MAGIC -- ANSWER
# MAGIC CREATE TABLE test_table
# MAGIC AS (
# MAGIC SELECT dayofmonth(date) % 3 three_day_cycle FROM low_temps
# MAGIC )
# COMMAND ----------
# MAGIC %md
# MAGIC Run the following cell to delete the tables and files associated with this lesson.
# COMMAND ----------
DA.cleanup()
# COMMAND ----------
# MAGIC %md-sandbox
# MAGIC © 2022 Databricks, Inc. All rights reserved.<br/>
# MAGIC Apache, Apache Spark, Spark and the Spark logo are trademarks of the <a href="https://www.apache.org/">Apache Software Foundation</a>.<br/>
# MAGIC <br/>
# MAGIC <a href="https://databricks.com/privacy-policy">Privacy Policy</a> | <a href="https://databricks.com/terms-of-use">Terms of Use</a> | <a href="https://help.databricks.com/">Support</a>
| 23.405172 | 192 | 0.650829 | [
"CC0-1.0"
] | Code360In/advanced-data-engineering-with-databricks | Advanced-Data-Engineering-with-Databricks/Solutions/04 - Databricks in Production/ADE 4.02 - Error Prone.py | 2,715 | Python | Databricks notebook source MAGIC %md-sandbox MAGIC MAGIC <div style="text-align: center; line-height: 0; padding-top: 9px;"> MAGIC <img src="https://databricks.com/wp-content/uploads/2018/03/db-academy-rgb-1200px.png" alt="Databricks Learning" style="width: 600px"> MAGIC </div> COMMAND ---------- MAGIC %md MAGIC Parsing Errors MAGIC MAGIC This code is going to throw several errors. Click on **`Run All`** above. COMMAND ---------- MAGIC %run ../Includes/module-4/setup-lesson-4.02 COMMAND ---------- ANSWER COMMAND ---------- MAGIC %md MAGIC Note that **`Run All`** execution mimics scheduled job execution; the **`Command skipped`** output we see below is the same we'll see in a job result. COMMAND ---------- ANSWER COMMAND ---------- MAGIC %md MAGIC The above is what we see when have Python errors COMMAND ---------- ANSWER COMMAND ---------- MAGIC %md MAGIC Let's look at a Spark error. MAGIC MAGIC While running multiple commands in a single cell, it can sometimes to be difficult to parse where an error is coming from. COMMAND ---------- ANSWER COMMAND ---------- MAGIC %md MAGIC Sometimes an error isn't an error, but doesn't achieve what was intended. COMMAND ---------- MAGIC %sql MAGIC -- ANSWER MAGIC SELECT dayofmonth(date) FROM low_temps COMMAND ---------- MAGIC %md MAGIC Use the below cell to figure out how to fix the code above. COMMAND ---------- COMMAND ---------- MAGIC %md MAGIC Column names cause common errors when trying to save tables. COMMAND ---------- MAGIC %sql MAGIC -- ANSWER MAGIC CREATE TABLE test_table MAGIC AS ( MAGIC SELECT dayofmonth(date) % 3 three_day_cycle FROM low_temps MAGIC ) COMMAND ---------- MAGIC %md MAGIC Run the following cell to delete the tables and files associated with this lesson. COMMAND ---------- COMMAND ---------- MAGIC %md-sandbox MAGIC © 2022 Databricks, Inc. All rights reserved.<br/> MAGIC Apache, Apache Spark, Spark and the Spark logo are trademarks of the <a href="https://www.apache.org/">Apache Software Foundation</a>.<br/> MAGIC <br/> MAGIC <a href="https://databricks.com/privacy-policy">Privacy Policy</a> | <a href="https://databricks.com/terms-of-use">Terms of Use</a> | <a href="https://help.databricks.com/">Support</a> | 2,221 | 0.818048 |
# -*- coding: utf-8 -*-
# ------------------------------------------------------------------------------
#
# Copyright 2018-2019 Fetch.AI Limited
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# ------------------------------------------------------------------------------
"""This module contains an example of skill for an AEA."""
from aea.configurations.base import PublicId
PUBLIC_ID = PublicId.from_str("fetchai/gym:0.13.0")
| 36.730769 | 80 | 0.604188 | [
"Apache-2.0"
] | marcofavorito/agents-aea | packages/fetchai/skills/gym/__init__.py | 955 | Python | This module contains an example of a skill for an AEA.
-*- coding: utf-8 -*- ------------------------------------------------------------------------------ Copyright 2018-2019 Fetch.AI Limited Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ------------------------------------------------------------------------------ | 813 | 0.851309 |
# # SPDX-License-Identifier: MIT
# from augur.augurplugin import AugurPlugin
# from augur.application import Application
# class HousekeeperPlugin(AugurPlugin):
# """
# This plugin serves as an example as to how to load plugins into Augur
# """
# def __init__(self, augur_app):
# super().__init__(augur_app)
# self.__housekeeper = self.__call__()
# def __call__(self):
# from .housekeeper import Housekeeper
# return Housekeeper(
# user=self._augur.read_config('Database', 'user', 'AUGUR_DB_USER', 'root'),
# password=self._augur.read_config('Database', 'password', 'AUGUR_DB_PASS', 'password'),
# host=self._augur.read_config('Database', 'host', 'AUGUR_DB_HOST', '127.0.0.1'),
# port=self._augur.read_config('Database', 'port', 'AUGUR_DB_PORT', '3306'),
# dbname=self._augur.read_config('Database', 'database', 'AUGUR_DB_NAME', 'msr14')
# )
# HousekeeperPlugin.augur_plugin_meta = {
# 'name': 'housekeeper',
# 'datasource': True
# }
# Application.register_plugin(HousekeeperPlugin)
# __all__ = ['HousekeeperPlugin'] | 39.1 | 104 | 0.636829 | [
"MIT"
] | 0WeiyuFeng0/augur | augur/housekeeper/__init__.py | 1,173 | Python | SPDX-License-Identifier: MIT from augur.augurplugin import AugurPlugin from augur.application import Application class HousekeeperPlugin(AugurPlugin): """ This plugin serves as an example as to how to load plugins into Augur """ def __init__(self, augur_app): super().__init__(augur_app) self.__housekeeper = self.__call__() def __call__(self): from .housekeeper import Housekeeper return Housekeeper( user=self._augur.read_config('Database', 'user', 'AUGUR_DB_USER', 'root'), password=self._augur.read_config('Database', 'password', 'AUGUR_DB_PASS', 'password'), host=self._augur.read_config('Database', 'host', 'AUGUR_DB_HOST', '127.0.0.1'), port=self._augur.read_config('Database', 'port', 'AUGUR_DB_PORT', '3306'), dbname=self._augur.read_config('Database', 'database', 'AUGUR_DB_NAME', 'msr14') ) HousekeeperPlugin.augur_plugin_meta = { 'name': 'housekeeper', 'datasource': True } Application.register_plugin(HousekeeperPlugin) __all__ = ['HousekeeperPlugin'] | 1,116 | 0.951407 |
# coding: utf-8
"""
OpenShift API (with Kubernetes)
OpenShift provides builds, application lifecycle, image content management, and administrative policy on top of Kubernetes. The API allows consistent management of those objects. All API operations are authenticated via an Authorization bearer token that is provided for service accounts as a generated secret (in JWT form) or via the native OAuth endpoint located at /oauth/authorize. Core infrastructure components may use openshift.client certificates that require no authentication. All API operations return a 'resourceVersion' string that represents the version of the object in the underlying storage. The standard LIST operation performs a snapshot read of the underlying objects, returning a resourceVersion representing a consistent version of the listed objects. The WATCH operation allows all updates to a set of objects after the provided resourceVersion to be observed by a openshift.client. By listing and beginning a watch from the returned resourceVersion, openshift.clients may observe a consistent view of the state of one or more objects. Note that WATCH always returns the update after the provided resourceVersion. Watch may be extended a limited time in the past - using etcd 2 the watch window is 1000 events (which on a large cluster may only be a few tens of seconds) so openshift.clients must explicitly handle the \"watch to old error\" by re-listing. Objects are divided into two rough categories - those that have a lifecycle and must reflect the state of the cluster, and those that have no state. Objects with lifecycle typically have three main sections: * 'metadata' common to all objects * a 'spec' that represents the desired state * a 'status' that represents how much of the desired state is reflected on the cluster at the current time Objects that have no state have 'metadata' but may lack a 'spec' or 'status' section. Objects are divided into those that are namespace scoped (only exist inside of a namespace) and those that are cluster scoped (exist outside of a namespace). A namespace scoped resource will be deleted when the namespace is deleted and cannot be created if the namespace has not yet been created or is in the process of deletion. Cluster scoped resources are typically only accessible to admins - resources like nodes, persistent volumes, and cluster policy. All objects have a schema that is a combination of the 'kind' and 'apiVersion' fields. This schema is additive only for any given version - no backwards incompatible changes are allowed without incrementing the apiVersion. The server will return and accept a number of standard responses that share a common schema - for instance, the common error type is 'unversioned.Status' (described below) and will be returned on any error from the API server. The API is available in multiple serialization formats - the default is JSON (Accept: application/json and Content-Type: application/json) but openshift.clients may also use YAML (application/yaml) or the native Protobuf schema (application/vnd.kubernetes.protobuf). Note that the format of the WATCH API call is slightly different - for JSON it returns newline delimited objects while for Protobuf it returns length-delimited frames (4 bytes in network-order) that contain a 'versioned.Watch' Protobuf object. See the OpenShift documentation at https://docs.openshift.org for more information.
OpenAPI spec version: v3.6.0-alpha.0
Generated by: https://github.com/swagger-api/swagger-codegen.git
"""
from __future__ import absolute_import
import os
import sys
import unittest
import openshift.client
from kubernetes.client.rest import ApiException
from openshift.client.models.v1beta1_cpu_target_utilization import V1beta1CPUTargetUtilization
class TestV1beta1CPUTargetUtilization(unittest.TestCase):
""" V1beta1CPUTargetUtilization unit test stubs """
def setUp(self):
pass
def tearDown(self):
pass
def testV1beta1CPUTargetUtilization(self):
"""
Test V1beta1CPUTargetUtilization
"""
model = openshift.client.models.v1beta1_cpu_target_utilization.V1beta1CPUTargetUtilization()
if __name__ == '__main__':
unittest.main()
| 99.023256 | 3,380 | 0.793565 | [
"Apache-2.0"
] | flaper87/openshift-restclient-python | openshift/test/test_v1beta1_cpu_target_utilization.py | 4,258 | Python | V1beta1CPUTargetUtilization unit test stubs
Test V1beta1CPUTargetUtilization
OpenShift API (with Kubernetes)
OpenShift provides builds, application lifecycle, image content management, and administrative policy on top of Kubernetes. The API allows consistent management of those objects. All API operations are authenticated via an Authorization bearer token that is provided for service accounts as a generated secret (in JWT form) or via the native OAuth endpoint located at /oauth/authorize. Core infrastructure components may use openshift.client certificates that require no authentication. All API operations return a 'resourceVersion' string that represents the version of the object in the underlying storage. The standard LIST operation performs a snapshot read of the underlying objects, returning a resourceVersion representing a consistent version of the listed objects. The WATCH operation allows all updates to a set of objects after the provided resourceVersion to be observed by a openshift.client. By listing and beginning a watch from the returned resourceVersion, openshift.clients may observe a consistent view of the state of one or more objects. Note that WATCH always returns the update after the provided resourceVersion. Watch may be extended a limited time in the past - using etcd 2 the watch window is 1000 events (which on a large cluster may only be a few tens of seconds) so openshift.clients must explicitly handle the "watch to old error" by re-listing. Objects are divided into two rough categories - those that have a lifecycle and must reflect the state of the cluster, and those that have no state. Objects with lifecycle typically have three main sections: * 'metadata' common to all objects * a 'spec' that represents the desired state * a 'status' that represents how much of the desired state is reflected on the cluster at the current time Objects that have no state have 'metadata' but may lack a 'spec' or 'status' section. Objects are divided into those that are namespace scoped (only exist inside of a namespace) and those that are cluster scoped (exist outside of a namespace). A namespace scoped resource will be deleted when the namespace is deleted and cannot be created if the namespace has not yet been created or is in the process of deletion. Cluster scoped resources are typically only accessible to admins - resources like nodes, persistent volumes, and cluster policy. All objects have a schema that is a combination of the 'kind' and 'apiVersion' fields. This schema is additive only for any given version - no backwards incompatible changes are allowed without incrementing the apiVersion. The server will return and accept a number of standard responses that share a common schema - for instance, the common error type is 'unversioned.Status' (described below) and will be returned on any error from the API server. The API is available in multiple serialization formats - the default is JSON (Accept: application/json and Content-Type: application/json) but openshift.clients may also use YAML (application/yaml) or the native Protobuf schema (application/vnd.kubernetes.protobuf). Note that the format of the WATCH API call is slightly different - for JSON it returns newline delimited objects while for Protobuf it returns length-delimited frames (4 bytes in network-order) that contain a 'versioned.Watch' Protobuf object. See the OpenShift documentation at https://docs.openshift.org for more information.
OpenAPI spec version: v3.6.0-alpha.0
Generated by: https://github.com/swagger-api/swagger-codegen.git
coding: utf-8 | 3,605 | 0.846642 |