GenBank files don't have any per-letter annotations:
record.letter_annotations
notebooks/04 - Sequence Annotation objects.ipynb
tiagoantao/biopython-notebook
mit
Most of the annotation information gets recorded in the annotations dictionary, for example:
len(record.annotations)
record.annotations["source"]
The dbxrefs list gets populated from any PROJECT or DBLINK lines:
record.dbxrefs
Finally, and perhaps most interestingly, all the entries in the features table (e.g. the genes or CDS features) get recorded as SeqFeature objects in the features list.
len(record.features)
Feature, location and position objects

SeqFeature objects

Sequence features are an essential part of describing a sequence. Once you get beyond the sequence itself, you need some way to organize and easily get at the more 'abstract' information that is known about the sequence. While it is probably impossible to develop a general sequence feature class that will cover everything, the Biopython SeqFeature class attempts to encapsulate as much of the information about the sequence as possible. The design is heavily based on the GenBank/EMBL feature tables, so if you understand how they look, you'll probably have an easier time grasping the structure of the Biopython classes.

The key idea about each SeqFeature object is to describe a region on a parent sequence, typically a SeqRecord object. That region is described with a location object, typically a range between two positions (see below). The SeqFeature class has a number of attributes, so first we'll list them and their general features, and then later in the chapter work through examples to show how this applies to a real life example. The attributes of a SeqFeature are:

- .type - This is a textual description of the type of feature (for instance, this will be something like 'CDS' or 'gene').
- .location - The location of the SeqFeature on the sequence that you are dealing with. The SeqFeature delegates much of its functionality to the location object, and includes a number of shortcut attributes for properties of the location:
  - .ref - shorthand for .location.ref - any (different) reference sequence the location is referring to. Usually just None.
  - .ref_db - shorthand for .location.ref_db - specifies the database any identifier in .ref refers to. Usually just None.
  - .strand - shorthand for .location.strand - the strand on the sequence that the feature is located on. For double stranded nucleotide sequences this may either be 1 for the top strand, -1 for the bottom strand, 0 if the strand is important but unknown, or None if it doesn't matter. This is None for proteins, or single stranded sequences.
- .qualifiers - This is a Python dictionary of additional information about the feature. The key is some kind of terse one-word description of what the information contained in the value is about, and the value is the actual information. For example, a common key for a qualifier might be 'evidence' and the value might be 'computational (non-experimental)'. This is just a way to let the person who is looking at the feature know that it has not been experimentally (i.e. in a wet lab) confirmed. Note that the value will be a list of strings (even when there is only one string). This is a reflection of the feature tables in GenBank/EMBL files.
- .sub_features - This used to be used to represent features with complicated locations like 'joins' in GenBank/EMBL files. This has been deprecated with the introduction of the CompoundLocation object, and should now be ignored.

Positions and locations

The key idea about each SeqFeature object is to describe a region on a parent sequence, for which we use a location object, typically describing a range between two positions. To try to clarify the terminology we're using:

- position - This refers to a single position on a sequence, which may be fuzzy or not. For instance, 5, 20, <100 and >200 are all positions.
- location - A location is a region of sequence bounded by some positions. For instance 5..20 (i.e. 5 to 20) is a location.

I just mention this because sometimes I get confused between the two.
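To make the attributes listed above concrete, here is a minimal sketch (the type, coordinates and qualifier values are made up for illustration) that builds a feature by hand:

from Bio.SeqFeature import SeqFeature, FeatureLocation

my_feature = SeqFeature(
    FeatureLocation(5, 18, strand=-1),  # region 5:18 on the bottom strand
    type="CDS",
    qualifiers={"evidence": ["computational (non-experimental)"]},  # values are lists of strings
)
print(my_feature.type)    # 'CDS'
print(my_feature.strand)  # -1, the shortcut for my_feature.location.strand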
FeatureLocation object

Unless you work with eukaryotic genes, most SeqFeature locations are extremely simple - you just need start and end coordinates and a strand. That's essentially all the basic FeatureLocation object does. In practice of course, things can be more complicated. First of all we have to handle compound locations made up of several regions. Secondly, the positions themselves may be fuzzy (inexact).

CompoundLocation object

Biopython 1.62 introduced the CompoundLocation as part of a restructuring of how complex locations made up of multiple regions are represented. The main usage is for handling 'join' locations in EMBL/GenBank files.

Fuzzy Positions

So far we've only used simple positions. One complication in dealing with feature locations comes in the positions themselves. In biology many times things aren't entirely certain (as much as us wet lab biologists try to make them certain!). For instance, you might do a dinucleotide priming experiment and discover that the start of the mRNA transcript starts at one of two sites. This is very useful information, but the complication comes in how to represent this as a position. To help us deal with this, we have the concept of fuzzy positions. Basically there are several types of fuzzy positions, so we have six classes to deal with them:

- ExactPosition - As its name suggests, this class represents a position which is specified as exact along the sequence. This is represented as just a number, and you can get the position by looking at the position attribute of the object.
- BeforePosition - This class represents a fuzzy position that occurs prior to some specified site. In GenBank/EMBL notation, this is represented as something like <13, signifying that the real position is located somewhere less than 13. To get the specified upper boundary, look at the position attribute of the object.
- AfterPosition - Contrary to BeforePosition, this class represents a position that occurs after some specified site. This is represented in GenBank as >13, and like BeforePosition, you get the boundary number by looking at the position attribute of the object.
- WithinPosition - Occasionally used for GenBank/EMBL locations, this class models a position which occurs somewhere between two specified nucleotides. In GenBank/EMBL notation, this would be represented as (1.5), to represent that the position is somewhere within the range 1 to 5. To get the information in this class you have to look at two attributes. The position attribute specifies the lower boundary of the range we are looking at, so in our example case this would be one. The extension attribute specifies the range to the higher boundary, so in this case it would be 4. So object.position is the lower boundary and object.position + object.extension is the upper boundary.
- OneOfPosition - Occasionally used for GenBank/EMBL locations, this class deals with a position where several possible values exist, for instance you could use this if the start codon was unclear and there were two candidates for the start of the gene. Alternatively, that might be handled explicitly as two related gene features.
- UnknownPosition - This class deals with a position of unknown location. This is not used in GenBank/EMBL, but corresponds to the '?' feature coordinate used in UniProt.

Here's an example where we create a location with fuzzy end points:
from Bio import SeqFeature

start_pos = SeqFeature.AfterPosition(5)
end_pos = SeqFeature.BetweenPosition(9, left=8, right=9)
my_location = SeqFeature.FeatureLocation(start_pos, end_pos)
Note that the details of some of the fuzzy-locations changed in Biopython 1.59, in particular for BetweenPosition and WithinPosition you must now make it explicit which integer position should be used for slicing etc. For a start position this is generally the lower (left) value, while for an end position this would generally be the higher (right) value. If you print out a FeatureLocation object, you can get a nice representation of the information:
print(my_location)
We can access the fuzzy start and end positions using the start and end attributes of the location:
my_location.start
print(my_location.start)
my_location.end
print(my_location.end)
If you don't want to deal with fuzzy positions and just want numbers, they are actually subclasses of integers so should work like integers:
int(my_location.start)
int(my_location.end)
For compatibility with older versions of Biopython you can ask for the nofuzzy_start and nofuzzy_end attributes of the location, which are plain integers:
my_location.nofuzzy_start
my_location.nofuzzy_end
Notice that this just gives you back the position attributes of the fuzzy locations. Similarly, to make it easy to create a position without worrying about fuzzy positions, you can just pass plain numbers to the FeatureLocation constructor, and you'll get back ExactPosition objects:
exact_location = SeqFeature.FeatureLocation(5, 9)
print(exact_location)
exact_location.start
print(int(exact_location.start))
exact_location.nofuzzy_start
That is most of the nitty gritty about dealing with fuzzy positions in Biopython. It has been designed so that dealing with fuzziness is not that much more complicated than dealing with exact positions, and hopefully you find that true!

Location testing

You can use the Python keyword in with a SeqFeature or location object to see if the base/residue for a parent coordinate is within the feature/location or not. For example, suppose you have a SNP of interest and you want to know which features this SNP is within, and let's suppose this SNP is at index 4350 (Python counting!). Here is a simple brute force solution where we just check all the features one by one in a loop:
from Bio import SeqIO

my_snp = 4350
record = SeqIO.read("data/NC_005816.gb", "genbank")
for feature in record.features:
    if my_snp in feature:
        print("%s %s" % (feature.type, feature.qualifiers.get('db_xref')))
Note that gene and CDS features from GenBank or EMBL files defined with joins are the union of the exons -- they do not cover any introns.

Sequence described by a feature or location

A SeqFeature or location object doesn't directly contain a sequence; instead the location describes how to get this from the parent sequence. For example, consider a (short) gene sequence with location 5:18 on the reverse strand, which in GenBank/EMBL notation using 1-based counting would be complement(6..18), like this:
from Bio.Seq import Seq
from Bio.SeqFeature import SeqFeature, FeatureLocation

seq = Seq("ACCGAGACGGCAAAGGCTAGCATAGGTATGAGACTTCCTTCCTGCCAGTGCTGAGGAACTGGGAGCCTAC")
feature = SeqFeature(FeatureLocation(5, 18), type="gene", strand=-1)
You could take the parent sequence, slice it to extract 5:18, and then take the reverse complement. If you are using Biopython 1.59 or later, the feature location's start and end are integer-like, so this works:
feature_seq = seq[feature.location.start:feature.location.end].reverse_complement()
print(feature_seq)
This is a simple example so this isn't too bad -- however once you have to deal with compound features (joins) this is rather messy. Instead, the SeqFeature object has an extract method to take care of all this (and since Biopython 1.78 it can handle trans-splicing by supplying a dictionary of referenced sequences):
feature_seq = feature.extract(seq)
print(feature_seq)
The length of a SeqFeature or location matches that of the region of sequence it describes.
print(len(feature_seq))
print(len(feature))
print(len(feature.location))
For simple FeatureLocation objects the length is just the difference between the start and end positions. However, for a CompoundLocation the length is the sum of the constituent regions.

Comparison

SeqRecord objects can be very complex, but here's a simple example:
from Bio.Seq import Seq
from Bio.SeqRecord import SeqRecord

record1 = SeqRecord(Seq("ACGT"), id="test")
record2 = SeqRecord(Seq("ACGT"), id="test")
What happens when you try to compare these "identical" records?
record1 == record2
Perhaps surprisingly, older versions of Biopython would use Python's default object comparison for the SeqRecord, meaning record1 == record2 would only return True if these variables pointed at the same object in memory. In this example, record1 == record2 would have returned False!
record1 == record2 # on old versions of Biopython!
As of Biopython 1.67, SeqRecord comparison like record1 == record2 will instead raise an explicit error to avoid people being caught out by this:
record1 == record2
Instead you should check the attributes you are interested in, for example the identifier and the sequence:
record1.id == record2.id
record1.seq == record2.seq
Beware that comparing complex objects quickly gets complicated.

References

Another common annotation related to a sequence is a reference to a journal or other published work dealing with the sequence. We have a fairly simple way of representing a Reference in Biopython -- the Bio.SeqFeature.Reference class stores the relevant information about a reference as attributes of an object. The attributes include things you would expect to see in a reference, like journal, title and authors. Additionally, it can also hold the medline_id and pubmed_id and a comment about the reference. These are all accessed simply as attributes of the object. A reference also has a location object so that it can specify a particular location on the sequence that the reference refers to. For instance, you might have a journal article dealing with a particular gene located on a BAC, and want to specify that it refers only to this position exactly. The location is a potentially fuzzy location. Any reference objects are stored as a list in the SeqRecord object's annotations dictionary under the key 'references'. That's all there is to it. References are meant to be easy to deal with, and hopefully general enough to cover lots of use cases.
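As a small illustration (the bibliographic details here are made up), a Reference can be built and attached to a record's annotations like this:

from Bio.SeqFeature import Reference

reference = Reference()
reference.authors = "Jones A., Smith B."         # hypothetical authors
reference.title = "A study of an example gene"   # hypothetical title
reference.journal = "Journal of Examples 1:1-5"  # hypothetical journal
reference.pubmed_id = "123456"                   # hypothetical identifier
record.annotations["references"] = [reference]   # stored as a list under 'references'

The format method

The format method of the SeqRecord class gives a string containing your record formatted using one of the output file formats supported by Bio.SeqIO, such as FASTA: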
record = SeqRecord(
    Seq(
        "MMYQQGCFAGGTVLRLAKDLAENNRGARVLVVCSEITAVTFRGPSETHLDSMVGQALFGD"
        "GAGAVIVGSDPDLSVERPLYELVWTGATLLPDSEGAIDGHLREVGLTFHLLKDVPGLISK"
        "NIEKSLKEAFTPLGISDWNSTFWIAHPGGPAILDQVEAKLGLKEEKMRATREVLSEYGNM"
        "SSAC"
    ),
    id="gi|14150838|gb|AAK54648.1|AF376133_1",
    description="chalcone synthase [Cucumis sativus]",
)
print(record.format("fasta"))
This format method takes a single mandatory argument, a lower case string which is supported by Bio.SeqIO as an output format. However, some of the file formats Bio.SeqIO can write to require more than one record (typically the case for multiple sequence alignment formats), and thus won't work via this format() method.

Slicing a SeqRecord

You can slice a SeqRecord, to give you a new SeqRecord covering just part of the sequence. What is important here is that any per-letter annotations are also sliced, and any features which fall completely within the new sequence are preserved (with their locations adjusted). For example, taking the same GenBank file used earlier:
record = SeqIO.read("data/NC_005816.gb", "genbank")
print(record)
len(record)
len(record.features)
For this example we're going to focus in on the pim gene, YP_pPCP05. If you have a look at the GenBank file directly you'll find this gene/CDS has location string 4343..4780, or in Python counting 4342:4780. From looking at the file you can work out that these are the twenty-first and twenty-second entries in the file, so in Python zero-based counting they are entries 20 and 21 in the features list:
print(record.features[20])
print(record.features[21])
Let's slice this parent record from 4300 to 4800 (enough to include the pim gene/CDS), and see how many features we get:
sub_record = record[4300:4800]
sub_record
len(sub_record)
len(sub_record.features)
Our sub-record just has two features, the gene and CDS entries for YP_pPCP05:
print(sub_record.features[0])
print(sub_record.features[1])
Notice that their locations have been adjusted to reflect the new parent sequence! While Biopython has done something sensible and hopefully intuitive with the features (and any per-letter annotation), for the other annotation it is impossible to know if this still applies to the sub-sequence or not. To avoid guessing, the annotations and dbxrefs are omitted from the sub-record, and it is up to you to transfer any relevant information as appropriate.
print(sub_record.annotations)
print(sub_record.dbxrefs)
The same point could be made about the record id, name and description, but for practicality these are preserved:
print(sub_record.id)
print(sub_record.name)
print(sub_record.description)
This illustrates the problem nicely though: our new sub-record is not the complete sequence of the plasmid, so the description is wrong! Let's fix this and then view the sub-record as a reduced FASTA file using the format method described above:
sub_record.description = "Yersinia pestis biovar Microtus str. 91001 plasmid pPCP1, partial."
print(sub_record.format("fasta"))
Adding SeqRecord objects

You can add SeqRecord objects together, giving a new SeqRecord. What is important here is that any common per-letter annotations are also added, all the features are preserved (with their locations adjusted), and any other common annotation is also kept (like the id, name and description). For an example with per-letter annotation, we'll use the first record in a FASTQ file.
record = next(SeqIO.parse("data/example.fastq", "fastq"))
print(len(record))
print(record.seq)
print(record.letter_annotations["phred_quality"])
Let's suppose this was Roche 454 data, and that from other information you think the TTT should be only TT. We can make a new edited record by first slicing the SeqRecord before and after the 'extra' third T:
left = record[:20]
print(left.seq)
print(left.letter_annotations["phred_quality"])
right = record[21:]
print(right.seq)
print(right.letter_annotations["phred_quality"])
Now add the two parts together:
edited = left + right
print(len(edited))
print(edited.seq)
print(edited.letter_annotations["phred_quality"])
Easy and intuitive? We hope so! You can make this shorter with just:
edited = record[:20] + record[21:]
Now, for an example with features, we'll use a GenBank file. Suppose you have a circular genome:
record = SeqIO.read("data/NC_005816.gb", "genbank")
print(record)
Before shifting the origin, note the number of features, the database cross references, and the annotation keys:
print(len(record))
print(len(record.features))
print(record.dbxrefs)
print(record.annotations.keys())
You can shift the origin like this:
shifted = record[2000:] + record[:2000]
print(shifted)
print(len(shifted))
Note that this isn't perfect in that some annotation like the database cross references and one of the features (the source feature) have been lost:
print(len(shifted.features))
print(shifted.dbxrefs)
print(shifted.annotations.keys())
This is because the SeqRecord slicing step is cautious in what annotation it preserves (erroneously propagating annotation can cause major problems). If you want to keep the database cross references or the annotations dictionary, this must be done explicitly:
shifted.dbxrefs = record.dbxrefs[:]
shifted.annotations = record.annotations.copy()
print(shifted.dbxrefs)
print(shifted.annotations.keys())
Also note that in an example like this, you should probably change the record identifiers since the NCBI references refer to the original unmodified sequence.

Reverse-complementing SeqRecord objects

One of the new features in Biopython 1.57 was the SeqRecord object's reverse_complement method. This tries to balance ease of use with worries about what to do with the annotation in the reverse complemented record. For the sequence, this uses the Seq object's reverse complement method. Any features are transferred with the location and strand recalculated. Likewise any per-letter-annotation is also copied but reversed (which makes sense for typical examples like quality scores). However, transfer of most annotation is problematic. For instance, if the record ID was an accession, that accession should not really apply to the reverse complemented sequence, and transferring the identifier by default could easily cause subtle data corruption in downstream analysis. Therefore, the SeqRecord's id, name, description, annotations and database cross references are all not transferred by default. The SeqRecord object's reverse_complement method takes a number of optional arguments corresponding to properties of the record. Setting these arguments to True means copy the old values, while False means drop the old values and use the default value. You can alternatively provide the new desired value instead. Consider this example record:
record = SeqIO.read("data/NC_005816.gb", "genbank")
print("%s %i %i %i %i" % (record.id, len(record), len(record.features),
                          len(record.dbxrefs), len(record.annotations)))
Here we take the reverse complement and specify a new identifier - but notice how most of the annotation is dropped (but not the features):
rc = record.reverse_complement(id="TESTING")
print("%s %i %i %i %i" % (rc.id, len(rc), len(rc.features),
                          len(rc.dbxrefs), len(rc.annotations)))
Banana-shaped target distribution
import numpy as np
import matplotlib.pyplot as plt

# banana-shaped target density
dtarget = lambda x: np.exp(-x[0]**2 / 200. - 0.5 * (x[1] + (0.05 * x[0]**2) - 100. * 0.05)**2)

x1 = np.linspace(-20, 20, 101)
x2 = np.linspace(-15, 10, 101)
X, Y = np.meshgrid(x1, x2)
Z = np.array([dtarget(p) for p in zip(X.flat, Y.flat)]).reshape(101, 101)

plt.figure(figsize=(10, 7))
plt.contour(X, Y, Z)
plt.show()

# run four chains from the same starting point; HMC and Gelman are defined earlier in this notebook
start = np.array([[2., 5.] for i in range(4)])
chains = HMC(dtarget, start, Eps=0.5, L=200, m=0.5, N=5000)

plt.figure(figsize=(10, 7))
plt.contour(X, Y, Z)
plt.plot(chains[0][:, 0], chains[0][:, 1], alpha=0.8)
plt.plot(chains[1][:, 0], chains[1][:, 1], alpha=0.8)
plt.plot(chains[2][:, 0], chains[2][:, 1], alpha=0.8)
plt.plot(chains[3][:, 0], chains[3][:, 1], alpha=0.8)
plt.show()

# trace plots, with the Gelman-Rubin diagnostic as the title
plt.subplot(211)
plt.title(Gelman(chains)[0])
for i in range(chains.shape[0]):
    plt.plot(chains[i, :, 0])
plt.ylabel('x1')
plt.subplot(212)
for i in range(chains.shape[0]):
    plt.plot(chains[i, :, 1])
plt.ylabel('x2')
plt.tight_layout()
plt.show()
Hamiltonian MCMC (HMC).ipynb
erickpeirson/statistical-computing
cc0-1.0
Retrieving training and test data

The MNIST data set already contains both training and test data. There are 55,000 data points of training data, and 10,000 points of test data. Each MNIST data point has: 1. an image of a handwritten digit and 2. a corresponding label (a number 0-9 that identifies the image). We'll call the images, which will be the input to our neural network, X and their corresponding labels Y. We're going to want our labels as one-hot vectors, which are vectors holding mostly 0's and a single 1. It's easiest to see this in an example. As a one-hot vector, the number 0 is represented as [1, 0, 0, 0, 0, 0, 0, 0, 0, 0], and 4 is represented as [0, 0, 0, 0, 1, 0, 0, 0, 0, 0].

Flattened data

For this example, we'll be using flattened data, or a representation of MNIST images in one dimension rather than two. So, each handwritten number image, which is 28x28 pixels, will be represented as a one dimensional array of 784 pixel values. Flattening the data throws away information about the 2D structure of the image, but it simplifies our data so that all of the training data can be contained in one array whose shape is [55000, 784]; the first dimension is the number of training images and the second dimension is the number of pixels in each image. This is the kind of data that is easy to analyze using a simple neural network.
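As a quick aside (this snippet is ours, not part of the original notebook), here is how the label 4 becomes a one-hot vector using plain numpy:

import numpy as np

label = 4
one_hot = np.zeros(10)  # mostly 0's...
one_hot[label] = 1      # ...and a single 1 at the label's index
print(one_hot)          # [0. 0. 0. 0. 1. 0. 0. 0. 0. 0.]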
# Retrieve the training and test data
trainX, trainY, testX, testY = mnist.load_data(one_hot=True)
trainX[0]
tutorials/intro-to-tflearn/TFLearn_Digit_Recognition.ipynb
wbbeyourself/cn-deep-learning
mit
Visualize the training data

Provided below is a function that will help you visualize the MNIST data. By passing in the index of a training example, the function show_digit will display that training image along with its corresponding label in the title.
# Visualizing the data
import matplotlib.pyplot as plt
%matplotlib inline

# Function for displaying a training image by its index in the MNIST set
def show_digit(index):
    label = trainY[index].argmax(axis=0)
    # Reshape 784 array into 28x28 image
    image = trainX[index].reshape([28, 28])
    plt.title('Training data, index: %d,  Label: %d' % (index, label))
    plt.imshow(image, cmap='gray_r')
    plt.show()

# Display the training image at index 3
show_digit(3)
Building the network

TFLearn lets you build the network by defining the layers in that network. For this example, you'll define:

- The input layer, which tells the network the number of inputs it should expect for each piece of MNIST data.
- Hidden layers, which recognize patterns in data and connect the input to the output layer, and
- The output layer, which defines how the network learns and outputs a label for a given image.

Let's start with the input layer; to define the input layer, you'll define the type of data that the network expects. For example, net = tflearn.input_data([None, 100]) would create a network with 100 inputs. The number of inputs to your network needs to match the size of your data. For this example, we're using 784 element long vectors to encode our input data, so we need 784 input units.

Adding layers

To add new hidden layers, you use net = tflearn.fully_connected(net, n_units, activation='ReLU'). This adds a fully connected layer where every unit (or node) in the previous layer is connected to every unit in this layer. The first argument net is the network you created in the tflearn.input_data call; it designates the input to the hidden layer. You can set the number of units in the layer with n_units, and set the activation function with the activation keyword. You can keep adding layers to your network by repeatedly calling tflearn.fully_connected(net, n_units).

Then, to set how you train the network, use net = tflearn.regression(net, optimizer='sgd', learning_rate=0.1, loss='categorical_crossentropy'). Again, this is passing in the network you've been building. The keywords: optimizer sets the training method, here stochastic gradient descent; learning_rate is the learning rate; loss determines how the network error is calculated, in this example with categorical cross-entropy.

Finally, you put all this together to create the model with tflearn.DNN(net).

Exercise: Below in the build_model() function, you'll put together the network using TFLearn. You get to choose how many layers to use, how many hidden units, etc. Hint: The final output layer must have 10 output nodes (one for each digit 0-9). It's also recommended to use a softmax activation layer as your final output layer.
# Define the neural network
def build_model():
    # This resets all parameters and variables, leave this here
    tf.reset_default_graph()

    # Include the input layer, hidden layer(s), and set how you want to train the model
    net = tflearn.input_data([None, 784])
    net = tflearn.fully_connected(net, n_units=200, activation='ReLU')
    net = tflearn.fully_connected(net, n_units=30, activation='ReLU')
    net = tflearn.fully_connected(net, n_units=10, activation='softmax')
    net = tflearn.regression(net, optimizer='sgd', learning_rate=0.1,
                             loss='categorical_crossentropy')

    # This model assumes that your network is named "net"
    model = tflearn.DNN(net)
    return model

# Build the model
model = build_model()
'Hello World!' is a data structure called a string.
type('Hello World')
4 + 5
type(9.)
4 * 5
4**5  # exponentiation

# naming things and storing them in memory for later use
x = 2**10
print(x)
whos

# with explanation
print('The value of x is {:,}.'.format(x))  # you can change the formatting of x inside the brackets
type(3.14159)
print('The value of pi is approximately {0:.2f}.'.format(3.14159))  # 0 is the argument, .2f means two digits past the decimal point
notebooks/introduction_to_python.ipynb
jwjohnson314/data-801
mit
Lists

Lists are a commonly used Python data structure.
x = [1, 2, 3]
type(x)
whos
x.append(4)
x

# throws an error: lists have no prepend method
x.prepend(0)

y = [0]
x + y
y + x
whos

# didn't save it - let's do it again
y = y + x
y

# Exercise: there is a more efficient way - find the reference in the docs for the insert command.
# insert the value 2.5 into the list into the appropriate spot
# your code here:
y.insert(3, 2.5)
print(y)

# a bigger list - list is a function too
z = list(range(100))  # range is a special type in Python
print(z)

# getting help
?range

# try shift+tab when calling an unfamiliar function for quick access to its docstring
range()

# Exercise: get the docstring for the 'open' function
# your code here:
open()

# Exercise: get the docstring for the 'list' function
# your code here:
list()

# often we need to extract elements from a list. Python uses zero-based indexing
print(z[0])
print(z[5])

# ranges/slices
print(z[4:5])  # 4 is included
print(z[4:])   # 4 is included
print(z[:4])   # 4 is not included
z[2:4] + z[7:9]

# Exercise: write a list consisting of the entries in z whose first digit is a prime number
# your code here:

# from the end of the list
z[-10:]

# by step size other than 1
z[10:20:2]  # start:stop:step

# when you're going all the way to the end
z[10::2]  # stop omitted

# exercise: can you write a single operation to return z in reversed order?
# your code here:
z[::-1]

# removing values
z.remove(2)
print(z[:10])

# strings are a lot like lists
string = 'This is A poOrly cAPitAlized string.'
string[:4]
type(string[:4])
string[::2]
string[-1]
print(string.lower())
print(string.upper())
print(string.split('A'))
type(string.split('A'))

address = 'http://www.wikiart.org/en/jan-van-eyck/the-birth-of-john-the-baptist-1422'
artist, painting = address.split('/')[4:]
print(artist)
print(painting)

# digression - unicode
ord('.')  # encoding
chr(97)
# homework reading: http://www.joelonsoftware.com/articles/Unicode.html

# string arithmetic
x = 'Hello '
y = 'World'
print(x + y)
x[:5] + '_' + y
x.replace(' ', '_')

# throws an error: strings have no append method
x.append('_')

x
x.replace(' ', '_') + y

# strings are not exactly lists!
x*5 + y.append(' ')*3
Exercise: take a few minutes to read the docs for text strings here: https://docs.python.org/3/library/stdtypes.html#textseq

Immutable means 'can't be changed'. So if you want to change a string, you need to make a copy of some sort.
x * 5 + (y + str(' ')) * 3
Tuples

Exercise: Find the doc page for the tuples datatype. What is the difference between a tuple and a list?
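For reference (this demonstration is ours, added for illustration), the key difference is that lists are mutable while tuples are immutable:

t = (1, 2, 3)
l = [1, 2, 3]
l[0] = 99        # fine: lists can be changed in place
try:
    t[0] = 99    # raises TypeError: tuples cannot be changed in place
except TypeError as err:
    print(err)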
# Exercise: write a tuple consisting of the first five letters of the alphabet (lower-case) in reversed order
# your code here
tup = ('e', 'd', 'c', 'b', 'a')
type(tup)
tup[3]
Dicts

The dictionary data structure consists of key-value pairs. This shows up a lot; for instance, when reading JSON files (http://www.json.org/).
x = ['Bob', 'Amy', 'Fred']
y = [32, 27, 19]
z = dict(zip(x, y))
type(z)
z

# throws an error: dicts are looked up by key, and 1 is not a key in z
z[1]

z['Bob']
z.keys()
z.values()

detailed = {'amy': {'age': 32, 'school': 'UNH', 'GPA': 4.0},
            'bob': {'age': 27, 'school': 'UNC', 'GPA': 3.4}}
detailed['amy']['school']

# less trivial example
# library imports; ignore for now
from urllib.request import urlopen
import json

url = 'http://www.wikiart.org/en/App/Painting/' + \
      'PaintingsByArtist?artistUrl=' + \
      'pablo-picasso' + '&json=2'
raw = urlopen(url).read().decode('utf8')
d = json.loads(raw)
type(d)
d
type(d[0])
d[0].keys()
Control structures: the 'for' loop
# indents matter in Python
for i in range(20):
    print('%s: %s' % (d[i]['title'], d[i]['completitionYear']))

# exercises: print the sizes and titles of the last ten paintings in this list.
# The statement should print as 'title: width pixels x height pixels'
# your code here:
The 'if-then' statement
data = [1.2, 2.4, 23.3, 4.5]
new_data = []
for i in range(len(data)):
    if round(data[i]) % 2 == 0:  # modular arithmetic, remainder of 0
        new_data.append(round(data[i]))
    else:
        new_data.append(0)
print(new_data)
Digression - list comprehensions

Rather than a for loop, in a situation like that above, Python has a method called a list comprehension for creating lists. Sometimes this is more efficient. It's often nicer syntactically, as long as the number of conditions is not too large (<= 2 is a good guideline).
print(data)
new_new_data = [round(i) if round(i) % 2 == 0 else 0 for i in data]
print(new_new_data)

data = list(range(20))
for i in data:
    if i % 2 == 0:
        print(i)
    elif i >= 10:
        print('wow, that\'s a big odd number - still no fun')
    else:
        print('odd num no fun')
The 'while' loop
# beware loops that don't terminate
counter = 0
tmp = 2
while counter < 10:
    tmp = tmp**2
    counter += 1
print('{:,}'.format(tmp))
print('tmp is %d digits long, that\'s huge!' % len(str(tmp)))

# the 'pass' command
for i in range(10):
    if i % 2 == 0:
        print(i)
    else:
        pass

# the 'continue' command
for letter in 'Python':
    if letter == 'h':
        continue
    print('Current Letter :', letter)

# the 'pass' command
for letter in 'Python':
    if letter == 'h':
        pass
    print('Current Letter :', letter)

# the 'break' command
for letter in 'Python':
    if letter == 'h':
        break
    print('Current Letter :', letter)
Functions

Functions take in inputs and produce outputs.
def square(x):
    '''input: a numerical value x
    output: the square of x
    '''
    return x**2

square(3.14)

# Exercise: write a function called 'reverse' to take in a string and reverse it
# your code here:

# test
reverse('Hi, my name is Joan Jett')

def raise_to_power(x, n=2):  # 2 is the default for n
    return x**n

raise_to_power(3)
raise_to_power(3, 4)

def write_to_file(filepath, string):
    '''make sure the file doesn\'t exist; this will overwrite'''
    with open(filepath, 'w+') as f:
        f.writelines(string)

write_to_file('test.txt', 'fred was here')
! cat test.txt

with open('test.txt') as f:
    content = f.read()
print(content)

write_to_file('test.txt', 'goodbye for now\n')  # \n is the newline character
! cat test.txt

# Exercise: what are the modes for editing a file?
Make a PMF of numkdhh, the number of children under 18 in the respondent's household.
numkdhh = thinkstats2.Pmf(resp.numkdhh)
numkdhh
code/chap03ex.ipynb
goodwordalchemy/thinkstats_notes_and_exercises
gpl-3.0
Display the PMF.
thinkplot.Hist(numkdhh, label='actual')
thinkplot.Config(title="PMF of num children under 18",
                 xlabel="number of children under 18",
                 ylabel="probability")
Make the biased Pmf of children in the household, as observed if you surveyed the children instead of the respondents.
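The BiasPmf helper used below is defined earlier in the book; for reference, a sketch of it in terms of the thinkstats2.Pmf API looks like this:

def BiasPmf(pmf, label):
    new_pmf = pmf.Copy(label=label)
    for x, p in pmf.Items():
        new_pmf.Mult(x, x)  # a household with x children is x times as likely to be sampled
    new_pmf.Normalize()
    return new_pmf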
biased_pmf = BiasPmf(numkdhh, label='biased')
thinkplot.Hist(biased_pmf)
thinkplot.Config(title="PMF of num children under 18",
                 xlabel="number of children under 18",
                 ylabel="probability")
Display the actual Pmf and the biased Pmf on the same axes.
width = 0.45
thinkplot.PrePlot(2)
thinkplot.Hist(biased_pmf, align="right", label="biased", width=width)
thinkplot.Hist(numkdhh, align="left", label="actual", width=width)
thinkplot.Config(title="PMFs of children under 18 in a household",
                 xlabel='number of children', ylabel='probability')
Compute the means of the two Pmfs.
print "actual mean:", numkdhh.Mean() print "biased mean:", biased_pmf.Mean()
Verification of the FUSED-Wind wrapper common inputs
v80 = wt.WindTurbine('Vestas v80 2MW offshore', 'V80_2MW_offshore.dat', 70, 40)
HR1 = wf.WindFarm('Horns Rev 1', 'HR_coordinates.dat', v80)
WD = range(0, 360, 1)
examples/Script.ipynb
rethore/FUSED-Wake
agpl-3.0
The following figure shows the distribution of the sum of three dice, pmf_3d6, and the distribution of the best three out of four, pmf_best3.
pmf_3d6.plot(label='sum of 3 dice')
pmf_best3.plot(label='best 3 of 4', style='--')
decorate_dice('Distribution of attributes')
notebooks/chap07.ipynb
AllenDowney/ThinkBayes2
mit
Most characters have at least one attribute greater than 12; almost 10% of them have an 18. The following figure shows the CDFs for the three distributions we have computed.
import matplotlib.pyplot as plt

cdf_3d6 = pmf_3d6.make_cdf()
cdf_3d6.plot(label='sum of 3 dice')
cdf_best3 = pmf_best3.make_cdf()
cdf_best3.plot(label='best 3 of 4 dice', style='--')
cdf_max6.plot(label='max of 6 attributes', style=':')
decorate_dice('Distribution of attributes')
plt.ylabel('CDF');
Here's what it looks like, along with the distribution of the maximum.
cdf_min6.plot(color='C4', label='minimum of 6')
cdf_max6.plot(color='C2', label='maximum of 6', style=':')
decorate_dice('Minimum and maximum of six attributes')
plt.ylabel('CDF');
We can compare it to the distribution of attributes you get by rolling four dice and adding up the best three.
cdf_best3.plot(label='best 3 of 4', color='C1', style='--')
cdf_standard.step(label='standard set', color='C7')
decorate_dice('Distribution of attributes')
plt.ylabel('CDF');
I plotted cdf_standard as a step function to show more clearly that it contains only a few quantities.
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
Exercise: Suppose you are fighting three monsters:

- One is armed with a short sword that causes one 6-sided die of damage,
- One is armed with a battle axe that causes one 8-sided die of damage, and
- One is armed with a bastard sword that causes one 10-sided die of damage.

One of the monsters, chosen at random, attacks you and does 1 point of damage. Which monster do you think it was? Compute the posterior probability that each monster was the attacker.

If the same monster attacks you again, what is the probability that you suffer 6 points of damage?

Hint: Compute a posterior distribution as we have done before and pass it as one of the arguments to make_mixture.
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
Exercise: Henri Poincaré was a French mathematician who taught at the Sorbonne around 1900. The following anecdote about him is probably fiction, but it makes an interesting probability problem.

Supposedly Poincaré suspected that his local bakery was selling loaves of bread that were lighter than the advertised weight of 1 kg, so every day for a year he bought a loaf of bread, brought it home and weighed it. At the end of the year, he plotted the distribution of his measurements and showed that it fit a normal distribution with mean 950 g and standard deviation 50 g. He brought this evidence to the bread police, who gave the baker a warning.

For the next year, Poincaré continued to weigh his bread every day. At the end of the year, he found that the average weight was 1000 g, just as it should be, but again he complained to the bread police, and this time they fined the baker.

Why? Because the shape of the new distribution was asymmetric. Unlike the normal distribution, it was skewed to the right, which is consistent with the hypothesis that the baker was still making 950 g loaves, but deliberately giving Poincaré the heavier ones.

To see whether this anecdote is plausible, let's suppose that when the baker sees Poincaré coming, he hefts n loaves of bread and gives Poincaré the heaviest one. How many loaves would the baker have to heft to make the average of the maximum 1000 g?

To get you started, I'll generate a year's worth of data from a normal distribution with the given parameters.
mean = 950
std = 50
np.random.seed(17)
sample = np.random.normal(mean, std, size=365)

# Solution goes here
# Solution goes here
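One way to explore the question (a sketch of ours, not the book's official solution; the helper name mean_of_max is hypothetical) is to simulate a year of the baker hefting n loaves and handing over the heaviest, then look for the smallest n whose yearly average reaches 1000 g:

def mean_of_max(n, iters=365):
    # each row is one day: the baker hefts n loaves and gives away the heaviest
    samples = np.random.normal(mean, std, size=(iters, n))
    return samples.max(axis=1).mean()

for n in range(2, 8):
    print(n, mean_of_max(n))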
Meanwhile, we are trying to get more information about the data using pandas. In the following sections we use the value_counts method to get more information about each feature's values. This method reports the number of occurrences of each distinct value of a given feature.
housing['total_rooms'].value_counts()
housing['ocean_proximity'].value_counts()
ml/housing/Housing.ipynb
1995parham/Learning
gpl-2.0
See the difference between loc and iloc methods in a simple pandas DataFrame.
pd.DataFrame([{'a': 1, 'b': '1'}, {'a': 2, 'b': 1}, {'a': 3, 'b': 1}]).iloc[1]
pd.DataFrame([{'a': 1, 'b': '1'}, {'a': 2, 'b': 1}, {'a': 3, 'b': 1}]).loc[1]
pd.DataFrame([{'a': 1, 'b': '1'}, {'a': 2, 'b': 1}, {'a': 3, 'b': 1}]).loc[1, ['b']]
pd.DataFrame([{'a': 1, 'b': '1'}, {'a': 2, 'b': 1}, {'a': 3, 'b': 1}]).loc[[True, True, False]]
Here we want to see the apply function of pandas for a specific feature.
pd.DataFrame([{'a': 1, 'b': '1'}, {'a': 2, 'b': 1}, {'a': 3, 'b': 1}])['a'].apply(lambda a: a > 10)
The following function helps to split the given dataset into test and train sets.
from zlib import crc32
import numpy as np

def test_set_check(identifier, test_ratio):
    return crc32(np.int64(identifier)) & 0xffffffff < test_ratio * 2**32

def split_train_test_by_id(data, test_ratio, id_column):
    ids = data[id_column]
    in_test_set = ids.apply(lambda _id: test_set_check(_id, test_ratio))
    return data.loc[~in_test_set], data.loc[in_test_set]

housing_with_id = housing.reset_index()  # adds an "index" column
train_set, test_set = split_train_test_by_id(housing_with_id, 0.2, 'index')
housing = train_set.copy()

housing.plot(kind="scatter", x="longitude", y="latitude", alpha=0.1)

import matplotlib.pyplot as plt
housing.plot(kind='scatter', x='longitude', y='latitude', alpha=0.4,
             s=housing['population']/100, label='population',
             c='median_house_value', cmap=plt.get_cmap('jet'), colorbar=True,
             )
Below is a plot of the signal.
plt.figure(figsize=(figWidth, 4))
plt.plot(signalTime, signalSamples)
plt.xlabel("t")
plt.ylabel("Amplitude")
plt.suptitle('Source Signal')
plt.show()
src/articles/PDMPlayground/index.ipynb
bradhowes/keystrokecountdown
mit
To verify that the signal really has only two frequency components, here is the output of the FFT for it.
fftFreqs = np.arange(bandwidth)
fftValues = (np.fft.fft(signalSamples) / sampleFrequency)[:int(bandwidth)]
plt.plot(fftFreqs, np.absolute(fftValues))
plt.xlim(0, bandwidth)
plt.ylim(0, 0.3)
plt.xlabel("Frequency")
plt.ylabel("Magnitude")
plt.suptitle("Source Signal Frequency Components")
plt.show()
PDM Modulation

Now that we have a signal to work with, the next step is to generate a pulse train from it. The code below is a simple hack that generates 64 samples for every one in the original signal. Normally, this would involve interpolation so that the 63 additional samples vary linearly from the previous sample to the current one; skipping that will introduce some noise due to discontinuities. The setting pdmFreq is the number of samples to create for each element in signalSamples.
pdmFreq = 64
pdmPulses = np.empty(sampleFrequency * pdmFreq)
pdmTime = np.arange(0, pdmPulses.size)
pdmIndex = 0
signalIndex = 0
quantizationError = 0
while pdmIndex < pdmPulses.size:
    sample = signalSamples[signalIndex]
    signalIndex += 1
    for tmp in range(pdmFreq):
        if sample >= quantizationError:
            bit = 1
        else:
            bit = -1
        quantizationError = bit - sample + quantizationError
        pdmPulses[pdmIndex] = bit
        pdmIndex += 1
print(pdmIndex, signalIndex, pdmPulses.size, signalSamples.size)
Visualize the first 4K PDM samples. We should be able to clearly see the pulsing.
span = 1024
plt.figure(figsize=(16, 6))
counter = 1
for pos in range(0, pdmIndex, span):
    from matplotlib.ticker import MultipleLocator
    plt.subplot(4, 1, counter)
    counter += 1
    # Generate a set of time values that correspond to pulses with +1 values. Remove the rest
    # and plot.
    plt.vlines(np.delete(pdmTime[pos:pos + span],
                         np.nonzero(pdmPulses[pos:pos + span] > 0.0)[0]), 0, 1, 'g')
    plt.ylim(0, 1)
    plt.xlim(pos, pos + span)
    plt.tick_params(axis='both', which='major', labelsize=8)
    ca = plt.gca()
    axes = ca.axes
    axes.yaxis.set_visible(False)
    # axes.yaxis.set_ticklabels([])
    axes.xaxis.set_ticks_position('bottom')
    # axes.xaxis.set_ticks(np.arange(pos, pos + span, 64))
    axes.xaxis.set_major_locator(MultipleLocator(64))
    spines = axes.spines
    for tag in ('top', 'bottom'):
        spines[tag].set_visible(False)
    if counter == 5:
        break
plt.show()
Low-pass Filter

A fundamental property of high-frequency PDM sampling is that the quantization noise from the PDM modulator is also high-frequency (in a real system, there is also low-frequency noise from clock jitter, heat, etc). When we decimate the signal, we do not want to bring the noise into the lower frequencies, so we need to filter the samples before incorporating them into the new, lower-frequency signal. Our low-pass filter is a finite impulse response (FIR) type, with tap values taken from the TFilter web application. Our filter is designed to operate at 2 x sampleFrequency so that it will cover our original bandwidth (512 Hz) in the pass-band and heavily attenuate everything above. LowPassFilter.py source
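The real tap values live in LowPassFilter.py; as a rough sketch of what such a class implements (the class name SimpleFIR and the idea of passing taps in are ours; actual taps would come from TFilter), a direct-form FIR filter looks like this:

import numpy as np

class SimpleFIR:
    def __init__(self, taps):
        self.taps = np.asarray(taps)
        self.reset()

    def reset(self):
        # clear the delay line
        self.history = np.zeros(self.taps.size)

    def __call__(self, sample):
        # shift the new sample into the delay line and return the weighted sum
        self.history = np.roll(self.history, 1)
        self.history[0] = sample
        return float(np.dot(self.taps, self.history))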
import LowPassFilter
lpf = LowPassFilter.LowPassFilter()
PDM Decimation

Our PDM signal has a sampling frequency of 64 × sampleFrequency or 65.536 kHz. To get to our original sampleFrequency we need to ultimately use one sample out of every 64 we see in the PDM pulse train. Since we want to filter out high-frequency noise, and our filter is tuned for 2 × sampleFrequency (2.048 kHz), we will take every 32nd sample and send each to our filter, but we will only use every other filtered sample. NOTE: the reconstruction here of a sample value from PDM values is not what would really take place. In particular, the code below obtains an average of the +/- unity values in the chain, where a real implementation would count bits and convert into a sample value.
derivedSamples = []
pdmIndex = 0
while pdmIndex < pdmPulses.size:
    lpf(pdmPulses[int(pdmIndex)])
    pdmIndex += pdmFreq / 2
    filtered = lpf(pdmPulses[int(pdmIndex)])
    pdmIndex += pdmFreq / 2
    derivedSamples.append(filtered)
derivedSamples = np.array(derivedSamples)
signalSamples.size, derivedSamples.size
Now plot the resulting signal in both the time and frequency domains:
plt.figure(figsize=(figWidth, 4))
plt.plot(signalTime, derivedSamples)
plt.xlabel("t")
plt.ylabel("Amplitude")
plt.suptitle('Derived Signal')
plt.show()

fftFreqs = np.arange(bandwidth)
fftValues = (np.fft.fft(derivedSamples) / sampleFrequency)[:int(bandwidth)]
plt.plot(fftFreqs, np.absolute(fftValues))
plt.xlim(0, bandwidth)
plt.ylim(0, 0.3)
plt.xlabel("Frequency")
plt.ylabel("Magnitude")
plt.suptitle("Derived Signal Frequency Components")
plt.show()
Filtering Test

Let's redo the PDM modulation / decimation steps, but this time injecting a high-frequency (32.767 kHz) signal with 30% intensity during the modulation. Hopefully, we will not see this noise appear in the final result.
pdmFreq = 64
pdmPulses = np.empty(sampleFrequency * pdmFreq)
pdmTime = np.arange(0, pdmPulses.size)
pdmIndex = 0
signalIndex = 0
quantizationError = 0

noiseFreq = 32767  # Hz
noiseAmplitude = .30
noiseSampleDuration = 1.0 / (sampleFrequency * pdmFreq)
noiseTime = np.arange(0, 1, noiseSampleDuration)
noiseSamples = np.sin(2.0 * np.pi * noiseFreq * noiseTime) * noiseAmplitude

while pdmIndex < pdmPulses.size:
    sample = signalSamples[signalIndex] + noiseSamples[pdmIndex]
    signalIndex += 1
    for tmp in range(pdmFreq):
        if sample >= quantizationError:
            bit = 1
        else:
            bit = -1
        quantizationError = bit - sample + quantizationError
        pdmPulses[pdmIndex] = bit
        pdmIndex += 1
print(pdmIndex, signalIndex, pdmPulses.size, signalSamples.size, noiseSamples.size)

derivedSamples = []
pdmIndex = 0
lpf.reset()
while pdmIndex < pdmPulses.size:
    lpf(pdmPulses[int(pdmIndex)])
    pdmIndex += pdmFreq / 2
    filtered = lpf(pdmPulses[int(pdmIndex)])
    pdmIndex += pdmFreq / 2
    derivedSamples.append(filtered)
derivedSamples = np.array(derivedSamples)

plt.figure(figsize=(figWidth, 4))
plt.plot(signalTime, derivedSamples)
plt.xlabel("t")
plt.ylabel("Amplitude")
plt.suptitle('Derived Signal')
plt.show()

fftFreqs = np.arange(bandwidth)
fftValues = (np.fft.fft(derivedSamples) / sampleFrequency)[:int(bandwidth)]
plt.plot(fftFreqs, np.absolute(fftValues))
plt.xlim(0, bandwidth)
plt.ylim(0, 0.3)
plt.xlabel("Frequency")
plt.ylabel("Magnitude")
plt.suptitle("Derived Signal Frequency Components")
plt.show()
Using sklearn for k-means clustering

sklearn.cluster.KMeans provides an interface for k-means clustering.
from sklearn.cluster import KMeans
import numpy as np

X = np.array([[1, 2], [1, 4], [1, 0],
              [4, 2], [4, 4], [4, 0]])
kmeans = KMeans(n_clusters=2, random_state=0).fit(X)
ipynbs/unsupervised/Kmeans.ipynb
NLP-Deeplearning-Club/Classic-ML-Methods-Algo
mit
Inspect the label of each training vector after the model has been fitted:
kmeans.labels_
After training, the model can be used to predict the labels of new vectors:
kmeans.predict([[0, 0], [4, 4]])
The center points of each cluster after training:
kmeans.cluster_centers_
Process MEG data
data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'
raw = mne.io.read_raw_fif(raw_fname)
raw.set_eeg_reference()  # set EEG average reference
events = mne.find_events(raw, stim_channel='STI 014')
event_id = dict(aud_r=1)  # event trigger and conditions
tmin = -0.2  # start of each epoch (200ms before the trigger)
tmax = 0.5   # end of each epoch (500ms after the trigger)
raw.info['bads'] = ['MEG 2443', 'EEG 053']
picks = mne.pick_types(raw.info, meg=True, eeg=False, eog=True, exclude='bads')
baseline = (None, 0)  # means from the first instant to t = 0
reject = dict(grad=4000e-13, mag=4e-12, eog=150e-6)
epochs = mne.Epochs(raw, events, event_id, tmin, tmax, proj=True,
                    picks=picks, baseline=baseline, reject=reject)
0.14/_downloads/plot_mne_dspm_source_localization.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
This is an alternative way of calculating the capacity by approximating the integral using the Gauss-Hermite Quadrature (https://en.wikipedia.org/wiki/Gauss%E2%80%93Hermite_quadrature). The Gauss-Hermite quadrature states that
\begin{equation}
\int_{-\infty}^\infty e^{-x^2}f(x)\,\mathrm{d}x \approx \sum_{i=1}^n w_i f(x_i)
\end{equation}
where $w_i$ and $x_i$ are the respective weights and roots that are given by the Hermite polynomials. We have to rearrange the integral $I = \int_{-\infty}^\infty f_Y(y)\log_2(f_Y(y))\,\mathrm{d}y$ a little bit to put it into a form suitable for the Gauss-Hermite quadrature
\begin{align}
I &= \frac{1}{2}\sum_{x\in\{\pm 1\}}\int_{-\infty}^\infty f_{Y|X}(y|X=x)\log_2(f_Y(y))\,\mathrm{d}y \\
&= \frac{1}{2}\sum_{x\in\{\pm 1\}}\int_{-\infty}^\infty \frac{1}{\sqrt{2\pi}\sigma_n}e^{-\frac{(y-x)^2}{2\sigma_n^2}}\log_2(f_Y(y))\,\mathrm{d}y \\
&\stackrel{(a)}{=} \frac{1}{2}\sum_{x\in\{\pm 1\}}\int_{-\infty}^\infty \frac{1}{\sqrt{\pi}}e^{-z^2}\log_2(f_Y(\sqrt{2}\sigma_n z + x))\,\mathrm{d}z \\
&\approx \frac{1}{2\sqrt{\pi}}\sum_{x\in\{\pm 1\}} \sum_{i=1}^n w_i \log_2(f_Y(\sqrt{2}\sigma_n x_i + x))
\end{align}
where in $(a)$ we substitute $z = \frac{y-x}{\sqrt{2}\sigma_n}$.
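The density f_Y used below is defined earlier in the notebook; for reference, a sketch of it for the BI-AWGN channel with equally likely inputs $x\in\{\pm 1\}$ is:

import numpy as np

def f_Y(y, sigman):
    # mixture of two Gaussians centered at +1 and -1, each with weight 1/2
    gauss = lambda y, mu: np.exp(-(y - mu)**2 / (2 * sigman**2)) / (np.sqrt(2 * np.pi) * sigman)
    return 0.5 * (gauss(y, +1) + gauss(y, -1))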
# alternative method using Gauss-Hermite Quadrature
# (see https://en.wikipedia.org/wiki/Gauss%E2%80%93Hermite_quadrature)
# use 40 components to approximate the integral, should be sufficiently exact
x_GH, w_GH = np.polynomial.hermite.hermgauss(40)
print(w_GH)

def C_BIAWGN_GH(sigman):
    integral_xplus1 = np.sum(w_GH * [np.log2(f_Y(np.sqrt(2)*sigman*xi + 1, sigman)) for xi in x_GH])
    integral_xminus1 = np.sum(w_GH * [np.log2(f_Y(np.sqrt(2)*sigman*xi - 1, sigman)) for xi in x_GH])
    integral = (integral_xplus1 + integral_xminus1) / 2 / np.sqrt(np.pi)
    return -integral - 0.5*np.log2(2*np.pi*np.exp(1)*sigman**2)
SC468/BIAWGN_Capacity.ipynb
kit-cel/wt
gpl-2.0
Plot the capacity curves as a function of $E_s/N_0$ (in dB) and $E_b/N_0$ (in dB). In order to calculate $E_b/N_0$, we recall from the lecture that
\begin{equation}
\frac{E_s}{N_0} = r\cdot \frac{E_b}{N_0}\qquad\Rightarrow\qquad\frac{E_b}{N_0} = \frac{1}{r}\cdot \frac{E_s}{N_0}
\end{equation}
Next, we know that the best rate that can be achieved is the capacity, i.e., $r=C$. Hence, we get $\frac{E_b}{N_0}=\frac{1}{C}\cdot\frac{E_s}{N_0}$. Converting to decibels yields
\begin{align}
\frac{E_b}{N_0}\bigg|_{\textrm{dB}} &= 10\cdot\log_{10}\left(\frac{1}{C}\cdot\frac{E_s}{N_0}\right) \\
&= 10\cdot\log_{10}\left(\frac{1}{C}\right) + 10\cdot\log_{10}\left(\frac{E_s}{N_0}\right) \\
&= \frac{E_s}{N_0}\bigg|_{\textrm{dB}} - 10\cdot\log_{10}(C)
\end{align}
fig = plt.figure(1, figsize=(15, 7))

plt.subplot(121)
plt.plot(esno_dB_range, capacity_AWGN)
plt.plot(esno_dB_range, capacity_BIAWGN)
plt.xlim((-10, 10))
plt.ylim((0, 2))
plt.xlabel('$E_s/N_0$ (dB)', fontsize=16)
plt.ylabel('Capacity (bit/channel use)', fontsize=16)
plt.grid(True)
plt.legend(['AWGN', 'BI-AWGN'], fontsize=14)

# plot Eb/N0. Note that in this case, the rate that is used for
# calculating Eb/N0 is the capacity: Eb/N0 = 1/r * (Es/N0)
plt.subplot(122)
plt.plot(esno_dB_range - 10*np.log10(capacity_AWGN), capacity_AWGN)
plt.plot(esno_dB_range - 10*np.log10(capacity_BIAWGN), capacity_BIAWGN)
plt.xlim((-2, 10))
plt.ylim((0, 2))
plt.xlabel('$E_b/N_0$ (dB)', fontsize=16)
plt.ylabel('Capacity (bit/channel use)', fontsize=16)
plt.grid(True)

from scipy.stats import norm

# first compute the BSC error probability;
# the Q function (1 - CDF) is also often called the survival function (sf)
delta_range = [norm.sf(1/sigman) for sigman in sigman_range]
capacity_BIAWGN_hard = [1 + delta*np.log2(delta) + (1-delta)*np.log2(1-delta) for delta in delta_range]

fig = plt.figure(1, figsize=(15, 7))

plt.subplot(121)
plt.plot(esno_dB_range, capacity_AWGN)
plt.plot(esno_dB_range, capacity_BIAWGN)
plt.plot(esno_dB_range, capacity_BIAWGN_hard)
plt.xlim((-10, 10))
plt.ylim((0, 2))
plt.xlabel('$E_s/N_0$ (dB)', fontsize=16)
plt.ylabel('Capacity (bit/channel use)', fontsize=16)
plt.grid(True)
plt.legend(['AWGN', 'BI-AWGN', 'Hard BI-AWGN'], fontsize=14)

# plot Eb/N0, again using the capacity as the rate: Eb/N0 = 1/r * (Es/N0)
plt.subplot(122)
plt.plot(esno_dB_range - 10*np.log10(capacity_AWGN), capacity_AWGN)
plt.plot(esno_dB_range - 10*np.log10(capacity_BIAWGN), capacity_BIAWGN)
plt.plot(esno_dB_range - 10*np.log10(capacity_BIAWGN_hard), capacity_BIAWGN_hard)
plt.xlim((-2, 10))
plt.ylim((0, 2))
plt.xlabel('$E_b/N_0$ (dB)', fontsize=16)
plt.ylabel('Capacity (bit/channel use)', fontsize=16)
plt.grid(True)

W = 4
SC468/BIAWGN_Capacity.ipynb
kit-cel/wt
gpl-2.0
Time evolution of the Spin Squeezing Parameter $\xi^2= \frac{N \langle\Delta J_y^2\rangle}{\langle J_z\rangle^2}$
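A minimal helper sketch for this quantity (assuming numpy arrays of expectation values; note that the cell below uses the slightly generalized denominator $\langle J_z\rangle^2+\langle J_x\rangle^2$):

import numpy as np

def xi2(N, jy_t, jy2_t, jz_t):
    # xi^2 = N * (<Jy^2> - <Jy>^2) / <Jz>^2; values below 1 indicate squeezing
    delta_jy2 = np.asarray(jy2_t) - np.asarray(jy_t)**2
    return N * delta_jy2 / np.asarray(jz_t)**2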
import numpy as np

# N, Lambda, the Liouvillians (liouv, liouv2), the collective spin operators
# (jx, jy, jz) and the PIQS helpers (dicke, j_vals, m_vals) as well as
# mesolve and Options are defined in earlier cells of this notebook.

# set initial state for spins (Dicke basis)
nt = 1001
td0 = 1/(N*Lambda)
tmax = 10 * td0
t = np.linspace(0, tmax, nt)
excited = dicke(N, N/2, N/2)

load_file = False
if load_file == False:
    # cycle over all states in Dicke space
    xi2_1_list = []
    xi2_2_list = []
    xi2_1_min_list = []
    xi2_2_min_list = []
    for j in j_vals(N):
        # for m in m_vals(j):
        m = j
        rho0 = dicke(N, j, m)

        # solve using qutip (Dicke basis)
        # Dissipative dynamics: only collective emission
        result = mesolve(liouv, rho0, t, [],
                         e_ops=[jz, jy, jy**2, jz**2, jx],
                         options=Options(store_states=True))
        rhot = result.states
        jz_t = result.expect[0]
        jy_t = result.expect[1]
        jy2_t = result.expect[2]
        jz2_t = result.expect[3]
        jx_t = result.expect[4]
        Delta_jy = jy2_t - jy_t**2
        xi2_1 = N * Delta_jy / (jz_t**2 + jx_t**2)

        # Dissipative dynamics: only local emission
        result2 = mesolve(liouv2, rho0, t, [],
                          e_ops=[jz, jy, jy**2, jz**2, jx],
                          options=Options(store_states=True))
        rhot2 = result2.states
        jz_t2 = result2.expect[0]
        jy_t2 = result2.expect[1]
        jy2_t2 = result2.expect[2]
        jz2_t2 = result2.expect[3]
        jx_t2 = result2.expect[4]
        Delta_jy2 = jy2_t2 - jy_t2**2
        xi2_2 = N * Delta_jy2 / (jz_t2**2 + jx_t2**2)

        xi2_1_min = np.min(xi2_1)
        xi2_2_min = np.min(xi2_2)
        xi2_1_list.append(xi2_1)
        xi2_2_list.append(xi2_2)
        xi2_1_min_list.append(xi2_1_min)
        xi2_2_min_list.append(xi2_2_min)
        print("|j, m> = ", j, m)
examples/piqs-spin-squeezing-noise.ipynb
qutip/qutip-notebooks
lgpl-3.0
Visualization
label_size2 = 20
lw = 3
texplot = False
# if texplot == True:
#     plt.rc('text', usetex=True)
#     plt.rc('xtick', labelsize=label_size)
#     plt.rc('ytick', labelsize=label_size)

fig1 = plt.figure(figsize=(10, 6))
for xi2_1 in xi2_1_list:
    plt.plot(t*(N*Lambda), xi2_1, '-', label=r' $\gamma_\Downarrow=0.2$', linewidth=lw)
for xi2_2 in xi2_2_list:
    plt.plot(t*(N*Lambda), xi2_2, '-.', label=r'$\gamma_\downarrow=0.2$')
plt.plot(t*(N*Lambda), 1 + 0*t, '--k')
plt.xlim([0, 3])
plt.ylim([0, 8000.5])
plt.ylim([0, 2.5])
plt.xlabel(r'$ N \Lambda t$', fontsize=label_size2)
plt.ylabel(r'$\xi^2$', fontsize=label_size2)
# plt.legend(fontsize=label_size2*0.8)
plt.title(r'Spin Squeezing Parameter, $N={}$'.format(N), fontsize=label_size2)
plt.show()
plt.close()

## Here we find for how long the spin-squeezing parameter, xi2,
## is less than 1 (non-classical or "quantum" condition), in the two dynamics
dt_quantum_xi1_list = []
dt_quantum_xi2_list = []
dt1_jm = []
dt2_jm = []
ds = dicke_space(N)
i = 0
for j in j_vals(N):
    # for m in m_vals(j):
    m = j
    rho0 = dicke(N, j, m)
    quantum_xi1 = xi2_1_list[i][xi2_1_list[i] < 1.0]
    quantum_xi2 = xi2_2_list[i][xi2_2_list[i] < 1.0]
    # first ensemble
    if len(quantum_xi1) > 0:
        dt_quantum_xi1 = len(quantum_xi1)
        dt1_jm.append((dt_quantum_xi1, j, m))
    else:
        dt_quantum_xi1 = 0.0
    # second ensemble
    if len(quantum_xi2) > 0:
        dt_quantum_xi2 = len(quantum_xi2)
        dt2_jm.append((dt_quantum_xi2, j, m))
    else:
        dt_quantum_xi2 = 0.0
    dt_quantum_xi1_list.append(dt_quantum_xi1)
    dt_quantum_xi2_list.append(dt_quantum_xi2)
    i = i + 1

print("collective emission: (squeezing time, j, m)")
print(dt1_jm)
print("local emission: (squeezing time, j, m)")
print(dt2_jm)
examples/piqs-spin-squeezing-noise.ipynb
qutip/qutip-notebooks
lgpl-3.0
Visualization
plt.rc('text', usetex=True)
label_size = 20
label_size2 = 20
label_size3 = 20
plt.rc('xtick', labelsize=label_size)
plt.rc('ytick', labelsize=label_size)
lw = 3
i0 = -3
i0s = 2

fig1 = plt.figure(figsize=(8, 5))

# excited state spin squeezing
plt.plot(t*(N*Lambda), xi2_1_list[-1], 'k-',
         label=r'$|\frac{N}{2},\frac{N}{2}\rangle$, $\gamma_\Downarrow=0.2\Lambda$',
         linewidth=0.8)
plt.plot(t*(N*Lambda), xi2_2_list[-1], 'r--',
         label=r'$|\frac{N}{2},\frac{N}{2}\rangle$, $\gamma_\downarrow=0.2\Lambda$',
         linewidth=0.8)

# state with max time of spin squeezing
plt.plot(t*(N*Lambda), xi2_1_list[i0], 'k-',
         label=r'$|j,j\rangle$, $\gamma_\Downarrow=0.2\Lambda$',
         linewidth=0.8 + 0.4*i0s*lw)
plt.plot(t*(N*Lambda), xi2_2_list[i0], 'r--',
         label=r'$|j,j\rangle$, $\gamma_\downarrow=0.2\Lambda$',
         linewidth=0.8 + 0.4*i0s*lw)

plt.plot(t*(N*Lambda), 1 + 0*t, '--k')
plt.xlim([0, 2.5])
plt.yticks([0, 1, 2])
plt.ylim([-1, 2.])
plt.xlabel(r'$ N \Lambda t$', fontsize=label_size3)
plt.ylabel(r'$\xi^2$', fontsize=label_size3)
plt.legend(fontsize=label_size2*0.8, ncol=2)
fname = 'figures/spin_squeezing_N_{}_states.pdf'.format(N)
plt.title(r'Spin Squeezing Parameter, $N={}$'.format(N), fontsize=label_size2)
plt.show()
plt.close()
examples/piqs-spin-squeezing-noise.ipynb
qutip/qutip-notebooks
lgpl-3.0
The plot shows the spin squeezing parameter for two different dynamics (only collective de-excitation, black curves; only local de-excitation, red curves) and for two different initial states: the maximally excited state (thin curves) and another Dicke state with a longer squeezing time (thick curves). This study, performed in Refs. [5,6] for the maximally excited state, has been extended to any Dicke state in Ref. [7].
# plot the dt matrix in the Dicke space
plt.rc('text', usetex=True)
label_size = 20
label_size2 = 20
label_size3 = 20
plt.rc('xtick', labelsize=label_size)
plt.rc('ytick', labelsize=label_size)
lw = 3
i0 = 7
i0s = 2
ratio_squeezing_local = 3

fig1 = plt.figure(figsize=(6, 8))
ds = dicke_space(N)
value_excited = 3
ds[0, 0] = value_excited
ds[int(N/2 - i0), int(N/2 - i0)] = value_excited * ratio_squeezing_local
plt.imshow(ds, cmap="inferno_r")
plt.xticks([])
plt.yticks([])
plt.xlabel(r"$j$", fontsize=label_size3)
plt.ylabel(r"$m$", fontsize=label_size3)
plt.title(r"Dicke space $(j,m)$ for $N={}$".format(N), fontsize=label_size3)
plt.show()
plt.close()
examples/piqs-spin-squeezing-noise.ipynb
qutip/qutip-notebooks
lgpl-3.0
The plot above shows the two initial states (darker dots): $|\frac{N}{2},\frac{N}{2}\rangle$ (top edge of the Dicke triangle, red dot) and $|j,j\rangle$ with $j=\frac{N}{2}-3=7$ (black dot). A study of the Dicke triangle (dark yellow space) and of state engineering is performed in Ref. [8] for different initial states.

References

[1] D. J. Wineland, J. J. Bollinger, W. M. Itano, F. L. Moore, and D. J. Heinzen, Spin squeezing and reduced quantum noise in spectroscopy, Phys. Rev. A 46, R6797 (1992)

[2] M. Kitagawa and M. Ueda, Squeezed spin states, Phys. Rev. A 47, 5138 (1993)

[3] J. Ma, X. Wang, C.-P. Sun, and F. Nori, Quantum spin squeezing, Physics Reports 509, 89 (2011)

[4] L. Pezzè, A. Smerzi, M. K. Oberthaler, R. Schmied, and P. Treutlein, Quantum metrology with nonclassical states of atomic ensembles, Reviews of Modern Physics, in press (2018)

[5] B. A. Chase and J. Geremia, Collective processes of an ensemble of spin-1/2 particles, Phys. Rev. A 78, 052101 (2008)

[6] B. Q. Baragiola, B. A. Chase, and J. Geremia, Collective uncertainty in partially polarized and partially decohered spin-1/2 systems, Phys. Rev. A 81, 032104 (2010)

[7] N. Shammah, S. Ahmed, N. Lambert, S. De Liberato, and F. Nori, Open quantum systems with local and collective incoherent processes: Efficient numerical simulation using permutational invariance, https://arxiv.org/abs/1805.05129

[8] N. Shammah, N. Lambert, F. Nori, and S. De Liberato, Superradiance with local phase-breaking effects, Phys. Rev. A 96, 023863 (2017)
qutip.about()
examples/piqs-spin-squeezing-noise.ipynb
qutip/qutip-notebooks
lgpl-3.0
2) Create classes/bins Instead of keeping a continuous range of values, you can discretize them into classes/bins. Make use of pandas' qcut, which discretizes a variable into equal-sized buckets.
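For instance (a minimal sketch, assuming the same `data` DataFrame with a 'height' column that the cell below plots), `qcut` returns a categorical column of equally populated bins:

import pandas as pd

# Four height classes of (roughly) equal size; the labels are the bin intervals
height_class = pd.qcut(data['height'], q=4)
print(height_class.value_counts())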
data['height'].hist(bins=100)
plt.title('Height population distribution')
plt.xlabel('cm')
plt.ylabel('freq')
course/class2/01-clean/examples/00-kill.ipynb
hershaw/data-science-101
mit
Step 1: Fit the Initial Random Forest Just fit every feature with equal weights per the usual random forest code, e.g. RandomForestClassifier in scikit-learn
import numpy as np
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Load the iris data
iris = load_iris()

# Create the train-test datasets
X_train, X_test, y_train, y_test = train_test_split(iris.data, iris.target)

np.random.seed(1039)

# Just fit a simple random forest classifier with 2 decision trees
rf = RandomForestClassifier(n_estimators=2)
rf.fit(X=X_train, y=y_train)

# Now plot the trees individually
# (`utils` refers to this repository's local helper module providing draw_tree)
for idx, dtree in enumerate(rf.estimators_):
    print(idx)
    utils.draw_tree(inp_tree=dtree)
    # utils.draw_tree(inp_tree=rf.estimators_[1])
jupyter/backup_deprecated_nbs/06_explore_binary_decision_tree.ipynb
Yu-Group/scikit-learn-sandbox
mit
Get the second Decision tree to use for testing
from sklearn.tree import _tree

estimator = rf.estimators_[1]

estimator.tree_.node_count
estimator.tree_.children_left[0]
estimator.tree_.children_right[0]
_tree.TREE_LEAF
jupyter/backup_deprecated_nbs/06_explore_binary_decision_tree.ipynb
Yu-Group/scikit-learn-sandbox
mit
Write down an efficient Binary Tree Traversal Function
# Now plot the trees individually
utils.draw_tree(inp_tree=estimator)

def binaryTreePaths(dtree, root_node_id=0):
    # Use these lists to parse the tree structure
    children_left = dtree.tree_.children_left
    children_right = dtree.tree_.children_right

    if root_node_id is None:
        paths = []

    if root_node_id == _tree.TREE_LEAF:
        raise ValueError("Invalid node_id %s" % _tree.TREE_LEAF)

    # if left/right is None we'll get an empty list anyway
    if children_left[root_node_id] != _tree.TREE_LEAF:
        paths = [str(root_node_id) + '->' + str(l)
                 for l in binaryTreePaths(dtree, children_left[root_node_id]) +
                          binaryTreePaths(dtree, children_right[root_node_id])]
    else:
        paths = [root_node_id]
    return paths

x1 = binaryTreePaths(rf.estimators_[1], root_node_id=0)
x1

def binaryTreePaths2(dtree, root_node_id=0):
    # Same traversal, but collect node ids as numpy arrays instead of strings
    children_left = dtree.tree_.children_left
    children_right = dtree.tree_.children_right

    if root_node_id is None:
        paths = []

    if root_node_id == _tree.TREE_LEAF:
        raise ValueError("Invalid node_id %s" % _tree.TREE_LEAF)

    # if left/right is None we'll get an empty list anyway
    if children_left[root_node_id] != _tree.TREE_LEAF:
        paths = [np.append(root_node_id, l)
                 for l in binaryTreePaths2(dtree, children_left[root_node_id]) +
                          binaryTreePaths2(dtree, children_right[root_node_id])]
    else:
        paths = [root_node_id]
    return paths

x = binaryTreePaths2(rf.estimators_[1], root_node_id=0)
x

leaf_nodes = [y[-1] for y in x]
leaf_nodes

n_node_samples = estimator.tree_.n_node_samples
num_samples = [n_node_samples[y].astype(int) for y in leaf_nodes]
print(n_node_samples)
print(len(n_node_samples))
num_samples
print(num_samples)
print(sum(num_samples))
print(sum(n_node_samples))
X_train.shape

value = estimator.tree_.value
values = [value[y].astype(int) for y in leaf_nodes]
print(values)
# This should match the number of rows in the training feature set
print(sum(values).sum())
values

feature_names = ["X" + str(i) for i in range(X_train.shape[1])]
np.asarray(feature_names)
print(type(feature_names))
print(feature_names[0])
print(feature_names[-2])

feature = estimator.tree_.feature
z = [feature[y].astype(int) for y in x]
z
# [feature_names[i] for i in z]

max_dpth = estimator.tree_.max_depth
max_dpth

max_n_class = estimator.tree_.max_n_classes
max_n_class

# NOTE: nodes, node_depth, is_leaves and used_feature_names are assumed to be
# computed in other cells of this exploratory notebook; they are not defined above.
print("nodes", np.asarray(a=nodes, dtype="int64"), sep=":\n")
print("node_depth", node_depth, sep=":\n")
print("leaf_node", is_leaves, sep=":\n")
print("feature_names", used_feature_names, sep=":\n")
print("feature", feature, sep=":\n")
jupyter/backup_deprecated_nbs/06_explore_binary_decision_tree.ipynb
Yu-Group/scikit-learn-sandbox
mit
Options
import json
import pandas
from urllib.request import urlopen  # Python 3; the original notebook used Python 2's urllib.urlopen
import matplotlib.pyplot as plt

## Retrieve the bounding box of the specified county - if no county is
## specified, the bounding boxes for all NM counties will be requested.
## (county_name is defined in an earlier cell.)
countyBBOXlink = "http://gstore.unm.edu/apps/epscor/search/nm_counties.json?limit=100&query=" + county_name  # define the request URL
print(countyBBOXlink)  # print the request URL for verification
print()

bboxFile = urlopen(countyBBOXlink)  # request the bounding box information from the server
bboxData = json.load(bboxFile)
# print(bboxData)

# Get data for BBOX defined by specified county(ies)
myCounties = []
for countyBBOX in bboxData["results"]:
    minx, miny, maxx, maxy = countyBBOX[u'box']
    # retrieve data for the specified BBOX for the last 7 days as JSON
    myDownloadLink = "http://waterservices.usgs.gov/nwis/iv/?bBox=%f,%f,%f,%f&format=json&period=P7D&parameterCd=00060" % (minx, miny, maxx, maxy)
    print(myDownloadLink)
    myCounty = {u'name': countyBBOX[u'text'], u'minx': minx, u'miny': miny,
                u'maxx': maxx, u'maxy': maxy, u'downloadLink': myDownloadLink}
    myCounties.append(myCounty)

# countySubset = [myCounties[0]]
# print(countySubset)

valueList = []
for county in myCounties:
    print("processing: %s" % county["downloadLink"])
    try:
        datafile = urlopen(county["downloadLink"])
        data = json.load(datafile)
        values = data["value"]["timeSeries"][0]["values"]
        for item in values:
            for valueItem in item["value"]:
                # print(json.dumps(item["value"], sort_keys=True, indent=4))
                myValue = {"dateTime": valueItem["dateTime"].replace("T", " ").replace(".000-06:00", ""),
                           "value": valueItem["value"],
                           "county": county["name"]}
                # print(myValue)
                valueList.append(myValue)
        # print(valueList)
    except:
        print("\tfailed for this one ...")

# print(json.dumps(values, sort_keys=True, indent=4))

df = pandas.DataFrame(valueList)
df['dateTime'] = pandas.to_datetime(df["dateTime"])
df['value'] = df['value'].astype(float).fillna(-1)
print(df.shape)
print(df.dtypes)
print("column names")
print("------------")
for colName in df.columns:
    print(colName)
print()
print(df.head())

%matplotlib inline
fig, ax = plt.subplots(figsize=(10, 8))
ax.width = 1
ax.height = .5
plt.xkcd()
# plt.ylim(-25, 30)
ax.plot_date(df['dateTime'], df['value'], '.', label="Discharge (cf/sec)", color="0.2")
fig.autofmt_xdate()
plt.legend(loc=2, bbox_to_anchor=(1.0, 1))
plt.title("15-minute Discharge - cubic feet per second")
plt.ylabel("Discharge")
plt.xlabel("Date")
plt.show()
presentations/2014-04-CI-day/examples/notebook_02-Copy1.ipynb
karlbenedict/karlbenedict.github.io
mit
The result is a list of all the methods that can be called on <i>str</i>. At this stage, I recommend ignoring the methods in the list whose names begin with an underscore.

Another trick, which you will probably find more convenient, is available in many of the development environments you may end up working in. The trick is to type the data type or the variable you are working with, then the "dot" character, and then press <kbd dir="ltr" style="direction: ltr">↹ TAB</kbd>.
# Place the cursor right after the dot, then press the TAB key
str.

# It also works like this:
"Hello".

# Or like this:
s = "Hello"
s.
week02/6_Documentation.ipynb
PythonFreeCourse/Notebooks
mit
Documentation for a method or a function

In case we want to look up more details about one of the functions or methods (say, `len` or `str.upper()`), Python's documentation is a great source of information. If we are inside the notebook, there is a nice trick for getting part of this documentation quickly: simply type the name of the function in a code cell, followed by a question mark:
len?
week02/6_Documentation.ipynb
PythonFreeCourse/Notebooks
mit
As soon as we run the cell, a window with additional information about the function will pop up. If we want to get information about a method, we write the type of value on which we want to apply it (say, str):
# str   - the name of the data type (the kind of the value)
# .     - the dot marks that the method written after it belongs to the type written before it
# upper - the name of the method we want help with
# ?     - asks for the information about the method
str.upper?
week02/6_Documentation.ipynb
PythonFreeCourse/Notebooks
mit
**Warning:** calling the function, that is, adding the characters `()` before the question mark, will run the function or the method instead of giving you help.

Inside the help window that opens in the notebook we will see lines containing some interesting details:

- *Signature*: the signature of the function or method, which includes its name and its parameters.
- *Docstring*: a few words that describe well what the function does, sometimes with extra information about the parameters (see the short sketch after this list).

For the time being, we will ignore the components *self*, *\** or */* that occasionally appear in the Signature field.
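For instance (a small sketch with a made-up function), writing a docstring of your own makes both fields show up in the help window:

def shout(text):
    """Return text in uppercase, followed by an exclamation mark."""
    return text.upper() + "!"

# shows the Signature, shout(text), and the Docstring written above
shout?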
Additional help resources

The world of programming is enormous, and there are wonderful resources out there whose purpose is to help the programmer. Here are some of the most popular ones:

- [Google](https://google.com): search your question thoroughly in Google. A good programmer does this many times a day. It is very likely that someone in the world has already run into your problem.
- [The Python documentation](https://docs.python.org/3): contains a great deal of information, and sometimes useful examples.
- [Stack Overflow](https://stackoverflow.com): one of the best-known sites in the development world, with a voting-based question-and-answer system covering everything related to programming.
- [GitHub](https://github.com): a site where people manage their code and share it with others. It has a search bar, and it is excellent for finding examples of how code is used.

Exercise

In this exercise, use Python's documentation as needed to discover methods we have not learned about.

Sorting a list

Here is a list of the natural numbers from 1 to 10, in scrambled order. Can you sort it with one line of code, and print it with one more line? The output printed to the screen should be: [1, 2, 3, 4, 5, 6, 7, 8, 9, 10].
numbers = [2, 9, 10, 8, 7, 4, 3, 5, 6, 1]
week02/6_Documentation.ipynb
PythonFreeCourse/Notebooks
mit
In this example, the condition is True (our variable m is larger than zero), and therefore the print call (the 'if code' in the figure above) is executed. Now, what if the condition were not True? Well...
n = -5
if n > 0:
    print("Larger than zero.")
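Nothing is printed here, because the condition is False and the body of the if is skipped. As a small sketch that runs slightly ahead of the lesson, an else branch lets you handle the False case explicitly:

n = -5
if n > 0:
    print("Larger than zero.")
else:
    # this branch runs because n > 0 is False
    print("Not larger than zero.")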
docs/mpg-if_error_continue/examples/e-02-2_conditionals.ipynb
marburg-open-courseware/gmoc
mit