_id | partition | text | language | title |
---|---|---|---|---|
d1 | train | The SearchDatabase class that SphinxSearch extends was changed from REL1_31 to REL1_32. It now requires you to define doSearchTextInDB and doSearchTitleInDB methods.
See REL1_31 https://doc.wikimedia.org/mediawiki-core/REL1_31/php/classSearchDatabase.html
vs REL1_32 https://doc.wikimedia.org/mediawiki-core/REL1_32/php/classSearchDatabase.html
This is sort of mentioned in the release notes: if you search for "Search" under the deprecations (note that this is actually a backwards-compatibility break) https://www.mediawiki.org/wiki/Release_notes/1.32#Compatibility:
Overriding SearchEngine::{searchText,searchTitle,searchArchiveTitle} in extending classes is deprecated. Extend related doSearch* methods instead.
If you are like me and not comfortable fixing the extension yourself, you will have to wait for one of the extension contributors to update the extension to work with REL1_32. Until then you will have to stay on REL1_31 if you wish to use the extension.
A: Just adding these two empty functions to SphinxMWSearch.php, inside the definition of the SphinxMWSearch class, seems to do the trick. It makes the error go away and - as far as I can tell - the search function works fine.
function doSearchTextInDB($term) {
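    // Stub to satisfy the abstract method added in REL1_32; SphinxSearch appears to do its own text search, so this fallback is never used.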
}
function doSearchTitleInDB($term) {
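    // Stub for the same reason, for title searches.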
}
Hopefully developers of this extension will come up with a proper fix soon. | unknown | |
d2 | train | you have to write below way because CTE is part of the SELECT not the UPDATE
update work_request
set name = name || '_old'
where exists (
with wr_double as
(select...)
select 1 from wr_double wd where wd.name = work_request.name and wd.wr_id = work_request.id
); | unknown | |
d3 | train | A table view isn't really designed to display hierarchical views. What's supposed to happen is that you drill down (push a new view controller on the stack) to get to the next level of the hierarchy.
However, if your hierarchy is only as deep as you suggest, then you could have your items as section headers and your subitems as rows. If you don't want to create a custom header view (why not?), you'd be limited to just text for your top-level items.
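For example, a minimal data-source sketch of that approach (the items/subitems model objects here are hypothetical):
- (NSInteger)numberOfSectionsInTableView:(UITableView *)tableView {
    return [self.items count];  // one section per top-level item
}

- (NSString *)tableView:(UITableView *)tableView titleForHeaderInSection:(NSInteger)section {
    return [[self.items objectAtIndex:section] title];  // header shows the item's text
}

- (NSInteger)tableView:(UITableView *)tableView numberOfRowsInSection:(NSInteger)section {
    return [[[self.items objectAtIndex:section] subitems] count];  // one row per subitem
} | unknown | |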
d4 | train | It is not very clear what you are trying to do but:
*
*Every EF context can have each table and entity mapped only once
*It means if you load configuration for AssemblyA you cannot use configuration for AssemblyB
*It also means you cannot use default way how EF mapping is constructed inside OnModelCreating because that method is normally called only once for whole application lifetime.
*You can manually construct two DbModel insnaces, compile them to DbCompiledModel and pass them to DbContext constructor - that should allow you to have two different mapping configuration for AssemblyA and AssemblyB but you will never have both of them in the same context instance.
*EF migrations will most probably don't work because they expect single mapping set per database
Anyway if you are using MEF and modular architecture each entity should be either core (not related to any particular module and shared as is among modules) or module (not used by any other modules or core). | unknown | |
d5 | train | Open returns a *sql.DB, a pointer to a sql.Db. Change the function signature to also return a *sql.DB:
func establish_db_connection() *sql.DB {
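	// For illustration only - a minimal body (assumes "database/sql", "log",
	// and a SQL driver package are imported; driver name and DSN are placeholders):
	db, err := sql.Open("postgres", "user=app dbname=app sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}
	return db
} | unknown | |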
d6 | train | I am not sure if there is more elegant way but there are definitely some other ways...
I'd prefer if my initial view
controller isn't even instantiated until it completes
This is not a problem. All you have to do is delete the UIMainStoryboardFile or NSMainNibFile key from the Info.plist, which tells UIApplicationMain what UI should be loaded. Subsequently you run your "cleanup logic" in the AppDelegate, and once you are done you initiate the UI yourself, as you already showed in the example.
An alternative solution would be to create a subclass of UIApplication (passed to UIApplicationMain) and run the cleanup in there before the UI is loaded.
Please see Apple's App Life Cycle documentation.
A: *
*You can add a UIImageView on your initial ViewController which will contain the splash image of your app.
*In viewDidLoad()... set the imageView.hidden property to false... do your cleanup operation, and on completion of the cleanup task set the imageView.hidden property to true.
This way the user will be unaware of what job you are doing, and this approach is used in many recognized apps.
A: I faced a very similar situation where I needed to run code which was only ready after didFinishLaunchingNotification. I came up with this pattern, which also works with state restoration:
var finishInitToken: Any?
required init?(coder: NSCoder) {
    super.init(coder: coder)
    finishInitToken = NotificationCenter.default.addObserver(forName: UIApplication.didFinishLaunchingNotification, object: nil, queue: .main) { [weak self] (_) in
        self?.finishInitToken = nil
        self?.finishInit()
    }
}
func finishInit() {
    ...
}

override func decodeRestorableState(with coder: NSCoder) {
    // This is very important, so the finishInit() that is intended for "clean start"
    // is not called
    if let t = finishInitToken {
        NotificationCenter.default.removeObserver(t)
    }
    finishInitToken = nil
}
Alternatively you can observe a notification for willFinishLaunchingWithOptions.
A: Alternatively, if you want a piece of code to be run before everything, you can override the AppDelegate's init() like this:
@main
class AppDelegate: UIResponder, UIApplicationDelegate {
    override init() {
        DoSomethingBeforeEverything() // Your code goes here
        super.init()
    }
...
}
A few points to remember:
*
*You might want to check what's allowed/can be done here, i.e. before the app delegate is initialized
*Also, a cleaner way would be to subclass the AppDelegate and add new delegate methods that you call from init, say applicationWillInitialize and, if needed, applicationDidInitialize | unknown | |
d7 | train | I have changed my Google Play Services library version from 9.0.0 to 9.0.1 and it solved it. | unknown | |
d8 | train | There are a few considerations here. Firstly, as matt has said, objects are destroyed by formatting commands. Only use them if you are trying to achieve a pretty view of the properties of objects, otherwise use select.
I would argue that you would be doing that unnecessarily: why throw away information that you might later want? I've lost count of the number of times that I've realised I can do something new and useful with the script or function I've just written, if I just expand it a little.
When you import a .csv file, you create objects with the property names of the column headings, so you can just refer to that property by name later.
Your problem with the replace command is related to the fact that you need to loop through the objects stored in $l when there is more than one; Foreach-Object will do that.
So, for the file servers.csv:
TARGET,VALUE2,VALUE3
server1.domain.com,2,3
server2.domain.com,2,3
server3.domain.com,2,3
server4.domain.com,2,3
server5.domain.com,2,3
The import gives us a nice array of objects (nicely formatted in a table):
Import-Csv .\servers.csv | ft -a
TARGET VALUE2 VALUE3
------ ------ ------
server1.domain.com 2 3
server2.domain.com 2 3
server3.domain.com 2 3
server4.domain.com 2 3
server5.domain.com 2 3
We can use Foreach-Object (% is the alias) to get the 'TARGET' property of each object (equivalent to each line in the file), use the 'Split' method to split on the dots of the FQDN and then take the first token of that split:
Import-Csv .\servers.csv | % {$_.TARGET.Split('.')[0]}
server1
server2
server3
server4
server5
You can import to a variable first and then pipe that to the loop if you want.
Cheers
A: ALWAYS: If you plan on doing data manipulation remove the | Format-Table TARGET as it destroys the objects.
One approach would be to extract a string array of the column TARGET which you could then process.
$l=Import-Csv .\filename.csv | Select -Expand TARGET
Assuming you have a properly formed CSV your code could be simplified. Note that the -Split operator takes a regex, so the dot has to be escaped:
$l=Import-Csv .\filename.csv | Select -Expand TARGET | ForEach-Object{$_ -Split '\.' | Select -First 1}
$l should contain just the server names at this point. Your regex is not wrong; however, in order to use it you would have to refine it or use it in a loop, similar to how I use -split.
$l=Import-Csv .\filename.csv | Select -Expand TARGET
$l | ForEach-Object{$_ -replace '(.+?)\..+','$1'} | unknown | |
d9 | train | Use synchronous ajax call.Synchronous AJAX (Really SJAX -- Synchronous Javascript and XML) is modal which means that javascript will stop processing your program until a result has been obtained from the server. While the request is processing, the browser is effectively frozen.
function getFile(url) {
    if (window.XMLHttpRequest) {
        AJAX = new XMLHttpRequest();
    } else {
        AJAX = new ActiveXObject("Microsoft.XMLHTTP");
    }
    if (AJAX) {
        AJAX.open("GET", url, false); // notice the 'false' attribute
        AJAX.send(null);
        return AJAX.responseText;
    } else {
        return false;
    }
}
If you are using jQuery, then put async: false in your ajax call.
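For example, a minimal sketch (note that synchronous XHR is deprecated in modern browsers):
$.ajax({
    url: 'http://somedomain.com/somefile.txt',
    async: false, // blocks until the request completes
    success: function(data) {
        // use data here
    }
});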
Usage of the getFile helper above:
var fileFromServer = getFile('http://somedomain.com/somefile.txt'); | unknown | |
d10 | train | Why does the first example work but the second one emit that error?
When the function is called as Base64Encode(), the this context is implicitly set to window. However, when you call it as a method via Test.Base64Encode(), this will refer to Test, and btoa grumps about that.
What's the correct way to fix this particular error?
You will need to bind it to the expected context:
Base64Encode = window.btoa
? window.btoa.bind(window)
: CryptoJS.enc.Base64.stringify;
A: Use .bind():
var Test = {
    Base64Encode: function() {
        if (window.btoa)
            return window.btoa.bind(window);
        return CryptoJS.enc.Base64.stringify;
    }()
};
You got the error because you invoked the function via that object property reference. When you do that, the value of this is set to a reference to the object involved. The btoa() function doesn't like that (it is a native method that insists on window as its this), but .bind() creates a wrapper function for you that ensures the proper this.
A: It appears that the btoa function is a member function of the Window class, and so it has to be called with this set to window.
In order it to work in your setup you should call it this way:
Test.Base64Encode.call(window,"Please work."); | unknown | |
d11 | train | Once pointed to look in the right direction I found that I moved the the files/renamed the bundle and hadn't updated the call.
instead of
@Scripts.Render("~/bundles/jquery")
@Scripts.Render("~/bundles/bootstrap")
I should have had
@Scripts.Render("~/Content/bundles/jquery")
@Scripts.Render("~/Content/bundles/bootstrap")
to match the bundle config
var jqueryBundle = new ScriptBundle("~/Content/bundles/jquery");
...
var bootstrapBundle = new ScriptBundle("~/Content/bundles/bootstrap"); | unknown | |
d12 | train | I think you don't have resolve in your webpack file.
Could you please try with the resolve config.
{
// ...
resolve: {
alias: {
'react': 'preact-compat',
'react-dom': 'preact-compat',
// Not necessary unless you consume a module using `createClass`
'create-react-class': 'preact-compat/lib/create-react-class',
// Not necessary unless you consume a module requiring `react-dom-factories`
'react-dom-factories': 'preact-compat/lib/react-dom-factories'
}
}
// ...
}
A: Thanks to @Dominic for helping me clean up my dependencies.
So basically the new dependencies look like this:
"devDependencies": {
"@babel/core": "^7.0.0",
"@babel/preset-env": "^7.0.0",
"@babel/preset-react": "^7.0.0",
"babel-loader": "^8.0.2",
"react": "^16.5.0",
"react-dom": "^16.5.0",
"webpack": "^4.17.2",
"webpack-cli": "^3.1.0"
},
"dependencies": {
"substate": "^4.0.0",
}
Important to note: I didn't need React as a dependency. Any use of webpack and babel are strictly for dev purposes and testing.
The actual final product switched from being a compiled index.js file to simply this:
import React from 'react';
/**
*
* @param {Object} state - Reference to SubState instance
* @param {Object} chunk - object of props you want maps to from state to props
*/
const connect = (state, chunk) => Comp => props => {
    const newProps = {};
    for (let key in chunk) {
        newProps[key] = state.getProp(chunk[key]);
    }
    return (<Comp {...newProps} {...props} />);
};
export {
connect
}
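A hypothetical usage sketch (the import paths, component, and state key names here are made up):
import { connect } from './connect'; // wherever the file above lives
import appState from './store';      // a SubState instance

const Counter = ({ count }) => <span>{count}</span>;
export default connect(appState, { count: 'counterValue' })(Counter);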
The assumption (and I think a safe and fair one) is that anyone using this will compile as needed and alias preact into their existing react project.
This assumption allowed me to remove any minification, webpack, or any real compilation from the actual library. In essence, just use this file as a normal Higher Order Component; React will do the rest with a bundler, and swapping React for Preact according to the docs will work as needed.
Thanks all. | unknown | |
d13 | train | You are trying to print byte[] as a String which uses String.valueOf(bytes) and appears as something like: [B@23ab930d which is [B followed by the hashcode of the array. Hence different byte arrays have different string values. Compare these:
byte[] a = new byte[]{65};
System.out.println(new String(a)); // => A
System.out.println(a);             // prints something like: [B@6438a396
Change to print the byte array contents converted as a String:
System.out.println(new String(bytes));
OR: read the text file as a String directly:
String contents = Files.readString(file.toPath());
System.out.println(contents); | unknown | |
d14 | train | You can mark default value "1" for the third field and then import the csv file. Or you can mark the fields as nullable that are not included in the csv file, then execute a query to update the null values. | unknown | |
d15 | train | You'll be wanting the - (void)setStatusBarHidden:(BOOL)hidden animated:(BOOL)animated on the UIApplication class.
Something like this:
[[UIApplication sharedApplication] setStatusBarHidden:YES animated:YES];
That should hide the status bar with a nice fade animation.
A: Joining the discussion late, but I think I can save others some trouble.
I have a VC several pushes into a NavController (let's call that VC the PARENT). Now I want to display a modal screen (the CHILD) with the nav bar AND status bar hidden. After much experimentation, I know this works...
1) Because I present the CHILD VC by calling presentModalViewController:(UIViewController *)modalViewController animated:(BOOL)animated in the PARENT, the nav bar is not involved anymore (no need to hide it).
2) The view in the CHILD VC nib is sized to 320x480.
3) The CHILD VC sets self.wantsFullScreenLayout = YES; in viewDidLoad
4) just before presenting the CHILD, hide the status bar with [[UIApplication sharedApplication] setStatusBarHidden:YES withAnimation:YES];
5) dismiss the CHILD VC using delegate protocol methods in the PARENT, and call [[UIApplication sharedApplication] setStatusBarHidden:NO withAnimation:YES]; before dismissModalViewControllerAnimated:YES] to make sure the nav bar is drawn in the correct location
Hope this helps. | unknown | |
d16 | train | Use absolute site URI routing whereby your sound file address starts with a / and so indicates the path from the root of the URL.
For instance:
Your current output file is, say, www.site.com/output/file.html, and you want to load a sound from, say, www.site.com/sounds/laugh.mp3 - then you simply use the /sounds/laugh.mp3 part of the address as your reference. Because it starts with a / this indicates it's an absolute site URL, resolved from the root HTML directory rather than from the current page's URL.
<source src="/sounds/filename.mp3" type="audio/mpeg">
If you used a relative path such as ../sounds/filename.mp3 this would break if you used it in the base folder (www.site.com/index.html) or in a deeper directory tree such as www.site.com/sounds/silly/horses.html. But the absolute path will always work (as long as the destination file exists and is accessible).
Tip: Make sure you've also uploaded your sound file! | unknown | |
d17 | train | You are calling urllib.request.urlopen(req).read, correct syntax is: urllib.request.urlopen(req).read() also you are not closing the connection, fixed that for you.
A better way to open connections is using the with urllib.request.urlopen(url) as req: syntax as this closes the connection for you.
from bs4 import BeautifulSoup
import urllib.request
import html5lib  # used by BeautifulSoup as the "html5lib" parser

class Websites:
    def __init__(self):
        self.header = {"User-Agent": "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/47.0.2526.106 Safari/537.36"}

    def free_proxy_list(self):
        print("Connecting to free-proxy-list.net ...")
        url = "https://free-proxy-list.net"
        req = urllib.request.Request(url, None, self.header)
        content = urllib.request.urlopen(req)
        html = content.read()
        soup = BeautifulSoup(str(html), "html5lib")
        print("Connected. Loading the page ...")
        print("Print page")
        print("")
        print(soup.prettify())
        content.close()  # Important to close the connection
For more info see: https://docs.python.org/3.0/library/urllib.request.html#examples | unknown | |
d18 | train | I would suggest avoiding initial component setup in beforeAll/beforeEach functions. Each test case should run in isolation and not be affected by operations executed in other tests.
Instead, create a helper function with that logic and call it on every test.
import React from 'react';
import Counter from './Counter';
import { render, fireEvent } from '@testing-library/react';
describe('Counter works', () => {
    let comp;
    let inp;
    let btnPlus;
    let val;

    const renderComponent = () => {
        comp = render(<Counter />);
        inp = comp.getByTestId('inp');
        btnPlus = comp.getByTestId('btn-+');
        val = comp.getByTestId('counter-value');
    }

    it('Counter exists', () => {
        renderComponent();
        expect(comp).toBeTruthy();
    });

    it('Input works', () => {
        renderComponent();
        expect(inp.value).toBe('');
        fireEvent.change(inp, { target: { value: 10 } });
        expect(inp.value).toBe('10');
        fireEvent.click(btnPlus);
        expect(val.textContent).toBe('10');
    });
}); | unknown | |
}); | unknown | |
d19 | train | The two dynamic values are represented as fields, not properties. And since you can only bind to properties in XAML, you'll need to create a strong-typed data structure with your two properties, and select the values into that.
Something like:
class V
{
    public string tag1 { get; set; }
    public string tag2 { get; set; }
}

var result = XResult.Descendants("Table").Select(t => new V
{
    tag1 = t.Descendants("tag1").First().Value,
    tag2 = t.Descendants("tag2").First().Value,
}); | unknown | |
d20 | train | I figured it out. The Objective C to Swift translator wasn't working as expected so I just coded it in Objective C and everything works, although the documentation on the Azure website needs to be updated to iOS10. | unknown | |
d21 | train | fixed it:
items.sort(key=lambda x: (float(x.price), int(x.amount))) | unknown | |
d22 | train | Probably too convoluted and not very idiomatic, but here's an option anyway. Run your command as a python subprocess, analysed the output and decide what to do after so many retries. For example:
rule flaky_rule:
    input:
        infile = "{sample}/foo.txt"
    output:
        outfile = "{sample}/bar.txt"
    params:
        retries = 3
    log:
        flaky_rule_log = "{sample}/logs/flaky_rule.log"
    run:
        import subprocess
        for retry in range(0, params.retries):
            # note: a plain string passed to Popen is not auto-formatted by
            # snakemake, so the placeholders are filled in explicitly here
            p = subprocess.Popen(
                'flaky_script {input} {output} >> "{log.flaky_rule_log}" 2>&1'.format(
                    input=input, output=output, log=log),
                stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=True)
            stdout, stderr = p.communicate()
            if p.returncode == 0:
                break
        if p.returncode != 0:
            raise Exception('Failed after %s retries' % params.retries)
A: Here's a variation based on an answer to a related question. The idea is to specify the path to the log file as a resource. This means that snakemake will not automatically clear it (which can be undesirable in some circumstances, so use with caution):
rule flaky_rule:
    input:
        infile = "{sample}/foo.txt"
    output:
        outfile = "{sample}/bar.txt"
    resources:
        flaky_rule_log = lambda wildcards, attempt: f"{wildcards.sample}/logs/flaky_rule_attempt{attempt}.log"
    retries: 3
    shell:
        """
        flaky_script -i "{input.infile}" -o "{output.outfile}" >> "{resources.flaky_rule_log}" 2>&1
        """
It would be great if params or logs supported attempt, but right now this is still an open issue. | unknown | |
d23 | train | you can use the format string in for the date
string url = string.Format("someUrl/SomeControllerName/WriteLogFile/{0}/{1}", currentId, DateTime.Now.ToString("MM-dd-yyyy"));
and add an entry in to the routes table to route it to the appropriate controller and action
routes.MapRoute("SomeRouteName",
    "SomeControllerName/WriteLogFile/{id}/{date}",
    new { controller = "SomeControllerName", action = "WriteLogFile", date = DateTime.Now });
A: Add a query string parameter:
var toWrite = DateTime.Now;
string url = string.Concat(someUrl, "SomeControllerName/", currentId, "/WriteLogFile");
url = string.Concat(url, "?date=", toWrite.ToString("s")); | unknown | |
d24 | train | Found the solution! It is a bug from mapbox.
You have to use the version 1.13.0. | unknown | |
d25 | train | Nothing stops you from creating claims to store extra information in your token if they can be useful for your client.
However I would rely on JWT only for authentication (who the caller is). If you need to perform authorization (what the caller can do), look up the caller roles/permissions from your persistent storage to get the most updated value.
For short-lived tokens (for example, when propagating authentication and authorization in a microservices cluster), I find it useful to have the roles in the token.
A: As mentioned here, ASP.NET Core will automatically detect any roles mentioned in the JWT:
{
"iss": "http://www.jerriepelser.com",
"aud": "blog-readers",
"sub": "123456",
"exp": 1499863217,
"roles": ["Admin", "SuperUser"]
}
and 'map' them to ASP.NET Roles which are commonly used to secure certain parts of your application.
[Authorize(Roles = "Admin")]
public class SettingsController : Controller
The server which is giving out (and signing) the JWT is commonly called an authorization server and not just an authentication server, so it makes sense to include role information (or scope) in the JWT, even though they're not registered claims.
A: The official JWT site explicitly mentions "authorization" (in contrast to "authentication") as a usecase for JWTs:
When should you use JSON Web Tokens?
Authorization: This is the most common scenario for using JWT. Once the user is logged in, each subsequent request will include the JWT, allowing the user to access routes, services, and resources that are permitted with that token. Single Sign On is a feature that widely uses JWT nowadays, because of its small overhead and its ability to be easily used across different domains.
That being said, from a security-perspective you should think twice whether you really want to include roles or permissions in the token.
(The text below can be understood as a more "in-depth" follow up to the rather short-kept accepted answer)
Once you created and signed the token you grant the permission until the token expires. But what if you granted admin permissions by accident? Until the token expires, somebody is now operating on your site with permissions that were assigned by mistake.
Some people might argue that the token is short-lived, but this is not a strong argument given the amount of harm a person can do in short time. Some other people advocate to maintain a separate blacklist database table for tokens, which solves the problem of invalidating tokens, but adds some kind of session-state tracking to the backend, because you now need to keep track of all current sessions that are out there – so you would then have to make a db-call to the blacklist every time a request arrives to make sure it is not blacklisted yet. One may argue that this defeats the purpose of "putting the roles into the JWT to avoid an extra db-call" in the first place, since you just traded the extra "roles db-call" for an extra "blacklist db-call".
So instead of adding authorization claims to the token, you could keep information about user roles and permissions in your auth-server's db over which you have full control at any time (e.g. to revoke a certain permission for a user). If a request arrives, you fetch the current roles from the auth-server (or wherever you store your permissions).
By the way, if you have a look at the list of public claims registered by the IANA, you will see that these claims revolve around authentication and do not deal with what the user is allowed to do (authorization).
So in summary you can...
*
*add roles to your JWT if (a) convenience is important to you and (b) you want to avoid extra database calls to fetch permissions and (c) do not care about small time windows in which a person has rights assigned he shouldn't have and (d) you do not care about the (slight) increase in the JWT's payload size resulting from adding the permissions.
*add roles to your JWT and use a blacklist if (a) you want to prevent any time windows in which a person has rights assigned he shouldn't have and (b) accept that this comes at the cost of making a request to a blacklist for every incoming request and (c) you do not care about the (slight) increase in the JWT's payload size resulting from adding the permissions.
*not add roles to your JWT and fetch them on demand if (a) you want to prevent any time windows in which a person has rights assigned he shouldn't have or (b) you want to avoid the overhead of a blacklist or (c) you want to avoid the (slight) increase in your JWT's payload size and (d) you accept that this comes at the cost of sometimes/always querying the roles on incoming requests. | unknown | |
d26 | train | If clicking the link throws an error, it is because the link provided is incorrect, but since nothing happens, the problem is probably caused by a wrong datatype in your table.
Go to the design view of your table, and set the datatype to Hyperlink. | unknown | |
d27 | train | The first request uses v1 of the Zipkin api while the second uses v2 (see https://github.com/openzipkin/zipkin/issues/1499 for the v2 specification). Spans are broken up by kind (SERVER and CLIENT) instead of having client receive, server receive, client send, and server send annotations (hence why there are more spans). | unknown | |
d28 | train | Well I'm not sure if you only load the DLLs without registering them in the system registry. However your first EDIT shows an error triggered by attempts to access some stack of the registry, so I assume you are. In that case, I simply use a batch file (which fires commands in the CMD console) to register my DLLs as I would one by one:
@echo off
echo Registering DevExpress DLLs
%~dp0gacutil.exe /i %~dp0DevExpress.BonusSkins.v12.1.dll
%~dp0gacutil.exe /i %~dp0DevExpress.Charts.v12.1.Core.dll
So, I place this in the RUN section of the iss script:
[Run]
Filename: "C:\myFolder\RegisterDevExpress.bat"
Hope this helps. | unknown | |
d29 | train | Qt needs it because it is a library that can be dynamically loaded. Users can compile and link without having to worry about implementation details. You can at runtime use many versions of Qt without having to recompile. This is pretty powerful and flexible. This wouldn't be possible if private object instances were used inside classes.
A: It depends. Reducing dependencies is definitely a good thing, per se, but it must be weighed against all of the other issues. Using the compilation firewall idiom, for example, can move the dependencies out of the header file, at the cost of one allocation.
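For instance, a minimal sketch of the compilation firewall (pimpl) idiom:
// widget.h -- clients no longer see the implementation's headers
#include <memory>

class Widget {
public:
    Widget();
    ~Widget();                     // must be defined in widget.cpp, where Impl is complete
    void draw();
private:
    struct Impl;                   // forward declaration only
    std::unique_ptr<Impl> impl;    // the single extra allocation
};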
As for what QT does: it's a GUI framework, which (usually---I've not looked at QT) means lots of polymorphism, and that most classes have identity and cannot be copied. In such cases, you usually do have to use dynamic allocation and work with pointers. The rule to avoid pointers mainly concerns objects with value semantics.
(And by the way, there's nothing "old-fashioned" or "out-of-date" about using too many pointers. It's been the rule since I started using C++, 25 years ago.) | unknown | |
d30 | train | You can get headers from Request object in your controller methods.
use Symfony\Component\HttpFoundation\Request;
public function someAction(Request $request){
$request->headers //get headers
} | unknown | |
d31 | train | I managed to make it works. For those who want to know, please read this.
Firstly, change the Flume version. I use now flume 1.7.0 https://flume.apache.org/releases/1.7.0.html. But maybe a newer version would work, I don't want to break it down :)
Secondly, clone this repo https://github.com/cloudera/cdh-twitter-example. Inside, there is a flume.conf file. I configured it like that :
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
# The configuration file needs to define the sources,
# the channels and the sinks.
# Sources, channels and sinks are defined per agent,
# in this case called 'TwitterAgent'
TwitterAgent.sources = Twitter
TwitterAgent.channels = MemChannel
TwitterAgent.sinks = HDFS
TwitterAgent.sources.Twitter.type = org.apache.flume.source.twitter.TwitterSource
TwitterAgent.sources.Twitter.channels = MemChannel
TwitterAgent.sources.Twitter.consumerKey = xx
TwitterAgent.sources.Twitter.consumerSecret = xx
TwitterAgent.sources.Twitter.accessToken = xx
TwitterAgent.sources.Twitter.accessTokenSecret = xx
TwitterAgent.sources.Twitter.keywords = hadoop, bigdata
TwitterAgent.sources.Twitter.locations = -54.5247541978, 2.05338918702, 9.56001631027, 51.1485061713
TwitterAgent.sources.Twitter.language = fr
TwitterAgent.sinks.HDFS.channel = MemChannel
TwitterAgent.sinks.HDFS.type = hdfs
TwitterAgent.sinks.HDFS.hdfs.path = hdfs://localhost:9000/user/Hadoop/twitter_data/%Y/%m/%d/%H/
#It specifies the File format. File formats that are currently supported are SequenceFile, DataStream or CompressedStream.
#The DataStream will not compress the output file and please don’t set codeC. The CompressedStream requires set hdfs.codeC with an available codeC
TwitterAgent.sinks.HDFS.hdfs.fileType = DataStream
TwitterAgent.sinks.HDFS.hdfs.writeFormat = Text
# It specifies the suffix to append to file. For eg, .avro
TwitterAgent.sinks.HDFS.hdfs.fileSuffix = .json
#It specifies the number of events written to file before it is flushed to HDFS.
TwitterAgent.sinks.HDFS.hdfs.batchSize = 10000
# It specifies the file size to trigger roll, in bytes. If it is equal to 0 then it means never roll based on file size.
TwitterAgent.sinks.HDFS.hdfs.rollSize = 0
#It specifies the number of events written to the file before it rolled. If it is equal to 0 then it means never roll based on the number of events.
TwitterAgent.sinks.HDFS.hdfs.rollCount = 0
#It specifies the number of seconds to wait before rolling the current file. If it is equal to 0 then it means never roll based on the time interval.
TwitterAgent.sinks.HDFS.hdfs.rollInterval = 60
TwitterAgent.sinks.HDFS.hdfs.callTimeout = 180000
TwitterAgent.sinks.HDFS.hdfs.useLocalTimeStamp = true
TwitterAgent.channels.MemChannel.type = memory
TwitterAgent.channels.MemChannel.capacity = 10000
TwitterAgent.channels.MemChannel.transactionCapacity = 1000
Then, modify the pom.xml (the version):
<dependency>
<groupId>org.twitter4j</groupId>
<artifactId>twitter4j-stream</artifactId>
<version>3.0.3</version>
</dependency>
Package it with Maven:
cd flume-sources
mvn package
It creates a target/flume-sources-1.0-SNAPSHOT.jar
Copy it to your <YOUR_FLUME_HOME>/lib
cp ./target/flume-sources-1.0-SNAPSHOT.jar ~/flume/lib
I changed the CLASSPATH in the file I mentioned earlier:
FLUME_CLASSPATH="/home/jb/flume/lib/flume-sources-1.0-SNAPSHOT.jar"
Copy the conf/flume.conf we just wrote into <YOUR_FLUME_HOME>/conf
Thirdly, verify that lib/twitter4j-core.jar, media-support.jar and stream.jar are in version 3.0.3. If not, go download them.
And finally:
cd $FLUME_HOME
bin/flume-ng agent --conf ./conf/ -f ./conf/flume.conf -Dflume.root.logger=INFO,console -n TwitterAgent
Hallelujah:
2020-12-18 02:48:38,805 (Twitter4J Async Dispatcher[0]) [INFO - org.apache.flume.source.twitter.TwitterSource.onStatus(TwitterSource.java:173)] Processed 100 docs
2020-12-18 02:48:40,777 (Twitter4J Async Dispatcher[0]) [INFO - org.apache.flume.source.twitter.TwitterSource.onStatus(TwitterSource.java:173)] Processed 200 docs
2020-12-18 02:48:42,017 (Twitter4J Async Dispatcher[0]) [INFO - org.apache.flume.source.twitter.TwitterSource.onStatus(TwitterSource.java:173)] Processed 300 docs
2020-12-18 02:48:44,772 (Twitter4J Async Dispatcher[0]) [INFO - org.apache.flume.source.twitter.TwitterSource.onStatus(TwitterSource.java:173)] Processed 400 docs
2020-12-18 02:48:46,779 (Twitter4J Async Dispatcher[0]) [INFO - org.apache.flume.source.twitter.TwitterSource.onStatus(TwitterSource.java:173)] Processed 500 docs
2020-12-18 02:48:47,875 (Twitter4J Async Dispatcher[0]) [INFO - org.apache.flume.source.twitter.TwitterSource.onStatus(TwitterSource.java:173)] Processed 600 docs
2020-12-18 02:48:49,852 (Twitter4J Async Dispatcher[0]) [INFO - org.apache.flume.source.twitter.TwitterSource.onStatus(TwitterSource.java:173)] Processed 700 docs
2020-12-18 02:48:52,789 (Twitter4J Async Dispatcher[0]) [INFO - org.apache.flume.source.twitter.TwitterSource.onStatus(TwitterSource.java:173)] Processed 800 docs
2020-12-18 02:48:54,791 (Twitter4J Async Dispatcher[0]) [INFO - org.apache.flume.source.twitter.TwitterSource.onStatus(TwitterSource.java:173)] Processed 900 docs
2020-12-18 02:48:56,805 (Twitter4J Async Dispatcher[0]) [INFO - org.apache.flume.source.twitter.TwitterSource.onStatus(TwitterSource.java:173)] Processed 1 000 docs
2020-12-18 02:48:56,805 (Twitter4J Async Dispatcher[0]) [INFO - org.apache.flume.source.twitter.TwitterSource.logStats(TwitterSource.java:295)] Total docs indexed: 1 000, total skipped docs: 0
2020-12-18 02:48:56,805 (Twitter4J Async Dispatcher[0]) [INFO - org.apache.flume.source.twitter.TwitterSource.logStats(TwitterSource.java:297)] 47 docs/second
2020-12-18 02:48:56,805 (Twitter4J Async Dispatcher[0]) [INFO - org.apache.flume.source.twitter.TwitterSource.logStats(TwitterSource.java:299)] Run took 21 seconds and processed:
2020-12-18 02:48:56,806 (Twitter4J Async Dispatcher[0]) [INFO - org.apache.flume.source.twitter.TwitterSource.logStats(TwitterSource.java:301)] 0,013 MB/sec sent to index
2020-12-18 02:48:56,807 (Twitter4J Async Dispatcher[0]) [INFO - org.apache.flume.source.twitter.TwitterSource.logStats(TwitterSource.java:303)] 0,266 MB text sent to index
2020-12-18 02:48:56,807 (Twitter4J Async Dispatcher[0]) [INFO - org.apache.flume.source.twitter.TwitterSource.logStats(TwitterSource.java:305)] There were 0 exceptions ignored: | unknown | |
d32 | train | You can use conditional aggregation in a single query:
SELECT VisitDate = LEFT(Datename(month,v.VisitDate),3),
COUNT( distinct i.InspectorID) AS TotalUsed,
COUNT(distinct case when i.OfficeID IN (5) then i.InspectorID end) AS TotalContractorUsed
FROM Visits v
INNER JOIN InspectionScope insp ON insp.AssignmentID = v.AssignmentID
INNER JOIN Assignments a ON a.AssignmentID = insp.AssignmentID
INNER JOIN Inspectors i ON i.InspectorID = insp.InspectorID
WHERE a.ClientID IN (22,33)
Group by Datename(month,v.VisitDate);
A: I have not executed the query. Please check:
;WITH CTE (VisitDate, TotalUsed) AS
(SELECT
VisitDate = LEFT(Datename(month,v.VisitDate),3)
,COUNT( distinct i.InspectorID) AS TotalUsed
FROM Visits v
INNER JOIN InspectionScope insp ON insp.AssignmentID = v.AssignmentID
INNER JOIN Assignments a ON a.AssignmentID = insp.AssignmentID
INNER JOIN Inspectors i ON i.InspectorID = insp.InspectorID
WHERE a.ClientID IN (22,33)
Group by Datename(month,v.VisitDate))
SELECT
CTE.VisitDate
,CTE.TotalUsed
,ISNULL(COUNT(distinct i.InspectorID),0) AS TotalContractorUsed
FROM CTE
LEFT JOIN Visits v ON CTE.VisitDate = v.VisitDate
INNER JOIN InspectionScope insp ON insp.AssignmentID = v.AssignmentID
INNER JOIN Assignments a ON a.AssignmentID = insp.AssignmentID
INNER JOIN Inspectors i ON i.InspectorID = insp.InspectorID
WHERE a.ClientID IN (22,33) AND i.OfficeID IN (5)
Group by Datename(month,v.VisitDate) | unknown | |
d33 | train | I suggest to add a con.BeginTransaction() before the ExecuteNonQuery and a con.Commit() after the ExecuteNonQuery.
A: You should open the connection before you make the command instance, like this:
void metroButton1_Click(object sender, EventArgs e)
{
    try
    {
        con = new SqlConnection(cs.DBcon);
        con.Open(); // Open the connection
        for (int i = 0; i < dataGridView1.Rows.Count - 1; i++)
        {
            using (SqlCommand cmd = new SqlCommand("INSERT INTO tbl_employee VALUES(@Designation, @Date, @Employee_name, @Leave, @L_Reason, @Performance, @Payment, @Petrol, @Grand_Total)", con)) // Now create the command
            {
                cmd.Parameters.AddWithValue("@Designation", dataGridView1.Rows[i].Cells[0].Value);
                cmd.Parameters.AddWithValue("@Date", dataGridView1.Rows[i].Cells[1].Value);
                cmd.Parameters.AddWithValue("@Employee_name", dataGridView1.Rows[i].Cells[2].Value);
                cmd.Parameters.AddWithValue("@Leave", dataGridView1.Rows[i].Cells[3].Value);
                cmd.Parameters.AddWithValue("@L_Reason", dataGridView1.Rows[i].Cells[4].Value);
                cmd.Parameters.AddWithValue("@Performance", dataGridView1.Rows[i].Cells[5].Value);
                cmd.Parameters.AddWithValue("@Payment", dataGridView1.Rows[i].Cells[6].Value);
                cmd.Parameters.AddWithValue("@Petrol", dataGridView1.Rows[i].Cells[7].Value);
                cmd.Parameters.AddWithValue("@Grand_Total", dataGridView1.Rows[i].Cells[8].Value);
                cmd.ExecuteNonQuery();
            }
        }
        con.Close();
        MessageBox.Show("Records inserted.");
    }
    catch (Exception ex)
    {
        MessageBox.Show(ex.Message);
    }
}
A: Perform everything inside the connection as below:
void metroButton1_Click(object sender, EventArgs e)
{
    try
    {
        for (int i = 0; i < dataGridView1.Rows.Count - 1; i++)
        {
            using (SqlConnection connection = new SqlConnection(cs.DBcon))
            using (SqlCommand cmd = new SqlCommand("INSERT INTO tbl_employee VALUES(@Designation, @Date, @Employee_name, @Leave, @L_Reason, @Performance, @Payment, @Petrol, @Grand_Total)", connection))
            {
                connection.Open();
                cmd.Parameters.AddWithValue("@Designation", dataGridView1.Rows[i].Cells[0].Value);
                cmd.Parameters.AddWithValue("@Date", dataGridView1.Rows[i].Cells[1].Value);
                cmd.Parameters.AddWithValue("@Employee_name", dataGridView1.Rows[i].Cells[2].Value);
                cmd.Parameters.AddWithValue("@Leave", dataGridView1.Rows[i].Cells[3].Value);
                cmd.Parameters.AddWithValue("@L_Reason", dataGridView1.Rows[i].Cells[4].Value);
                cmd.Parameters.AddWithValue("@Performance", dataGridView1.Rows[i].Cells[5].Value);
                cmd.Parameters.AddWithValue("@Payment", dataGridView1.Rows[i].Cells[6].Value);
                cmd.Parameters.AddWithValue("@Petrol", dataGridView1.Rows[i].Cells[7].Value);
                cmd.Parameters.AddWithValue("@Grand_Total", dataGridView1.Rows[i].Cells[8].Value);
                cmd.ExecuteNonQuery();
                connection.Close();
            }
        }
        MessageBox.Show("Records inserted.");
    }
    catch (Exception ex)
    {
        MessageBox.Show(ex.Message);
    }
} | unknown | |
d34 | train | Gitweb at kernel.org allows to view diff between arbitrary commits, see for example the following link for diff between v2.6.32-rc6 and v2.6.32-rc7:
http://git.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git;a=commitdiff;hp=refs/tags/v2.6.32-rc6;h=refs/tags/v2.6.32-rc7
(use patch link to get plain patch that you can apply), and between arbitrary versions of file / between arbitrary versions of arbitrary files, e.g.: diff to current link in history view.
Unfortunately neither the official gitweb version (distributed together with Git itself), nor the fork used by kernel.org, generates links between arbitrary commits, so you would have to handcraft (create by hand) the URLs to give to gitweb. In the case of the commitdiff view (action) the parameters you need are 'h' (hash) and 'hp' (hash parent); in the case of the blobdiff view they are 'hb' (hash base) and 'hpb' (hash parent base), and also 'f' (filename) and 'fp' (file parent).
Templates
*
*For diff between two arbitrary commits (equivalent of git diff A B from command line)
http://git.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git;a=commitdiff;hp=A;h=B
*For diff between two versions of the same file (equivalent of git diff A B <filename>).
http://git.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git;a=blobdiff;f=<filename>;hpb=A;hp=B
Note that with core gitweb (but not, currently, the fork used by kernel.org) you can use the path_info version, e.g.:
http://repo.or.cz/w/git.git/blobdiff/A..B:/<filename>
How to find it
*
*Find in a web interface a commit which is a merge commit, for example
https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/?id=1c5aefb5b12a90e29866c960a57c1f8f75def617
*Find a link to diff between a commit and a second parent, for example
https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/diff/?id=1c5aefb5b12a90e29866c960a57c1f8f75def617&id2=54a217887a7b658e2650c3feff22756ab80c7339
*Replace SHA-1 of compared commits with revision names or revision identifiers you want to compare, for example to generate diff between v3.15-rc8 and v3.15-rc7
https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/diff/?id=v3.15-rc8&id2=v3.15-rc7
or to generate patch (rawdiff)
https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/rawdiff/?id=v3.15-rc8&id2=v3.15-rc7
A: The system which creates the diff (whether that might be your webserver or your local system) must have a full copy (clone) of the git repo.
So you cannot do "remote diffs".
So, if you want to avoid doing a git clone of the whole kernel, why not just point your web browser to http://git.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git;a=summary?
A: Since 2013, the reworked kernel.org website uses cgit to browse repositories.
As an example of cgit URL for a diff between two tags:
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/diff/?id=v3.19-rc2&id2=v3.19-rc1&dt=2
That is also why Git 2.38 (Q3 2022) modified gitweb: gitweb had a legacy URL shortener that was specific to the way projects were hosted on kernel.org. It used to (but no longer does) work, and has been removed.
See commit 75707da (26 Jul 2022) by Julien Rouhaud (rjuju).
(Merged by Junio C Hamano -- gitster -- in commit dcdcc37, 05 Aug 2022)
gitweb: remove title shortening heuristics
Signed-off-by: Julien Rouhaud
Those heuristics are way outdated and too specific to the kernel project to be useful outside of kernel.org.
Since kernel.org doesn't use gitweb anymore and at least one project complained about incorrect behavior, entirely remove them. | unknown | |
d35 | train | to convert json array to tuple:
feed = LOAD '$INPUT' USING com.twitter.elephantbird.pig.load.JsonLoader() AS products_json;
extracted_products = FOREACH feed GENERATE
products_json#'id' AS id:chararray,
products_json#'name' AS name:chararray,
products_json#'colors' AS colors:{t:(i:chararray)},
products_json#'sizes' AS sizes:{t:(i:chararray)};
To flatten a tuple:
flattened = foreach extracted_products generate id,flatten(colors); | unknown | |
d36 | train | INSERT:
insert into myTable (col1) VALUES (to_char(systimestamp, 'dd-mon-yyyy hh.mi.ss.ff4 AM') );
SELECT:
select to_timestamp(col1, 'dd-mon-yyyy hh.mi.ss.ff4 AM') from myTable ;
But it is much better to store the data directly as a timestamp.
Then you can compare the values or modify them directly.
create table myTable1( col1 timestamp default systimestamp);
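For example, a direct comparison is then possible (hypothetical query):
select * from myTable1 where col1 > systimestamp - interval '1' day; | unknown | |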
d37 | train | I think your test would be just as valid with two windows as it would with one window and two tabs.
You can call the open browser keyword multiple times, giving each window its own unique alias. You can then switch between them with the switch browser keyword and the appropriate alias.
Example
*** Settings ***
Library SeleniumLibrary
Suite Teardown close all browsers
*** Variables ***
${browser} chrome
*** Test cases ***
Example using two windows
open browser http://www.example.com ${browser} alias=tab1
open browser http://www.w3c.org ${browser} alias=tab2
switch browser tab1
location should be http://www.example.com/
switch browser tab2
location should be https://www.w3.org/ | unknown | |
d38 | train | First using flatMap create Stream<SimpleEntry<Integer, EHourQuarter>, Double> then using toMap collect as Map<SimpleEntry<Integer, EHourQuarter>, Double>. Then map into you DTO class.
List<QuarterlyOccupancyDTO> result = map.entrySet().stream()
        .flatMap(d -> d.getValue().entrySet().stream()
                .flatMap(h -> h.getValue().entrySet().stream().map(
                        e -> new SimpleEntry<>(new SimpleEntry<>(h.getKey(), e.getKey()), e.getValue()))))
        .collect(Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue, (a, b) -> a + b))
        .entrySet()
        .stream()
        .map(m -> new QuarterlyOccupancyDTO(m.getKey().getKey(), m.getKey().getValue().getValue(), m.getValue()))
        .collect(Collectors.toList());
Note: As you don't show your code some part may not work. Full code here
A: First group and sum the occupancy by hour/quarter pair
(Avoid nested flatMap, as it makes code less readable)
Map<Entry<Integer, Integer>, Double> groups
        = map.entrySet()
             .stream()
             // Flatten the outer map, since you don't care about the days
             .flatMap(de -> de.getValue().entrySet().stream())
             // Flatten the map by combining hour key and quarter key into a single one
             .flatMap(he -> he.getValue()
                              .entrySet()
                              .stream()
                              .map(qe -> new SimpleEntry<>(new SimpleEntry<>(he.getKey(), qe.getKey().getValue()), qe.getValue())))
             // Sum the occupancy per each hour/quarter pair
             .collect(groupingBy(Entry::getKey, summingDouble(Entry::getValue)));
Then map the grouped entries into your DTO objects
List<QuarterlyOccupancyDTO> list =
        groups.entrySet()
              .stream()
              .map(e -> new QuarterlyOccupancyDTO(e.getKey().getKey(), e.getKey().getValue(), e.getValue()))
              .collect(toList());
Another pure functional approach:
(It's pure, but seems less readable, IMO)
Collection<QuarterlyOccupancyDTO> dtos =
        map.entrySet()
           .stream()
           // Flatten the outer map, since you don't care about the days
           .flatMap(de -> de.getValue().entrySet().stream())
           // Flatten the map by merging hour key and quarter key into a single one
           .flatMap(he -> he.getValue()
                            .entrySet()
                            .stream()
                            .map(qe -> new SimpleEntry<>(new SimpleEntry<>(he.getKey(), qe.getKey().getValue()), qe.getValue())))
           // Map each entry into a DTO object and then reduce the occupancy per each hour/quarter pair
           .collect(groupingBy(Entry::getKey,
                   mapping(e -> new QuarterlyOccupancyDTO(e.getKey().getKey(), e.getKey().getValue(), e.getValue()),
                           reducing(new QuarterlyOccupancyDTO(0, 0, 0.0),
                                   (a, b) -> new QuarterlyOccupancyDTO(b.getHour(), b.getMinute(), a.getOccupancy() + b.getOccupancy())))))
           .values(); | unknown | |
d39 | train | If I believe issue 1246, you need to have git installed for the hg convert extension to work.
Even with Git installed, you might experience some other issues with the import, in which case you could consider other alternatives such as:
*
*converting the git repo to a svn one, and then importing that svn repo into a mercurial one
*or trying the hg-git mercurial plugin, which specifically mentions:
This plugin is implemented entirely in Python - there are no Git binary dependencies, you do not need to have Git installed on your system.
(But I don't know if hg-git works with recent 1.7+ Mercurial versions) | unknown | |
d40 | train | Use 13.6.1 BEGIN ... END Compound-Statement Syntax.
Try (maybe you need to use DELIMITER):
DELIMITER //
CREATE TRIGGER `update_queue_after_insert` AFTER INSERT ON `encounter_note`
FOR EACH ROW
BEGIN -- <- BEGIN
    DECLARE is_exist INT;
    SET is_exist = (SELECT count(*) FROM practice_last_updated_module WHERE practice_id = NEW.practice_id);
    IF NEW.enc_source = 'OP' THEN
        UPDATE practice_queue_list PQL
        SET PQL.vital_check = IF(NEW.vs_weight <> 0 OR NEW.vs_height <> 0 OR NEW.vs_temperature <> 0 OR LENGTH(NEW.vs_blood_pressure) > 0 OR NEW.vs_pulse <> 0 OR NEW.vs_respiration <> 0, 1, 0)
        WHERE PQL.encounter_id = NEW.id AND PQL.practice_place_id = NEW.practice_id;
    END IF;
    IF is_exist > 0 THEN
        UPDATE practice_last_updated_module SET encounter = UNIX_TIMESTAMP(NOW()) WHERE practice_id = NEW.practice_id;
    ELSE
        INSERT INTO practice_last_updated_module (practice_id, encounter) VALUES (NEW.practice_id, UNIX_TIMESTAMP(NOW()));
    END IF;
END// -- <- END
DELIMITER ; | unknown | |
d41 | train | You can achieve this using the Workbook_SheetActivate event:
http://msdn.microsoft.com/en-us/library/office/ff195710.aspx
if you put this code in your ThisWorkbook Object, then each time you change the active worksheet, it will...
*
*Cycle through all open workbooks with a different name
*Look for a worksheet with the same name as the sheet you just clicked on
*Activate the worksheet in the other Workbook
Private Sub Workbook_SheetActivate(ByVal Sh As Object)
    For i = 1 To Application.Workbooks.Count
        If Application.Workbooks(i).Name <> ThisWorkbook.Name Then
            Dim otherWB As Workbook
            Set otherWB = Application.Workbooks(i)
            otherWB.Sheets(Sh.Name).Activate
        End If
    Next
End Sub
Note that this requires the worksheet to exist in all open workbooks. An error will result if it does not. However, you could easily add error handling to ignore workbooks with unfound corresponding worksheets.
Also note that it's probably best to use this when only two workbooks are open. I have not looked into the other methods you mentioned, but there may exist a way to identify the two workbooks that are currently in side-by-side mode, at which point the code could shed its for loop and become more concise. | unknown | |
d42 | train | Switch your main and playground script tags. The order matters:
<script src="scripts/playground.js"></script>
<script src="scripts/main.js"></script> | unknown | |
d43 | train | If I understand the question correctly, your issue is with image.open(), I believe the problem is that you are treating the directory and file name as strings when you join them together:
path1 = dir + img1
Instead you should try using the os.path module to combine the two:
path1 = os.path.join(dir, img1) | unknown | |
d44 | train | Assuming that no file name a newline:
find . -type f -printf '%s %p\n' \
    | sort -nr \
    | while read -r size file; do
        if ! [ -e "dest/${file#./*/}" ]; then
            cp "$file" "dest/${file#./*/}";
        fi;
      done
The output of find is a list of "filesize path":
221 ./dir1/a
1002 ./dir1/b
11 ./dir2/a
Then we sort the list numeric:
1002 ./dir1/b
221 ./dir1/a
11 ./dir2/a
And fianlly we reach the while read -r size filename loop, where each file is copied over to the destination dest/${file#./*/} if they don't already exists.
${file#./*/} expands to the value of the parameter file with the leading directory removed:
./abc/def/foo/bar.txt -> def/foo/bar.txt, which means you might need to create the directory def/foo in the dest directory:
    | while read -r size file; do
        dest=dest/${file#./*/}
        destdir=${dest%/*}
        [ -e "$dest" ] && continue
        [ -e "$destdir" ] || mkdir -p -- "$destdir"
        cp -- "$file" "$dest"
      done
A: I cannot comment on the other answer due to not enough reputation, but I was getting a syntax error due to missing fi. I also got an error where the target directory needed to be created before copying. So:
find . -type f -printf '%s %p\n' | sort -nr | while read -r size file; do if ! [ -e "dest/${file#./*/}" ]; then mkdir -p "$(dirname "dest/${file#./*/}")" && cp "$file" "dest/${file#./*/}"; fi; done | unknown | |
d45 | train | The logic in your answer is correct but displaying the path would require you to push the successful links onto a stack or onto the beginning of a list.
If you can only add to the end of a list (as is the case with trying to just Console.WriteLine each step) then you can do this without remembering (or returning) the path, but you have to be clever by building path in reverse order. (find the last step of the chain first)
If you have a link from n -> end and there is a path from start -> n, then you can display the link from n -> end. In the course of determining that there is a path from start -> n, you will have displayed all those links already.
using System;

class Program
{
    public static Boolean FindNode(int[][] graph, int start, int end)
    {
        if (start == end) {
            return true;
        }
        for (int i = 0; i < graph.Length; i++) {
            if (graph[i][1] == end) {
                if (FindNode(graph, start, graph[i][0])) {
                    Console.WriteLine("{0} -> {1}", graph[i][0], end);
                    return true;
                }
            }
        }
        return false;
    }

    static void Main(string[] args)
    {
        int[][] graph = new int[5][];
        graph[0] = new int[] { 1, 2 };
        graph[1] = new int[] { 1, 3 };
        graph[2] = new int[] { 2, 4 };
        graph[3] = new int[] { 3, 4 };
        graph[4] = new int[] { 4, 5 };
        if (FindNode(graph, 1, 5)) {
            //Success
        }
    }
} | unknown | |
d46 | train | So this was a bug in the integration they fixed in 2.01.10. | unknown | |
d47 | train | These conditions can be prioritized using a case expression in order by with a function like row_number.
select A,B,frequency,timekey
from (select t.*
,row_number() over(partition by A order by cast((B = 'unknown') as int), B) as rnum
from tbl t
) t
where rnum = 1
Here for each group of A rows, we prioritize rows other than B = 'unknown' first, and then in the order of B values.
A: Use the row_number analytic function. If you want to select the not-unknown record first, then use the query below:
select A, B, Frequency, timekey
from
(select
    A, B, Frequency, timekey,
    row_number() over(partition by A, Frequency order by case when B = 'unknown' then 1 else 0 end) rn
from tbl
) s where rn = 1
And if you want to prefer unknown rows when they exist, use this row_number in the query above:
row_number() over(partition by A, Frequency order by case when B = 'unknown' then 0 else 1 end) rn | unknown | |
d48 | train | With an INNER JOIN, it makes no difference if predicates are specified in the JOIN or WHERE clauses. The queries are semantically identical so the SQL Server optimizer should generate the same (optimal) executional plan. You will get the same performance as a result. | unknown | |
d49 | train | Slice a 2D Array
*
*It is assumed that the data starts in A1 and has a row of headers.
*You have to use 7 instead of 8 to copy column G instead of column H.
*[A1].CurrentRegion.Rows.Count may yield different results depending on which worksheet is active, so you should rather qualify the range: rp.Range("A1").CurrentRegion.Rows.Count.
*rp.Range("A1").CurrentRegion.Rows.Count may be different than rp.Range("H" & rp.Rows.Count).End(xlUp).Row, so you should opt for one (I've opted for the former).
*A quick fix could be [a1].CurrentRegion.Rows.Count - 1, but is not recommended due to the previous two reasons. rp.Range("A1").CurrentRegion.Rows.Count - 1 would be better.
*Both solutions do the same except for the out-commented 'exclude-headers-parts' in the first solution, which you could use to not include the headers, and the 'clear-contents-part' in the second solution, which you could use to clear the contents below the destination range.
*Adjust the values in the constants section, the workbook reference, and if headers should be excluded (and if the contents below the destination range should be cleared).
Option Explicit
Sub sliceArray()
Const sName As String = "RESOURCE_PLANNING"
Const sCols As String = "A:H"
Const dName As String = "RP_Output"
Const dFirst As String = "A1"
Dim dCols As Variant: dCols = VBA.Array(1, 2, 4, 6, 8) ' 'VBA': zero-based
Dim wb As Workbook: Set wb = ThisWorkbook ' workbook containing this code
'Dim wb As Workbook: Set wb = ActiveWorkbook ' workbook you're looking at
' Source Worksheet
Dim sws As Worksheet: Set sws = wb.Worksheets(sName)
' Source/Destination Rows Count
Dim rCount As Long: rCount = sws.Range("A1").CurrentRegion.Rows.Count
' No headers
'Dim rCount As Long: rCount = sws.Range("A1").CurrentRegion.Rows.Count - 1
' Source Array
Dim sData As Variant
sData = sws.Columns(sCols).Resize(rCount).Value
' No headers
'sData = sws.Columns(sCols).Resize(rCount).Offset(1).Value
' Destination Array
Dim dData As Variant
dData = Application.Index(sData, Evaluate("Row(1:" & rCount & ")"), dCols)
' Destination Columns Count
Dim dcCount As Long: dcCount = UBound(dCols) + 1 ' = UBound(dData, 2)
' Destination Worksheet
Dim dws As Worksheet: Set dws = wb.Worksheets(dName)
' Destination Range
Dim drg As Range: Set drg = dws.Range(dFirst).Resize(rCount, dcCount)
' Write
drg.Value = dData
End Sub
Sub sliceArrayShort()
Const sName As String = "RESOURCE_PLANNING"
Const sCols As String = "A:H"
Const dName As String = "RP_Output"
Const dFirst As String = "A1"
Dim dCols As Variant: dCols = VBA.Array(1, 2, 4, 6, 8)
Dim wb As Workbook: Set wb = ThisWorkbook
' Read
Dim rCount As Long
Dim sData As Variant
With wb.Worksheets(sName)
rCount = .Range("A1").CurrentRegion.Rows.Count
sData = .Columns(sCols).Resize(rCount).Value
End With
' Slice
Dim dData As Variant
dData = Application.Index(sData, Evaluate("Row(1:" & rCount & ")"), dCols)
' Write (& Clear)
With wb.Worksheets(dName).Range(dFirst).Resize(, UBound(dCols) + 1)
.Resize(rCount).Value = dData
' Optionally clear the contents below the destination range
' (uncomment both lines to enable):
'.Resize(.Worksheet.Rows.Count - .Row - rCount + 1) _
'    .Offset(rCount).ClearContents
End With
End Sub | unknown | |
d50 | train | Change your functionGeneratingTheError function to return the chained promise like below:
function functionGeneratingTheError() {
var getTokenCallPayload = {
"client_id" : clientId,
"client_secret" : clientSecret,
"refresh_token" : refreshToken,
"grant_type" : "refresh_token"
};
var getTokenCallOptions = {
"method" : "POST",
"body" : JSON.stringify(getTokenCallPayload),
"muteHttpExceptions" : false
};
// note: 'return' must be on the same line as the expression,
// otherwise automatic semicolon insertion returns undefined
return fetch(tokenURL, getTokenCallOptions)
.then(response => {
if (response.ok) {
return response.json();
} else {
throw new Error("Error");
}
})
.then(data => {
doSomething();
})
.then(response=> {
doSomethingAgain();
})
.catch(error => {
throw error;
});
}
And then await it in your calling code by wrapping the calling code inside an async self-invoking function, like so:
(async function() {
try {
await functionGeneratingTheError();
} catch (error) {
doSomethingElse();
}
})();
You can read more about async/await here: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Statements/async_function | unknown | |
d51 | train | You can query the "through" table directly with the ORM:
UserProfile.favorite_books.through.objects.filter(book_id=book.id).count()
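If you need this count for many books at once, here is a hedged sketch using the ORM's aggregation (the field name passed to Count assumes Django's default naming for the auto-created through model):
from django.db.models import Count

# One row per book_id, with the number of profiles that favorited it.
favorite_counts = (
    UserProfile.favorite_books.through.objects
    .values('book_id')
    .annotate(times_favorited=Count('userprofile'))
)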
A: You'll need to replace appname_* with the name of the M2M table in your DB, but you can do something like this:
from django.db import connections
cursor = connections['default'].cursor()
cursor.execute("""
SELECT count(*) FROM appname_userprofile_books
WHERE book_id = {book_id};
""".format(book_id=book_id))
favorited_count_list = cursor.fetchall()
You can then pull the number from favorited_count_list. | unknown | |
d52 | train | You can use discard in purrr
purrr::discard(df, ~any(.x$Var1 == 2))
#Or using keep
#purrr::keep(df, ~any(.x$Var1 != 2))
#[[1]]
# Var1 Var2
#1 1 2
#2 1 3
#3 1 4
#4 1 5
#[[2]]
# Var1 Var2
#1 3 2
#2 3 3
#3 3 4
#4 3 5
Or Filter in base R :
Filter(function(x) any(x$Var1 != 2), df)
Some variations :
df[sapply(df, function(x) any(x$Var1 != 2))]
df[purrr::map_lgl(df, ~any(.x$Var1 != 2))] | unknown | |
d53 | train | It's not that difficult once you understand why the second form isn't shown when the page is loaded. It has nothing to do with ASP itself; it's due to the CSS style "display: none;", and the second form only becomes active once the first Option Select has been filled.
Here is a full sample of what you're after, implemented in Selenium, which is similar to mechanize I believe.
The basic flow should be:
*
*load the web browser and load the page
*find the first Option Select and fill it in
*trigger a change event (I chose to send a TAB key) and the second Option Select is shown
*fill in the second Select and find the submit button and click it
*assign a name to the content you get from the div id=stats text
*compared if the text has changed from last fetch
a) if YES, do your BEEP and close the driver page etc.
b) if NO, set a scheduler (I use Python's Event's Scheduler, and run the crawling function again...
That's it! Easy, ok code time -- I used United Kingdom + Working Holiday pair for test:
import selenium.webdriver
from selenium.webdriver.common.keys import Keys
import sched, time
driver = selenium.webdriver.Firefox()
url = 'http://www.cic.gc.ca/english/work/iec/index.asp'
driver.get(url)
html_content = ''
# construct a scheduler
s = sched.scheduler(time.time, time.sleep)
def crawl_me():
    global html_content
    driver.refresh()
    time.sleep(5)  # wait 5s for page to be loaded
    country_name = driver.find_element_by_name('country-name')
    country_name.send_keys('United Kingdom')
    # the trick here is to send a TAB key to trigger the change event
    country_name.send_keys(Keys.TAB)
    # make sure the second Option Select is active ("none" no longer in style)
    assert "none" not in driver.find_element_by_id('category_dropdown').get_attribute('style')
    category_name = driver.find_element_by_name('category-name')
    category_name.send_keys('Working Holiday')
    btn_go = driver.find_element_by_id('submit')
    btn_go.send_keys(Keys.RETURN)
    # again, check that the content has been loaded
    assert "United Kingdom - Working Holiday" in driver.page_source
    compared_content = driver.find_element_by_id('stats').text
    # here we will end this script if the content has changed already
    if html_content != '' and html_content != compared_content:
        # do whatever you want to play the beep sound,
        # and at the end exit the loop
        driver.close()
        exit(-1)
    # if no changes are found, trigger the schedule_crawl() function, recursively
    html_content = compared_content
    print(html_content)
    return schedule_crawl()

def schedule_crawl():
    # set your time interval here, 15*60 = 15 minutes
    s.enter(15*60, 1, crawl_me, ())
    s.run()  # and run it of course
crawl_me()
To be honest, this is quite easy and straight forward however it does require you fully understand how html/css/javascript (not javascript in this case, but you do need to know the basic) and all their elements how they work together.
You do need to learn from the basic read => digest => code => experience => do it in cycles, Programming doesn't have a shortcut or the fastest way.
Hope this helps (and I really hope you do not just copy & paste mine, but learn and implement your own in mechanize by the way).
Good Luck! | unknown | |
d54 | train | I would recommend the use of an IIFE (immediately invoked function expression):
var coolObj=(function(){
var public={};
var nonpublic={};
nonpublic.a=0;
public.getA=function(){nonpublic.a++;return nonpublic.a;};
return public;
})();
Now you can do:
coolObj.getA();//1
coolObj.getA();//2
coolObj.a;//undefined
coolObj.nonpublic;//undefined
coolObj.nonpublic.a;//undefined
I know this is not the answer you've expected, but I think it's the easiest way of doing something like that.
A: You can use a proxy which requires a key in order to define properties:
function createObject() {
var key = {configurable: true};
return [new Proxy({}, {
defineProperty(target, prop, desc) {
if (desc.value === key) {
return Reflect.defineProperty(target, prop, key);
}
}
}), key];
}
function func() {
var [obj, key] = createObject();
key.value = 0;
Reflect.defineProperty(obj, "value", {value: key});
key.value = function() {
key.value = obj.value + 1;
Reflect.defineProperty(obj, "value", {value: key});
};
Reflect.defineProperty(obj, "increase", {value: key});
return obj;
}
var obj = func();
console.log(obj.value); // 0
try { obj.value = 123; } catch(err) {}
try { Object.defineProperty(obj, "value", {value: 123}); } catch(err) {}
console.log(obj.value); // 0
obj.increase();
console.log(obj.value); // 1 | unknown | |
d55 | train | 1) Windows Sysinternals VMMap can give you quite good insight into the virtual memory layout in a particular process. If comparing visualizations provided by this tool on the 2 PCs does not help then...
2) ...googling "virtual memory configuration windows 7" should quickly point you in the right direction
3) Also, in your original question https://stackoverflow.com/q/25263223/2626313 the problem with the exact address may be that the address range is already used by a hardware component. You can check that via Control Panel → Device Manager: switch the menu to View → Resources by type and check what you see under the Memory node
4) finally, this https://superuser.com/a/61604/304578 article contains a link to a blog post by Mark Russinovich (original author of the Windows Sysinternals tool set) explaining something perhaps related to your problem | unknown |
d56 | train | Declare data as an empty array; if you don't, it will be undefined until the API response completes. If you want to know when the response actually arrives, use tap. And if you want to call importData() after the API response is received, call this.importData() inside subscribe, like this:
// declare as empty array
oData = [];
buttonEnabled = false;
ngOnInit(): void {
this.mydataser.getDbData().pipe(
tap(d => this.buttonEnabled = true)
).subscribe(data => {
this.oData = data;
this.importData(); // runs only after the response has arrived
});
}
importData(): void{
console.log(this.oData);
}
In template:
<button [disabled]="!buttonEnabled">Import</button> | unknown | |
d57 | train | Try the code below:
$email = 'example@email.com';
$customer = Mage::getModel('customer/customer');
$customer->setWebsiteId(Mage::app()->getWebsite()->getId());
$customer->loadByEmail(trim($email));
Mage::getSingleton('customer/session')->loginById($customer->getId()); | unknown | |
d58 | train | $list = json_decode($jsonList, true);
foreach ($list['product'] as $key => $product) {
if ($product['cartID'] == $remove_cartid) {
unset($list['product'][$key]);
}
}
$jsonList = json_encode($list); | unknown | |
d59 | train | In pseudo code using information_schema tables:
$rows = "SELECT TABLE_NAME
FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_SCHEMA = 'yourDBName'
AND ENGINE LIKE 'engineA'";
foreach ($rows as $table) {
$query = 'ALTER TABLE '.$table.' ENGINE = engineB';
} | unknown | |
d60 | train | If I understand you correctly, you would like to get the distinct customers from your projects?
In this case I think something like this should work (it should give you the distinct project IDs from this criteria; maybe there is a way to get the projects directly, but I can't test this right now):
ICriteria criteria = session.CreateCriteria(typeof(Project),"Project")
.SetProjection(Projections.Distinct(Projections.Property("Project.Id")))
.CreateAlias("Project.Customer", "customer", NHibernate.SqlCommand.JoinType.InnerJoin)
.CreateAlias("Project.Coordinator", "Coordinator", NHibernate.SqlCommand.JoinType.InnerJoin)
.Add(Restrictions.Eq("Project.ProjectType", projectType)); | unknown | |
d61 | train | So I finally figured out the problem, in case anyone is looking for an answer.
TensorFlow does not have a complete GPU implementation of the Adagrad optimizer as of now. The ResourceSparseApplyAdagradV2 op, which the embedding layer relies on, raises an error on GPU, so Adagrad cannot be used with an embedding layer under data-parallelism strategies. Using Adam or RMSprop works fine.
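A minimal sketch of the workaround (the model and strategy details here are illustrative assumptions, not my actual setup):
import tensorflow as tf

strategy = tf.distribute.MirroredStrategy()
with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Embedding(input_dim=10000, output_dim=64),
        tf.keras.layers.GlobalAveragePooling1D(),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])
    # optimizer="adagrad" would hit the ResourceSparseApplyAdagradV2 op on GPU here
    model.compile(optimizer="adam", loss="binary_crossentropy")
With Adam (or RMSprop) the sparse embedding updates run on GPU without the error. | unknown |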
d62 | train | Use the JsonProperty attribute :
[JsonProperty(PropertyName = "cost")]
public int Cost
{
get
{
return cost;
}
private set { cost = value; }
}
[JsonProperty(PropertyName = "igloo_id")]
public int Id
{
get
{
return igloo_id;
}
private set { igloo_id = value; }
}
[JsonProperty(PropertyName = "name")]
public string Name
{
get
{
return name;
}
private set { name = value; }
}
Basically you need to say, for each property, what the JSON key is that will match. Plus, you will need to have a setter for each of your properties, but it can be private.
A: I'm not entirely sure I understand what you want. The code you posted generated an exception when I tried running it:
Additional information: Cannot deserialize the current JSON object (e.g. {"name":"value"}) into type 'System.Collections.Generic.List`1[ConsoleApplication2.Program+Igloo]' because the type requires a JSON array (e.g. [1,2,3]) to deserialize correctly.
It did deserialize correctly if I used:
var igloosList = JsonConvert.DeserializeObject<Dictionary<string, Igloo>>( json );
If you want to loop through it you could then use:
foreach( var igloo in igloosList.Values ) | unknown | |
d63 | train | It is now possible to use bindings even on elements that aren't derived from FrameworkElement; however, the property of the element being bound must be defined as a DependencyProperty, which Header is not.
Since Header is simply a placeholder for any content to be placed in the header, you could simply do this:
<DataGridTextColumn.Header>
<TextBlock Text="{Binding Path=Dummy,Source={StaticResource languagingSource},Converter={StaticResource languagingConverter},ConverterParameter=vehicleDescription}" />
</DataGridTextColumn.Header>
A: After some further searching I found this thread that answers the question and gives some suggested solutions.
Dynamically setting the Header text of a Silverlight DataGrid Column | unknown | |
d64 | train | I fixed it. I had code that was reading the version out of MANIFEST.MF from an InputStream returned by URL.openStream():
String manifestPath = classPath.substring(0, webInfIndex) +
"/META-INF/MANIFEST.MF";
// DON'T DO THIS!!!
// openStream() returns an InputStream that never gets closed.
Manifest manifest = new Manifest(new URL(manifestPath).openStream());
Attributes attr = manifest.getMainAttributes();
String version = attr.getValue(Attributes.Name.IMPLEMENTATION_VERSION);
Fixing the leak using Java 7 try-with-resources:
try (InputStream inputStream = new URL(manifestPath).openStream()) {
Manifest manifest = new Manifest(inputStream);
Attributes attr = manifest.getMainAttributes();
String version = attr.getValue(Attributes.Name.IMPLEMENTATION_VERSION);
} | unknown | |
d65 | train | Well, you can add numbers on custom markers using the following tutorial:
Link
and after that attach an infowindow to each custom marker:
Link | unknown | |
d66 | train | This is pure speculation, but sometimes you need to set a frame explicitly (e.g. when adding a custom button to a UIBarButtonItem). Try adding a line:
animation1.frame = CGRectMake(0,0,50,50); | unknown | |
d67 | train | The outer join will return all columns from both tables. Also, we have to fill the null values in qty_users, since sum would otherwise return null.
Finally, we can select using the coalesce function:
from pyspark.sql import functions as F
newdf = df1.join(df2,(df1.pk==df2.pk2) & (df1.num_pk==df2.num_pk2) & (df1.num_id==df2.num_id2),'outer').fillna(0,subset=["qty_users","qty_users2"])
newdf = newdf.withColumn('total', sum(newdf[col] for col in ["qty_users","qty_users2"]))
newdf.select(*[F.coalesce(c1,c2).alias(c1) for c1,c2 in zip(df1.columns,df2.columns)][:-1]+['total']).show()
+--------+--------+------+-----+
| pk| num_id|num_pk|total|
+--------+--------+------+-----+
|63479840|12556940|298620| 13|
|63480030|12557110|298620| 10|
|63835520|12627890|299750| 10|
|63479800|11156940|298620| 10|
+--------+--------+------+-----+
Hope this helps!
A: Does this output what you want?
df3 = pd.concat([df1, df2], ignore_index=True).groupby(['pk','num_id','num_pk'])['qty_users'].sum()
The merging of your 2 dataframes is achieved via pd.concat([df1, df2], ignore_index=True)
Finding the sum of the qty_users columns when all other columns are the same first requires grouping by those columns
groupby(['pk','num_id','num_pk'])
and then finding the grouped sum of qty_users
['qty_users'].sum() | unknown | |
d68 | train | For a custom count by intervals, you can create a User Defined Aggregate (UDA). Read my blog posts to avoid the caveats:
UDF deep dive: http://www.doanduyhai.com/blog/?p=1876
UDF/UDA best practices & caveats: http://www.doanduyhai.com/blog/?p=2015 | unknown | |
d69 | train | Tables inside dialogs work just fine in Vuetify:
var app = new Vue({
el: '#app',
template: '#main',
vuetify: new Vuetify(),
data:
{
dlgVisible: false,
header:
[
{
value: 'fname',
text: 'First Name' ,
class: 'font-weight-bold text-subtitle-1',
},
{
value: 'lname',
text: 'Last Name' ,
class: 'font-weight-bold text-subtitle-1',
},
{
value: 'salary',
text: 'Salary' ,
class: 'font-weight-bold text-subtitle-1',
},
],
rows:
[
{
fname: 'John',
lname: 'Doe',
salary: 1000
},
{
fname: 'John',
lname: 'Doe',
salary: 1000
},
{
fname: 'John',
lname: 'Doe',
salary: 1000
},
],
},
});
<link href="https://fonts.googleapis.com/css?family=Roboto:100,300,400,500,700,900" rel="stylesheet">
<link href="https://cdn.jsdelivr.net/npm/@mdi/font@6.x/css/materialdesignicons.min.css" rel="stylesheet">
<link href="https://cdn.jsdelivr.net/npm/vuetify@2.x/dist/vuetify.min.css" rel="stylesheet">
<div id="app">
</div>
<template id="main">
<v-app>
<v-btn color="primary" @click="dlgVisible = true">Show dialog</v-btn>
<v-dialog v-model="dlgVisible">
<v-card>
<v-card-title class="primary white--text py-2">Dialog</v-card-title>
<v-card-text>
<v-data-table :items="rows" :headers="header">
</v-data-table>
</v-card-text>
<v-card-actions class="justify-center">
<v-btn color="primary" @click="dlgVisible = false">Close</v-btn>
</v-card-actions>
</v-card>
</v-dialog>
</v-app>
</template>
<script src="https://cdn.jsdelivr.net/npm/vue@2/dist/vue.js"></script>
<script src="https://cdn.jsdelivr.net/npm/vuetify@2.x/dist/vuetify.js"></script> | unknown | |
d70 | train | If I understand you correctly, you just need to use the Or keyword and search through all the kw fields:
sqlite> select issue,url,steps from help where kw1='ie' or kw2='ie' or kw3='ie' or kw4='ie' or kw5='ie' or kw6='ie' or kw7='ie' or kw8='ie' or kw9='ie' or kw10='ie' or kw11='ie' or kw12='ie';
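If you are querying from Python, here is a small sketch with the built-in sqlite3 module (the database path is an assumption); generating the twelve OR terms also saves typing them by hand:
import sqlite3

conn = sqlite3.connect("help.db")  # assumed path to your database file
ors = " or ".join(f"kw{i} = ?" for i in range(1, 13))
rows = conn.execute(f"select issue, url, steps from help where {ors}", ["ie"] * 12).fetchall()
print(rows)
The ? placeholders keep the keyword value safely parameterized. | unknown |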
d71 | train | Since you are using LINQ to SQL, you should use Convert.ToInt32 for converting the string to a number, so your query would be:
var bb =(from c in Office_TBLs select Convert.ToInt32(c.CodeNumber)).Max();
See: Standard Query Operator Translation
C# casts are supported only in projection. Casts that are used
elsewhere are not translated and are ignored. Aside from SQL
function names, SQL really only performs the equivalent of the common
language runtime (CLR) Convert. That is, SQL can change the value of
one type to another. There is no equivalent of CLR cast because there
is no concept of reinterpreting the same bits as those of another
type. That is why a C# cast works only locally. It is not remoted.
A: "999" > "1601" as a string comparison - so to get the result you want, you need to convert the string values to numbers.
The easiest approach would be to use .Select(s => int.Parse(s)).Max() (or .Max(s => int.Parse(s))) instead of .Max(), which ends up using regular string comparison.
Note that depending on where the data is coming from, there could be much better ways to get integer results (including changing the field type in the database). Using .Select on the query most likely forces the query to return all rows from the DB and only then compute Max in memory.
A: Try this
var bb = Office_TBLs.Max(e => Convert.ToInt32(e.CodeNumber));
A: If you use NVarChar then you are asking to have the values sorted alphabetically. "999" comes after "1601" just like "ZZZ" comes after "HUGS".
If the column is supposed to only contain numeric values then the best fix is to change the datatype to a more appropriate choice. | unknown | |
d72 | train | I'm not sure why you don't want to use strftime, but if you absolutely wanted a different way, try altering your last three lines to this:
print(f"Yesterday : {yesterday.day}/{yesterday.month}/{yesterday.year}")
print(f"Today : {today.day}/{today.month}/{today.year}")
print(f"Tomorrow : {tomorrow.day}/{tomorrow.month}/{tomorrow.year}")
which produces:
Yesterday : 9/11/2022
Today : 10/11/2022
Tomorrow : 11/11/2022
You could make it a little more compact like this:
days = {'yesterday' : yesterday, 'today' : today, 'tomorrow' : tomorrow}
for daystr, day in days.items():
    print(f"{daystr.title()} : {day.day}/{day.month}/{day.year}")
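Note that single-digit days and months print without zero padding; if you want 09/11/2022 rather than 9/11/2022, format specs handle it (the 02d widths are my assumption about the desired output):
for daystr, day in days.items():
    print(f"{daystr.title()} : {day.day:02d}/{day.month:02d}/{day.year}")
Still no strftime needed. | unknown |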
d73 | train | As a one-liner:
$ perl -F'/' -lane 'print join "/", @F[0..4]' <<< "/abcd/prod/Cid/1234/Did"
As a script :
while (<>) {
chomp $_;
my @F = split(m[/], $_, 0);
print join('/', @F[0..4]), "\n";
}
Output :
/abcd/prod/Cid/1234 | unknown | |
d74 | train | As Mephy says, implementing IEquatable<User> would make this simpler - at that point, you could just perform a join:
var changes = usersOld.Join(usersNew, o => o.Id, n => n.Id,
(o, n) => new { Old = o, New = n })
.Where(pair => !pair.Old.Equals(pair.New));
Then whenever you add a relevant property, you just need to change the Equals implementation to take account of that.
The result is a sequence of pairs - currently as an anonymous type, but you could use Tuple<,> if you wanted to return it from a method. | unknown |
d75 | train | You are looking for text-transform: uppercase and nth-child selector.
Something like this:
header nav ul li:nth-child(-n+2) {
text-transform: uppercase;
}
<header>
<h1>Title</h1>
<nav>
<ul>
<li><a href="#">Developers</a></li>
<li><a href="#">Designers</a></li>
<li><a href="#">How it Works</a></li>
<li><a href="#">Our Team</a></li>
<li><a href="#">Blog</a></li>
</ul>
</nav>
</header>
A: You can use nth-of-type().
For your example you will want to use nth-of-type on the <li>.
header nav li:nth-of-type(-n+2) {
text-transform: capitalize;
}
https://jsfiddle.net/qfLrsdwr/1
My JSFiddle has slightly different selectors and markup for demonstration purposes.
<header>
<h1>Title</h1>
<nav class="primary-nav">
<ul>
<li><a href="#">Developers</a></li>
<li><a href="#">Designers</a></li>
<li><a href="#">How it Works</a></li>
<li><a href="#">Our Team</a></li>
<li><a href="#">Blog</a></li>
</ul>
</nav>
</header>
.primary-nav li {
text-transform: lowercase;
}
.primary-nav li:nth-of-type(-n+2) {
text-transform: capitalize;
} | unknown | |
d76 | train | Have you tried using sys.argv instead of argparse? You can do something like this:
import sys
dict = {}
tmp = []
key = ''
for arg in sys.argv:
    if arg[0] == '-':
        if tmp != []:
            dict[key] = tmp
            tmp = []
        key = arg
        if key == '--args':
            # lists have .index(), not .find()
            dict[key] = sys.argv[sys.argv.index(key)+1:]
            break
        continue
    tmp.append(arg)
# store the values collected for the last flag once the loop ends
if tmp != [] and key != '':
    dict[key] = tmp
This basically constructs an argument dictionary with flags as keys and list of arguments to flags as values. You might want to check what sys.argv is. If you invoke your python script with
python script.py -h yes -f yes no yes
sys.argv will be ['script.py', '-h', 'yes', '-f', 'yes', 'no', 'yes']; the interpreter itself is not included, and sys.argv[0] is the script name. So instead of for arg in sys.argv you may want to do for arg in sys.argv[1:] to skip the script name.
So if we call
program.py --valid-arg1 value1 --valid-arg2 value2 --args binary --bin-arg1 bin_arg1_value --bin-arg2 bin_arg2_value
the dict will look like this:
dict = {
    '--valid-arg1': ['value1'],
    '--valid-arg2': ['value2'],
    '--args': ['binary', '--bin-arg1', 'bin_arg1_value', '--bin-arg2', 'bin_arg2_value']
}
Now if you want to use what is the value for --valid-arg1, you can do something like
try:
    if dict['--valid-arg1'] == some_value1:
        ...
    elif dict['--valid-arg1'] == some_value2:
        ...
except KeyError:
    # no flag --valid-arg1
    pass
With --args you can do something like:
try:
    if something in dict['--args']:
        ...
except KeyError:
    # no additional args were given
    pass | unknown |
d77 | train | Unfortunately they cannot. All of the parameters that you can configure through the Google Kubernetes Engine API are here.
If you want to customize the nodes beyond what is offered through the API you can create your own instance template as described in this stackoverflow answer. The downside is that you will no longer be able to manage the nodes via the Google Kubernetes Engine API (e.g. for upgrading). | unknown | |
d78 | train | Here's some code that should help you get started.
function addMarkers(map, locations) {
$.each(locations, function(index, location) {
var marker = new google.maps.Marker({
position: new google.maps.LatLng(location[0], location[1]),
map: map,
icon: 'http://html.realia.byaviators.com/assets/img/marker-transparent.png'
});
var myOptions = {
content: '<div class="infobox"><div class="image"><img src="http://html.realia.byaviators.com/assets/img/tmp/property-tiny-1.png" alt=""></div><div class="title"><a href="detail.html">1041 Fife Ave</a></div><div class="area"><span class="key">Area:</span><span class="value">200m<sup>2</sup></span></div><div class="price">€450 000.00</div><div class="link"><a href="detail.html">View more</a></div></div>',
disableAutoPan: false,
maxWidth: 0,
pixelOffset: new google.maps.Size(-146, -190),
zIndex: null,
closeBoxURL: "",
infoBoxClearance: new google.maps.Size(1, 1),
position: new google.maps.LatLng(location[0], location[1]),
isHidden: false,
pane: "floatPane",
enableEventPropagation: false
};
marker.infobox = new InfoBox(myOptions);
marker.infobox.isOpen = false;
var myOptions = {
draggable: true,
content: '<div class="marker"><div class="marker-inner"></div></div>',
disableAutoPan: true,
pixelOffset: new google.maps.Size(-21, -58),
position: new google.maps.LatLng(location[0], location[1]),
closeBoxURL: "",
isHidden: false,
// pane: "mapPane",
enableEventPropagation: true
};
marker.marker = new InfoBox(myOptions);
marker.marker.open(map, marker);
markers.push(marker);
google.maps.event.addListener(marker, "click", function(e) {
var curMarker = this;
$.each(markers, function(index, marker) {
// if marker is not the clicked marker, close the marker
if (marker !== curMarker) {
marker.infobox.close();
marker.infobox.isOpen = false;
}
});
if (curMarker.infobox.isOpen === false) {
curMarker.infobox.open(map, this);
curMarker.infobox.isOpen = true;
map.panTo(curMarker.getPosition());
} else {
curMarker.infobox.close();
curMarker.infobox.isOpen = false;
}
});
});
}
// Assign handlers immediately after making the request,
// and remember the jqXHR object for this request
var jqxhr = $.ajax( "getlocations.php" ).done(function(locations) {
    addMarkers(map, locations);
});
Basically, the code fetches data from the script at getlocations.php and calls addMarkers with the response (it must be returned in JSON form), which is passed to the done callback as "locations". You can add parameters to the AJAX call to make it more flexible, like this:
$.ajax(
{
type: "GET",
url: "getlocations.php",
data: { city: "Boston", page: 1 } // or whatever parameters you need to send in the URL
}).done(function(locations) {
    addMarkers(map, locations);
});
d79 | train | There are many solutions for your problem, but I'm gonna try to focus on the most mature one. I assume you are using windows as your environment.
You said you were running your projects on localhost, which is where your first mistake is.
Localhost is nothing but a form of weird domain name you specify. When you request a domain like "www.google.com" or "localhost" the following steps occur:
*
*The browser checks a specific file for the requested domain. This file is called hosts file
*If the domain name is found in the hosts file, the browser sends a request to the IP address specified in the hosts file
*If the domain name is not found in the hosts file, the browser queries domain name servers, which return the corresponding IP address.
Now localhost is nothing but a domain name specified in your hosts file, which points to the loopback address (127.0.0.1).
So the magic trick here is to bind your custom hosts like "project1", "project2" etc. to that loopback address (127.0.0.1).
Then when you send a request to "project1" in your browser, the server running on port 80 will respond as if you typed "localhost".
The second part you need to take care of is called virtual hosts. When you send a request with a specific domain name, a special header is included in your http request called "Host".
Let's assume that you redirect all these custom domains to the same IP (127.0.0.1). In order for Apache to serve a different project, you should instruct Apache to look at the "Host" header and resolve it to the corresponding project.
Again you do that by setting virtual hosts.
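To see what this means in practice, here is a minimal sketch (Python with the requests library is my assumption, using the illustrative hostnames from above): both requests hit the same loopback IP, and Apache picks the project to serve purely from the Host header.
import requests

# Same IP and port, different Host header:
r1 = requests.get("http://127.0.0.1/", headers={"Host": "project1"})
r2 = requests.get("http://127.0.0.1/", headers={"Host": "project2"})
print(r1.status_code, r2.status_code)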
A lot of frameworks and content management systems in PHP have some ugly ways to insert some magic "$BASE_PATH" variable, which is a bad practice, as that could be achieved with relative paths in pure HTML and a properly configured server. | unknown |
d80 | train | You're trying to sample a texture (on line 4) outside of any function. Where do you expect this code to run, in the vertex shader (VS) or the pixel shader (PS)?
Remove line 4 and change your pixel shader to:
return txDiffuse.Sample(samLinear, input.Tex);
You seem to have some structures that aren't used at all (VSINPUT, PSINPUT). The last line of the vertex shader won't compile either, because you're using "input.Tex" and the vertex shader has no variable called "input". If you're not going to use those structures, change the first line of the VS to:
VS_OUTPUT VS(float4 Pos : POSITION, float3 NormalL : NORMAL, float2 Tex : TEXCOORD0)
and the end of the shader to be:
output.Color.a = DiffuseMtrl.a;
output.Tex = Tex;
return output; | unknown | |
d81 | train | Here is a conceptual example for you.
It covers one-to-many scenario similar to yours for Order and OrderDetails.
SQL
-- DDL and sample data population, start
USE tempdb;
GO
CREATE TABLE #orders (
OurOrderID INT IDENTITY PRIMARY KEY,
OrderID CHAR(5) NOT NULL,
CustomerID CHAR(5) NOT NULL,
OrderDate DATE NOT NULL,
EmployeeID INT NOT NULL
);
CREATE TABLE #details (
OrderDetailID INT IDENTITY,
OurOrderID INT NOT NULL FOREIGN KEY REFERENCES #orders(OurOrderID),
ProductID INT NOT NULL,
Price DECIMAL(10,2) NOT NULL,
Qty INT NOT NULL,
PRIMARY KEY (OrderDetailID, OurOrderID, ProductID)
);
DECLARE @orderidmap TABLE (
OurOrderID INT PRIMARY KEY,
TheirOrderID INT NOT NULL UNIQUE
);
DECLARE @xml XML =
N'<Orders>
<Order OrderID="13000" CustomerID="ALFKI" OrderDate="2006-09-20Z" EmployeeID="2">
<OrderDetails ProductID="76" Price="123" Qty="10"/>
<OrderDetails ProductID="16" Price="3.23" Qty="20"/>
</Order>
<Order OrderID="13001" CustomerID="VINET" OrderDate="2006-09-20Z" EmployeeID="1">
<OrderDetails ProductID="12" Price="12.23" Qty="1"/>
</Order>
</Orders>';
-- DDL and sample data population, end
/*
Propagate generated IDENTITY values for PRIMARY KEY as FOREIGN KEY in the child table
=============================================================================================
We have an XML document with order data, and there is an order ID in that data.
To be able to store both header and details, we need a mapping,
and to this end we use the MERGE statement with the odd condition 1 = 0
in the USING clause and there is only one branch for WHEN NOT MATCHED.
We use the OUTPUT clause, and we insert both order IDs into the @orderidmap table.
*/
;WITH OrderData AS
(
SELECT TheirOrderID = c.value('@OrderID[1]', 'INT'),
CustomerID = c.value('@CustomerID[1]', 'CHAR(5)'),
OrderDate = c.value('@OrderDate[1]', 'DATETIME'),
EmployeeID = c.value('@EmployeeID[1]', 'SMALLINT')
FROM @xml.nodes('/Orders/Order') AS t(c)
)
MERGE #orders AS o
USING OrderData AS od ON 1 = 0
WHEN NOT MATCHED THEN
INSERT(OrderID, CustomerID, OrderDate, EmployeeID)
VALUES(od.TheirOrderID, od.CustomerID, od.OrderDate, od.EmployeeID)
OUTPUT inserted.OurOrderID, od.TheirOrderID
INTO @orderidmap (OurOrderID, TheirOrderID);
;WITH Details AS
(
SELECT TheirOrderID = o.value('@OrderID[1]', 'INT'),
ProductID = od.value('@ProductID[1]', 'SMALLINT'),
Price = od.value('@Price[1]', 'DECIMAL(10,2)'),
Qty = od.value('@Qty[1]', 'INT')
FROM @xml.nodes('/Orders/Order') AS A(o)
CROSS APPLY A.o.nodes('OrderDetails') AS B(od)
)
INSERT #details (OurOrderID, ProductID, Price, Qty)
SELECT m.OurOrderID, d.ProductID, d.Price, d.Qty
FROM Details AS d
INNER JOIN @orderidmap AS m ON d.TheirOrderID = m.TheirOrderID;
-- test
SELECT * FROM #orders;
SELECT * FROM @orderidmap;
SELECT * FROM #details;
GO
DROP TABLE #orders, #details; | unknown | |
d82 | train | It seems FNH is confused because you seem to map the same object (ChildEntity) to two different tables, if I'm not mistaken.
If you don't really need the two lists to be separated, perhaps using a discriminating value for each of your lists would solve the problem. Your first ChildEntity list would bind to the discriminating value A, and your second to the discriminating value B, for instance.
Otherwise, I would perhaps opt for a derived class of your ChildEntity, just not to have the same name of ChildEntity.
IList<ChildEntity> ChildEntities
IList<IncludedChildEntity> IncludedChildEntities
And both your object classes would be identical.
If you say it works with NH, then it might be a bug, as already stated. However, you may mix both XML mappings and AutoMapping with FNH. So, if it does work in NH, this would perhaps be my preference. But I think this workaround should do it.
A: You know I'm just shooting in the dark here, but it almost sounds like your ChildEntity class isn't known by Hibernate; that's typically where I've seen that sort of message. Hibernate inspects your class and sees this referenced class (ChildEntity in this case) that it doesn't know about.
Maybe you've moved on and found the issue at this point, but thought I'd see anyway.
A: Fluent is confused because you are referencing the same parent column twice. That is a no-no. And as far as I can tell from the activity I have seen, a fix is not coming any time soon.
You would have to write some custom extensions to get that working, if it is possible.
A: To my great pity, NHibernate cannot do that. Consider using another ORM. | unknown | |
d83 | train | Replace:
ActiveWorkbook.Connections("Query - Entries").Refresh
with:
With ActiveWorkbook.Connections("Query - Entries")
.OLEDBConnection.BackgroundQuery = False
.Refresh
End With | unknown | |
d84 | train | Try using properties.x; then all the x values are extracted from the properties array.
Example:
df.withColumn("x_values",col("properties.x")).show(10,False)
#or by using higher order functions
df.withColumn("x_values",expr("transform(properties,p -> p.x)")).show(10,False)
#+---+-------------------------+--------+
#|id |properties |x_values|
#+---+-------------------------+--------+
#|1 |[[11, str1a]] |[11] |
#|2 |[[21, str2a], [22, 0.22]]|[21, 22]|
#+---+-------------------------+--------+ | unknown | |
d85 | train | Set both columns in a single foreach loop. | unknown | |
d86 | train | That data is already in the store. The point is that RTK Query manages the data for you - you don't have to write a slice by hand any more. Just use the hook in your component and you're done. | unknown | |
d87 | train | If this is a browser activity, a better way to do this may be a JavaScript setInterval() with an AJAX call to a PHP function.
Otherwise, I'd recommend running the PHP script via CRON. There's no other way that I know to run PHP asynchronously, which is probably what you're trying to accomplish. | unknown | |
d88 | train | db.getCollection("commentsURL").aggregate([
{$project:
{
_id:1,
url2:{$arrayElemAt:[{$split:["$url", "/"]}, 4]},
url3:{$arrayElemAt:[{$split:["$url", "/"]}, 5]}
}
},
{ $addFields: { full_name: { $concat: [ "$url2", "/", "$url3" ] } } },
{$out:"comments2"}
]);
After that, with a projection query, I can get the full_name of the repos:
db.getCollection("comments").find({},{"_id":1,"full_name":1}).forEach(function(doc){
db.comments2.insert(doc);}); | unknown | |
d89 | train | You could use the Focused event instead of TextChanged event.
<StackLayout>
<Entry ClassId="1" x:Name="myWord1" Focused="EntryFocused"/>
<Entry ClassId="2" x:Name="myWord2" Focused="EntryFocused"/>
</StackLayout>
private void EntryFocused(object sender, FocusEventArgs e)
{
var EntryTapped = (Xamarin.Forms.Entry)sender;
if (EntryTapped.ClassId == "1")
{
myWord2.Text = "Noo";
}
else if (EntryTapped.ClassId == "2")
{
myWord1.Text = "yess";
}
}
A: There are several ways of doing this:
*
*Using bindings
In this case you would have 2 private variables and 2 public variables, and the entries bound to each one. Check this link on how to implement INotifyPropertyChanged:
private string entry1String;
private string entry2String;
public string Entry1String {
get => entry1String;
set
{
entry2String = "Noo";
entry1String = value;
OnPropertyChanged(nameof(Entry1String));
OnPropertyChanged(nameof(Entry2String));
}
}
public string Entry2String {
get => entry2String;
set
{
entry1String = "Yees";
entry2String = value;
OnPropertyChanged(nameof(Entry1String));
OnPropertyChanged(nameof(Entry2String));
}
}
Another way could be using a variable as a semaphore. While the variable is true, the method cannot be re-entered by another call.
private bool semaphoreFlag=false;
private async void OnEntryTextChange(object sender, TextChangedEventArgs e)
{
if(semaphoreFlag) return;
semaphoreFlag=true;
var EntryTapped = (Xamarin.Forms.Entry)sender;
Device.BeginInvokeOnMainThread(() => {
if (EntryTapped.ClassId == "1")
{
myWord2.Text="Noo";
}
else if (EntryTapped.ClassId == "2")
{
myWord1.Text="yess";
}
});
semaphoreFlag=false;
} | unknown | |
d90 | train | After some research, I've discovered you can make use of a .runsettings file (documentation).
You can customize your code coverage results within this file like so:
<CodeCoverage>
<ModulePaths>
<Exclude></Exclude>
</ModulePaths>
<Functions>
<Exclude>
<Function>.*c__DisplayClass.*</Function>
</Exclude>
</Functions>
</CodeCoverage>
This gave me the results I wanted. All auto generated c__DisplayClass functions are excluded from the results.
A: Just to add to Anthony's excellent answer, I had lots of auto generated rubbish which can be hidden neatly with the following .runsettings file:
<CodeCoverage>
<ModulePaths>
<Exclude></Exclude>
</ModulePaths>
<Functions>
<Exclude>
<Function>.*&lt;.*&gt;.*</Function>
</Exclude>
</Functions>
</CodeCoverage>
Note that &lt; and &gt; are the XML-escaped angle brackets < and >, so this should (in my experience) cover all automatically generated code in the coverage results. | unknown |
d91 | train | The flash messages are stored in the user's session. If a user opens up two browser windows and performs some action in one window that causes a flash, but reloads a page in the second window before the first one redirects, the second window will show the flash.
With that said, does that sound like your issue? Does it show the flash twice? Please elaborate and be more specific as to when 'it displays when it shouldn't'.
A: Maybe the layout of the page where you want the flash message to show doesn't print flash messages, and then it shows up in the layout where you do print the flash message. | unknown |
d92 | train | I hope this answer helps you or anyone else looking for a solution. This one worked for me.
I'm trying to do the same. I have a page that calculates shipping in an iframe, and I want it to show a loading GIF image while the shipping cost loads as XML from my shipping provider.
The IFRAME is covered with a DIV where the loading image (or any image) is found. The image is at the center of the DIV area and its size can be up to 950x633 pixels. When the IFRAME page is loaded, the loading image will be hidden.
What you need to change is the image URL for your site. You might also want to change the DIV background color (currently set to #FFF).
<style>
#loadImg{position:absolute;z-index:999;}
#loadImg div{display:table-cell;width:950px;height:633px;background:#fff;text-align:center;vertical-align:middle;}
</style>
<div id="loadImg"><div><img src="loading.gif" /></div></div>
<iframe border=0 name=iframe src="html/wedding.html" width="950" height="633" scrolling="no" noresize frameborder="0" onload="document.getElementById('loadImg').style.display='none';"></iframe>
This worked for me. It shows a loading Gif animation before the page is loaded in my iframe box. :) | unknown | |
d93 | train | Close to a duplicate of many other questions here, like this one.
You need to set the types used in your chain in your polkadot.js API-consuming apps: https://polkadot.js.org/docs/api/start/types.extend | unknown |
d94 | train | REPLACE is used when you want an altogether different set of columns for your table. If not, it is better to rename the column_name with the CHANGE option in the ALTER statement.
A: The ALTER TABLE <TableName> REPLACE COLUMNS removes all existing columns and adds the new set of columns.
ALTER TABLE <TableName> REPLACE COLUMNS
(EID INT,
EName STRING);
REPLACE COLUMNS
For your scenario you can make use of ALTER TABLE <TableName> CHANGE <ColumnName>
ALTER TABLE <TableName> CHANGE ID EID INT;
This page will give you lots of information: ALTER COLUMNS | unknown |
d95 | train | This assumes SQL Server, but the SqlParameter type could be changed to match the connection type. As items are added to this list the data type would have to be identified.
Imports System.Data.SqlClient
Dim Params As New List(Of SqlParameter)
Public Property ParameterList() As List(Of SqlParameter)
Get
Return Params
End Get
Set(ByVal value As List(Of SqlParameter))
Params = value
End Set
End Property
You'll have to loop through the list and add each parameter to a command object. | unknown | |
d96 | train | Your upload.php-file is already in the api-folder, just change this line:
define('DESTINATION_FOLDER','../api/upload/');
to:
define('DESTINATION_FOLDER','upload/');
And it should work. You have quite a few weird variables, unused variables etc., so there might be other things wrong as well.
A: I've managed to make it work. It cost me 4 days of work, but now it works.
You have to be careful in some parts of your code:
Html
Do not add enctype="multipart/form-data", or if you do, remember to add the rewrite rule below.
"multipart/form-data" makes Dropzone AUTOMATICALLY send an OPTIONS request.
Virtual host, or a .ini / .htaccess with XAMPP/WAMP/Vagrant:
<VirtualHost *:80>
DocumentRoot "#folderOfyourWebsite"
ServerName yoursite.name
<Directory "#path To The HTML of Your Index Page">
Options Indexes FollowSymLinks MultiViews ExecCGI
AllowOverride Authconfig FileInfo
Require all granted
</Directory>
Header set Access-Control-Allow-Origin "*"
Header set Access-Control-Allow-Methods "POST, GET, OPTIONS, DELETE, PUT"
Header set Access-Control-Max-Age "1000"
Header set Access-Control-Allow-Headers "x-requested-with, Content-Type, origin, authorization, accept, client-security-token"
RewriteEngine On
RewriteCond %{REQUEST_METHOD} OPTIONS
RewriteRule ^(.*)$ $1 [R=200,L]
</VirtualHost>
In the rewrite rule we're telling Apache: if you receive an OPTIONS request, answer it with a 200 (successful response).
So your to-do list can be recapped as:
*
*Check your HTML Dropzone form; if you have enctype you'll have to write
a rewrite rule in your vhost or .htaccess
*Did you write a rewrite rule in order to make CORS successful?
*Make sure your paths have all permissions and your upload folder exists
*Check your static paths wherever you are doing an HTTP request | unknown |
d97 | train | As per my knowledge, one approach to solving the issue is using ConstructUsing, as in the code below:
cfg.CreateMap<Address, AddressDto>();
cfg.CreateMap<AddressExtended, AddressExtendedDto>();
cfg.CreateMap<IAddress, IAddressDto>().ConstructUsing((IAddress addressDto) =>
{
if (addressDto is AddressExtended) return Mapper.Map<AddressExtendedDto>(addressDto);
return Mapper.Map<AddressDto>(addressDto);
});
Edit 1:
Here is the final answer, and it solves your problem:
cfg.CreateMap<Address, AddressDto>();
cfg.CreateMap<AddressExtended, AddressExtendedDto>();
cfg.CreateMap<IAddress, IAddressDto>().ConstructUsing((addressDto, ctx) =>
{
var destination = Mapper.Instance.ConfigurationProvider.GetAllTypeMaps()
.First(t => t.SourceType == addressDto.GetType());
return ctx.Mapper.Map(addressDto, addressDto.GetType(), destination.DestinationType) as IAddressDto;
});
Instead of getting the destination type using LINQ, you can build a dictionary and read from it for faster execution. | unknown |
d98 | train | I wonder if a good way might be to go to mxtoolbox, do a blacklist test, then get a list of blacklist sites and see if you can contact them to get a list?
I suspect that such companies may consider those datasets their intellectual property and probably won't publish these - it may not be possible.
Good luck!
Also Akismet may have such a dataset?
Additionally, the more powerful email-classifying software works by using patterns that you can make. Check out MailMarshall88 for example. You could use this to build your own dataset, but remember that just because someone is on a blacklist today doesn't mean that they're always bad. For example, you might get a virus outbreak in your company which spams people and gets your IP blacklisted. You then fix the virus but are now incorrectly blacklisted. In this scenario a pattern would work much better. | unknown |
d99 | train | Emacs lisp only has dynamic scoping. There's a lexical-let macro that approximates lexical scoping through a rather terrible hack.
A: Found another solution with lexical-let
(defun foo (n)
(lexical-let ((n n)) #'(lambda() n)))
(funcall (foo 10)) ;; => 10
A: Emacs 24 has lexical binding.
http://www.emacswiki.org/emacs/LexicalBinding
A: ;; -*- lexical-binding:t -*-
(defun create-counter ()
(let ((c 0))
(lambda ()
(setq c (+ c 1))
c)))
(setq counter (create-counter))
(funcall counter) ; => 1
(funcall counter) ; => 2
(funcall counter) ; => 3 ...
A: Real (Not Fake) Closures in Emacs 24.
Although Emacs 24 has lexical scoping when the variable lexical-binding has value t, the defun special form doesn't work properly in lexically bound contexts (at least not in Emacs 24.2.1). This makes it difficult, but not impossible, to define real (not fake) closures. For example:
(let ((counter 0))
(defun counting ()
(setq counter (1+ counter))))
will not work as expected because the symbol counter in the defun will be bound to the global variable of that name, if there is one, and not the lexical variable defined in the let. When the function counting is called, if the global variable doesn't exist then it will obviously fail. However, if there is such a global variable it will be updated, which is probably not what was intended and could be a hard-to-trace bug since the function might appear to be working properly.
The byte compiler does give a warning if you use defun in this way and presumably the issue will be addressed in some future version of Emacs, but until then the following macro can be used:
(defmacro defun** (name args &rest body)
"Define NAME as a function in a lexically bound context.
Like normal `defun', except that it works correctly in lexically
bound contexts.
\(fn NAME ARGLIST [DOCSTRING] BODY...)"
(let ((bound-as-var (boundp `,name)))
(when (fboundp `,name)
(message "Redefining function/macro: %s" `,name))
(append
`(progn
(defvar ,name nil)
(fset (quote ,name) (lambda (,@args) ,@body)))
(if bound-as-var
'nil
`((makunbound `,name))))))
If you define counting as follows:
(let ((counter 0))
(defun** counting ()
(setq counter (1+ counter))))
it will work as expected and update the lexically bound variable counter every time it is invoked, while returning the new value.
CAVEAT: The macro will not work properly if you try to defun** a function with the same name as one of the lexically bound variables. I.e if you do something like:
(let ((dont-do-this 10))
(defun** dont-do-this ()
.........
.........))
I can’t imagine anyone actually doing that but it was worth a mention.
Note: I have named the macro defun** so that it doesn’t clash with the macro defun* in the cl package, however it doesn’t depend in any way on that package.
A: Stupid idea: how about:
(defun foo (x)
`(lambda () ,x))
(funcall (foo 10)) ;; => 10
A: http://www.emacswiki.org/emacs/FakeClosures | unknown | |
d100 | train | toolbox-loader only imports the _colors.scss file (see first code line)
you have to import the _globals.scss file manually (or fork toolbox-loader). | unknown |
To evaluate with the CoIR evaluation framework's version of this dataset, use the code below:
import coir
from coir.data_loader import get_tasks
from coir.evaluation import COIR
from coir.models import YourCustomDEModel
model_name = "intfloat/e5-base-v2"
# Load the model
model = YourCustomDEModel(model_name=model_name)
# Get tasks
# all tasks: ["codetrans-dl", "stackoverflow-qa", "apps", "codefeedback-mt", "codefeedback-st",
#             "codetrans-contest", "synthetic-text2sql", "cosqa", "codesearchnet", "codesearchnet-ccr"]
tasks = get_tasks(tasks=["codetrans-dl"])
# Initialize evaluation
evaluation = COIR(tasks=tasks, batch_size=128)
# Run evaluation
results = evaluation.run(model, output_folder=f"results/{model_name}")
print(results)