_id | partition | text | language | title
---|---|---|---|---|
d901 | train | Just use TRY_TO_DATE, it will return NULL for values where it can't parse the input.
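For example (a sketch using the table and column names that appear in the answers below; TRY_TO_DATE also accepts an optional format argument):
SELECT TRY_TO_DATE(col4, 'YYYYMMDD') AS date
FROM df_old;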
A: If you are certain that all of your values other than NULLs are of the form 'yyyymmdd', then the following will work in Snowflake.
TO_DATE(TO_CHAR(datekey),'yyyymmdd')
A: Sounds like some of your col4 entries are NULL or empty strings. Try maybe
... FROM df_old WHERE col4 ~ '[0-9]{8}'
to select only valid inputs. To be clear, if you give the format as 'YYYYMMDD' (you should use uppercase) an entry in col4 has to look like '20190522'.
A: All dates are stored in an internal format.
You cannot influence how a date is STORED. However, you can format it the way you want when you pull the data from the database via a query.
A: I was able to resolve by the following:
CREATE OR REPLACE TABLE df_new
AS SELECT
col1 AS NAME
,col2 AS first_name
,col3 AS last_name
,CASE WHEN col4 = '' THEN NULL ELSE TO_DATE(col4, 'YYYYMMDD') END AS date
FROM df_old
A: I resolved the issue as follows:
CREATE OR REPLACE TABLE df_new
AS SELECT
col1 AS NAME
,col2 AS first_name
,col3 AS last_name
,CASE WHEN col4 = '' THEN NULL ELSE TO_DATE(col4, 'YYYY-MM-DD') END AS date
FROM df_old | unknown | |
d902 | train | The keyword for your case is 'Service Instance'
You can create a service instance of a database server within the environment specific to your application and bind it via the application manifest.
e.g.
cf create-service rabbitmq small-plan myapplication-rabbitmq-instance
As long as you have a binding to myapplication-rabbitmq-instance in your application manifest, it will be preserved and stay the same between application deployments within this space.
e.g. in your application manifest:
---
...
services:
- myapplication-rabbitmq-instance
More on https://docs.cloudfoundry.org/devguide/services/ | unknown | |
d903 | train | Change the normalized_term function:
def normalized_term(document):
result = []
for term in document:
if term in normalizad_word_dict:
for word in normalizad_word_dict[term].split(' '):
result.append(word)
else:
result.append(term)
return result
Or if you want to use inline loop:
import itertools
def normalized_term(document):
return list(itertools.chain(*[normalizad_word_dict[term].split() if term in normalizad_word_dict else term.split() for term in document])) | unknown | |
d904 | train | For starters, there are no overloaded functions in this code. The declaration of update in the derived class hides the declaration of the function with the same name in the base class.
As the member function add is declared in the base class, the name update is also looked up in the base class.
Declare the function update as a virtual function.
class Hand
{
public:
// ...
virtual void update();
};
class StandHand : public Hand
{
public:
// ...
void update() override;
};
A: There are two issues in your code. First, void update(); in the base class needs to be declared virtual, so that the compiler will know that derived classes might override it and use it.
virtual void update();
// add "=0;" if there is no base version of update()
And in the derived class write
void update() override;
//override is unnecessary but recommended
//it verifies that you do indeed override a method from a base class
Second problem: you cannot call methods of a derived class in the constructor of its base class. Calling a virtual method there results in calling the base version instead of the derived version. Think of it: the derived object isn't constructed yet, so calling one of its methods doesn't make any sense.
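A minimal sketch of that second pitfall, reusing the class names from above (the base constructor calling the virtual method is the hypothetical part):
#include <iostream>
class Hand
{
public:
    Hand() { update(); }   // virtual dispatch stops at Hand while Hand is being constructed
    virtual void update() { std::cout << "Hand::update\n"; }
};
class StandHand : public Hand
{
public:
    void update() override { std::cout << "StandHand::update\n"; }
};
int main()
{
    StandHand hand;   // prints "Hand::update", not "StandHand::update"
}
| unknown | |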
d905 | train | When you remove the option at i, you're shuffling all the other options down; so now, the next option is at i. But then because you're using a for loop, you're incrementing i, and you never look at the option after the one you removed.
Instead, use a while loop and only increment i if you don't remove the option.
var selectobject = document.getElementById("os0"); //this is the select
var i = 0;
while (i < selectobject.length) {
if (selectobject.options[i].value != <?php echo $people; ?> ){
selectobject.remove(i);
alert(i);
} else {
++i;
}
}
Live Example using 3 (and without the alert):
var people = 3;
var selectobject = document.getElementById("os0"); //this is the select
var i = 0;
while (i < selectobject.length) {
if (selectobject.options[i].value != people){
selectobject.remove(i);
} else {
++i;
}
}
<select name="os0" id="os0">
<option id="1" value="1">1 β¬123,97 EUR</option>
<option id="2" value="2">2 β¬249,94 EUR</option>
<option id="3" value="3">3 β¬371,91 EUR</option>
<option id="4" value="4">4 β¬495,88 EUR</option>
<option id="5" value="5">5 β¬619,85 EUR</option>
<option id="6" value="6">6 β¬743,82 EUR</option>
<option id="7" value="7">7 β¬867,79 EUR</option>
<option id="8" value="8">8 β¬991,76 EUR</option>
<option id="9" value="9">9 β¬1.115,73 EUR</option>
<option id="10" value="10">10 β¬1.239,70 EUR</option>
</select>
A: When you remove items while iterating over them, the element currently processed in i will be offset each time an item is removed.
If you remove them in reverse you shouldn't have the issue. See the example below, where the element with value 9 is the one to keep.
document.addEventListener('DOMContentLoaded', function() {
var selectobject = document.getElementById("os0"); //this is the select
for (var i = selectobject.length - 1; i >= 0; i--) {
if (selectobject.options[i].value != 9) {
selectobject.remove(i);
}
}
}, false);
<input type="hidden" name="on0" value="people">people</td>
</tr>
<tr>
<td><select name="os0" id="os0">
<option id="1" value="1">1 β¬123,97 EUR</option>
<option id="2" value="2">2 β¬249,94 EUR</option>
<option id="3" value="3">3 β¬371,91 EUR</option>
<option id="4" value="4">4 β¬495,88 EUR</option>
<option id="5" value="5">5 β¬619,85 EUR</option>
<option id="6" value="6">6 β¬743,82 EUR</option>
<option id="7" value="7">7 β¬867,79 EUR</option>
<option id="8" value="8">8 β¬991,76 EUR</option>
<option id="9" value="9">9 β¬1.115,73 EUR</option>
<option id="10" value="10">10 β¬1.239,70 EUR</option>
</select> | unknown | |
d906 | train | You'll need the div to have position fixed instead of absolute.
Fiddle: http://jsfiddle.net/hqkm7/
A: <\span style="position: absolute; bottom: 0pt; right: 0pt;">Load time: 1.1920928955078E-5 seconds<\/span>
should be
<span style="position: absolute; bottom: 0pt; right: 0pt;">Load time: 1.1920928955078E-5 seconds</span>
A: You need to use position:fixed instead of position:absolute. position:absolute scrolls with the page, while position:fixed does not (it takes your span out of the flow of the page and pins it to the viewport). | unknown | |
d907 | train | Use a slash at the beginning, like
<img src="/images/header.jpg" width="790" height="228" alt="" />
You can also use image_tag (which is better for routing)
image_tag('/images/header.jpg', array('alt' => __("My image")))
In the array with parameters you can add all HTML attributes like width, height, alt etc.
P.S. It's not easy to learn Symfony; it takes much more time.
A: If you don't want a fully PHP generated image tag but just want the correct path to your image, you can do :
<img src="<?php echo image_path('header.jpg'); ?>" width="700" height="228" alt="" />
Notice that the path passed to image_path excludes the /images part, as this is automatically determined and created for you by Symfony; all you need to supply is the path to the file relative to the image directory.
You can also get to your image directory a little more crudely using
sfConfig::get('sf_web_dir')."/images/path/to/your/image.jpg"
It should be noted that using image_tag has a much larger performance cost attached to it than using image_path as noted on thirtyseven's blog
Hope that helps :) | unknown | |
d908 | train | It should be:
def str1 = 'C:\\mkjk\\sys' // single quotes
or
def str1 = "C:\\mkjk\\sys" // double quotes
or
def str1 = """C:\\mkjk\\sys""" // three double quotes (multiline string)
or
def str = '''C:\\mkjk\\sys''' // three single quotes (multiline string)
or
def str1 = /C:\mkjk\sys/ // forward slashes (slashy string) | unknown | |
d909 | train | Git has self-detected an internal error. Report this to the Git mailing list (git@vger.kernel.org). The output from git config --list --show-origin may also be useful to the Git maintainers, along with the output of git ls-remote on the remote in question (origin, probably). (The bug itself is in your Windows Git; the server version of Git should be irrelevant, but it won't hurt to mention that too.)
A: Based on reporting the problem to the Git team, it was caused by a branch with an empty name "". After removing this from .git/config, pushing works again.
The problem has also been passed on to the Git team and will probably be solved in a future version. | unknown | |
d910 | train | SQL Fiddle Demo
SELECT FC, MAX(RC) RC, aa
FROM YourTable
GROUP BY FC, aa
OUTPUT
| FC | RC | aa |
|-----|----|----|
| F90 | NA | 13 |
| F90 | OT | 48 |
| F92 | SA | 1 |
| F93 | EU | 2 |
| F93 | GT | 16 |
| F94 | AP | 2 | | unknown | |
d911 | train | Install the btree_gist contrib module.
Then you have a gist_int8_ops operator class that you can use to create a GiST index on a bigint column.
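A minimal sketch (the table and column names are made up):
CREATE EXTENSION btree_gist;
CREATE INDEX events_id_gist ON events USING gist (id);
Once the extension is installed, gist_int8_ops is the default GiST operator class for bigint, so you normally don't need to name it explicitly.
| unknown | |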
d912 | train | Page 1
constructor(public nav: NavController){}
pushToNextScreenWithParams(pageUrl: any, params: any) {
this.nav.navigateForward(pageUrl, { state: params });
}
Page 2
constructor(public router: Router){
if (router.getCurrentNavigation().extras.state) {
const pageName = this.router.getCurrentNavigation().extras.state;
console.log(pageName)
}
}
A: So after looking at a variety of sources: if you're only passing data forward and you don't care about passing data back, here's an example:
// from
export class SomePage implements OnInit {
constructor (private nav: NavController) {}
private showDetail (_event, item) {
this.nav.navigateForward(`url/${item.uid}`, { state: { item } })
}
}
// to
export class SomeOtherPage implements OnInit {
item: any
constructor (private route: ActivatedRoute, private router: Router) {
this.route.queryParams.subscribe(_p => {
const navParams = this.router.getCurrentNavigation().extras.state
if (navParams) this.item = navParams.item
})
}
}
Hope that's clear.
A: I found a solution for passing the parameters between the pages using the 'ActivatedRoute' and the 'Router' in "@angular/router". Here we can use the URL to pass the parameters. The following YouTube video will help to solve this problem:
https://youtu.be/C6LmKCSU8eM
A: The easiest way is to pass it to a service in the first page, and pick it up from the same service on the next page.
First, create a service and then declare a public variable inside it like this:
public result: any;
then go down and declare a function to change this variable at any time:
changeData(result){
this.result = result;
}
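Putting those pieces together, a minimal sketch of such a service (the class name UtilService is hypothetical):
import { Injectable } from '@angular/core';

@Injectable({ providedIn: 'root' })
export class UtilService {
  public result: any;
  changeData(result: any) {
    this.result = result;
  }
}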
Second
go to the page where you want to pass the data and pass it like below,
using the name of the service and the public variable.
this.util.changeData(data);
note: data here is the data you want to pass
Thirdly
you can pick the data from anywhere on your app,
for example, I am accessing the data from my view like below:
{{this.util.result}} | unknown | |
d913 | train | You really shouldn't rely on the output of ls in this way, since you can have filenames with embedded spaces, newlines and so on.
Thankfully there's a way to do this in a more reliable manner:
i=0
for fspec in *pattern_* ; do
((i = i + 1))
doSomethingWith "$(printf "%03d" $i)"
done
This loop will run exactly once per matching file, regardless of the weirdness of said files. It will also use the correctly formatted (three-digit) argument as requested. | unknown | |
d914 | train | Sure there is. This is how all the 3rd-party packages we all use are distributed.
The official PyPA packaging guide explains how to do it here.
Basically you need to package your project into a wheel file and upload it to the PyPI repository. To do this you need to declare (mainly in setup.py) your package name, version, which sub-packages you want to pack into the wheel, etc.
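As a minimal sketch (the package name and version are placeholders), a setup.py could look like this:
from setuptools import setup, find_packages

setup(
    name='my-package',
    version='0.1.0',
    packages=find_packages(),
)
You can then build the wheel with python setup.py bdist_wheel and upload it with twine upload dist/*.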
A: If your packages are required for a particular project, it is straightforward to contain them in the Git repository. You can put them in the directory named wheelhouse, which comes from the name of the previous default directory created by pip wheel.
If you put the private package foo in the wheelhouse, you can install as follows:
pip install foo -f wheelhouse | unknown | |
d915 | train | Depending on the testing framework you are using (JUnit or TestNG), you can use the concept of soft assertions. Basically it will collect all the errors and throw an assertion error if something is amiss.
To fail a scenario you just need an assertion to fail, no need to set the status of the scenario. Cucumber will take care of that if an assertion fails.
For TestNG you can use the SoftAssert class - http://testng.org/javadocs/org/testng/asserts/SoftAssert.html You will find plenty of tutorials for this. A call to assertAll() will trigger the verification of all stored assertions.
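A minimal sketch with TestNG's SoftAssert (the values being checked are hypothetical):
import org.testng.asserts.SoftAssert;

SoftAssert softly = new SoftAssert();
softly.assertEquals(actualTitle, expectedTitle, "title mismatch");
softly.assertTrue(orderSaved, "order was not saved");
softly.assertAll(); // throws here if any of the checks above failed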
For JUnit you can use the ErrorCollector Rule class -
http://junit.org/junit4/javadoc/4.12/org/junit/rules/ErrorCollector.html As Cucumber does not support the @Rule annotation, you need to inherit from this class and override the verify method to change its modifier to public in place of protected. Create an instance of the new class and add the assertions. A call to the verify method will start the verification.
A: QAF provides assertion and verification concepts: on an assertion failure the scenario exits with failed status, while on a verification failure the scenario continues to the next step, and the final status of the step/scenario is failed if there are one or more verification failures.
You can also set a step's status to failure using a step listener, which results in test failure. With a step listener you can also continue even after a step failure by converting the exception to a verification failure.
A: It is not a good idea to continue executing steps after a step failure because a step failure can leave the World with an invariant violation. A better strategy is to increase the granularity of your scenarios. Instead of writing a single scenario with several "Then" statements, use a list of examples to separately test each postcondition. Sometimes a scenario outline and a list of examples can consolidate similar stories. https://cucumber.io/docs/reference#scenario-outline
There is some discussion about adding a feature to tag certain steps to continue after failure. https://github.com/cucumber/cucumber/issues/79
There are some other approaches to continuing through scenario steps after a failure here: continue running cucumber steps after a failure | unknown | |
d916 | train | Just a partial idea.
The DFT is separable. It is always computed by first applying the FFT algorithm to rows of the image, then to the columns of the result (or the other way around, the order doesn't matter).
If you want only an ROI of the output, in the second step you only need to process the columns that fall within the ROI.
I don't think you'll find a way to compute only a subset of frequencies along each 1D row/column. That would entail hacking your own FFT, which would likely be more computationally expensive than using the one in IPP or FFTW.
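A sketch of that idea in NumPy, assuming img is the input image and c0:c1 is the column range of the output ROI:
import numpy as np

rows = np.fft.fft(img, axis=1)            # first pass: FFT of every row
roi = np.fft.fft(rows[:, c0:c1], axis=0)  # second pass: only the ROI columns
| unknown | |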
d917 | train | java.lang.Thread.setDefaultUncaughtExceptionHandler(UncaughtExceptionHandler handler)
Is this what you want?
A: Extend Application class
import android.app.Application;
import android.util.Log;
public class MyApplication extends Application {
@Override
public void onCreate() {
super.onCreate();
Thread.setDefaultUncaughtExceptionHandler(new UncaughtExceptionHandler() {
@Override
public void uncaughtException(Thread thread, Throwable ex) {
Log.e("MyApplication", ex.getMessage());
}
});
}
}
Add the following line in AndroidManifest.xml file as Application attribute
android:name=".MyApplication" | unknown | |
d918 | train | It's all in the documentation. If you want custom contexts, you have to add them first:
$this->_helper
->getHelper('contextSwitch')
->addContext('print', array(
// context options go here
))
->addActionContext('history', 'print')
// more addActionContext()s goes here
->initContext();
A: What you might do instead of using a context for the print view is just have a parameter in the URL like /print/1. Then in the controller action, check to see if that parameter is true, and if it is, render the "print" view script instead of the regular view script. | unknown | |
d919 | train | Can you please try the below code. You made a small mistake in the if condition.
d={}
for row, item in enumerate(df['Messung']):
key=item[0:2]
key = "RP_"+key
if key not in d:
d[key] = []
d[key].append(df.iloc[row])
Also, you can use Python's setdefault(). Then your code looks as below:
d={}
for row, item in enumerate(df['Messung']):
key=item[0:2]
key = "RP_"+key
d.setdefault(key, []).append(df.iloc[row])
A: While trying your solution I noticed I can even delete the line with key=item[0:2] and directly build my key with 'RP_' and item[0:2]:
d={}
for row, item in enumerate(df['Messung']):
key = "RP_"+item[0:2]
d.setdefault(key, []).append(df.iloc[row]) | unknown | |
d920 | train | Select the whole sheet, right click and then select Format Cells.... In the popup window, select the Protection tab. Unselect both options and press the OK button. This will unlock all cells on the sheet, as by default all cells are locked. Next, select your range and repeat the above process, but this time ensure that both options (Locked and Hidden) are selected, and press OK. Now protect your sheet (in Excel 2013, select the REVIEW tab, select the Protect Sheet option and follow the steps).
This will hide your formulas and stop anyone from changing the values in the protected cells. | unknown | |
d921 | train | preamble
repeating notes I left as a comment on the question, because I'm not sure there was enough emphasis placed on these points:
"I don't think the slowness is due to three separate statements."
"It looks like the statements have the potential to churn through a lot of rows, even with appropriate indexes defined."
"Use EXPLAIN to see the execution plan, and ensure suitable indexes are being used, ..."
answer
The DELETE statement can't be combined with an UPDATE statement. SQL doesn't work like that.
It might be possible to combine the two UPDATE statements, if those are visiting the same rows, and the conditions are the same, and the second UPDATE is not dependent on the preceding UPDATE and DELETE statement.
We see the first UPDATE statement requires a matching row from bidbutler table. The second UPDATE statement has no such requirement.
We notice that the predicates in the WHERE clause are negating the "outerness" of the LEFT JOIN. (If the original statements are correctly implemented and are performing the required operation, then we can eliminate the LEFT keyword.)
We also find a condition a.auctionID=a.auctionID which boils down to a.auctionID IS NOT NULL. We already have other conditions that require a.auctionID to be non-NULL. (Why is that condition included in the statement?)
We also see a condition repeated: b.auc_id=c.auction_id appearing in both the ON clause and the WHERE clause. That condition only needs to be specified once. (Why is it written like that? Maybe something else was intended?)
The first UPDATE statement could be rewritten into the equivalent:
UPDATE auction a
JOIN auc_due_table d
ON d.auction_id = a.auctionID
AND d.auc_due_time = a.total_time
AND d.auc_due_price <> a.auc_final_price
LEFT
JOIN bidbutler b
ON b.auc_id = a.auctionID
AND b.auc_id = d.auction_id
AND b.butler_status <> 0
SET b.butler_status = 0
WHERE a.auc_status = 3
The second UPDATE statement can be rewritten into an equivalent:
UPDATE auction a
JOIN auc_due_table d
ON d.auction_id = a.auctionID
AND d.auc_due_time = a.total_time
AND d.auc_due_price <> a.auc_final_price
SET a.auc_status = 2
WHERE a.auc_status = 3
The difference is the extra outer join to bidbutler table, and the SET clause.
But before we combine these, we need to decipher whether the operations performed (in the first UPDATE or the DELETE statement) influence the second UPDATE statement. (If we run these statements in a different order, do we get a different outcome?)
A simple example to illustrate the type of dependency we're trying to uncover:
UPDATE foo SET foo.bar = 1 WHERE foo.bar = 0;
DELETE foo.* FROM foo WHERE foo.bar = 0;
UPDATE foo SET foo.qux = 2 WHERE foo.bar = 1;
In the example, we see that the outcome is (potentially) dependent on the order the statements are executed. The first UPDATE statement will modify the rows that won't be removed by the DELETE. If we were to run the DELETE first, that would remove rows that would have been updated... the order the statements are executed in influences the result.
Back to the original statements in the question. We see auc_status column being set, but also a condition on the same column.
If there are no dependencies between the statements, then we could re-write the two UPDATE statements into a single statement:
UPDATE auction a
JOIN auc_due_table d
ON d.auction_id = a.auctionID
AND d.auc_due_time = a.total_time
AND d.auc_due_price <> a.auc_final_price
LEFT
JOIN bidbutler b
ON b.auc_id = a.auctionID
AND b.auc_id = d.auction_id
AND b.butler_status <> 0
SET b.butler_status = 0
, a.auc_status = 2
WHERE a.auc_status = 3 | unknown | |
d922 | train | You are using txtAddress : OleVariant but without any structure behind it. So you cannot use something like txtAddress.text, because there is nothing it can be mapped to.
Simply change the type to string; there is no need for txtAddress to be of type OleVariant.
procedure TForm1.FormCreate(Sender: TObject);
Const
NET_FW_IP_PROTOCOL_TCP = 6;
NET_FW_IP_PROTOCOL_UDP = 17;
NET_FW_ACTION_BLOCK = 0;
NET_FW_ACTION_ALLOW = 1;
NET_FW_RULE_DIR_IN = 1;
var
CurrentProfiles : OleVariant;
fwPolicy2 : OleVariant;
RulesObject : OleVariant;
NewRule : OleVariant;
txtAddress : string; // OleVariant;
begin
// Create the FwPolicy2 object.
fwPolicy2 := CreateOleObject('HNetCfg.FwPolicy2');
RulesObject := fwPolicy2.Rules;
CurrentProfiles := fwPolicy2.CurrentProfileTypes;
txtaddress{.text}:='192.168.1.33';
//Create a Rule Object.
NewRule := CreateOleObject('HNetCfg.FWRule');
Newrule.Name := 'BrutalNT: IP Access Block ' + txtAddress{.Text};
Newrule.Description := 'Block Incoming Connections from IP Address.';
Newrule.Action := NET_FW_ACTION_BLOCK{1};
Newrule.Direction := NET_FW_RULE_DIR_IN;
Newrule.Enabled := true;
Newrule.InterfaceTypes := 'All';
Newrule.RemoteAddresses := txtAddress{.Text};
//Add a new rule
RulesObject.Add(NewRule);
end;
BTW If you want to block you have to set NewRule.Action := 0; (NET_FW_ACTION_BLOCK) | unknown | |
d923 | train | You'll want to read up on the offline_access permission.
https://developers.facebook.com/docs/reference/api/permissions/
With this permission, you'll be able to query Facebook for information about one of your users even when that user is offline. It gives you a "long-lived" access token. This token does expire after a while or if the user changes his/her Facebook password.
A: I would suggest looking into the Facbook Realtime API https://developers.facebook.com/docs/reference/api/realtime/
You can subscribe to different user fields (e.g. user.friends), and whenever these fields update, FB hits your server. It doesn't say whether you can subscribe to user.friendlists or not, but it would be worth a try.
With regards to the answer from Lix; the offline_access permission is being deprecated. See here: https://developers.facebook.com/docs/offline-access-deprecation/ | unknown | |
d924 | train | Your program is perfectly correct.
The error message -bash: syntax error near unexpected token 'newline' is produced by bash, the command line interpreter, not the compiler.
There are a few potential reasons for this, but here is the most likely:
* You are running the program with bash instead of having the system execute the binary, which is what happens if you typed . ./prog or . prog instead of ./prog. | unknown | |
d925 | train | Maybe something like:
Espresso.onView(withId(R.id.tv))
.perform(object :ViewAction{
override fun getDescription(): String {
return "Normalizing the string"
}
override fun getConstraints(): Matcher<View> {
return isAssignableFrom(TextView::class.java)
}
override fun perform(uiController: UiController?, view: View?) {
val tv = view as TextView
if (tv.text.matches(Regex("You saved (.)*? with (.)*"))) {
tv.text = "You saved %1\$s with %2\$s"
}
}
}).check(matches(withText(R.string.saving))) | unknown | |
d926 | train | I took what EasyJoin Dev said and tweaked it a little: I created a RelativeLayout using the layout_toEndOf and layout_below options, and then in the activity's onCreate method I overrode the width and height programmatically to get my percentage-based sizing.
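A minimal sketch of the programmatic part inside onCreate (the view id and the 40% factor are made up):
View v = findViewById(R.id.some_view);
ViewGroup.LayoutParams lp = v.getLayoutParams();
lp.width = (int) (getResources().getDisplayMetrics().widthPixels * 0.4f); // 40% of screen width
v.setLayoutParams(lp);
| unknown | |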
d927 | train | Demo Fiddle
You were very close:
body {
counter-reset: listCounter;
}
ol {
counter-increment: listCounter;
counter-reset: itemCounter;
list-style:none;
}
li{
counter-increment: itemCounter;
}
li:before {
content: counter(listCounter) "." counter(itemCounter);
left:10px;
position:absolute;
} | unknown | |
d928 | train | You stated you are using an MPU6050, which contains both an accelerometer and a gyroscope. You could use them independently - get acceleration from the accelerometer and angles from the gyroscope, and then use the angles to compensate for rotation. There is no need for the angle to depend on your accelerometer.
A: Using the DMP library from Jeff Rowberg will do the work for you.
It can compensate for gravity acceleration internally, much faster than Arduino code.
Link to github | unknown | |
d929 | train | Use an nginx reverse proxy to route based on the URL, pointing to your different applications.
You can maintain the same IP for all of them.
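A minimal sketch of such a config (the domain, paths and ports are hypothetical):
server {
    listen 80;
    server_name example.com;

    location /app1/ { proxy_pass http://127.0.0.1:3001/; }
    location /app2/ { proxy_pass http://127.0.0.1:3002/; }
}
| unknown | |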
d930 | train | You can try the following code :
int pos = Array.IndexOf(arrString, lookupValue.LongName);
if (pos > -1)
{
// DO YOUR STUFF
}
Following is the reference:
Checking if a string array contains a value, and if so, getting its position | unknown | |
d931 | train | One of the simplest ways to back up a MySQL database is by creating a dump file. And that is what mysqldump is for. Please read the documentation for mysqldump.
In its simplest syntax, you can create a dump with the following command:
mysqldump [connection parameters] database_name > dump_file.sql
where the [connection parameters] are those you need to connect your local client to the MySQL server where the database resides.
mysqldump will create a dump file: a plain text file which contains the SQL instructions needed to create and populate the tables of a database. The > character will redirect the output of mysqldump to a file (in this example, dump_file.sql). You can, of course, compress this file to make it easier to handle.
You can move that file wherever you want.
To restore a dump file:
* Create an empty database (let's say restore) in the destination server
* Load the dump:
mysql [connection parameters] restore < dump_file.sql
There are, of course, some other "switches" you can use with mysqldump. I frequently use these:
* -d: this will tell mysqldump to create an "empty" backup: the tables and views will be exported, but without data (useful if all you want is a database "template")
* -R: include the stored routines (procedures and functions) in the dump file
* --delayed-insert: uses insert delayed instead of insert for populating tables
* --disable-keys: encloses the insert statements for each table between alter table ... disable keys and alter table ... enable keys; this can make inserts faster
You can include the mysqldump command and any other compression and copy / move command in a batch file.
A: My solution to extract a backup and push it onto Dropbox is as below.
A sample Ubuntu batch file can be downloaded here.
In brief
* Prepare a batch script backup.sh
* Run backup.sh to create a backup version e.g. backup.sql
* Copy backup.sql to the Dropbox folder
* Schedule an Ubuntu/Windows task to run backup.sh e.g. every day at night
Detail steps
* All about backing up and restoring a MySQL database can be found here.
Back up to compressed file
mysqldump -u [uname] -p [dbname] | gzip -9 > [backupfile.sql.gz]
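To restore from such a compressed dump, pipe it back through gunzip (same placeholders as above):
gunzip < [backupfile.sql.gz] | mysql -u [uname] -p [dbname]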
* How to remote in from Windows to execute the 'backup' command can be found here.
plink.exe -ssh -pw -i "Path\to\private-key\key.ppk" -noagent username@server-ip
* How to bring the file to Dropbox can be found here
Create a app
https://www2.dropbox.com/developers/apps
Add an app and choose Dropbox API App. Note the created app key and app secret
Install Dropbox API in Ubuntu; use app key and app secret above
$ wget https://raw.github.com/andreafabrizi/Dropbox-Uploader/master/dropbox_uploader.sh
$ chmod +x dropbox_uploader.sh
Follow the instruction to authorize access for the app e.g.
http://www2.dropbox.com/1/oauth/authorize?oauth_token=XXXXXXX
Test the app if it is working right - should be ok
$ ./dropbox_uploader.sh info
The app is created and the folder associated with it is YourDropbox\Apps\<app name>
Commands to use
List files
$ ./dropbox_uploader.sh list
Upload file
$ ./dropbox_uploader.sh upload <filename> <dropbox location>
e.g.
$ ./dropbox_uploader.sh upload backup.sql .
This will store file backup.sql to YourDropbox\Apps\<app name>\backup.sql
Done
* How to schedule a task on Ubuntu using crontab can be viewed here
Call command
sudo crontab -e
Insert a line to run the backup.sh script every day, as below
0 0 * * * /home/userName/pathTo/backup.sh
Explanation:
minute (0-59), hour (0-23, 0 = midnight), day (1-31), month (1-12), weekday (0-6, 0 = Sunday), command
Or simply we can use
@daily /home/userName/pathTo/backup.sh
Note:
* To monitor crontab tasks, here is a very good guide. | unknown | |
d932 | train | What I've been using for a peak meter (a progress bar) is the following, passing in the device from my IWaveIn.DataAvailable handler:
MMDevice.AudioMeterInformation.MasterPeakValue * 100
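A minimal sketch of the wiring (waveIn, device and bar are assumed to be your IWaveIn, MMDevice and ProgressBar):
waveIn.DataAvailable += (s, e) =>
{
    // MasterPeakValue is in the 0..1 range, so scale it to the 0..100 progress bar range
    bar.Value = (int)(device.AudioMeterInformation.MasterPeakValue * 100);
};
| unknown | |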
d933 | train | You can leave the Id on the base class, and in this use case you have to configure your one-to-one relationship with the Fluent API.
protected override void OnModelCreating(DbModelBuilder modelBuilder)
{
modelBuilder.Entity<ArqAppRole>()
.HasRequired(s => s.Application)
.WithRequiredPrincipal(ad => ad.ArqAppRole);
}
The Fluent API will override your code-first conventions. But putting the Id in the base class is bad practice and you will have to find tricks everywhere. Just use the conventional way and use EF as it should be used.
More info: Code first self referencing foreign key (more than one) | unknown | |
d934 | train | I think you were missing a closing div tag for the whole code block (certainly in the code posted above, anyway), which would throw the HTML alignment out in some instances. I have corrected that in the following - though I cannot test under the circumstances in which you are using the code.
<div class='col-lg-12 col-md-12' style='display:block'>
<div class='col-xs-1 col-sm-1' style='padding:0;display:inline'>
<img src='http://i.imgur.com/8wclWry.png' width='65px' height='65px' id='sigPhoto'>
</div>
<div class='col-lg-11 col-md-10' style='padding:0; display:inline'>
<p class='col-lg-11 col-md-10' style='padding:0'>
<span id='name'>
Ta Quynh Giang <!-- Name here-->
</span>
</p>
<p class='col-lg-11 col-md-10' style='padding:0; display:inline'>
<span>
Marketing Manager - ABIVIN Vietnam, JSC.
</span>
</p>
<div class='col-lg-11 col-md-10' style='padding:0; margin-top: 5px; display:inline'>
<div class='col-md-2 col-sm-4 info' style='padding:0 ;display:inline'>
<span id='head-info'> M </span> +84 168 992 1733
</div>
<div class='col-md-2 col-sm-4 info' style='padding:0; display:inline'>
<span id='head-info'> W </span> <a href='http://abivin.com' target='_blank'>http://abivin.com</a>
</div>
</div>
<div class='col-lg-11 col-md-10' style='padding:0; margin-top: 5px; display:inline'>
<div class='col-md-2 col-sm-4 info' style='padding:0; display:inline'>
<span id='head-info'> E </span> giangta@abivin.com
</div>
<div class='col-md-3 col-sm-5 info' style='padding:0; display:inline'>
<span id='head-info'> A </span> R503, 35 Lang Ha, Hanoi, Vietnam
</div>
</div>
</div>
</div> | unknown | |
d935 | train | You don't actually have to specify any fields for the get_stats method, but the reason you're not seeing any actions is probably because you don't have any. Try it against a campaign that you know people have taken action on. :)
Evan | unknown | |
d936 | train | private const string _textBoxName = "TextBox";
The method computes the sum of the textboxes in a given range of textbox ids. Be aware this will throw an exception if the textbox texts / name ids are not integers.
private int Count(int from, int to)
{
int GetIdFromTextBox(TextBox textBox) => int.Parse(new string(textBox.Name.Skip(_textBoxName.Length).ToArray()));
var textBoxes = Controls.OfType<TextBox>().ToList();
var textBoxesWithIds = textBoxes.Select(textBox => (textBox: textBox, id: GetIdFromTextBox(textBox))).ToList();
var sum = textBoxesWithIds.Where(x => x.id >= from && x.id <= to).Sum(x => int.Parse(x.textBox.Text));
return sum;
} | unknown | |
d937 | train | I just solved this issue.
It was due to the flag android:launchMode="singleInstance" on the activity presenting the interstitial.
I think it is an AdMob bug, so please check, and if so just remove this flag to get the interstitial working.
A: I finally figured out the problem. There was no problem; it is by design. The ads load if clicked near the middle but not if clicked near the corners. That's why sometimes they seemed to randomly not click through. Took me a while to figure that out. | unknown | |
d938 | train | I was writing the hostname in the target URL which PI was not able to recognise.
I changed it to IP.
It's working fine now. | unknown | |
d939 | train | In the upcoming jParsec 2.2 release, the API makes it more clear what Terminals does:
http://jparsec.github.io/jparsec/apidocs/org/codehaus/jparsec/Terminals.Builder.html
You cannot even define your keywords without first providing a scanner that defines "words".
The implementation first uses the provided word scanner to find all words, and then identifies the special keywords on the scanned words.
So, why does it do it this way?
* If you didn't need case insensitivity, you could have passed the keywords as "operators". Yes, you read it right. One can equally use Terminals.token(op) or Terminals.token(keyword) to get the token-level parser for them. What distinguishes operators from keywords is just that keywords are "special" words. Whether they happen to be alphabet characters or other printable characters is just by convention.
* Another way to do it is to define your word scanner precisely as Parsers.or(Scanners.string("keyword1"), Scanners.string("keyword2"), ...). Then Terminals won't try to tokenize anything else.
* The above assumes that you want to do the 2-phase parsing. But that's optional. Your test shows that you weren't feeding the tokenizer to a token-level parser using Parser.from(tokenizer, delim). If two-phase parsing isn't needed, it can be as simple as: or(stringCaseInsensitive("true"), stringCaseInsensitive("false"))
More on point 3. The 2-phase parsing creates a few extra caveats in jParsec that you don't find in other parser combinators like Haskell's Parsec. In Haskell, a string is no different from a list of character. So there really isn't anything to gain by special casing them. many(char 'x') parses a string just fine.
In Java, String isn't a List or array of char. It would be very inefficient if we took the same approach and boxed each character into a Character object so that the character-level and token-level parsers could be unified seamlessly.
Now that explains why we have character level parsers at all. But it's completely optional to use the token level parsers (By that, I mean Terminals, Parser.from(), Parser.lexer() etc).
You could create a fully functional parser with only character-level parsers, a.k.a scanners.
For example: Scanners.string("true").or(Scanners.string("false")).sepEndBy1(delim)
A: From the documentation of Tokenizer#caseInsensitive:
org.codehaus.jparsec.Terminals
public static Terminals caseInsensitive(String[] ops,
String[] keywords)
Returns a Terminals object for lexing and parsing the operators with names specified in
ops, and for lexing and parsing the keywords case insensitively. Keywords and operators
are lexed as Tokens.Fragment with Tokens.Tag.RESERVED tag. Words that are not among
keywords are lexed as Fragment with Tokens.Tag.IDENTIFIER tag. A word is defined as an
alphanumeric string that starts with [_a - zA - Z], with 0 or more [0 - 9_a - zA - Z]
following.
Actually, the result returned by your parser is a Fragment object which is tagged according to its type. In your case, d is tagged as IDENTIFIER which is expected.
It is not clear to me what you want to achieve though. Could you please provide a test case ?
A: http://blog.csdn.net/efijki/article/details/46975979
The above blog post explains how to define your own tag. I know it's in Chinese. You just need to see the code. Especially the withTag() and patchTag() part. | unknown | |
d940 | train | You could use .filter:
_.sample([homephone, altphone].filter(_.identity))
Another way would be:
_.sample([homephone, altphone]) || homephone || altphone;
A: What about:
var phone = (homephone && altphone)? _.sample([homephone, altphone]) : (homephone || altphone);
A: Since you're already using underscore, I would suggest leveraging compact:
var phone = _.sample(_.compact([homephone, altphone]));
This is basically a shortened version of dave's answer, since compact is literally implemented as function(array) { return _.filter(array, _.identity); }.
A: Array literals in JavaScript:
[ 1, 2, 3 ]
...are a way to statically declare which things go in which positions in an array. In other words, when you write the code, you already know where things will go.
In your scenario, the positions are only known dynamically. In other words, you don't know where they'll go until you run the program on a given set of inputs.
So basically what you're asking for is impossible, barring any radical changes to how array literals work in future versions of JS. However, if all you want is to save typing, @dave's answer is pretty nice. I'm mainly just clarifying that array literals by themselves don't have this capability. | unknown | |
d941 | train | Your best bet is to have that attribute's value in a hidden input field somewhere on the page, so you can then read it in with jQuery.
Unfortunately, to the best of my knowledge jQuery or JavaScript does not have access to request, session or application scope variables.
So, if you do something like this:
<input type='hidden' name='${sessionVarName}' value='${sessionVarValue}' id='sessionVar'/>
You can access it after the page loads like this:
$(function(){
var sessionVar = $('#sessionVar').val();
alert(sessionVar);
});
This is the solution I've used when needing to get Tomcat session vars into JavaScript; the method should work for you too.
Hope this helps
A: I suggest you send an AttributeName, which will let you get the sessionScope attribute; that will be simple. | unknown | |
d942 | train | To resolve the Maps grey area issue do the following:
* Open Google Developers Console
* Select the project you are working on (or create it if it doesn't exist)
* Select APIs & Auth
* Then Credentials
* Find the section with the title "Key for Android applications"
* Click Edit allowed Android applications
* Execute the keytool command to generate the SHA1 fingerprint for your release keystore file
* Then add the SHA1 and package name to the list of allowed Android applications
And for the crashes, try using a crash reporting tool (Crashlytics, for example). | unknown | |
d943 | train | Instead of reflection, you could use the EF Core public (and some internal) metadata services to get the key values needed for the Find method. For setting the modified values you could use the EntityEntry.CurrentValues.SetValues method.
Something like this:
using Microsoft.EntityFrameworkCore.Metadata.Internal;
public static void AddEntities<T>(List<T> entities, DbContext db) where T : class
{
using (db)
{
var set = db.Set<T>();
var entityType = db.Model.FindEntityType(typeof(T));
var primaryKey = entityType.FindPrimaryKey();
var keyValues = new object[primaryKey.Properties.Count];
foreach (T e in entities)
{
for (int i = 0; i < keyValues.Length; i++)
keyValues[i] = primaryKey.Properties[i].GetGetter().GetClrValue(e);
var obj = set.Find(keyValues);
if (obj == null)
{
set.Add(e);
}
else
{
db.Entry(obj).CurrentValues.SetValues(e);
}
}
db.SaveChanges();
}
} | unknown | |
d944 | train | You can do that with convert, with a little help from find so you don't have to write a loop:
find /Users/KanZ/Desktop/Project/Test/ -type f -name "M*.jpg" -exec convert {} -flip {} \;
Explanation:
*
* find /Users/KanZ/Desktop/Project/Test/ - Invoke the find tool and specify the base directory in which to search for files recursively.
* -type f - Find only files
* -name "M*.jpg" - Find only files with names that start with M and end with .jpg
* -exec ... \; - For each such file found, perform the command in ...
* convert {} -flip {} - This is the actual command that flips your images. The {}'s are part of the find syntax; they mark where the found file names are substituted. So here we are saying to use convert to flip the images vertically with the -flip option, but keep the file names unchanged.
You can also do it with a loop and globbing:
for file in /Users/KanZ/Desktop/Project/Test/M*.jpg; do convert "$file" -flip "$file"; done | unknown | |
d945 | train | You don't need combinations at all. What you want looks more like a sliding window.
for i in range(2, 6):
for j in range(len(lst) - i + 1):
print(lst[j:j + i])
A: You can loop over the list as following:
a = [1,2,3,4,5,6]
for i in range(2, len(a)):
for j in range(len(a)-i + 1):
print(a[j:j+i])
print()
The trick here is that a[j:j+i] returns the list from j, up until j+i.
Putting this into a function form:
def continuous(lst, length):
slices = []
for j in range(len(lst)-length + 1):
slices.append(lst[j:j+length])
return slices
A: You can loop through the list in the following method
lst = [1, 2, 3, 4, 5 ,6]
def continuous(lst, length):
result = []
for i in range(0, len(lst) - length + 1):
entry = []
for j in range (0, length):
entry.append(lst[i+j])
result.append(entry)
return result
print(continuous(lst, 2))
or if you are just looking to print it line by line
def continuous(lst, length):
for i in range(0, len(lst) - length + 1):
entry = []
for j in range (0, length):
entry.append(lst[i+j])
print(entry) | unknown | |
d946 | train | It works for me.
Make sure you have your "Device ram size" setting for this AVD set high. It will default to 256, but I recommend 1024 (MB) if you can spare it. You can adjust this via the SDK and AVD Manager. | unknown | |
d947 | train | In my tests, even if I deleted the <hr />, the error was still reproduced. I noticed that it occurs after changing the h2#app_status text. If you wrap div#drop_zone and all the following elements like div#object... in a div whose display style is inline-block, the disappearing no longer happens.
<style>
#drop-zone-wrapper {display: inline-block;}
</style>
<div id="drop-zone-wrapper">
<div id="drop_zone" ondragenter="drag_enter(event)"
ondrop="drag_drop(event)" ondragover="return false"
ondragleave="drag_leave(event)"></div>
<div id="object1" class="objects" draggable="true"
ondragstart="drag_start(event)" ondragend="drag_end(event)">object 1</div>
<div id="object2" class="objects" draggable="true"
ondragstart="drag_start(event)" ondragend="drag_end(event)">object 2</div>
<div id="object3" class="objects" draggable="true"
ondragstart="drag_start(event)" ondragend="drag_end(event)">object 3</div>
</div> | unknown | |
d948 | train | You could consider creating an event and handler to handle the timer ticks and then invoke your check.
public class PresenceMonitor {
private volatile bool _running;
private Timer timer;
private readonly TimeSpan _presenceCheckInterval = TimeSpan.FromMinutes(1);
public PresenceMonitor() {
Tick += OnTick;
}
public void Start() {
if (_running) {
return; //already running
}
// Start the timer
timer = new System.Threading.Timer(_ => {
Tick(this, EventArgs.Empty);//rasie event
}, null, TimeSpan.Zero, _presenceCheckInterval);
}
private event EventHandler Tick = delegate { };
private async void OnTick(object sender, EventArgs args) {
if (_running) {
return;
}
_running = true;
await DoworkAsync();
}
private Task DoworkAsync() {
//...
}
}
A: If I understand your requirements correctly, you can get rid of the timer and use an asynchronous loop.
But you need to make the Start method asynchronous too.
public class PresenceMonitor
{
private volatile bool _running; // "volatile" possibly not needed anymore
private readonly int _presenceCheckInterval = 60000; // Milliseconds
public PresenceMonitor()
{
}
public async Task Start()
{
while (true) // maybe use some "exit" logic
{
await CheckAsync();
await Task.Delay(_presenceCheckInterval);
}
}
private async Task CheckAsync()
{
if (_running)
{
return;
}
_running = true;
// await DoworkAsync
}
}
Then you can start monitoring
var monitor = new PresenceMonitor();
await monitor.Start();
You can even start monitoring in synchronous way
var monitor = new PresenceMonitor();
monitor.Start(); // Will start monitoring
But approach above is "dangerous" in the way, that any exception thrown inside CheckAsync method will not be propagated. When you start using async-await be ready to "convert" whole application to support it. | unknown | |
d949 | train | To get a distance from Google Maps you can use the Google Directions API and a JSON parser to retrieve the distance value.
Sample Method
private double getDistanceInfo(double latFrom, double lngFrom, double latTo, double lngTo) {
StringBuilder stringBuilder = new StringBuilder();
Double dist = 0.0;
try {
String url = "http://maps.googleapis.com/maps/api/directions/json?origin=" + latFrom + "," + lngFrom + "&destination=" + latTo + "," + lngTo + "&mode=driving&sensor=false";
HttpPost httppost = new HttpPost(url);
HttpClient client = new DefaultHttpClient();
HttpResponse response;
stringBuilder = new StringBuilder();
response = client.execute(httppost);
HttpEntity entity = response.getEntity();
InputStream stream = entity.getContent();
int b;
while ((b = stream.read()) != -1) {
stringBuilder.append((char) b);
}
} catch (ClientProtocolException e) {
} catch (IOException e) {
}
JSONObject jsonObject = new JSONObject();
try {
jsonObject = new JSONObject(stringBuilder.toString());
JSONArray array = jsonObject.getJSONArray("routes");
JSONObject routes = array.getJSONObject(0);
JSONArray legs = routes.getJSONArray("legs");
JSONObject steps = legs.getJSONObject(0);
JSONObject distance = steps.getJSONObject("distance");
Log.i("Distance", distance.toString());
dist = Double.parseDouble(distance.getString("text").replaceAll("[^\\.0123456789]","") );
} catch (JSONException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
return dist;
}
For details on parameters and more details on what are the different options available, please refer this.
https://developers.google.com/maps/documentation/directions/
A: public class ApiDirectionsAsyncTask extends AsyncTask<URL, Integer, StringBuilder> {
private static final String TAG = makeLogTag(ApiDirectionsAsyncTask.class);
private static final String DIRECTIONS_API_BASE = "https://maps.googleapis.com/maps/api/directions";
private static final String OUT_JSON = "/json";
// API KEY of the project Google Map Api For work
private static final String API_KEY = "YOUR_API_KEY";
@Override
protected StringBuilder doInBackground(URL... params) {
Log.i(TAG, "doInBackground of ApiDirectionsAsyncTask");
HttpURLConnection mUrlConnection = null;
StringBuilder mJsonResults = new StringBuilder();
try {
StringBuilder sb = new StringBuilder(DIRECTIONS_API_BASE + OUT_JSON);
sb.append("?origin=" + URLEncoder.encode("Your origin address", "utf8"));
sb.append("&destination=" + URLEncoder.encode("Your destination address", "utf8"));
sb.append("&key=" + API_KEY);
URL url = new URL(sb.toString());
mUrlConnection = (HttpURLConnection) url.openConnection();
InputStreamReader in = new InputStreamReader(mUrlConnection.getInputStream());
// Load the results into a StringBuilder
int read;
char[] buff = new char[1024];
while ((read = in.read(buff)) != -1){
mJsonResults.append(buff, 0, read);
}
} catch (MalformedURLException e) {
Log.e(TAG, "Error processing Distance Matrix API URL");
return null;
} catch (IOException e) {
System.out.println("Error connecting to Distance Matrix");
return null;
} finally {
if (mUrlConnection != null) {
mUrlConnection.disconnect();
}
}
return mJsonResults;
}
}
I hope that helps you!
A: Just check the below link. You will probably get the idea of it and can try it on your own.
http://about-android.blogspot.in/2010/03/sample-google-map-driving-direction.html
Also you can use Google Distance Matrix API
https://developers.google.com/maps/documentation/distancematrix/
A: String url = getDirectionsUrl(pickupLatLng, dropLatLng);
new GetDisDur().execute(url);
Create URL using latlng
private String getDirectionsUrl(LatLng origin, LatLng dest) {
String str_origin = "origin=" + origin.latitude + "," + origin.longitude;
String str_dest = "destination=" + dest.latitude + "," + dest.longitude;
String sensor = "sensor=false";
String mode = "mode=driving";
String parameters = str_origin + "&" + str_dest + "&" + sensor + "&" + mode;
String output = "json";
return "https://maps.googleapis.com/maps/api/directions/" + output + "?" + parameters;
}
Class GetDisDur
private class GetDisDur extends AsyncTask<String, String, String> {
@Override
protected String doInBackground(String... url) {
String data = "";
try {
data = downloadUrl(url[0]);
} catch (Exception e) {
Log.d("Background Task", e.toString());
}
return data;
}
@Override
protected void onPostExecute(String result) {
super.onPostExecute(result);
try {
JSONObject jsonObject = new JSONObject(result);
JSONArray routes = jsonObject.getJSONArray("routes");
JSONObject routes1 = routes.getJSONObject(0);
JSONArray legs = routes1.getJSONArray("legs");
JSONObject legs1 = legs.getJSONObject(0);
JSONObject distance = legs1.getJSONObject("distance");
JSONObject duration = legs1.getJSONObject("duration");
distanceText = distance.getString("text");
durationText = duration.getString("text");
} catch (JSONException e) {
e.printStackTrace();
}
}
} | unknown | |
d950 | train | Your expected output /api?invoice=12345&67890&supplier=78326832 is rather bizarre: there's no context where it makes sense to escape some ampersands (at the XML/HTML level) and leave others unescaped.
I think that what you really want is to use URI escaping (not XML escaping) for the first ampersand, that is you want /api?invoice=12345%2667890&supplier=78326832. If you're building the URI using XSLT 2.0 you can achieve this by passing the strings through encode-for-uri() before you concatenate them into the URI.
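A sketch of that in XSLT 2.0, assuming the two values are held in variables:
<xsl:value-of select="concat('/api?invoice=', encode-for-uri($invoice),
                             '&amp;supplier=', encode-for-uri($supplier))"/>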
But you've given so little information about the context of your processing that it's hard to be sure exactly what you want. | unknown | |
d951 | train | First you have to add display: flex; to #Container
#Container{
display: flex;
}
If you want to equally distribute the space between children, then you can use the flex property:
.item{
flex: 1;
}
The above CSS is the minimum required; the rest of the styles are for the demo.
#Container {
display: flex;
margin-top: 1rem;
}
.item {
flex: 1;
display: flex;
justify-content: center;
align-items: center;
padding: 1rem;
}
.item:nth-child(1) {
background-color: red;
}
.item:nth-child(2) {
background-color: blueviolet;
}
.item:nth-child(3) {
background-color: aquamarine;
}
<div id="Container">
<div class="item">33 %</div>
<div class="item">33 %</div>
<div class="item">33 %</div>
</div>
<div id=Container>
<div class="item"> 50 % </div>
<div class="item"> 50 % </div>
</div>
<div id=Container>
<div class="item">100 %</div>
</div>
A: I think that this example could give you an idea of how to achieve what you want:
https://codepen.io/Eylen/pen/vYJBpMQ
.Container {
display: flex;
flex-wrap: wrap;
margin-bottom: 8px;
}
.item {
flex-grow: 1;
margin: 0 12px;
background: #f1f1f1;
}
Your main issue in the code you gave is that you're missing the flex item behaviour. I have just set the item to grow to fill the space with flex-grow: 1.
A: To make sure a flex child covers up the available space, you can provide flex-grow: 1
#Container {
display:flex;
}
.item {
width: 100%;
flex-grow: 1;
border: 1px solid;
text-align: center;
}
<h1> Scenario 1 </h1>
<div id=Container>
<div class="item">33 %</div>
<div class="item">33 %</div>
<div class="item">33%</div>
</div>
<h1> Scenario 2 </h1>
<div id=Container>
<div class="item">50 %</div>
<div class="item">50 %</div>
</div>
<h1> Scenario 3 </h1>
<div id=Container>
<div class="item">100 %</div>
</div>
A: Added a demo below.
$( document ).ready(function() {
$( "#add" ).click(function() {
$('#container').append('<div class="item"></div>');
});
$( "#remove" ).click(function() {
$('#container').children().last().remove();
});
});
#container {
width:100%;
height:500px;
background-color:#ebebeb;
display:flex;
flex-direction: column;
}
.item {
width:100%;
display: flex;
flex: 1;
border-bottom:1px solid #007cbe;
}
.item1 {
background:#007cbe;
}
.item2 {
background: #d60000;
}
.item3 {
background: #938412
}
<script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.3.1/jquery.min.js"></script>
<div id="container">
<div class="item item1">1</div>
<div class="item item2">2</div>
<div class="item item3">3</div>
</div>
<button id="add"> Add </div>
<button id="remove"> Remove </div>
A: Apply the below CSS to fulfill your requirement.
#Container {
display: flex;
}
.item {
width: 100%;
min-height: 100px;
border: 1px solid;
} | unknown | |
d952 | train | Use the RODBC package to connect to a MS SQL Server database.
First you need to do some setup. Open the "Data Sources (ODBC)" application. (In Control Panel\System and Security\Administrative Tools, or search under the Start Menu.) Add a User DSN (or a System DSN if you have admin rights and want the connection for all users).
Step 1: Give it a name like MyDataBase and select the server that it lives on. The name shouldn't be more than 32 characters or you'll get a warning.
Step 2: Connection details are the same as you would use in SQL Server.
Step 3: Change the default database to the one that you want to connect to.
Finish and test your connection.
Now you get to use R. It's as easy as
library(RODBC)
channel <- odbcConnect("MyDataBase") #or whatever name you gave
query <- "SELECT * FROM MyTable WHERE x > 10"
results <- sqlQuery(channel, query)
odbcClose(channel)
If you are feeling fancy or hate wizards, you can set up the ODBC connection by writing registry entries. Apologies for the big code chunk.
#' Reads the Windows registry
#'
#' Wrapper for readRegistry that replace environment variables.
#' @param ... Passed to readRegistry
#' @return A list of registry keys. See \code{readRegistry}.
#' @examples
#' \dontrun{
#' key <- "Software\\ODBC\\ODBCINST.INI\\SQL Server"
#' hive <- "HLM"
#' read_registry(key, hive)
#' readRegistry(key, hive)
#' }
read_registry <- function(...)
{
ans <- readRegistry(...)
lapply(
ans,
function(x)
{
rx <- "%([[:alnum:]]+)%"
if(is.character(x) && grepl(rx, x))
{
env_var <- stringr::str_match(x, rx)[, 2]
x <- gsub(rx, Sys.getenv(env_var), x)
}
x
}
)
}
#' Add an ODBC data source to the Windows registry.
#'
#' Adds an ODBC data source to the Windows registry.
#'
#' @param data_source_name String specifying the name of the data source to add.
#' @param database The name of the database to use as the default.
#' @param database_server The name of the server holding the database.
#' @param type Type of connection to add. Either ``sql'' or ``sql_native''.
#' @param permission Whether the connection is for the user or the system.
#' @return Nothing. Called for the side-effect of registering ODBC data sources.
#' @details A key with the specified data source name is created in
#' ``Software\\ODBC\\ODBC.INI'', either in ``HKEY_CURRENT_USER'' or
#' ``HKEY_LOCAL_MACHINE'', depending upon the value of \code{permission}.
#' Four values are added to this key. ``Database'' is given the value of the
#' \code{database} arg. ``Server'' is given the value of the
#' \code{database_server} arg. ``Trusted_Connection'' is given the value ``Yes''.
#' ``Driver'' is given the value from the appropriate subkey of
#' ``HKEY_LOCAL_MACHINE\\SOFTWARE\\ODBC\\ODBCINST.INI'', depending upon the type.
#' Another key with the specified data source name is created in
#' ``Software\\ODBC\\ODBC.INI\\ODBC Data Sources''.
register_odbc_data_source <- function(data_source_name, database, database_server, type = c("sql", "sql_native"), permission = c("user", "system"))
{
#assert_os_is_windows()
#data_source_name <- use_first(data_source_name)
permission <- match.arg(permission)
type <- match.arg(type)
#Does key exist?
odbc_key <- readRegistry(
file.path("Software", "ODBC", "ODBC.INI", fsep = "\\"),
switch(permission, user = "HCU", system = "HLM")
)
if(data_source_name %in% names(odbc_key))
{
message("The data source ", sQuote(data_source_name), " already exists.")
return(invisible())
}
hive <- switch(
permission,
user = "HKEY_CURRENT_USER",
system = "HKEY_LOCAL_MACHINE"
)
key <- shQuote(
file.path(hive, "Software", "ODBC", "ODBC.INI", data_source_name, fsep = "\\")
)
odbc_data_sources_key <- shQuote(
file.path(hive, "Software", "ODBC", "ODBC.INI", "ODBC Data Sources", fsep = "\\")
)
type_name <- switch(
type,
sql = "SQL Server",
sql_native = "SQL Server Native Client 11.0"
)
driver <- read_registry(
file.path("SOFTWARE", "ODBC", "ODBCINST.INI", type_name, fsep = "\\"),
"HLM"
)$Driver
system0(key)
system0(key, "/v Database /t REG_SZ /d", database)
system0(key, "/v Driver /t REG_SZ /d", shQuote(driver))
system0(key, "/v Server /t REG_SZ /d", database_server)
system0(key, "/v Trusted_Connection /t REG_SZ /d Yes")
system0(odbc_data_sources_key, "/v", data_source_name, "/t REG_SZ /d", shQuote(type_name))
}
#' Wrapper to system for registry calls
#'
#' Wraps the \code{system} function that calls the OS shell.
#' @param ... Passed to \code{paste} to create the command.
#' @return The command that was passed to system is invisibly returned.
#' @note Not meant to be called directly.
system0 <- function(...)
{
cmd <- paste("reg add", ...)
res <- system(cmd, intern = TRUE)
if(res != "The operation completed successfully.\r")
{
stop(res)
} else
{
message(res)
}
invisible(cmd)
}
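With all three helpers defined, registering the same DSN as in the wizard steps above might look like this (the server and database names are placeholders):
register_odbc_data_source(
  data_source_name = "MyDataBase",
  database         = "MyDb",       # placeholder database name
  database_server  = "MYSERVER",   # placeholder server name
  type             = "sql",
  permission       = "user"
) | unknown | 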
d953 | train | If this is a long-running process I doubt that using blob storage would add that much overhead, although you don't specify what the tasks are.
On Zudio long-running tasks update Table Storage tables with progress and completion status, and we use polling from the browser to check when a task has finished. In the case of a large result returning to the user, we provide a direct link with a shared access signature to the blob with the completion message, so they can download it directly from storage. We're looking at replacing the polling with SignalR running over Service Bus, and having the worker roles send updates directly to the client, but we haven't started that development work yet so I can't tell you how that will actually work. | unknown | |
d954 | train | The problem is that myMessage.length() is the number of characters in myMessage, whereas numbers.size is the number of integers represented in myMessage.
In your example run, myMessage is "22 12 20 28", which has 11 characters so you are iterating from 0 to 10; but numbers is an array of just four numbers (0 through 3), so numbers[i] will raise this exception for any i greater than 3.
If I'm understanding correctly what you are trying to do, you just need to change this:
for (int i=0; i < myMessage.length(); ){
to this:
for (int i=0; i < numbers.size; ){ | unknown | |
d955 | train | Keep in mind the important points below regarding UITableView:
*
*UITableView inherits from UIScrollView, i.e. a UITableView already behaves like a UIScrollView, so you don't need to wrap it in a UIScrollView just to make it scroll. If you do, it behaves weirdly.
*In cellForRow you are branching to the outlets typeView & typeView1 by comparing tags, which is not a standard approach: tableView.tag may change and give you the wrong output. Use the format below instead:
if tableView == typeView { }
else { } //tableView == typeView1
Compare the UITableView objects against the tableView parameter directly.
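A minimal sketch of that shape (the cell identifiers are placeholders, not from your code):
func tableView(_ tableView: UITableView, cellForRowAt indexPath: IndexPath) -> UITableViewCell {
    if tableView == typeView {
        // first table: dequeue and configure its own cell type
        return tableView.dequeueReusableCell(withIdentifier: "TypeCell", for: indexPath)
    } else { // tableView == typeView1
        // second table: dequeue and configure its own cell type
        return tableView.dequeueReusableCell(withIdentifier: "Type1Cell", for: indexPath)
    }
}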
*cellForRow returns a cell from the if-else, so you don't need the trailing fallback
return UITableViewCell(style: UITableViewCellStyle.default, reuseIdentifier: "Cell")
If you debug your cellForRow method, you will see that this line never executes.
First apply these standard practices and fix the mistakes; then, if the problem persists, post the question with the exact issue you are facing.
Hope my above work helps you.
Edit
Your required output will look like the image below. You don't need two UITableViews wrapped in a single scroll view; you can achieve the same with one tableView.
Go through this tutorial | unknown | |
d956 | train | Try adding the required attribute to each input element and a data-* attribute to its label; then use the CSS :invalid pseudo-class and :after pseudo-element with the label's content property to display a message when the input is invalid.
input:invalid + label:after {
content: " " attr(data-name) " should not be blank";
color: red;
}
<input type="text" name="company_name" required /><label data-name="Company Name"></label><br>
<input type="text" name="login_name" required /><label data-name="Login Name"></label><br>
<input type="email" name="email" required /><label data-name="Email"></label><br>
<input type="password" name="password" required /><label data-name="Password"></label><br>
<input type="password" name="password_confirm" required /><label data-name="Password Confirm"></label> | unknown | |
d957 | train | You are passing a string; cast it to a number:
$scope.range = function(n) {
return new Array(+n||0);
};
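Assuming markup along these lines (the model name is hypothetical), the value from an input arrives as a string, which is why the cast is needed; track by $index is required because the array holes are all undefined:
<input type="number" ng-model="num">
<div ng-repeat="i in range(num) track by $index">{{ $index }}</div>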
DEMO | unknown | |
d958 | train | Here is an example, just like your case. The results show that the algorithm indicates the signal frequencies just right. Each column of the matrix y is a sinusoid, to check how it works. The windows are 3 seconds long with 2 seconds of overlap:
Fs = 256;
T = 1/Fs;
t = (0:30*Fs-1)*T;
y = sin(2 * pi * repmat(linspace(1,100,32)',1,length(t)).*repmat(t,32,1))';
for i = 1 : 32
[pxx(:,i), freq] = pwelch(y(:,i),3*Fs,2*Fs,[],Fs); %#ok
end
plot(freq,pxx);
xlabel('Frequency (Hz)');
ylabel('Spectral Density (Hz^{-1})'); | unknown | |
d959 | train | You need to use expression; here is an example:
tibble(x = 1,y = 1) %>%
ggplot(aes(x = 1,y = 1))+
geom_point()+
scale_x_continuous(
breaks = 1,
labels = expression(paste("Ambient ",CO[2]))
) | unknown | |
d960 | train | Note: Prior to Delphi 10.4, the mobile compilers used 0-based indexing for strings by default. See Zero-based strings.
Use the Low() and High() intrinsic functions to iterate strings.
The irregularities you are seeing are caused by indexing outside the boundaries of the string. When debugging, turn overflow and range checking on to detect this kind of error.
Your code will look like this:
function TJotto.OccurrencesOfChar(const aWord, aChar: string): integer;
var
i: integer;
begin
result := 0;
for i := Low(aWord) to High(aWord) do
if aWord[i] = aChar then
inc(result);
end;
and
function TJotto.MakeGuess(aGuessWord: string): string;
var
i: integer;
total: integer;
wordToDisplay: string;
begin
total := 0; // number of matches
wordToDisplay := aGuessWord;
// save copy of guess before deleting duplicate letters
// because guess will be displayed
// did user solve puzzle?
if aGuessWord = FSecretWord then
Exit('You did it! The word was ' + aGuessWord);
// make sure all letters in aGuessWord are different
// otherwise a guess like 'vexed' will say an E is present in FSecretWord twice
for i := High(aGuessWord) downto Low(aGuessWord) do
if OccurrencesOfChar(aGuessWord, aGuessWord[i]) > 1 then
Delete(aGuessWord, i+1-Low(aGuessWord), 1); // Delete uses one-based array indexing even in platforms where the strings are zero-based.
// go through each letter in aGuessWord to see if it's in FSecretWord
// keep a running total number of matches
for i := Low(aGuessWord) to High(aGuessWord) do
total := total + OccurrencesOfChar(FSecretWord, aGuessWord[i]);
result := wordToDisplay + #9 + total.ToString;
end; | unknown | |
d961 | train | Assuming you're using jQuery validate, you can use the submitHandler property to run code when the validation passes, for example:
$("#myForm").validate({
submitHandler: function(form) {
// display overlay
form.submit();
}
});
Further reading
A: Try returning false; on validation errors while submitting. | unknown | 
d962 | train | You could use separate branches for each feature. I personally use a hierarchy similar to the one below:
/
|---features
|--- A
|--- B
That would result in /features/A and /features/B branches respectively. That way you can work on your features on separate branches and use the main branch as the stable version of your application.
After your last edit, I would definitely recommend the solution below.
A better solution for entirely different jobs is the git-worktree command:
git worktree add [-b <new-branch>] <path> [<the branch/tree you want to base this worktree>]
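For example, to work on feature A in its own directory on a new branch based on master (the paths and names are hypothetical):
git worktree add -b feature-A ../myapp-feature-A master
This gives you a second working copy you can build and test independently, while both share the same repository.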
This is the best I can suggest from what I've understood of your question.
A: From my understanding, you have a main project and a second project which is based on the main one while being significantly different.
If that's the case, I would have different repositories for the two, so to keep them more maintainable.
The main repo will stay as it is, while on the second one we want to add the main repo as a remote origin.
To do that let's first:
*
*Create a new repo at github
*Clone the main repo locally if not done already
You can then add the main repo as a remote origin:
*git remote rename origin upstream
*git remote add origin URL_TO_MAIN_GITHUB_REPO
*git push origin master
Now you can work with it just like any other github repo.
To pull in patches from upstream, simply run git pull upstream master && git push origin master.
To underline: I would follow this solution if you have two genuinely different projects based on some common code, as branches work well for features/patches you eventually want to merge back to master once they are completed.
If, on the other hand, you simply have the same project with different features, then @Deniz da King's solution is great. | unknown | 
d963 | train | 127.0.0.1 as an IP address means "this machine". More formally, it's the loopback interface. On your laptop you have a MySQL server running. Your heroku dyno does not, so your connection attempt fails.
You won't be able to connect from your program running on your heroku dyno to your laptop's MySQL server without some network configuration work on your office or home router / firewall. But to begin doing that work you'll need to come up to speed on networking. (Teaching you how is beyond the scope of a Stack Overflow answer.)
You can add a MySQL add-on to your Heroku app from a third party. Heroku themselves offer a postgreSQL add-on with a free tier, but not MySql. For US$10 per month you can subscribe to one of these MySQL add-ons and you'll be up and running. | unknown | |
d964 | train | getNBPRates <- function(year) {
url1 <- paste0("https://www.nbp.pl/kursy/Archiwum/archiwum_tab_a_", year, ".csv")
url1 <- read.csv2(url1, header=TRUE, sep=";", dec=",", fileEncoding = "Windows-1250")
url1 <- url1 |>
select(data, X1USD, X1EUR) |>
slice(-1) |>
filter(row_number()<= n()-3) |>
mutate(data = as.Date(data, format = "%Y%m%d"), usd = as.numeric(gsub(",", ".", X1USD)), eur = as.numeric(gsub(",", ".", X1EUR))) |>
select(-c(X1USD, X1EUR))
}
years<- c(2013, 2014, 2015, 2016, 2017, 2018, 2019, 2020)
result <- lapply(years, getNBPRates)
result3 <- Reduce(rbind, result)
And what do you mean by "to get rid of the missing values in dataframe named result3"? If that refers to missing dates, then you have to substitute them with some logic. If I'm not mistaken, when there is no NBP rate for a particular day, the last available one has to be taken.
A: To change a column to numeric you can use as.numeric(column_name)
Based on the date format in the archiwum_tab_a_2015.csv file, you can change the date column with as.Date(column_name, format = "%Y%m%d")
To remove all missing values you can use complete.cases(data):
mydata[complete.cases(mydata),] | unknown | |
d965 | train | Make sure Forms Authentication is enabled in your web.config file:
<system.web>
<authentication mode="Forms">
<forms loginUrl="~/Account/Login" timeout="2880" />
</authentication>
...
</system.web>
A: MVC5 comes with Identity instead of the older SimpleMembership and ASP.NET Membership. Identity doesn't use forms auth, hence why what you're doing has no effect.
To log a user in via Identity, first, you need an actual user instance, which you can get by doing something like:
var userManager = new UserManager<ApplicationUser>(context);
var user = userManager.FindByName(username);
Then if you have a valid user (user != null), you need to generate a claims identity for that user via:
var identity = userManager.CreateIdentity(user, DefaultAuthenticationTypes.ApplicationCookie);
Finally, you can use that identity to sign the user in:
var authenticationManager = HttpContext.GetOwinContext().Authentication;
authenticationManager.SignIn(new AuthenticationProperties() { IsPersistent = false }, identity);
(If you want a persistent login, change that to true) | unknown | |
d966 | train | As markE said set the transform-origin to the center of the image, so something like this:
elem.style.transformOrigin = "50% 50%";
elem.style.transform = "rotate("+degrees+"deg)";
You can also use the -ms- and -webkit- prefixed versions (elem.style.msTransform, elem.style.webkitTransform) for cross compatibility.
Slightly unrelated, I suggest using:
degrees = degrees%360;
instead of your if statement where you wrap from 359 to 1 degrees.
This is because it will work in more general situations, so it is less likely to break. For example, if you changed the amount of degrees not by +1 but by +10 or -10, it would still wrap correctly between 0 and 360. | unknown | 
d967 | train | By default, when Spring encounters an autowired field of type Map<String, [type]>, it injects a map of all beans of that [type] keyed by bean name; in your case, String. You will not get your configured map.
See: http://docs.spring.io/spring/docs/current/spring-framework-reference/htmlsingle/#beans-autowired-annotation.
You are basically running into a corner case because you have a map with String keys. To get your bean you either have to put @Qualifier("profileDependentProps") next to the @Autowired, or use @Resource("profileDependentProps") instead of @Autowired.
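A minimal sketch of both options (assuming the map's values are Strings; pick one of the two):
// option 1: qualify the injection point explicitly
@Autowired
@Qualifier("profileDependentProps")
private Map<String, String> profileDependentProps;
// option 2: use @Resource, which injects by bean name
@Resource(name = "profileDependentProps")
private Map<String, String> profileDependentProps; | unknown | 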
d968 | train | As you said, Qt does not use exceptions; constructing a QObject will not fail on the Qt side (though the underlying C++ memory allocation could still fail).
What kind of error in constructor do you have in mind?
Qt will create the object in an invalid state if necessary; in my opinion it is not a constructor error that should cancel object creation, but rather a not-fully-initialized-yet-to-work-well object state.
Examples:
QRegExp regex1; // isValid() : false
QRegExp regex2("nop{"); // isValid() : false
// regex2.errorString() => "bad repetition syntax"
QSqlDatabase db; // isValid() : false
QDate date1; // isValid() : false
QDate date2(0, 0, 0); // isValid() : false
QDate date3(-1, 0, 1024); // isValid() : false
QString str1; // isNull() : true, isEmpty() : true
QString str2(""); // isNull() : false, isEmpty() : true | unknown | |
d969 | train | If tipo has only three possible values, define a union type for it:
type Typo = 1 | 2 | 3;
const MyModal: React.FC<{onClose: any; tipo: Typo;}>
Your error must vanish :) | unknown | |
d970 | train | The simple answer to order functions after an event would be to add a single event handler function that runs the 2 functions one after the other.
$("select#myDropdownlist").change(function(){
    callFirstFunction();
    callSecondAjaxFunction();
});
A: How about putting the contents of the first function in a method:
$("select#myDropdownlist").change(function(){
    submitPostFunction1();
});
submitPostFunction1 = function() {
//do function 1 stuff here
}
Then in your more specific function (function 2):
$("select#myDropdownlist").change(function(){
    $("select").unbind();
    submitPostFunction1();
    //do function 2 stuff
    $("select").bind("change", function() {
        submitPostFunction1();
    });
});
Keep in mind that I used $("select") in function 2 because you said it was a more generic call, so you won't be using the id in the first function call.
A: Events are not specified to fire in any specific order.
If you want a specific order, you need to either invoke them from a meta handler, or chain them by calling one from the other. The meta handler is more flexible in the long run though.
$("select#myDropdownlist").change(function(){
    firstHandler();
    secondHandler();
});
or
function firstHandler() {
...
secondHandler();
}
function secondHandler() {
...
}
$("select#myDropdownlist").change(firstHandler); | unknown | |
d971 | train | It would not be recommended to start all of your custom properties with the same dollar convention. The dollar sign convention is meant to denote properties that the Mixpanel SDKs track automatically or properties that have some special meaning within Mixpanel itself. That link you shared is great for the default properties and there is also documentation for the reserved user profile properties and reserved event properties if you were curious about those. | unknown | |
d972 | train | You can do it using the LOAD DATA command in MySQL:
http://blog.tjitjing.com/index.php/2008/02/import-excel-data-into-mysql-in-5-easy.html
Save your Excel data as a csv file (In Excel 2007 using Save As)
Check the saved file using a text editor such as Notepad to see what it actually looks like, i.e. what delimiter was used etc.
Start the MySQL Command Prompt (I usually do this from the MySQL Query Browser → Tools → MySQL Command Line Client to avoid having to enter username and password etc.)
Enter this command:
LOAD DATA LOCAL INFILE 'C:\\temp\\yourfile.csv' INTO TABLE database.table FIELDS TERMINATED BY ';' ENCLOSED BY '"' LINES TERMINATED BY '\r\n' (field1, field2);
[Edit: Make sure to check your single quotes (') and double quotes (") if you copy and paste this code]
Done!
A: You can try Navicat for MySQL. I've done this with a 250MB+ xlsx file and Navicat handled it flawlessly without breaking a sweat.
Just make sure your MySQL is configured to be able to receive large amount of data by changing the max_allowed_packet option in your my.ini to a larger amount, say, 128M.
A: Toad for MySQL (Freeware) would be another alternative. | unknown | |
d973 | train | Try calling ArrayAdapter.notifyDataSetChanged(). This tells the ListView that the underlying data has changed and it should invalidate.
A: at the end in the method of onClick() try calling adapter.notifyDataSetChanged(); This refreshes all the views that are using the adapter to set values to the view.
A: values = new ArrayList<String>();
values is null because it is never initialized; the line above fixes that.
Then change:
dbHelper = new DBHelper(this);
lvMain = (ListView) findViewById(R.id.lvMain);
values = new ArrayList<String>();
}
public void onButtonClick(View v) {
ContentValues cv = new ContentValues();
String name = et.getText().toString();
SQLiteDatabase db = dbHelper.getWritableDatabase();
switch (v.getId()) {
case R.id.btnAdd:
cv.put("name", name);
long rowID = db.insert("mytable", null, cv);
break;
case R.id.btnRead:
c = db.query("mytable", null, null, null, null, null, null);
if (c.moveToFirst()) {
idColIndex = c.getColumnIndex("id");
nameColIndex = c.getColumnIndex("name");
do {
c.getInt(idColIndex);
names = c.getString(nameColIndex);
values.add(names);
} while (c.moveToNext());
} else {
c.close();
}
break;
case R.id.btnClear:
int clearCount = db.delete("mytable", null, null);
break;
case R.id.btnShow:
break;
}
adapter = new ArrayAdapter<String>(this,
android.R.layout.simple_list_item_1, values);
lvMain.setAdapter(adapter);
dbHelper.close();
adapter.notifyDataSetChanged();
}
} | unknown | |
d974 | train | Here's scikit-learn's k-means:
from sklearn.cluster import KMeans
import pandas as pd
import matplotlib.pyplot as plt
df = pd.read_csv('stack_overflow.csv')
X = df.iloc[:,1:]
plt.scatter(
X['DATE_ID'], X.iloc[:, -1],
c='white', marker='o',
edgecolor='black', s=50
)
plt.show()
k = 3
km = KMeans(
n_clusters=k, init='random',
n_init=10, max_iter=300,
tol=1e-04, random_state=0
)
y_km = km.fit_predict(X)
X['label'] = y_km
Output:
Now use the labels to graph the clusters:
import matplotlib.pyplot as plt
# plot the 3 clusters
plt.scatter(
X[X['label'] == 0]['DATE_ID'], X[X['label'] == 0].iloc[:,-2],
s=50, c='lightgreen',
marker='s', edgecolor='black',
label='cluster 1'
)
plt.scatter(
X[X['label'] == 1]['DATE_ID'], X[X['label'] == 1].iloc[:,-2],
s=50, c='orange',
marker='o', edgecolor='black',
label='cluster 2'
)
plt.scatter(
X[X['label'] == 2]['DATE_ID'], X[X['label'] == 2].iloc[:,-2],
s=50, c='lightblue',
marker='v', edgecolor='black',
label='cluster 3'
)
# plot the centroids
plt.scatter(
km.cluster_centers_[:, 0], km.cluster_centers_[:, 1],
s=250, marker='*',
c='red', edgecolor='black',
label='centroids'
)
plt.legend(scatterpoints=1)
plt.grid()
plt.show()
Output: | unknown | |
d975 | train | The SIGSTOP signal does this. With a negative PID, the kill command will send it to the entire process group.
kill -s SIGSTOP -$pid
Send a SIGCONT to resume:
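kill -s SIGCONT -$pid | unknown | 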
d976 | train | your route would be :
Route::get('/user/verify', 'UserController@verifyEmail');
Now can access :
website.com/user/verify?email=example@gmail.com&token=38757e18aad8808832ace900f418b0376378975
In your controller you can get the parameter value like that :
public function show(Request $request)
{
$email = $request->email ?? null;
$token = $request->token ?? null;
}
A: You can run php artisan route:list in your console to check exactly what every route looks like.
Query params like ?email={email}&token={token} in the url '/user/verify?email={email}&token={token}' are ignored by the Laravel router. | unknown | 
d977 | train | Try this:
from tkinter import *

def toggle_entry(i):
    # enable the entry when its checkbutton is ticked, disable it again when unticked
    ent[i].configure(state=NORMAL if nac[i].get() else DISABLED)

window = Tk()
nac = {}
ent = {}
for i in range(10):
    de = IntVar()
    nac[i] = IntVar()
    # bind the current i as a default argument; otherwise every checkbutton
    # would act on the last entry from the loop
    na = Checkbutton(window, text='%s' % (i), borderwidth=1, variable=nac[i],
                     onvalue=1, offvalue=0, command=lambda i=i: toggle_entry(i))
    na.grid(row=i, column=0)
    ent[i] = Entry(window, textvariable=de, state=DISABLED)
    ent[i].grid(column=1, row=i, padx=20)
window.mainloop() | unknown | 
d978 | train | An update in case anyone else has the same issue: selecting the ListView item triggered code that removed it from the Controls array. Removing the ListView also caused the selected item to be deselected, hence the 4 calls to the handler. | unknown | 
d979 | train | WebChimera.js cannot be used in a regular browser. It can only be used with NW.js, Electron, or other Node.js-based frameworks. | unknown | 
d980 | train | Make the header and footer 100% wide and fix the content at 95% width, so the header and footer stay flexible.
css:
header {
width:100%;
background:#ccc;
}
footer {
width:100%;
background:#ccc;
}
#content {
width:95%;
margin:0 auto;
}
A: Here's the other way of doing it. Not necessarily better. Your method looks fine.
<div class="wrapper">
<header>
</header>
<div id="content">
</div>
<footer>
</footer>
</div>
.wrapper { width: 950px; margin: 0 auto; }
header, footer { margin: 0px -9999px; padding: 0px 9999px; }
A: Why do you add another <div> in the header and footer?
Also please drop the inline css (if it isn't there just for this example).
Other than that your code looks fine to me. | unknown | 
d981 | train | From CSV Examples:
Since open() is used to open a CSV file for reading, the file will by default be decoded into unicode using the system default encoding (see locale.getpreferredencoding()). To decode a file using a different encoding, use the encoding argument of open:
import csv
with open('some.csv', newline='', encoding='utf-8') as f:
reader = csv.reader(f)
for row in reader:
print(row)
The same applies to writing in something other than the system default encoding: specify the encoding argument when opening the output file.
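For example, a minimal writing counterpart (the file name and rows are just illustrative):
import csv
with open('some.csv', 'w', newline='', encoding='utf-8') as f:
    writer = csv.writer(f)
    writer.writerow(['name', 'value'])
    writer.writerow(['café', 42]) | unknown | 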
d982 | train | I don't think there is a straightforward way of disabling a DropdownMenuItem,
but you can keep a list of the DropdownMenuItems you want disabled: in onChanged, check whether the selected value is in that list and do nothing if it is; and when building each item, check whether its text is in that list and grey it out if so.
Like this:
class MyWidget extends StatefulWidget {
@override
_MyWidgetState createState() => _MyWidgetState();
}
class _MyWidgetState extends State<MyWidget> {
var rentPeriods = <String>['one', 'two'];
final disabledItems = ['one'];
var rentPeriod;
@override
Widget build(BuildContext context) {
return DropdownButton<String>(
value: rentPeriod,
items: rentPeriods.map((String value) {
return DropdownMenuItem<String>(
value: value,
child: Text(
translate("expense.$value"),
style: TextStyle(
color: disabledItems.contains(value) ? Colors.grey : null,
),
),
);
}).toList(),
onChanged: (value) async {
if (!disabledItems.contains(value)) {
setState(() {
rentPeriod = value;
});
}
},
);
}
}
A: You can create your own disable customization by changing the color and the onChanged callback of the DropdownButton, like this example:
https://dartpad.dev/587b44d2f1b06e056197fcf705021699?null_safety=true | unknown | |
d983 | train | I'll hazard a guess that you're working in a form, so add type="button" to the button. That should prevent the browser from treating the click as a form submission and clearing the data. The fixed markup:
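<button type="button" class="btn btn-success" (click)="addData(newData.value)">ADD</button> | unknown | 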
d984 | train | New answer
Use cSplit from my "splitstackshape" package:
cSplit(cases, "helplinks", ",", "long")[, helplinks := gsub(
'character\\(0|c\\(|\\"', "", helplinks)][, list(
caseid = list(caseid)), by = helplinks]
# helplinks caseid
# 1: 7703415,7858259,8802954,8847200
# 2: 60107 7758128,8829620,8829620
# 3: 56085 7758128,8829620
# 4: 57587 7758128,8829620
# 5: 3000020 7758128,8829620
# 6: 3000023 8829620
Old answer
I'm assuming you're starting with something like this:
df <- data.frame(
name = c("x", "y", "q", "w"),
alias = I(list(c("R","V","Q"), "Z", c("A", "R", "M"), c("C","A","R")))
)
df
# name alias
# 1 x R, V, Q
# 2 y Z
# 3 q A, R, M
# 4 w C, A, R
If that's the case, here's one approach using listCol_l from my "splitstackshape" package in conjunction with "data.table".
library(splitstackshape)
listCol_l(df, "alias")[, list(name = list(name)), by = alias_ul]
# alias_ul name
# 1: R x,q,w
# 2: V x
# 3: Q x
# 4: Z y
# 5: A q,w
# 6: M q
# 7: C w
You don't really need "splitstackshape" for that, so if you want to remove the self-promotion part of my answer and just use "data.table", you could do:
library(data.table)
as.data.table(df)[, list(
alias = unlist(alias)), by = name][, list(
name = list(name)), by = alias]
A: First we clean up the "character(0"'s. Then we read in those character values that were once lists but now need to be scan-ned. Then we apply a function that makes a dataframe from every line:
good.case <- cases[ grepl("c\\(", cases$helplinks),]
lapply( split(good.case, row.names(good.case) ), function(d){
vec <- scan(text=gsub("c\\(|[, ]", "", d$helplinks) ,what="")
do.call( data.frame, list(caseid=d$caseid, alias=vec) )
}
)
#-------
#Read 4 items
#Read 6 items
$`2`
caseid alias
1 7758128 60107
2 7758128 56085
3 7758128 57587
4 7758128 3000020
$`5`
caseid alias
1 8829620 60107
2 8829620 3000023
3 8829620 3000020
4 8829620 60107
5 8829620 56085
6 8829620 57587
expanded <- lapply( split(good.case, row.names(good.case) ), function(d){
vec <- scan(text=gsub("c\\(|[, ]", "", d$helplinks) ,what="")
do.call( data.frame, list(caseid=rep(d$caseid, length(vec)), alias=vec) )
}
)
#Read 4 items
#Read 6 items
Now we bind the dataframes together:
do.call(rbind, expanded)
#---------------
caseid alias
2.1 7758128 60107
2.2 7758128 56085
2.3 7758128 57587
2.4 7758128 3000020
5.1 8829620 60107
5.2 8829620 3000023
5.3 8829620 3000020
5.4 8829620 60107
5.5 8829620 56085
5.6 8829620 57587
But only halfway there I suppose. No point in pursuing it further with Ananda's 5-carat answer sitting there. | unknown | 
d985 | train | It sounds like you used XRow.getString, which (sensibly enough) retrieves the array as a single large string. Instead, use XRow.getArray and then XArray.getArray. Here is a working example:
sSQL = "SELECT id, ""roleArray""[2] FROM mytablethathasarrays;"
oResult = oStatement.executeQuery(sSQL)
s = ""
Do While oResult.next()
sql_array = oResult.getArray(2)
basic_array = sql_array.getArray(Null)
s = s & oResult.getInt(1) & " " & basic_array(1) & CHR$(10)
Loop
MsgBox s | unknown | |
d986 | train | I was able to figure it out with more googling.
This great article.
I replaced this in style.css:
.services .services-box:before {
content: "";
display: table;
}
.services .services-box:after {
content: "";
display: table;
clear: both;
}
With this:
.services .services-box:before {
content: "";
display: table;
table-layout: fixed;
max-width: none;
width: auto;
min-width: 100%;
}
.services .services-box:after {
content: "";
display: table;
clear: both;
table-layout: fixed;
max-width: none;
width: auto;
min-width: 100%;
}
And problem solved. | unknown | |
d987 | train | Since Spark retains the right to regenerate datasets at any time, that may be what's happening; in that case, caching the results of expensive transformations can lead to dramatic performance improvements.
In this case, it looks at first glance like itemset is the heavy hitter, so
itemset = getCombinations(itemset_count.select("itemsets")).cache
May pay dividends.
It should also be noted that building up a list by appending in a loop is generally a lot slower (O(n^2)) than building it by prepending. If correctness isn't affected by the order of itemset_counts, then:
itemset_counts = itemset_count :: itemset_counts
will produce at least a marginal speed-up. | unknown | |
d988 | train | To remove quotes:
$ cat test.json | jq -r '.[] | [ .host, .ip ] | @csv' | sed 's/"//g'
a.com,1.2.2.3
b.com,2.5.0.4
c.com,9.17.6.7
If using OS X, use Homebrew to install GNU sed.
A: Use the @csv format to produce CSV output from an array of the values.
cat test.json | jq -r '.[] | [.host, .ip] | @csv'
The -r option is needed to get raw output rather than JSON, which would wrap an extra set of quotes around the result and escape the quotes that surround each field. | unknown | |
d989 | train | So after having contacted cPanel support, they could not answer why the method I used above wasn't working, and they gave an alternative solution. I ended up using an interface called Application Manager in cPanel. It's the easiest way of installing a Node.js application on a cPanel server. Below is the documentation on how to use it to run your application:
https://docs.cpanel.net/knowledge-base/web-services/how-to-install-a-node.js-application/
Hope this helps someone | unknown | |
d990 | train | assign overwrites the content of the vector, whereas copy with a back_insert_iterator does a push_back on the vector, thus preserving its existing content.
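A minimal sketch of the difference:
#include <algorithm>
#include <iterator>
#include <vector>
int main() {
    std::vector<int> src{1, 2, 3};
    std::vector<int> dst{9, 9};
    dst.assign(src.begin(), src.end());   // dst == {1, 2, 3}: old content replaced
    std::copy(src.begin(), src.end(),
              std::back_inserter(dst));   // dst == {1, 2, 3, 1, 2, 3}: appended
}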
EDIT: If the question is generic (i.e. whether to use a member function defined in the container or an algorithm), I prefer to use the member function as it might have been optimized for the particular container compared to a generic algorithm.
A: Complementing Naveen's answer, using std::copy() is also much more versatile, as this way you can easily write to any output iterator. This could be a stream or something entirely custom.
A: In the general case, prefer member functions to functionally-equivalent algorithms. Scott Meyers discusses this in depth in Effective STL.
A: In essence, they are the same. Vector's reallocation behavior (defined in terms of how it behaves with push_back) avoids too many reallocations by progressively using more memory.
If you need identical code to work with multiple container types (i.e. you're writing a template), including those containers not in the stdlib, then generally prefer free functions.
If you don't mind rewriting this code if/when container types change, then you can prematurely optimize if you like, or simply use whichever is more convenient.
And just for completeness' sake, copy+back_inserter is equivalent to vector::insert at the end(), while vector::clear + copy+back_inserter is equivalent to vector::assign.
A: Your question can be generalized as follows:
When dealing with STL containers, should I prefer to use member functions or free-standing functions from <algorithm> when there are functional equivalents?
Ask 10 programmers, and you'll get 12 responses. But they fall into 2 main camps:
1) Prefer member functions. They are custom-designed for the container in question and more efficient than the <algorithm> equivalent.
2) Prefer free-standing functions. They are more generic and their use is more easily maintained.
Ultimately, you have to decide for yourself. Any conclusion you come to after giving it some reasoned, researched thought is better than following anyone else's advice blindly.
But if you just want to blindly follow someone's advice, here's mine: prefer the free-standing functions. Yes, they can be slower than the member functions. And "slow" is such a dirty word. But 9 times out 10 you just don't (or shouldn't) care how much more efficient one method is than the other. Most of the time when you need a collection, you're going to periodically add a few elements, do something, and then be done. Sometimes you need ultra-high performance lookup, insertion or removal, but not normally. So if you're going to come down one one side or the other with a "Prefer Method X" approach, it should be geared for the general case. And the approach that prefers the member methods seems to be slanted towards optimization -- and I call that a premature micro-optimization. | unknown | |
d991 | train | Did you get over this issue?
I've tried with bootstrap 4.0 but I didn't see any issue, so my suggestions are:
*
*check your java version, make sure it is 1.8.171+
*make sure the corda.jar (in your build /nodes/notary/corda.jar) is correct because bad network may cause the incomplete corda.jar downloaded
*make sure you've got the tools from official website instead of copying from other way where the bootstrap jar file might be broken
*last, as always, please try to use the latest version bootstrap: 4.3, to utilise the best Corda:
https://software.r3.com/artifactory/corda-releases/net/corda/corda-tools-network-bootstrapper/4.3/
A: I had a similar problem (not with that particular JAR file, but I'm using that file's name in my examples). Here's how I worked out what was wrong.
These troubleshooting steps may help others who find this issue (but not the OP as the dates show it can't be this issue).
I started by trying to validate the JAR file by inspecting its contents:
$ jar -tf corda.jar
This showed me that it was indeed invalid. In my case, I saw:
java.util.zip.ZipException: error in opening zip file
at java.util.zip.ZipFile.open(Native Method)
at java.util.zip.ZipFile.<init>(ZipFile.java:225)
at java.util.zip.ZipFile.<init>(ZipFile.java:155)
at java.util.zip.ZipFile.<init>(ZipFile.java:126)
at sun.tools.jar.Main.list(Main.java:1115)
at sun.tools.jar.Main.run(Main.java:293)
at sun.tools.jar.Main.main(Main.java:1288)
I then looked at the size of the file:
$ ls -alh corda.jar
In my case, it was 133 bytes, which seems a bit small for a JAR file, so I cated it and saw this:
$ cat corda.jar
501 HTTPS Required.
Use https://repo1.maven.org/maven2/
More information at https://links.sonatype.com/central/501-https-required
It turned out that my script (a Dockerfile in fact) was downloading the file with curl -o but from a URL which was no longer supported. As https://links.sonatype.com/central/501-https-required says:
Effective January 15, 2020, The Central Repository no longer supports insecure communication over plain HTTP and requires that all requests to the repository are encrypted over HTTPS.
If you're receiving this error, then you need to replace all URL references to Maven Central with their canonical HTTPS counterparts:
Replace http://repo1.maven.org/maven2/ with https://repo1.maven.org/maven2/
Replace http://repo.maven.apache.org/maven2/ with https://repo.maven.apache.org/maven2/
If for any reason your environment cannot support HTTPS, you have the option of using our dedicated insecure endpoint at http://insecure.repo1.maven.org/maven2/
For further context around the move to HTTPS, please see https://blog.sonatype.com/central-repository-moving-to-https.
This is fairly specific, but may help someone. | unknown | |
d992 | train | The root of your problem appears to be that your server does not support SSL or does not have it enabled. The message:
The server does not support SSL
may only be emitted by org/postgresql/core/v3/ConnectionFactoryImpl.java in enableSSL(...) when the server refuses or doesn't understand SSL requests.
Sure enough, in your update you say that you had the SSL-related options in postgresql.conf commented out. Them being commented out is the same as them being not there at all to the server; it will ignore them. This will cause the server to say it doesn't support SSL and refuse SSL connections because it doesn't know what server certificate to send. PgJDBC will report the error above when this happens.
When you un-commented the SSL options in postgresql.conf and re-started the server it started working.
You were probably confused by the fact that:
&ssl
&ssl=true
&ssl=false
all do the same thing: they enable SSL. Yes, that's kind of crazy. It's like that for historical reasons that we're stuck with, but it's clearly documented in the JDBC driver parameter reference:
ssl
Connect using SSL. The driver must have been compiled with SSL
support. This property does not need a value associated with it. The
mere presence of it specifies a SSL connection. However, for
compatibility with future versions, the value "true" is preferred. For
more information see Chapter 4, Using SSL.
As you can see, you should still write ssl=true since this may change in future.
Reading the server configuration and client configuration sections of the manual will help you with setting up the certificates and installing the certificate in your local certificate list so you don't have to disable certificate trust checking.
For anyone else with this problem: There will be more details in your PostgreSQL error logs, but my guess is your PostgreSQL config isn't right or you're using a hand-compiled PostgreSQL and you didn't compile it with SSL support.
A: If you are using a self-signed certificate you need to add it to your trusted key store of your Java installation on the client side.
You find the detailed instructions to achieve this here: telling java to accept self-signed ssl certificate
A: In your connection string, try
?sslmode=require
instead of
?ssl=true
A: Use param sslmode=disable. Work for me. Postgresql 9.5 with jdbc driver SlickPostgresDriver. | unknown | |
d993 | train | You do not need that function. Just use
count(table2.tbl2_outcome = 'VALIDATED' or null)
The comparison yields TRUE, FALSE, or NULL, and the or null turns FALSE into NULL; since COUNT() skips NULLs, only the validated rows are counted. | unknown | 
d994 | train | Getting the list of Excel sheet names in ADF is not supported yet; you can vote for the feature here.
*
*So you can use an Azure Function to get the sheet names:
import pandas
xl = pandas.ExcelFile('data.xlsx')
# see all sheet names
print(xl.sheet_names)
*Then use an Array type variable in ADF to get and traverse this array. | unknown | |
d995 | train | System Events doesn't have a "copy" command. Where did you get that? You might try "move" instead. Plus "aVolume" is not a folder, it's a disk. You probably want to change "folder aVolume" to "disk aVolume". And you might even need to use "disk (contents of aVolume)"
EDIT: Try the following script. I didn't test it but it should work. Good luck.
property ignoredVolumes : {"DD APPLE", "MobileBackups", "home", "net"} -- leave home and net in this list
set Destination_Folder to ((path to downloads folder as text) & "Test:") as alias
set mountedVolumes to list disks
repeat with i from 1 to count of mountedVolumes
set thisVolume to item i of mountedVolumes
if thisVolume is not in ignoredVolumes then
tell application "Finder"
set theItems to items of disk thisVolume
move theItems to Destination_Folder
end tell
end if
end repeat | unknown | |
d996 | train | Generally I would recommend that you make the changes immediately. If there's to be a "grace period", then implement that on the server side (you can do it client side too if it will improve user experience).
So if someone upvotes a post, it is saved immediately via ajax, but then if they change their minds within the grace period, the server undoes the vote. Once the "grace period" is up, the server rejects the change.
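A rough sketch of that server-side grace-period check (the names and period are hypothetical):
import time

GRACE_PERIOD = 300  # seconds, e.g. a 5 minute window

def try_undo_vote(vote_created_at, now=None):
    # allow the undo only while the grace period is still open
    now = time.time() if now is None else now
    if now - vote_created_at <= GRACE_PERIOD:
        return True   # undo the vote
    return False      # reject: the grace period has expired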
A Facebook post would (obviously) be saved when you click "Post", etc. -- but it wouldn't be saved before then.
Something like Blogger or Google Docs is another issue altogether -- where it's automatic saving every x number of seconds. That is purely up to the developer. Generally you want to make it as often as possible without impacting performance or decreasing the user experience. | unknown | |
d997 | train | Your algorithm logic structure smells a lot, this is what I see:
*
*read all non empty lines into lines_in_file (looks good to me)
*for EVERY line (problematic, requires additional logic in inner loop):
*
*if not "P3", try to parse [EVERY] line as integer and set effect_choice (it's not clear from your code, what happens on lines where several integers are provided, but judging from your problem description the first integer is successfully parsed by strToInt function)
*if "P3", the current line and next two are copied to output
*[EVERY] line is parsed as vector of triplets of numbers
*effect is set to a newly allocated effect for the current value of effect_choice (for EVERY line; also, you never delete the effect, so you are leaking memory once per line. Your current effects also look like they could be implemented as static functions of a common "process" signature, so you don't need to allocate them at all, just store the address of the requested function. And you call it processImage, while you are processing only a line, not the whole image.)
*effect is run for current line triplets
*the line triplets are outputted
*loop to next line (!)
So for example for input:
2
3
P3
1 2
255
50 50 50
1 2 3
I believe (can't run it, as you didn't provide lot of code) this happens:
lines are read, and per particular line this happens:
line "2": effect_choice = 2, effect = RemoveGreen, zero triplets parsed into points, RemoveGreen::processImage() run over empty vector, empty vector printed (ie nothing).
line "3": effect_choice = 3, effect = RemoveBlue, zero triplets parsed into points, RemoveBlue::processImage() run over empty vector, empty vector printed.
line "P3": Lines: {"P3", "1 2", "255"} are printed, zero triplets parsed into points, RemoveGreen::processImage() run over empty vector, empty vector printed.
line "1 2": effect_choice = 1, effect = RemoveRed, zero triplets parsed into points, RemoveRed::processImage() run over empty vector, empty vector printed.
line "255": effect_choice = 255, zero triplets parsed into points, RemoveRed::processImage() run over empty vector, empty vector printed.
line "50 50 50": effect_choice = 50, one triplet {50, 50, 50} parsed into points, RemoveRed::processImage() run over it, modified triplet outputs {0, 50, 50}.
line "1 2 3": effect_choice = 1, effect = RemoveRed, one triplet {1, 2, 3} parsed into points, RemoveRed::processImage() run over it, modified triplet outputs {0, 2, 3}.
All of this should be clearly visible in a debugger while stepping over the code, so you are probably not debugging it (which earns the question a downvote from me), and you will pay for that in tremendous pain over time, as debugging without a debugger is a lot more difficult.
Also, writing code without first thinking about the algorithm and code architecture makes the need for debugging a lot more likely, so you wasted even more time here by starting with the code.
You should have first design some algorithm and code architecture (what data are processed, how, when new memory is needed, how it will be freed, where the code need to loop, where it need to skip over, or run only once, etc).
Write only overview of how it will work into single-line comments, then split too generic comments into simpler steps until they can be implemented by few lines of C++ code, and move/modify them around until you feel the wanted algorithm will be implemented with minimal "cruft" added (most of the comments does, what is really requested, like "set red in point to zero", and any processing/preparations/moving/etc is minimized only to cases where you can't avoid it by smarter design). (for example in your current code you can read through the header of the file without looping, and start looping only after the pixel data pours in)
Then write the code, start probably with some empty function definition so you can already "run" it in debugger and verify the emptiness works, then implement the comment (or small group of them) which you feel is clear enough to be implemented and can be tested easily (no big dependency on yet-to-implement parts). Debug + test new code. If it works, try to clean up the source to remove anything not really needed, work-in-progress variable names, etc... Then verify it works in final version.
And do it again for another comment (group of), until the implementation is done.
Using unit testing makes the write-short-code, test+debug, clean-up-source rounds even easier, especially in cases like this, where the I/O is pure data, so it's easy to feed specialized test input into a test and verify the expected output is produced.
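To make the earlier header remark concrete, here is a rough sketch of the read-header-once shape (the file name and the effect are placeholders, not your actual classes):
#include <fstream>
#include <iostream>
#include <string>
int main() {
    std::ifstream in("image.ppm");  // placeholder input file
    // read the header exactly once, outside any loop
    std::string magic;              // expected: "P3"
    int width, height, maxval;
    in >> magic >> width >> height >> maxval;
    std::cout << magic << '\n' << width << ' ' << height << '\n' << maxval << '\n';
    // only now loop, one pixel triplet at a time
    int r, g, b;
    while (in >> r >> g >> b) {
        r = 0;                      // e.g. a "remove red" effect
        std::cout << r << ' ' << g << ' ' << b << '\n';
    }
} | unknown | 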
d998 | train | You can use the rack-mini-profiler gem to monitor response times. It will display results in the top-left corner. And by default Rails already does what you want; you can check the response time at the bottom of every request in the log:
Completed 200 OK in 2203ms (Views: 95.3ms | ActiveRecord: 71.5ms)
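Setting it up is just a Gemfile entry followed by bundle install:
# Gemfile
gem 'rack-mini-profiler'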
I strongly recommend using New Relic for monitoring your application, because it handles things very smoothly at both the system level and the application level. It will monitor your database too. | unknown | 
d999 | train | I took a look at the repository. You are correct that svndumpfilter cannot be used to rename a file throughout the history, so I wrote a small script that does the renaming in the dump file. The only tricky part was to add the creation of the tags and branches folder. To use the script, you should make a cronjob or similar that:
*
*downloads the latest Putty SVN dump file:
$ wget http://www.chiark.greenend.org.uk/~sgtatham/putty/putty-svn.dump.gz
*fixes the dump file with the script:
$ zcat putty-svn.dump.gz | fix-dump.py > fixed.dump
*loads it into a new empty repository:
$ svnadmin create putty
$ svnadmin load putty < fixed.dump
*converts the Subversion repository into a Mercurial repository:
$ hg convert file://$PWD/putty
As far as I can see, the branches and tags are created correctly.
You ask for continuous pulling (incremental conversion). Luckily, both hg convert and hgsubversion support this. You'll need to redo steps 1-3 every day before you can convert the changesets into Mercurial. This will work since the first three steps are deterministic. That means that your putty SVN repository behaves as if the Putty developers worked directly in it using the proper branch and tag names you maintain there.
The script is below:
#!/usr/bin/python
import sys, re
moves = [(r"^Node(-copyfrom|)?-path: %s" % pattern, r"Node\1-path: %s" % repl)
         for (pattern, repl) in [(r"putty-branch-(0\...)", r"branches/\2"),
                                 (r"putty-(0\...)", r"tags/\2"),
                                 (r"putty(/|\n)", r"trunk\2")]]
empty_dir_template = """\
Node-path: %s
Node-kind: dir
Node-action: add
Prop-content-length: 10
Content-length: 10

PROPS-END\n\n"""
created_dirs = False
for line in sys.stdin:
    if not created_dirs and line == "Node-path: putty\n":
        sys.stdout.write(empty_dir_template % "tags")
        sys.stdout.write(empty_dir_template % "branches")
        created_dirs = True
    for pattern, repl in moves:
        line, count = re.subn(pattern, repl, line, 1)
        if count > 0: break
    sys.stdout.write(line)
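Putting the daily steps together, a sync script for cron might look like this (the paths are placeholders):
#!/bin/sh
# daily-putty-sync.sh -- refresh the SVN mirror and convert incrementally
set -e
cd /srv/putty-mirror   # placeholder working directory
wget -q -O putty-svn.dump.gz http://www.chiark.greenend.org.uk/~sgtatham/putty/putty-svn.dump.gz
zcat putty-svn.dump.gz | python fix-dump.py > fixed.dump
rm -rf putty && svnadmin create putty
svnadmin load -q putty < fixed.dump
hg convert file://$PWD/putty putty-hg   # incremental: reuses putty-hg if it exists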
A: I have decided to keep track of ONLY the released source code, not every revision.
So the result is here: https://bitbucket.org/daybreaker/iputty/changesets .
To do this, I have followed these steps (for example):
svn ls -R svn://svn.tartarus.org/sgt/putty-0.58 > 58.txt
svn ls -R svn://svn.tartarus.org/sgt/putty-0.59 > 59.txt
svn ls -R svn://svn.tartarus.org/sgt/putty-0.60 > 60.txt
svn ls -R svn://svn.tartarus.org/sgt/putty-0.61 > 61.txt
svn ls -R svn://svn.tartarus.org/sgt/putty-0.62 > 62.txt
hg init iputty
cd iputty
svn export --force svn://svn.tartarus.org/sgt/putty-0.58 .
hg branch original
hg add
hg commit -m 'Imported PuTTY 0.58 release.'
svn export --force svn://svn.tartarus.org/sgt/putty-0.59 .
diff -U3 ../58.txt ../59.txt
hg add (added files from diff)
hg rm (removed files from diff)
hg commit -m 'Imported PuTTY 0.59 release.'
(repeat this for the remaining releases)
hg up -r(rev# of 0.60 release)
svn export --force (URL of my own modified PuTTY repository) .
hg branch default
hg commit -m 'Imported the most recent dPuTTY source code. blah blah' | unknown | |
d1000 | train | At least on Debian, O_DIRECTORY and O_CLOEXEC are defined only if _GNU_SOURCE is defined.
Although _GNU_SOURCE is set for certain modules in the current vsftpd release, it is not set generally.
As a workaround you might use the following patch:
diff -Naur vsftpd-3.0.0.orig/seccompsandbox.c vsftpd-3.0.0/seccompsandbox.c
--- vsftpd-3.0.0.orig/seccompsandbox.c 2012-04-05 00:41:51.000000000 +0200
+++ vsftpd-3.0.0/seccompsandbox.c 2012-06-30 15:25:52.000000000 +0200
@@ -11,7 +11,7 @@
#include "seccompsandbox.h"
#if defined(__linux__) && defined(__x86_64__)
-
+#define _GNU_SOURCE
#include "session.h"
#include "sysutil.h"
#include "tunables.h"
Disclaimer: applying this patch makes the current vsftpd release compile; I have no clue whether the created binaries work correctly or not.
A: I'm using SLES 11 sp1 64bit, kernel 2.6.32, gcc ver 4.3.4; changing or removing FORTIFY_SOURCE made no difference, I get the same error. I'm not a C programmer; the flags O_DIRECTORY and O_CLOEXEC are in seccompsandbox.c:
static const int kOpenFlags =
O_CREAT|O_EXCL|O_APPEND|O_NONBLOCK|O_DIRECTORY|O_CLOEXEC|O_LARGEFILE;
It compiles if you remove them but that really fills me with confidence....
The vsf_findlibs.sh script is also broken; I had to rejig it so it finds a 64-bit version of libcap first, or it keeps selecting the 32-bit copy (the -lcap doesn't work either, says it isn't found):
# Look for libcap (capabilities)
if locate_library /lib64/libcap.so; then
echo "/lib64/libcap.so.2";
elif locate_library /lib/libcap.so.1; then
echo "/lib/libcap.so.1";
elif locate_library /lib/libcap.so.2; then
echo "/lib/libcap.so.2";
else
locate_library /usr/lib/libcap.so && echo "-lcap";
locate_library /lib/libcap.so && echo "-lcap";
locate_library /lib64/libcap.so && echo "-lcap";
fi | unknown |