_id | partition | text | language | title
---|---|---|---|---
d201 | train | One way
SELECT (SELECT TOP 1 [a]
FROM @T
WHERE [a] IS NOT NULL
ORDER BY [sort]) AS [a],
(SELECT TOP 1 [b]
FROM @T
WHERE [b] IS NOT NULL
ORDER BY [sort]) AS [b],
(SELECT TOP 1 [c]
FROM @T
WHERE [c] IS NOT NULL
ORDER BY [sort]) AS [c]
Or another
;WITH R
AS (SELECT [a],
[b],
[c],
[sort]
FROM @T
WHERE [sort] = 0
UNION ALL
SELECT Isnull(R.[a], T.[a]),
Isnull(R.[b], T.[b]),
Isnull(R.[c], T.[c]),
T.[sort]
FROM @T T
JOIN R
ON T.sort = R.sort + 1
AND ( R.[a] IS NULL
OR R.[b] IS NULL
OR R.[c] IS NULL ))
SELECT TOP 1 [a],
[b],
[c]
FROM R
ORDER BY [sort] DESC | unknown | |
d202 | train | A LinkedHashMap would be a nice starting point, as it maintains the order of insertion. The value needs to have a date, so maybe wrap the original value in some class with a date, or require an interface.
static class Dated<V> {
public final LocalDate date = LocalDate.now();
public final V value;
public Dated(V value) {
this.value = value;
}
}
Map<K, Dated<V>> map = new LinkedHashMap<>();
void insert(K key, V value) {
Dated<V> datedValue = new Dated<>(value);
LocalDate earliest = datedValue.date.minusDays(14);
Iterator<Map.Entry<K, Dated<V>>> it = map.entrySet().iterator();
while (it.hasNext() && it.next().getValue().date.isBefore(earliest)) {
it.remove();
}
map.remove(key); // So at the end of the linked list.
map.put(key, datedValue);
}
The remove ensures that the latest addition is added at the end of the linked list, even if the key already existed.
Hence the iteration starts with the oldest elements, and deletes those.
Note: The question asked for a removeEldestOnes method, based on the current time. That means that even when nothing is inserted, one could still remove old entries more than 14 days in the past.
My code could be used for that too, but doing map.remove first on insert is essential, hence the integrated insert.
Making a custom collection class I leave to the OP. | unknown | |
d203 | train | Try this:
SELECT
T1.Id,
T2.Relatives
FROM SecondTable T2
LEFT JOIN FirstTable T1
ON T1.ID = T2.ID
GROUP BY T1.Id,
T2.Relatives
This is what I get exactly:
CREATE TABLE #a (
id int,
name varchar(10)
)
CREATE TABLE #b (
id int,
name varchar(10)
)
INSERT INTO #a
VALUES (1, 'sam')
INSERT INTO #a
VALUES (1, 'Dan')
INSERT INTO #b
VALUES (1, 'Uncle')
INSERT INTO #b
VALUES (2, 'Aunty')
SELECT
T1.Id,
T2.name
FROM #b T2
LEFT JOIN #a T1
ON T1.ID = T2.ID
GROUP BY T1.Id,
T2.name
DROP TABLE #a
DROP TABLE #b
Output:
Id name
NULL Aunty
1 Uncle
Hope this is what you were asking in your question.
A: As your question is not clear, I am assuming that you need to retrieve id from table a and name from table b, and that you also want to avoid duplicate rows. In that case, an option could be to use distinct along with a left join:
select distinct a.id, b.name
from b
left outer join a
on b.id = a.id
order by id desc
Result:
+------+-------+
| id | name |
+------+-------+
| 1 | Uncle |
| NULL | Aunty |
+------+-------+
DEMO | unknown | |
d204 | train | Heaps only "grow" in a direction in very naive implementations. As Paul R. mentions, the direction a stack grows is defined by the hardware - on Intel CPUs, it always grows toward smaller addresses (i.e. "down").
A: I have read the works of Miro Samek and various other embedded gurus, and it seems that they are not in favor of dynamic allocation on embedded systems. That is probably due to complexity and the potential for memory leaks. If you have a project that absolutely can't fail, you will probably want to avoid using malloc, thus the heap will be small. Other non-mission-critical systems could be just the opposite. I don't think there would be a standard approach.
A: Maybe it just depends on the processor: whether it supports the stack growing upward or downward? | unknown | |
d205 | train | The error on the page says $.mobile is undefined. Include the proper URL to where $.mobile is defined and try again.
A: This line doesn't work:
$.mobile.allowCrossDomainPages = false;
If you take it out, your JavaScript will work. Just so you know, I'm getting "could not connect to service" here.
Next time insert some logs or alerts in your code to debug. I just put one before and one after the line that was not working to see if the ajax request was being sent and saw that this line was the problem.
(In Chrome, Ctrl+Shift+C opens the debug window; open the console and you can see JS logs (console.log). A lot better than alert for debugging.)
PS: For cross-domain ajax calls use JSONP, as Ehsan Sajjad commented:
*
*jQuery AJAX cross domain
*Make cross-domain ajax JSONP request with jQuery
PS2: I never used this, but it might be useful: Cross-origin Ajax | unknown | |
d206 | train | Solved this problem by using spyOn
spyOn works similarly to httpMock (but for spying on function calls); here's an example:
it('form submit fail', () => {
email.value = 'test@test.test';
email.dispatchEvent(new Event('input'));
password.value = '123456';
password.dispatchEvent(new Event('input'));
spyOn(service, 'login').and
.returnValue(
Observable.throw(
new HttpErrorResponse({
error: {
message: 'Here some message...',
localizedKey: 'someKey'
},
status: 500
})
)
);
button.click();
expect(component.hasError).toBeTruthy();
expect(component.lockForm).toBeFalsy();
expect(component.error).toEqual('someKey');
});
In this example, spyOn emulates the response from AuthService.login() (when it is called) and returns the expected response for unit testing.
For a success response, use Observable.of() and new HttpResponse().
A: Manually call the fixture.detectChanges()
it('form submit fail', () => {
fixture.detectChanges();
expect(element.querySelector('#login-email')).toBeDefined();
expect(element.querySelector('#login-password')).toBeDefined();
updateForm('test@gmail.com', '123456');
component.login(component.loginForm);
httpMock
.expectOne(`${environment.apiProtocol}://${environment.apiHost}/auth`)
.error( new ErrorEvent( 'SOME_ERROR', {error: 400}), {status: 400, statusText: ''});
httpMock.verify();
expect(component.hasError).toBeTruthy();
expect(component.error).toEqual('unregisteredPair');
}); | unknown | |
d207 | train | Okay, I found the answer. Had to use some trigonometry.
h = tan(fov/2)*dist
dist is the distance from the camera to the object. h is half of the screen space in the y axis.
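As a quick numeric sketch of that formula (plain Python; the fov and dist values are just made-up examples):
import math
fov = math.radians(60)                  # vertical field of view
dist = 10.0                             # distance from the camera to the object
half_height = math.tan(fov / 2) * dist  # half of the visible extent along y at that distance
print(half_height)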
To get the x axis, multiply by (screenwidth/screenheight). | unknown | |
d208 | train | Your teacher is probably compiling your code as C++.
With dummy functions added, if this code is placed in a .c file and compiled with MSVC 2015, it compiles fine. If the file has a .cpp extension, it generates the following errors:
x1.cpp(13): error C3260: ')': skipping unexpected token(s) before lambda body
x1.cpp(13): error C2143: syntax error: missing ';' before '}'
x1.cpp(13): warning C4550: expression evaluates to a function which is missing an argument list
x1.cpp(13): error C2676: binary '[': 'main::<lambda_8caaf9f5b122025ad6cda2ca258a66a7>' does not define this operator or a conversion to a type acceptable to the predefined operator
x1.cpp(13): error C2143: syntax error: missing ')' before ';'
x1.cpp(13): error C2059: syntax error: ';'
So it thinks your compound literal is a lambda function. Compound literals are a C-only construct, so unless your C++ compiler supports them as an extension it won't work.
Have your teacher compile your code as C by putting it in a file with a .c extension.
Alternatively, don't use a compound literal. Create the array first, then index it and call the function.
void (*f[])() = {test1, test2, test3, test4};
f[n - 1](); | unknown | |
d209 | train | The above answer is correct and here's the exact copy-paste code in case you're struggling:
Accounts.setPassword(userId, password, {logout: false});
Note: make sure you are doing this call server side.
A: Accounts.setPassword(userId, password, options)
This method now supports an options parameter that includes an options.logout option, which can be used to prevent logging out the current user.
A: You could use Accounts.changePassword (docs) to change the password instead; this will not affect the user's existing tokens (as seen in https://github.com/meteor/meteor/blob/devel/packages/accounts-password/password_server.js#L299-L302)
If you want to do this from the server without knowing the existing password you would have to fork the accounts-password package and remove this line: https://github.com/meteor/meteor/blob/devel/packages/accounts-password/password_server.js#L338 and add this package into the /packages directory of your app
If you want to downgrade your package (so long as the version of Meteor you're using supports it):
meteor remove accounts-password
meteor add accounts-password@1.0.3 | unknown | |
d210 | train | The id attribute mustn't be a number; you can read more here: What are valid values for the id attribute in HTML?
Surely this doesn't answer your question, but it can prevent other problems in the future.
A: You have to move the .on call out of the get_rows function. Because otherwise every time you call get_rows it adds a new listener.
function get_rows(){
console.log("inside get_rows!");
$("#rows").empty();
// ajax call to return all rows
for (i=0; i < 5;i++){
var item = '<tr><td id="' + i + '" class="del_row">delete row ' + i + ' by clicking me!</td></tr>';
$("#rows").append(item);
}
}
$(document).on("click", ".del_row", function(){
id = $(this).prop('id');
console.log("delete row " + id);
// ajax call to delete on server
get_rows();
});
http://jsfiddle.net/Wng5a/6/ | unknown | |
d211 | train | In order to implement a healthcheck in Node.js, use the following:
Use express-healthcheck as a dependency in your Node.js project.
In your app.js or equivalent code, use the following line:
app.use('/healthcheck', require('express-healthcheck')());
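For context, a minimal app.js sketch using that line (the port number is just an example):
const express = require('express');
const app = express();
// expose the healthcheck endpoint provided by express-healthcheck
app.use('/healthcheck', require('express-healthcheck')());
app.listen(3000);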
If your app is up, your response will look like:
{
"uptime":23.09
}
It also returns a status code of 200.
Hope this helps | unknown | |
d212 | train | You need to nest the job relationship (you can also just use eq with user):
def minutes = c.get {
and{
eq("user", user)
job{
between("happening", firstDate, lastDate)
}
}
projections {
sum("minutesWorked")
}
}
cheers
Lee | unknown | |
d213 | train | The Fedlet is pretty bare bones and was designed by Sun (now Oracle) to work with OpenSSO as the IDP. While it is probably compliant to some degree, I would imagine that it may not be a full implementation of SAML 2.0 SP-Lite but a sub-set of that.
I'd check out PingFederate from PingIdentity if you are looking for a more robust option. We have dozens of SPs who are integrating with CA SM FSS as the IDP (and vice versa) using SAML 1.x and 2.0. It has a very light footprint, can support a multitude of development languages/platforms and can be setup and in Production extremely quickly.
HTH - IanB
A: If you already have a SiteMinder installation set up, then SMFSS is the fastest, easiest and most robust solution, IMHO (but then, I support it). I am able to get new customers up and running in less than a day for a SAML 2.0 POC when they already have a working SiteMinder architecture in place, and there are no known issues with OpenSSO. If you have a particular issue, you should provide a Fiddler trace with HTTPS decryption enabled, plus logs, so we can assist. Also, the R12 SP3 or SM6 SMFSS docs have a chapter on what settings need to match, and the chapters on setting up the IDP and SP for SAML 2.0 are step by step, as long as you also have the matching-values chapter (the second to last; its number changes depending on the version of the docs).
You can also do Authorization on the SP side using the Attribute Authority we provide, if your SP implements the Attribute Query SAML specification. In other words, if there were no attribute authority, you would need to store attributes on the SP side for later use. With that being said, if you used an SMFSS (SiteMinder Federation Security Services) SP, you could use the Session Store on the SP side and store the assertion attributes there at authentication time. Let me know if you have any more questions on this. The thing I like about SMFSS is that you really get a good idea of what you're doing and can become quite proficient, whereas a lot of other products seem to use the metadata to add things into their UIs, which IMHO results in people not really understanding the federation that they are setting up and administering.
I am wondering if IanB is my old co-worker Ian Barnett of Ping? If so hello!!!
Crissy Krueger Stone
SiteMinder Support est. 5/1/2000 | unknown | |
d214 | train | Take a look at https://github.com/kitconcept/robotframework-djangolibrary which seems to handle exactly this.
Or, even better, is it possible to make the test suites run under the Django LiveServerTestCase?
This is a much more interesting approach as we could then mix robot tests with other tests. I'll post here if I figure out how to do it.
A: Robot has a library named Process which is specifically designed for starting and stopping processes. You can use the Start Process and Terminate Process keywords to start and stop the webserver via a suite setup and suite teardown. It would look something like this:
*** Settings ***
| Library | Process
| Suite Setup | Start the webserver
| Suite Teardown | Stop the webserver
*** Keywords ***
| Start the webserver
| | ${django process}= | Start process | python | manage.py
| | Set suite variable | ${django process}
| Stop the webserver
| | Terminate Process | ${django process}
Of course, you'll want to add some bullet proofing such as making sure the process actually starts, and possibly catching errors if it doesn't exit cleanly. You will also probably need to give an explicit path to manage.py, but hopefully this gives the general idea. | unknown | |
d215 | train | One of many ways to achieve your objective:
*
*Make an array of checkboxes on your current form that are checked.
*Go through the array to build the folder name based on the Text.
*Delete the entire folder, then replace it with an empty one.
You may want to familiarize yourself with System.Linq extension methods like Where and Any if you haven't already.
The [Clear] button should only be enabled if something is checked.
Making an array of the checkboxes will be handy. It can be used every time you Clear. At the same time, the [Clear] button shouldn't be enabled unless one or more of the checkboxes are marked.
public partial class MainForm : Form
{
public MainForm()
{
InitializeComponent();
// Make an array of the checkboxes to use throughout the app.
_checkboxes = Controls.OfType<CheckBox>().ToArray();
// This provides a way to make sure the Clear button is
// only enabled when one or more checkboxes is marked.
foreach (CheckBox checkBox in _checkboxes)
{
checkBox.CheckedChanged += onAnyCheckboxChanged;
}
// Attach the 'Click' handler to the button if you
// haven't already done this in the form Designer.
buttonClear.Enabled = false;
buttonClear.Click += onClickClear;
}
const string basePath = @"D:\java\";
CheckBox[] _checkboxes;
.
.
.
}
Set the Clear button Enabled (or not)
Here we respond to changes in the checkbox state.
private void onAnyCheckboxChanged(object sender, EventArgs e)
{
buttonClear.Enabled = _checkboxes.Any(_=>_.Checked);
}
Exec Clear
Build a subfolder path using the Text of the checkboxes. If the checkbox is selected, delete the entire folder, replacing it with a new, empty one.
private void onClickClear(object sender, EventArgs e)
{
// Get the checkboxes that are selected.
CheckBox[] selectedCheckBoxes =
_checkboxes.Where(_ => _.Checked).ToArray();
foreach (CheckBox checkBox in selectedCheckBoxes)
{
// Build the folder path
string folderPath = Path.Combine(basePath, checkBox.Text);
// Can't delete if it doesn't exist.
if (Directory.Exists(folderPath))
{
// Delete the directory and all its files and subfolders.
Directory.Delete(path: folderPath, recursive: true);
}
// Replace deleted folder with new, empty one.
Directory.CreateDirectory(path: folderPath);
}
}
A: I understand, you have this structure
D
|-java
|-Document
|-Person
|-Picture
And you said "delete the contents of the folder". So, I assume you need to keep folders
In this case
public void EmptyFolder(string root, IEnumerable<string> subfolders)
{
foreach(string folder in subfolders)
{
string dirPath = Path.Combine(root, folder);
foreach (string subdir in Directory.EnumerateDirectories(dirPath))
Directory.Delete(subdir, true);
foreach (string file in Directory.EnumerateFiles(dirPath))
File.Delete(file);
}
}
// (assuming check box text is the name of folder. Or you can use tag property to set real folder name there)
private IEnumerable<string> Getfolders()
{
foreach(Control c in this.Controls) // "this" being a form or control, or use specificControl.Controls
{
if (c is CheckBox check && check.Checked)
yield return check.Text;
}
}
// USAGE
EmptyFolder(@"D:\java\", Getfolders());
NOTE: written from memory and not tested | unknown | |
d216 | train | Since no one has answered, I will. Yes, WebAPI2 does not wrap the call in a transaction. That would be silly, if you think about it. Also the code
using (var db = new MyContext()) {
// do stuff
}
does not implicitly create a transaction. Therefore, when you implement a RESTful PUT method to update your database, you have three options: (1) call db.SaveChanges() one time only and hope for the best, as in the OP's code, or (2) add a rowVersion column and call db.SaveChanges() with try-catch in a loop, or (3) create an explicit transaction.
In my opinion, option 1 is evil, and option 2 is a terrible hack that was invented because transactions did not exist prior to EF6.
The correct way to implement Update:
[HttpPut, Route("ResponseSetStatus/{id:int}")]
public IHttpActionResult UpdateResponseSetStatus(int id, [FromUri] string status = null)
{
using (var db = new MyContext(MyContext.EntityContextString))
{
using (var tran = db.Database.BeginTransaction())
{
var responseSet = db.ResponseSets.FirstOrDefault(x => x.ResponseSetId == id);
if (responseSet == null)
{
return NotFound();
}
// ADD ONE SECOND DELAY HERE FOR TESTING
Thread.Sleep(1000);
responseSet.Status = status;
tran.Commit();
}
}
return Ok();
}
Please note that try-catch is not necessary. If anything fails, the using tran will automatically rollback, and the WebAPI2 will send a nice 500 response to the client.
P.S. I put the db = new MyContext also in a using, because it's the right way to do it. | unknown | |
d217 | train | You could search for the same id in the result set and replace it if the former type is undefined.
var array = [{ id: 1, type: 1 }, { id: 2, type: undefined }, { id: 3, type: undefined }, { id: 3, type: 0 }, { id: 4, type: 0 }],
result = array.reduce((r, o) => {
var index = r.findIndex(q => q.id === o.id)
if (index === -1) r.push(o);
else if (r[index].type === undefined) r[index] = o;
return r;
}, []);
console.log(result);
.as-console-wrapper { max-height: 100% !important; top: 0; }
A: Try this
const testArray = [
{id: 1, type: 1},
{id: 2, type: undefined},
{id: 3, type: undefined},
{id: 3, type: 0},
{id: 4, type: 0}
];
let newArray = [];
testArray.forEach(item => {
const newArrayIndex = newArray.findIndex(newItemArray => newItemArray.id === item.id);
if (newArrayIndex < 0) return newArray.push(item);
if (item.type === undefined) return
newArray[newArrayIndex].type = item.type;
});
console.log(newArray) | unknown | |
d218 | train | I think you may be making a mistake when creating the strings.
EditText sil_key = (EditText)findViewById(R.id.silent_key);
String silent_mode_key = sil_key.toString();
I think you meant to make it a string from the content of the edit text
like this
String silent_mode_key = sil_key.getText().toString();
try this
EditText sil_key = (EditText)findViewById(R.id.silent_key);
String silent_mode_key = sil_key.getText().toString();
EditText gen_key = (EditText)findViewById(R.id.general_key);
String general_mode_key = gen_key.getText().toString();
EditText vib_key = (EditText)findViewById(R.id.vibrate_key);
String vibrate_mode_key = vib_key.getText().toString();
SharedPreferences.Editor editor = sharedPreferences.edit();
if((sharedPreferences.contains("silent")) && (sharedPreferences.contains("general")) && (sharedPreferences.contains("vibrate")))
{
sil_key.setText(sharedPreferences.getString("silent","silent"));
gen_key.setText(sharedPreferences.getString("general","general"));
vib_key.setText(sharedPreferences.getString("vibrate","vibrate"));
}
editor.putString("silent",silent_mode_key);
editor.putString("general",general_mode_key);
editor.putString("vibrate",vibrate_mode_key);
editor.apply();
A: try to add getText() like below for all the strings:
EditText sil_key = (EditText)findViewById(R.id.silent_key);
String silent_mode_key = sil_key.getText().toString();
A: I think that you have to add this before sharedPreferences.getString
sharedPreferences = PreferenceManager.getDefaultSharedPreferences(this); | unknown | |
d219 | train | You need to create a data frame representing an edge list in the format of:
*
*column1 = node where the edge is coming from
*column2 = node where the edge is going to
*column... = attributes you want to store in the edge
Then you will need to input that df into graph_from_data_frame
Two of the attributes you can store in the edge list are color and width which will automatically be plotted in igraph's base plot function.
edge_list <- mymodCoef %>%
mutate(source = term,
target = 'mpg',
color = sapply(statistic, function(x){ifelse(x<0, 'red', 'green')}),
width = abs(statistic)/(max(abs(statistic))) * 10) %>%
filter(p.value <= .05) %>%
select(source, target, color, statistic, width)
g <- graph_from_data_frame(edge_list, directed = F)
plot(g)
If you wanted to explicitly plot the color and width, then
plot(g, edge.width = E(g)$width, edge.color = E(g)$color)
Sometimes you'll need to play around with scales - for instance, the differences in statistic scores are at most 2, so the lines will look identical if you use the raw statistic score as the line width. If you want to get scaling for free, then you can use ggraph:
library(ggraph)
ggraph(g) +
geom_edge_link(aes(edge_colour = color,
edge_width = abs(statistic))) +
geom_node_text(aes(label = name)) +
scale_edge_color_manual(values = c('green' = 'green', 'red' = 'red'))
If you want to learn more about plotting with igraph, then some of the best tutorials for plotting in igraph can be found in Katherine Ognyanova's website: http://kateto.net/netscix2016 | unknown | |
d220 | train | Changes made in one database connection are invisible to all other database connections prior to commit.
So it seems a hybrid approach of having several connections open to the database provides adequate concurrency guarantees, trading off the expense of opening a new connection with the benefit of allowing multi-threaded write transactions.
A query sees all changes that are completed on the same database connection prior to the start of the query, regardless of whether or not those changes have been committed.
If changes occur on the same database connection after a query starts running but before the query completes, then it is undefined whether or not the query will see those changes.
If changes occur on the same database connection after a query starts running but before the query completes, then the query might return a changed row more than once, or it might return a row that was previously deleted.
For the purposes of the previous four items, two database connections that use the same shared cache and which enable PRAGMA read_uncommitted are considered to be the same database connection, not separate database connections.
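As a quick illustration of the first point above (changes are invisible to other connections until commit), here is a minimal Python sketch, assuming a fresh throwaway test.db:
import sqlite3
a = sqlite3.connect("test.db")
b = sqlite3.connect("test.db")
a.execute("CREATE TABLE IF NOT EXISTS t (x INTEGER)")
a.commit()
a.execute("INSERT INTO t VALUES (1)")                     # pending change on connection a
print(b.execute("SELECT COUNT(*) FROM t").fetchone()[0])  # 0 -> not yet visible to b
a.commit()
print(b.execute("SELECT COUNT(*) FROM t").fetchone()[0])  # 1 -> visible after commit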
Here is the SQLite information on isolation, which is exceptionally useful to read and understand for this problem. | unknown | |
d221 | train | You can use the :not negation pseudo-class. Note that when combining pseudo-classes, you must put the second pseudo-class inside of brackets as :not(:first-of-type):
p:not(:first-of-type) {
background: red;
}
<p>The first paragraph.</p>
<p>The second paragraph.</p>
<p>The third paragraph.</p>
<p>The fourth paragraph.</p>
Note that if you're specifically looking to select every element other than the first child of an element, you can use :not(:first-child). Note that the selector goes on the child element in this case though, not the parent:
.parent p:not(:first-child) {
background: red;
}
<div class="parent">
<p>The first paragraph.</p>
<p>The second paragraph.</p>
<p>The third paragraph.</p>
<p>The fourth paragraph.</p>
</div>
A: Very simply:
p+p {
background: red;
}
<p>The first paragraph.</p>
<p>The second paragraph.</p>
<p>The third paragraph.</p>
<p>The fourth paragraph.</p>
The next-sibling combinator (+) targets an element that is immediately preceded by another element.
So in this case, only p elements following another p are selected. This excludes the first p.
You may also be interested in the subsequent-sibling combinator (~), which is similar to the above, except the first element does not need to immediately precede the second.
A:
.text p:not(:first-child) {
background: green;
}
<div class="text">
<p>The first paragraph.</p>
<p>The second paragraph.</p>
<p>The third paragraph.</p>
<p>The fourth paragraph.</p>
</div> | unknown | |
d222 | train | For portability, the appropriate way is to install the embedded database in the project directory and then specify the relative path.
In general, you have to extract the content and specify that path, relative to the current directory, as the database URL. Below are some examples.
*
*H2 Database - jdbc:h2:file:relative-database-path
*Apache Derby - by including the required jars in the classpath and configuring the environment variable accordingly.
*HSQLDB - jdbc:hsqldb:file:relative-database-path | unknown | |
d223 | train | Try this method for a custom Toast:
public static void Toast(String textmessage) {
LinearLayout layout = new LinearLayout(getContext());
layout.setBackgroundResource(R.drawable.shape_toast);
layout.setPadding(30, 30, 30, 30);
TextView tv = new TextView(getContext());
tv.setTextColor(Color.WHITE);
tv.setTextSize(12);
tv.setTypeface(Typeface.createFromAsset(getContext().getAssets(), "fonts/font.ttf"));
tv.setGravity(Gravity.CENTER);
tv.setText(textmessage);
layout.addView(tv);
Toast toast = new Toast(getContext());
toast.setView(layout);
toast.setGravity(Gravity.BOTTOM, 0, 240);
toast.show();
}
You can try this method for a Toast with a longer duration:
public class ToastExpander {
public static final String TAG = "ToastExpander";
public static void showFor(final Toast aToast, final long durationInMilliseconds) {
aToast.setDuration(Toast.LENGTH_SHORT);
Thread t = new Thread() {
long timeElapsed = 0l;
public void run() {
try {
while (timeElapsed <= durationInMilliseconds) {
long start = System.currentTimeMillis();
aToast.show();
sleep(1750);
timeElapsed += System.currentTimeMillis() - start;
}
} catch (InterruptedException e) {
Log.e(TAG, e.toString());
}
}
};
t.start();
}
}
And to show the toast, use this:
Toast aToast = Toast.makeText(this, "Hello World", Toast.LENGTH_SHORT);
ToastExpander.showFor(aToast, 5000); | unknown | |
d224 | train | 1) As aka.nice already pointed out, it is not a good idea to fetch and remember the initial lastIndex. This will probably make things worse and lead to more trouble.
2) OrderedCollection as provided is not really prepared and does not like the receiver being modified while iterating over it.
3) A better solution would be to collect the elements to remove first, and then after the do:-processing remove them in a second step. However, I understand, that you cannot do this.
Possible solutions for you:
a) create a subclass of OrderedCollection, with redefined do:- and redefined removeXXX- and addXXX- methods. The latter ones need to tell the iterator (i.e. the do-method) about what is going on.
(being careful if the index being removed/added is before the current do-index...).
The notification could be implemented via a proceedable signal/exception, which is signalled in the modifying methods and caught in the do-loop code.
b) create a wrapper class as subclass of Seq.Collection, which has the original collection as instvar and forwards selected messages to its (wrapped) original collection.
Similar to above, redefine do: and the remove/add methods in this wrapper and do the appropriate actions (.e. again signalling what changed).
Be careful where to keep the state, if the code needs to be reentrant (i.e. if another one does a loop on the wrapped collection); then you would have to keep the state in the do-method and use signals to communicate the changes.
Then enumerate the collection with something like:
(SaveLoopWrapper on:myCollection) do:[: ...
].
and make sure that the code which does the remove also sees the wrapper-instance; not myCollection, so that the add/remove are really caught.
If you cannot do the latter, there is another hack that comes to mind: using MethodWrappers, you can change an individual instance's behavior and introduce hooks.
For example, create a subclass of OrderedCollection, with those hooks in, you could:
myColl changeClassTo: TheSubclassWithHooks
before iterating.
Then (protected by an ensure:) undo the wrapping after the loop. | unknown | |
d225 | train | :~A() {}
class B : public A
{
public:
virtual
~B()
{
}
std::string B_str;
};
class BB : public A
{
public:
virtual
~BB()
{
}
std::string BB_str;
};
class C : public A
{
protected:
virtual
~C()
{
}
virtual
void Print() const = 0;
};
class D : public B, public BB, public C
{
public:
virtual
~D()
{
}
};
class E : public C
{
public:
void Print() const
{
std::cout << "E" << std::endl;
}
};
class F : public E, public D
{
public:
void Print_Different() const
{
std::cout << "Different to E" << std::endl;
}
};
int main()
{
F f_inst;
return 0;
}
Compiling with g++ --std=c++11 main.cpp produces the error:
error: cannot declare variable ‘f_inst’ to be of abstract type ‘F’
F f_inst;
note: because the following virtual functions are pure within ‘F’:
class F : public E, public D
^
note: virtual void C::Print() const
void Print() const = 0;
^
So the compiler thinks that Print() is pure virtual.
But, I have specified what Print() should be in class E.
So, I've misunderstood some of the rules of inheritance.
What is my misunderstanding, and how can I correct this problem?
Note: It will compile if I remove the inheritance : public D from class F.
A: Currently your F is derived from C in two different ways. This means that an F object has two separate C bases, and so there are two instances of C::Print().
You only override the one coming via E currently.
To solve this you must take one of the following options:
*
*Also override the one coming via D, either by implementing D::Print() or F::Print()
*Make Print non-pure
*Use virtual inheritance so that there is only a single C base.
For the latter option, the syntax adjustments would be:
class E : virtual public C
and
class D : public B, public BB, virtual public C
This means that D and E will both have the same C instance as their parent, and so the override E::Print() overrides the function for all classes 'downstream' of that C.
For more information, look up "diamond inheritance problem". See also Multiple inheritance FAQ | unknown | |
d226 | train | If you want to connect Flash (ActionScript) with JS, use ExternalInterface. If you want to connect to e.g. PHP, use NetConnection or UrlLoader.
A: I've used XML-RPC in a Flash client before. I've gotten it to work pretty well too.
I've personally used this Action Script 3 implementation:
http://danielmclaren.com/2007/08/03/xmlrpc-for-actionscript-30-free-library
Of course, the server I was talking with was Java/Tomcat. However, I'm pretty sure there are XML-RPC implementations for JavaScript; a quick search found this:
http://phpxmlrpc.sourceforge.net/jsxmlrpc/
Don't know how much setup/overhead it would be for you server-wise, but I've had success with that protocol. | unknown | |
d227 | train | Define your lists inside the __init__ function.
class Unit:
def __init__(self):
self.arr = []
self.arr.clear()
for i in range(2):
self.arr.append(random.randint(1, 100))
print("arr in Unit ", self.arr)
class SetOfUnits:
def __init__(self):
self.lst = []
self.lst.clear()
for i in range(3):
self.lst.append(Unit())
print("arr in SetOfUnits ", self.lst[i].arr)
The way you are doing it, you define your variables class-wise; you need them to be instance-wise.
See: class variable vs instance variable --Python
A: lst and arr are class attributes, not instance attributes. | unknown | |
d228 | train | Since you are using FOP, I believe there is no provision to scale a background-image to fit. This would be an extension to the XSL FO Specification.
RenderX XEP supports this (http://www.renderx.com/reference.html#Background_Image).
It is unclear if you actually want the image behind the table (and you have other content) or you actually want the image behind the whole page.
You could put the image in an absolute positioned block-container and use content-width and content-height to scale, but this is not going to repeat for only the table. This would work for the page. If it is only the table, you are likely going to have to resize the actual image to fit correctly. | unknown | |
d229 | train | Map was added to the ECMAScript standard library in ECMAScript 2015. This is not just "something like a map"; it is a map.
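For instance, a minimal sketch of using a Map as a counter (the keys are just made-up sample values):
const counts = new Map();
for (const word of ['a', 'b', 'a']) {
  counts.set(word, (counts.get(word) || 0) + 1);
}
console.log(counts.get('a')); // 2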
Here is a question with an answer of mine that uses a Map: How to declare Hash.new(0) with 0 default value for counting objects in JavaScript? | unknown | |
d230 | train | The below lines give the list of apps which are in the background:
ActivityManager activityManager = (ActivityManager)
this.getSystemService( ACTIVITY_SERVICE );
List<RunningAppProcessInfo> procInfos = activityManager.getRunningAppProcesses();
procInfos.size() gives you the number of apps. | unknown | |
d231 | train | 'p4 files' will print the list of all the files in the repository, and for each file it will tell you the revision number. Then a little bit of 'awk' and 'sort' will find the files with the highest revision numbers. | unknown | |
d232 | train | In case you installed R through Homebrew, this seems to be a known issue: youtrack.
I faced the same thing. Using the install from their website resolved the issue. | unknown | |
d233 | train | Your main class should be something like this.
public class MonitoredStudentTester {
public static void main(String[] args) {
Scanner scan = new Scanner(System.in);
MonitoredStudent monStu = new MonitoredStudent();
String repeat = "n";
int currentScore = 0;
int minPassAv;
System.out.println("Enter the student's name:");
monStu.setName(scan.next());
System.out.println("What is the minimum passing average score: ");
minPassAv = scan.nextInt();
do {
System.out.println("Enter a quiz score: ");
currentScore = scan.nextInt();
monStu.addQuiz(currentScore);
monStu.setMinPassingAvg(minPassAv);
System.out.println("Would you like to enter any more scores?: (Y for yes, N for no)");
scan.nextLine();
repeat = scan.nextLine();
} while (repeat.equalsIgnoreCase("y"));
String studName = monStu.getName();
double totalScore = monStu.getTotalScore();
double avgScore = monStu.getAverageScore();
boolean offProb = monStu.isOffProbation();
System.out.println(studName + "'s Total Score is: " + totalScore);
System.out.println(studName + "'s Average Score is: " + avgScore);
System.out.println("Is " + studName + "off academic probation?: " + offProb);
}
}
When using inheritance, you just need to create an object of the child class. | unknown | |
d234 | train | The addChild method, which comes from pure ActionScript, is used for adding a DisplayObject. So you would typically add Sprite, MovieClips etc. Widely used in ActionScript-based projects.
The Flex classes like VGroup are written on top of the base actionscript classes, and therefore have an extra addElement method to be able to add an IVisualElement, which can also be an FXG for example, or other Flex based components such as UIComponent, Group etc.
However, I doubt that calling addChild on a VGroup will actually provide the desired results; it may result in errors.
Hope this answers your question.
A: The type IVisualElement and with it all the *Element methods from IVisualElementContainer were introduced in Flex SDK 4.0 as part of the spark layout and lifecycle features.
You are still free to build your own components based on UIComponents without having the spark features and only with the *Child methods.
All classes that extend DisplayObjectContainer of course inherit the *Child methods, but in classes like Group (and SkinnableComponent) those methods have been overridden to throw an error, as one needs to use the *Element methods to use the Spark features.
As Flex 4 supports using the MX layout as well as the Spark layout (there are options to use only one of them or both side by side when compiling), those methods could not be marked as deprecated in Group and SkinnableComponent, which would at least trigger a compiler warning instead of a runtime error. | unknown | |
d235 | train | You go as follows:
docker login your.domain.to.the.registr.without.protocol.or.port
enter username
enter password
Now you can pull using docker pull your.domain.to.the.registr.without.protocol.or.port/yourimage
Ensure your registry runs behind an SSL proxy / termination, or you will run into security issues. Consider reading this in that case: https://docs.docker.com/registry/insecure/ | unknown | |
d236 | train | ECDSA is supported in M2Crypto, but it can be optionally disabled. For example Fedora-based systems ship with ECDSA disabled in OpenSSL and M2Crypto. M2Crypto has some SMIME support as well, but since I haven't used it much I am not sure if that would be of help in this case. See the M2Crypto SMIME doc and SMIME unit tests, as well as ec unit tests.
A: Elliptic Curve Cryptography (ECDSA) as well as the more common RSA is supported by the OpenSSL library. I recommend using the pyOpenSSL bridge.
A: You can try using the python ecdsa package, using Python3:
pip3 install ecdsa
Usage:
from ecdsa import SigningKey
sk = SigningKey.generate() # uses NIST192p
vk = sk.get_verifying_key()
sig = sk.sign(b"message")
vk.verify(sig, b"message") # True
To verify an existing signature with a public key:
from ecdsa import VerifyingKey
message = b"message"
public_key = '7bc9c7867cffb07d3443754ecd3d0beb6c4a2f5b0a06ea96542a1601b87892371485fda33fe28ed1c1669828a4bb2514'
sig = '8eb2c6bcd5baf7121facfe6b733a7835d01cef3d430a05a4bcc6c5fbae37d64fb7a6f815bb96ea4f7ed8ea0ab7fd5bc9'
vk = VerifyingKey.from_string(bytes.fromhex(public_key))
vk.verify(bytes.fromhex(sig), message) # True
The package is compatible with Python 2 as well | unknown | |
d237 | train | Add listView.notifyDataSetChanged(); after list is updated.
A: Create one public method in the adapter class which calls notifyDataSetChanged().
A: If you are using an SQLite DB, then to automatically populate the updated database values into your list view, you should extend your custom adapter with CursorAdapter. Once you fetch data in a cursor, it will automatically show you the updated values:
public class CustomAdapter extends CursorAdapter {
@Override
public View newView(Context context, Cursor cursor, ViewGroup parent) {
return LayoutInflater.from(context).inflate(R.layout.your_list_item, parent, false);
// return null;
}
@Override
public Object getItem(int position) {
return super.getItem(position);
}
@Override
public void bindView(View view, Context context, Cursor cursor) {
}
}
A: In your Fragment
private SharedPreferences.Editor editor;
private SharedPreferences sharedpreferences;
UpdatingReceiver updatingReceiver;
@Override
public View onCreateView(LayoutInflater inflater, @Nullable ViewGroup container, @Nullable Bundle savedInstanceState) {
View v =inflater.inflate(R.layout.tab_1,container,false);
sharedpreferences = PreferenceManager.getDefaultSharedPreferences(context);
editor = sharedpreferences.edit();
String name = "Inside";
editor.putString("some unique key", name);
editor.apply();
return v;
}
@Override
protected void onPause() {
super.onPause();
editor = sharedpreferences.edit();
String name = "Outside";
editor.putString("some unique key", name);
editor.commit();
}
@Override
protected void onStart() {
super.onStart();
updatingReceiver = new UpdatingReceiver();
IntentFilter intentFilter = new IntentFilter();
intentFilter.addAction("updating receiver unique key");
registerReceiver(updatingReceiver, intentFilter);
}
@Override
protected void onStop() {
super.onStop();
unregisterReceiver(updatingReceiver);
}
class UpdatingReceiver extends BroadcastReceiver {
@Override
public void onReceive(Context context, Intent intent) {
//list all the data
//set adapter
//notifyDataSetChanged
}
}
So you can know whether you are inside the fragment or outside it. Based on that, you should trigger the broadcast from your activity.
In your Activity, after you have added the data to the db, just call the receiver based on inside or outside, so it will update immediately.
private SharedPreferences.Editor editor;
private SharedPreferences sharedpreferences;
sharedpreferences = PreferenceManager.getDefaultSharedPreferences(this);
String prefsString = sharedpreferences.getString("some unique key", "");
if (prefsString.equals("Inside"))
{
Intent broadCast = new Intent();
broadCast.setAction("updating receiver unique key");
sendBroadcast(broadCast);
}
A: The solution:
@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data) {adapter.notifyDataSetChanged();} | unknown | |
d238 | train | It means that element has all of the following classes: comment, smallertext and center-align. In HTML, spaces separate multiple class names in a single attribute.
In CSS, you can target this element using one or more of .comment, .smallertext or .center-align, either separately or together. Having three separate rules and a single element that has all three classes, all three rules will apply:
.comment {}
.smallertext {}
.center-align {}
You can also combine them together, if necessary, to apply styles specific to elements having all three classes:
.comment.smallertext.center-align {}
A: The code shown in the example links to 3 different css class selectors:
.comment {
}
.smallertext {
}
.center-align {
}
So instead of making lots of non-reusable css selectors, you split them up into lots of small ones that provide 1 functionality that will most likely be used for lots of different parts of your websites. In that example you have one for comments, one for smaller text and one for center aligning text.
A: It's a way of defining multiple classes to a single element. In this case it would match classes comment, smallertext and center-align.
A: Try this short explanation... Multiple classes can make it easier to add special effects to elements without having to create a whole new style for that element. | unknown | |
d239 | train | You could check the value of main_app.fk_lkp_app using a CASE expression
http://dev.mysql.com/doc/refman/5.7/en/control-flow-functions.html#operator_case
and perform a query based on that value. I could not test it, but something like this should work:
SELECT contact_profile.name, main_app.fk_lkp_app, main_app.id as main_id,
CASE
WHEN main_app.fk_lkp_app = 1 THEN (/* your query here */)
WHEN main_app.fk_lkp_app = 2 THEN (/* your query here */)
WHEN main_app.fk_lkp_app = 3 THEN (/* your query here */)
ELSE 0
END AS amount .... | unknown | |
d240 | train | Without seeing the HTML, it's hard to know what you're trying to accomplish.
I would suggest the following based on what you have provided:
$("#allimages").change(function() {
$("input[type='checkbox'].isImage:not('#checkAll')").prop('checked', $(this).prop("checked"));
});
When #allimages is changed, that same value will be sent to all checkboxes that have class isImage and is not #checkAll. | unknown | |
d241 | train | Write a BoolToVisibility IValueConverter and use it to bind to the Visibility property of your contentPanel.
<StackPanel Visibility="{Binding YourBoolProperty, Converter={StaticResource boolToVisibilityResourceRef ..../>
You can find a BoolToVisibility converter pretty easily anywhere.
Check IValueConverter if you are new to that. http://msdn.microsoft.com/en-us/library/system.windows.data.ivalueconverter.aspx
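In case it helps, here is a minimal sketch of such a converter (the class name is made up; register it as a resource under whatever key you use, e.g. the boolToVisibilityResourceRef key from the XAML above). It is written against the WPF/Silverlight IValueConverter signature:
using System;
using System.Globalization;
using System.Windows;
using System.Windows.Data;
public class BoolToVisibilityConverter : IValueConverter
{
    public object Convert(object value, Type targetType, object parameter, CultureInfo culture)
    {
        // true -> Visible, anything else -> Collapsed
        return ((value is bool) && (bool)value) ? Visibility.Visible : Visibility.Collapsed;
    }
    public object ConvertBack(object value, Type targetType, object parameter, CultureInfo culture)
    {
        return (value is Visibility) && (Visibility)value == Visibility.Visible;
    }
}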
A: I would recommend setting the ListBoxItem visibility at the ListBoxItem level or you will end up with tiny empty listbox items due to the default padding and border values e.g.
<ListBox>
<ListBox.Resources>
<Style TargetType="ListBoxItem">
<Setter Property="Visibility" Value="{Binding MyItem.IsVisible, Converter={StaticResource BooleanToVisibilityConverter}}" />
</Style>
</ListBox.Resources>
<ListBox.ItemTemplate>
<DataTemplate>
<StackPanel Orientation="Vertical">
<CheckBox Content="{Binding MyItemName}" IsChecked="{Binding IsVisible, Mode=TwoWay}"/>
</StackPanel>
</DataTemplate>
</ListBox.ItemTemplate>
</ListBox>
This hides the entire ListBoxItem not just the contents of it. | unknown | |
d242 | train | Use the onreadystatechange property to get the response after the success state, and then store the data in a custom header to troubleshoot issues with the POST body:
with(new XMLHttpRequest)
{
open("POST",{},true);
setRequestHeader("Foo", "Bar");
send("");
onreadystatechange = handler;
}
function handler(event)
{
!!event.target && !!event.target.readyState && event.target.readyState === 4 && ( console.log(event) );
}
References
*
*XMLHTTPRequest Living Standard
*Fetch Living Standard
A: It was the same as GET
with(new XMLHttpRequest)
{
open("POST","http://google.com",true);
send("hello=world&no=yes");
onreadystatechange = function(){};
} | unknown | |
d243 | train | The code that crashes is deeply nested in AppKit. The window is busy redrawing a part of it's view hierarchy. In this process it uses a (private) _NSDisplayOperation objects, that responds to the mentioned rectSetBeingDrawnForView: selector.
The stack trace looks like AppKit tries to message an erroneously collected display operation object. The crash has probably nothing at all to do with your code.
So, what can you do about it?
*
*File a bug
*Avoid garbage collection | unknown | |
d244 | train | I know it's a long time since the question was asked, but after spending quite a bit of time on this, here is the issue:
In the config file of your module, you need to provide this line:
HTTP_AUX_FILTER_MODULES="$HTTP_AUX_FILTER_MODULES your_module_name"
And you can remove the HTTP_MODULES line if you only have a filter.
A: before
*
*ps awx | grep nginx to check nginx process id
*Stop Nginx server
gdb <path> // maybe /usr/local/nginx/sbin/nginx
(gdb) set follow-fork-mode child
set detach-on-fork off
set logging on
set confirm off
rbreak ngx_http* // wherever you want to set breakpoints
run | unknown | |
d245 | train | pixi.js (unminified) is 1.3MB, so what do you expect? If you want a smaller filesize you have to use a minification plugin for webpack, like uglify. | unknown | |
d246 | train | If you want to redirect traffic to different clusters based on the headers, you can define the following listener (the interesting part is the static_resources.listeners[0].filter_chains[0].filters[0].route_config.virtual_hosts[0].routes part, with the two matches defined):
static_resources:
listeners:
- address:
socket_address:
address: 0.0.0.0
port_value: 8080
filter_chains:
- filters:
- name: envoy.filters.network.http_connection_manager
typed_config:
"@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
codec_type: AUTO
stat_prefix: ingress_http
route_config:
name: local_route
virtual_hosts:
- name: backend
domains:
- "*"
routes:
- match:
prefix: "/"
headers:
- name: "InstanceId"
exact_match: "1"
route:
cluster: clusterA
- match:
prefix: "/"
headers:
- name: "InstanceId"
exact_match: "2"
route:
cluster: clusterB
http_filters:
- name: envoy.filters.http.router | unknown | |
d247 | train | I've implemented this kind of thing and I'm satisfied with Apache Commons Compress. Their examples helped enough to implement a combination of tar and gzip. After you've tried to implement it with their examples, you can come back to SO for further questions.
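For reference, here is a minimal sketch of writing a .tar.gz with Commons Compress (the file names are just examples):
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import org.apache.commons.compress.archivers.tar.TarArchiveEntry;
import org.apache.commons.compress.archivers.tar.TarArchiveOutputStream;
import org.apache.commons.compress.compressors.gzip.GzipCompressorOutputStream;
public class TarGzExample {
    public static void main(String[] args) throws Exception {
        Path input = Paths.get("data.txt");     // example input file
        Path output = Paths.get("out.tar.gz");  // example archive name
        try (TarArchiveOutputStream tar = new TarArchiveOutputStream(
                new GzipCompressorOutputStream(Files.newOutputStream(output)))) {
            TarArchiveEntry entry = new TarArchiveEntry(input.toFile(), "data.txt");
            tar.putArchiveEntry(entry);
            Files.copy(input, tar);             // stream the file's bytes into the archive
            tar.closeArchiveEntry();
        }
    }
}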
A: Checkout : https://github.com/zeroturnaround/zt-zip
Pack a complete folder:
ZipUtil.pack(new File("C:\\somewhere\\folder"), new File("C:\\somewhere\\folder.zip"))
and there is unpack. | unknown | |
d248 | train | If you do not need the 'apple' effect, you can use a background image.
Just set the effect to 'default'.
And you may need to change the overlay style.
Example:
$(".overlay").overlay({
top: 50,
left: 50,
closeOnClick: false,
load: false,
effect: 'default',
speed: 1000,
oneInstance: false,
fixed: false,
});
.overlay {
display:none;
background-image:url(white.png);
background-size:cover;
width:160px;
padding:20px;
} | unknown | |
d249 | train | Maybe this will be helpful for your needs:
Tool Window
I don't know your other code parts, but I guess you initialize a window application where you want to render the history list.
This window application needs:
private FirstToolWindow window;
private void ShowToolWindow(object sender, EventArgs e)
{
window = (FirstToolWindow) this.package.FindToolWindow(typeof(FirstToolWindow), 0, true);
... | unknown | |
d250 | train | No, but you could do:
function RunA(){
alert("I run function A!");
};
function RunB(){
alert("I run function B!");
};
function RunAB(){
RunA();
RunB();
};
A: Short answer: No.
Long answer: While it might seem like a good idea to save as much typing as possible, even if this were syntactically valid it is generally not a good idea to overcomplicate well-defined conventions in the name of fancy-looking code.
Just use
function RunA(){
alert("I run function A!");
};
function RunB(){
alert("I run function B!");
}; | unknown | |
d251 | train | Hypothetically, you need the distance between 2 geo locations: from the event geolocation to the one calculated.
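One common way to approximate that distance is the haversine formula; here is a minimal JavaScript sketch (assuming plain decimal-degree inputs and ignoring altitude):
function distanceKm(lat1, lon1, lat2, lon2) {
  var R = 6371; // mean Earth radius in km
  var toRad = function (d) { return d * Math.PI / 180; };
  var dLat = toRad(lat2 - lat1);
  var dLon = toRad(lon2 - lon1);
  var a = Math.sin(dLat / 2) * Math.sin(dLat / 2) +
          Math.cos(toRad(lat1)) * Math.cos(toRad(lat2)) *
          Math.sin(dLon / 2) * Math.sin(dLon / 2);
  return 2 * R * Math.asin(Math.sqrt(a));
}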
An identical thread from stackoverflow : Calculate distance between 2 GPS coordinates | unknown | |
d252 | train | The book Computational Geometry: an Introduction by Preparata and Shamos has a chapter on rectilinear polygons.
A: Use a sweep line algorithm, making use of the fact that a rectilinear polygon is defined by its vertices.
Represent the vertices along with the rectangle that they belong to, i.e. something like (x, y, #rect). To this set of points, add those points that result from the intersections of all edges. These new points are of the form (x, y, final), since we already know that they belong to the resulting set of points.
Now:
*
*sort all points by their x-value
*use a sweep line, starting at the first x-coordinate; for each new point:
*
*if it's a "start point", add it to a temporary set T. Mark it "final" if it's a point from rectangle A and between y-coordinates from points from rectangle B in T (or vice versa).
*if it's an "end point", remove it and its corresponding start point from T.
After that, all points that are marked "final" denote the vertices of the resulting polygon.
Let N be the total number of points. Further assuming that testing whether we should mark a point as being "final" takes time O(log(N)) by looking up T, this whole algorithm is in O(N*log(N)).
Note that the task of finding all intersections can be incorporated into the above algorithm, since finding all intersections efficiently is itself a sweep line algorithm usually. Also note that the resulting set of points may contain more than one polygon, which makes it slightly harder to reconstruct the solution polygons out of the "final" vertices. | unknown | |
d253 | train | var myFunc = function()
{
var size = $('.td-hide');
if ($('input#dimensions').is(':checked')) {
size.show();
$('.single-select-sm').css('width','148px')
} else {
size.hide();
$('.single-select-sm').css('width','230px')
}
}
$(document).on("click", ".dimensions", function() {
myFunc();
});
$(function() {
myFunc();
}); | unknown | |
d254 | train | I'd probably start with $words = explode(' ', $string)
then sort the words by word length
usort($words, function($word1, $word2) {
if (strlen($word1) == strlen($word2)) {
return 0;
}
return (strlen($word1) < strlen($word2)) ? -1 : 1;
});
$longestWordSize = strlen(end($words));
Loop over the words and place in their respective buckets.
Rather than separate variables for each length array, you should consider something like
$sortedWords = array(
1 => array('a', 'I'),
2 => array('to', 'be', 'or', 'is'),
3 => array('not', 'the'),
);
by looping over the words you don't need to know the maximum word length.
The final solution is as simple as
foreach ($words as $word) {
$wordLength = strlen($word);
$sortedWords[ $wordLength ][] = $word;
}
A: You could use something like this:
$words = explode(" ", $string);
foreach ($words as $w) {
array_push(${"array" . strlen($w)}, $w);
}
This splits up $string into an array of $words and then evaluates each word for length and pushes that word to the appropriate array.
A: you can use explode().
$string = "The complete archive of The New York Times can now be searched from NYTimes.com " ;
$arr=explode(" ",$string);
$count=count($arr);
$big=0;
for ($i = 0; $i < $count; $i++) {
$p=strlen($arr[$i]);
if($big<$p){ $big_val=$arr[$i]; $big=$p;}
}
echo $big_val;
A: Just use the word length as the index and append [] each word:
foreach(explode(' ', $string) as $word) {
$array[strlen($word)][] = $word;
}
To remove duplicates $array = array_map('array_unique', $array);.
Yields:
Array
(
[3] => Array
(
[0] => The
[2] => New
[3] => can
[4] => now
)
[8] => Array
(
[0] => complete
[1] => searched
)
[7] => Array
(
[0] => archive
)
[2] => Array
(
[0] => of
[1] => be
)
[4] => Array
(
[0] => York
)
[5] => Array
(
[0] => Times
)
)
If you want to re-index the main array use array_values() and to re-index the subarrays use array_map() with array_values(). | unknown | |
d255 | train | Sure:
const nodemailer = require('nodemailer');
const transporter = nodemailer.createTransport({sendmail: true}, {
from: 'no-reply@your-domain.com',
to: 'your@mail.com',
subject: 'test',
});
transporter.sendMail({text: 'hello'});
Also see Configure sendmail inside a docker container
A: Nodemailer is a popular, stable, and flexible solution:
*
*http://www.nodemailer.com/
*https://github.com/andris9/Nodemailer
Full usage looks something like this (the top bit is just setup - so you would only have to do that once per app):
var nodemailer = require("nodemailer");
// create reusable transport method (opens pool of SMTP connections)
var smtpTransport = nodemailer.createTransport("SMTP",{
service: "Gmail",
auth: {
user: "gmail.user@gmail.com",
pass: "userpass"
}
});
// setup e-mail data with unicode symbols
var mailOptions = {
from: "Fred Foo ✔ <foo@blurdybloop.com>", // sender address
to: "bar@blurdybloop.com, baz@blurdybloop.com", // list of receivers
subject: "Hello ✔", // Subject line
text: "Hello world ✔", // plaintext body
html: "<b>Hello world ✔</b>" // html body
}
// send mail with defined transport object
smtpTransport.sendMail(mailOptions, function(error, response){
if(error){
console.log(error);
}else{
console.log("Message sent: " + response.message);
}
// if you don't want to use this transport object anymore, uncomment following line
//smtpTransport.close(); // shut down the connection pool, no more messages
}); | unknown | |
d256 | train | fpurge is not in the standard C library. It is nonstandard and not
portable. It is a BSD function.
http://bytes.com/topic/c/answers/845246-fpurge | unknown | |
d257 | train | Set the form's FormBorderStyle property to None. | unknown | |
d258 | train | At this time, I don't think you can do this. You need to quit your current figwheel session and restart in order to pick up new dependencies added to your :dependencies in your project.clj file. In fact, the figwheel docs also recommend running lein clean before you restart figwheel to be sure you don't end up with some old code.
I think this functionality is on the roadmap, but is not a high priority. There is considerable complexity in being able to have this functionality work reliably - especially if you add in the complexity of different repl environments (such as using piggyback, and cider with figwheel).
Note that this limitation is just with :dependencies items in the project.clj. You can add :require lines in your cljs files dynamically and have them picked up (assuming the library is already in the dependencies list, of course).
I suspect part of the complication is ensuring the classpath is updated, that all running processes which use the classpath are somehow updated, and that all loaded classes are reloaded in case the dependency changes the dependencies of those loaded classes, to keep things consistent. | unknown | |
d259 | train | gRPC offers several benefits over REST (and also some tradeoffs).
The three primary benefits are:
*
*Protocol Buffers are efficient. Because both sides already have the protobuf definitions, only the data needs to be transferred, not the structure. In contrast, for some JSON payloads, the names of the fields can make the payload significantly larger than just the data. Also, protobuf has a very compact representation "on the wire". Finally, you don't have the overhead of the HTTP headers for every request. (Again, for some smaller requests, the headers could be significantly larger than the request body itself.)
*gRPC can be bidirectional. gRPC services can have "streams" of data which can go both directions. HTTP must always have a request and response matched together.
*gRPC tries to make you think about version compatibility. Protocol Buffers are specifically designed to help you make sure you maintain backwards compatibility as your data structures change. If done properly, this can make future upgrades easier to implement.
With all those upsides, why not use gRPC for everything?
In my view, the main downside of gRPC is that it requires more tooling. Just about any programming language can handle HTTP and there are many debugging tools, but for gRPC you have to compile your .proto files for each language and share them. There also aren't as many testing tools. | unknown | |
d260 | train | use intersect(A,B) to get the answer.
Another option is to use ismember, for example A(ismember(A,B)). | unknown | |
d261 | train | Have a look at this, it might solve your problem.
http://allenfang.github.io/react-bootstrap-table/example.html#expand
A: You can easily access row data using the dot notation:
const expandRow = {
renderer: row => (
<div>
<p>Manager: {row.manager}</p>
<p>Revenue: {row.revenue}</p>
<p>Forecast: {row.forecast}</p>
</div>
)
}; | unknown | |
d262 | train | Bind to the lists Count property and create your own ValueConverter to convert from an int to a bool (in your case returning true if the int is larger than 0 and false otherwise). Note that your list would need to raise a PropertyChanged event when the count changes - ObservableCollection does that for example.
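A minimal sketch of such a converter (class and property names here are just illustrative, not from the original answer):
using System;
using System.Globalization;
using System.Windows.Data;

// Converts a collection's Count (an int) to a bool: true when there is at least one item.
public class CountToBoolConverter : IValueConverter
{
    public object Convert(object value, Type targetType, object parameter, CultureInfo culture)
    {
        return value is int count && count > 0;
    }

    public object ConvertBack(object value, Type targetType, object parameter, CultureInfo culture)
    {
        throw new NotSupportedException();
    }
}
// XAML usage (assuming the converter is declared as a resource with key CountToBool):
// IsEnabled="{Binding MyItems.Count, Converter={StaticResource CountToBool}}"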
A: Either do it with a DataTrigger that binds to the Count property of the list and sets IsEnabled to false if it is zero, or use a ValueConverter.
However take care, that a List<T> does not implement INotifyPropertyChanged, which informs about changes of the Count property. An ObservableCollection<T> will do. | unknown | |
d263 | train | I would try to un-install and install again the Java8 JDK. Have you tried that?
Have you got multiple JDKs installed? If yes, try with just Java 8 (uninstall the others).
Or try also to run eclipse with
eclipse -vm c:\java8path\jre\bin\javaw.exe
or
eclipse -vm c:\java8path\jre\bin\client\jvm.dll | unknown | |
d264 | train | The setup for prerender and runtime server-side render is mostly similar, the only difference is one is static, the other dynamic. You will still configure everything Universal requires you to set up for it to work.
Before I go into your questions, I highly recommend you follow this guide (step-by-step configuration) and this one (useful sections about Angular Universal pitfalls) to configure Angular Universal, as they are among the more comprehensive and up-to-date write-ups that I've read.
First question: How can I create a html file for each of these levels by using prerender.js and How should my static.paths.ts look like?
Your static.paths.ts should contain all the routes that you want to prerender:
export const ROUTES = [
'/',
'/category/1/subcategory/1/event/1/ticket/1',
'/category/1/subcategory/1/event/1/ticket/2',
...
];
Looks tedious, right? This is the trade-off of statically generated HTML as opposed to flexible run-time server-side rendering. You could, and probably should, write your own scripts to generate all the routes available to your app (querying the database, generating all possible values, etc.) to prerender all the pages that you want.
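As a rough sketch of such a script (the data and route shape here are assumptions, not part of the original answer):
// generate-routes.ts - build the static.paths.ts content from your own data.
const categories = [1, 2];          // assumption: normally fetched from your database/API
const subcategories = [1, 2, 3];
const events = [1, 2];
const tickets = [1, 2];

export const ROUTES: string[] = ['/'];
for (const c of categories) {
  for (const s of subcategories) {
    for (const e of events) {
      for (const t of tickets) {
        ROUTES.push(`/category/${c}/subcategory/${s}/event/${e}/ticket/${t}`);
      }
    }
  }
}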
Second question: How can I set meta tags for each of these pages?
No different from how you set meta tags in any other Angular app: you can use the Title and Meta services that Angular provides.
Example:
constructor(
@Inject(PLATFORM_ID) private platformId: Object,
private meta: Meta,
private title: Title,
private pageMetaService: PageMetaService
) { }
ngOnInit() {
if (isPlatformBrowser(this.platformId)) {
this.title.setTitle((`${this.article.title} - Tilt Report`));
let metadata = [
{ name: 'title', content: 'Title' },
{ name: 'description', content: 'This is my description' },
{ property: 'og:title', content: 'og:Title' },
{ property: 'og:description', content: 'og:Description' },
{ property: 'og:url', content: 'http://www.example.com' },
];
metadata.forEach((tag) => { this.meta.updateTag(tag) });
};
}
Third question: How should my app-routing.module look like? Should I use children approach
You can choose whether or not to use the 'children' approach, which I assume means lazy-loaded modules. As you configure Angular Universal, you need to do certain setup to enable lazy-loaded modules to work server-side. | unknown | |
d265 | train | Adding comment as an actual answer...
The ts prefixed namespace isn't the problem because you're not accessing any elements in that namespace. The problem is the default namespace (the xmlns with no prefix).
What you need to do is add xmlns:a="http://www.w3.org/2005/Atom" to xsl:stylesheet and use that prefix in your selects. (select="a:feed/a:entry", select="a:title", and select="a:id")
Also note that you can use any prefix, not just "a". The only thing that has to be the same is the namespace itself (http://www.w3.org/2005/Atom). | unknown | |
d266 | train | If you're using Peewee 3.x, then:
class Post(Model):
timestamp = DateTimeField(default=datetime.datetime.now)
user = ForeignKeyField(
model=User,
backref='posts')
content = TextField()
class Meta:
database = DATABASE
Note: Meta.order_by is not supported in Peewee 3.x.
A: model=model.DO_NOTHING
try this hope this work | unknown | |
d267 | train | Since you cannot have more than five routes, I would suggest you use only one wild-carded route. So you run an if else on the wild card to call the appropriate method.
Route::get('{uri?}',function($uri){
if($uri == "/edit")
{
return app()->call('App\Http\Controllers\HomeController@editROA');
}else if($uri == "something else"){
return app()->call('App\Http\Controllers\SomeController@someMethod');
}
// add statements for other routes
});
view
<a type='button' class='btn-warning' href="{{url('edit')}}">Edit</a> | unknown | |
d268 | train | Read How to receive messages from a queue and make sure you use _queueClient.Complete() or _queueClient.Abandon() to finish with each message.
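A rough sketch of that pattern (assuming _queueClient was created in PeekLock mode; error handling kept minimal):
var message = _queueClient.Receive();
if (message != null)
{
    try
    {
        // ... process the message body here ...
        message.Complete();   // removes the message from the queue
    }
    catch (Exception)
    {
        message.Abandon();    // releases the lock so another receiver can pick it up
    }
}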
A: You can use "Microsoft.ServiceBus.Messaging" and purge messages by en-queue time. Receive the messages, filter by ScheduledEnqueueTime and perform purge when the message has been en-queued at the specific time.
using Microsoft.ServiceBus.Messaging;
MessagingFactory messagingFactory = MessagingFactory.CreateFromConnectionString(connectionString);
var queueClient = messagingFactory.CreateQueueClient(resourceName, ReceiveMode.PeekLock);
var client = messagingFactory.CreateMessageReceiver(resourceName, ReceiveMode.PeekLock);
BrokeredMessage message = client.Receive();
if (message.EnqueuedTimeUtc < MessageEnqueuedDateTime)
{
message.Complete();
} | unknown | |
d269 | train | Boy, do I want to shoot myself. I figured out my issue before I went to bed. My approach was correct; it was just a matter of me reading the output of Print statements wrong as well as underestimated just how nested the JSON was.
Internally, the JSONObject class stores the JSON elements, pairs, etc. in a Hashtable. The Hashtable does not preserve insertion order, so it reorders the data that's given to it. This of course threw off how the JSON was ordered. I figured it was consuming some parts of the JSON, while it really was just putting them at the back...the waaay back, if not the end, of the JSON. This greatly threw me off. I did not realise this until I just ran toString on the Hashtable itself. I then also realised that the JSON was actually more nested than I thought. The four parts I wanted to get were in 3 different nested JSON objects.
Thus, my solution was to save myself even more grief and just put the JSON through a pretty printer and look at the structure properly.
Here is my Solution code:
public CustomerInfo(String jsonTxt) {
try {
JSONObject json = new JSONObject(jsonTxt);
JSONObject customer = new JSONObject(json.getString("CustomerInfo"));
JSONObject client = new JSONObject(customer.getString("clientDisplay"));
custNo = client.getString("globalCustNum");
custName = client.getString("displayName");
JSONObject cph = new JSONObject(customer.getString("clientPerformanceHistory"));
JSONObject caddress = new JSONObject(customer.getString("address"));
address = caddress.getString("displayAddress");
savAcctBal = cph.getDouble("totalSavingsAmount");
} catch (final JSONException je) {
je.printStackTrace();
}
}
protip: Always use a Pretty Printer on your JSON, and appreciate its structure before you do anything. I swear this won't happen to me again.
A: You can parse the JSON string by the following example
public CustomerInfo(String jsonTxt) {
try {
JSONObject json= (JSONObject) new JSONTokener(jsonTxt).nextValue();
    test = (String) json.get("displayName");
}
catch (JSONException e) {
e.printStackTrace();
}
} | unknown | |
d270 | train | I'd added [assembly: XmlConfigurator(Watch = true)] to my Logging.Log4Net library, but I wasnt instantiating the TracerManager in my application on the tests I was performing...
ID-10Tango issue | unknown | |
d271 | train | Firstly we can use Intervention Image library. We must have php 7 and gd library installed. I am writing the commands to install gd library and webp library below (for ubuntu) :
sudo apt-get update
sudo apt-get install webp
sudo apt-get install php7.0-gd (check php version and then install accordingly)
Now check the file extension; if the extension is webp, select your output file extension:
$extension = $this->file->extension();
if($this->file->getMimeType() == 'image/webp'){
$extension = 'png';
}
// Generate a random filename
$fileName = time() . '_' . strtolower(uniqid()) . '.' . $extension;
Now encode the image to desired format
if($this->file->getMimeType() == 'image/webp'){
$image = $image->encode($extension);
}
$image = $image->stream();
Now upload the image to s3 bucket
Storage::disk('s3')->put($folderName . '/' . $fileName, $image->__toString()); | unknown | |
d272 | train | This is 2-D array no need to convert it into object you can still access it
To access uid you have to do something like this
echo $result[0]['uid'];
Hence your code will become
echo "<img src='http://graph.facebook.com/".$result[0]['uid']."/picture'>";
If you still want object instead of array you can do type cast.
$result_obj= (object) $result[0];
echo $result_obj->uid; | unknown | |
d273 | train | I think you need to remove the float and change br into div
<form method="post" action="">
{% csrf_token %}
{% for hidden in form.hidden_fields %}
{{ hidden }}
{% endfor %}
<div>
{# Include the visible fields #}
{% for field in form.visible_fields %}
    <div class="fieldWrapper">
{{ field.errors }}
{{ field.label_tag }}
</div>
{{ field }}
{% endfor %}
</div>
<div><input type="submit" value="Submit"></div>
</form>
A: Check out this post Overwrite float:left property in span for the answer.
With Bens help I was able to overwrite some of the default CSS. | unknown | |
d274 | train | You can use css to disable it. I used this plug in (https://github.com/kevinburke/customize-twitter-1.1) to override twitters css and then just added:
.timeline .stream {overflow:hidden;}
I also hide the scrollbars by adding the same css directly into a locally stored copy of the twitter widget.js (around line 30).
A: Simple solution
Set the scrolling attribute to no in your iframe tag. like this:
scrolling="no"
Full iframe example:
<iframe scrolling="no" title="Twitter Tweet Button" style="border:0;overflow:hidden;" src="https://platform.twitter.com/widgets/tweet_button.html" height="28px" width="76px"></iframe> | unknown | |
d275 | train | Something like this as an idea.
select
t.name,c.name
from
sys.tables as t
left join sys.columns as c on t.object_id=c.object_id
order by t.name,c.column_id
A: Apparently, there's nothing wrong with the code. It's just the code is too long for the Console, and when copying from there, the content at the top is missing.
Disappointing mystery this time. Sorry! Thanks for the answers anyway! | unknown | |
d276 | train | The ../ syntax is correct to specify relative paths.
But this is not relative to the location of your Lua script but to your current working directory.
Refer to get current working directory in Lua
You cannot change the current working directory from within a Lua script unless you use libraries like LuaFileSystem.
If you're running a single script you can check if global arg[0] (if it is not nil) contains the path of that script. You can use that to build an absolute path from your script's location. | unknown | |
d277 | train | I think the problem is that git detects its own .git files and doesn't allow to work with them. If you however rename your test repo's .git folder to something different, e.g. _git it will work. Only one thing you need to do is to use GIT_DIR variable or --git-dir command line argument in your tests to specify the folder.
A: Even though it is not an "externally referenced piece of software", submodules are still a good approach, in that it helps to capture known state of repositories.
I would rather put both repo and test-repo within a parent repo "project":
project
repo
test-repo
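A rough sketch of wiring this up with submodules (the repository URLs are placeholders):
git init project && cd project
git submodule add https://example.com/repo.git repo
git submodule add https://example.com/test-repo.git test-repo
git commit -m "Pin repo and test-repo at known commits"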
That way, I can record the exact SHA1 of both repo and test-repo. | unknown | |
d278 | train | ODBC-it is designed for connecting to relational databases.
However, OLE DB can access relational databases as well as nonrelational databases.
There is data in your mail servers, directory services, spreadsheets, and text files. OLE DB allows SQL Server to link to these nonrelational database systems. For instance, if you want to query, through SQL Server, the Active Directory on the domain controller, you couldn't do this with ODBC, because it's not a relational database. However, you could use an OLE DB provider to accomplish that.
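For example, the Active Directory case is typically done with the ADSI OLE DB provider through a linked server; a rough sketch (the domain name is a placeholder):
EXEC sp_addlinkedserver @server = 'ADSI', @srvproduct = 'Active Directory Service Interfaces',
     @provider = 'ADSDSOObject', @datasrc = 'adsdatasource';

SELECT *
FROM OPENQUERY(ADSI, 'SELECT cn, mail FROM ''LDAP://DC=example,DC=com'' WHERE objectCategory = ''person''');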
http://www.sqlservercentral.com/Forums/Topic537592-338-1.aspx | unknown | |
d279 | train | You shouldn't push the default route from your OpenVPN server - you push only routes to the network you want to access. For example I have OpenVPN running on internal network, so in OpenVPN server.conf I have this:
push "route 10.10.2.0 255.255.255.0"
push "route 172.16.2.0 255.255.255.0"
This will cause Windows OpenVPN client to add only routes for these 2 networks after connect, so it won't affect the default route and internet traffic.
One caveat is that at least Windows 7 recognizes different networks by their gateways. If the network doesn't have a gateway, Windows is unable to recognize the network and you are unable to choose whether it is a Home/Work/Public network (which would deny Samba access if using Windows Firewall).
The workaround I use is to add a default gateway route with a big metric (999), so that it is never used for routing by Windows. I have this in the client's config file, but it can probably also be pushed from the server's config.
# dummy default gateway because of win7 network identity
route 0.0.0.0 0.0.0.0 vpn_gateway 999 | unknown | |
d280 | train | As mentioned by @MarkSeeman in this post about numbers
Currently, AutoFixture endeavours to create unique numbers, but it doesn't guarantee it. For instance, you can exhaust the range, which is most likely to happen for byte values [...]
If it's important for a test case that numbers are unique, I would recommend making this explicit in the test case itself. You can combine Generator with Distinct for this
So for this specific situation, I now use
string[] dates = new Generator<DateTime>(_fixture)
.Select(x => x.ToShortDateString())
.Distinct()
.Take(4).ToArray();
A: You can generate unique integers (lets say days) and then add it to some min date:
var minDate = _fixture.Create<DateTime>().Date;
var dates = _fixture.CreateMany<int>(4).Select(i => minDate.AddDays(i)).ToArray();
But I'm not sure that AutoFixture guarantees that all generated values will be unique (see this issue for example) | unknown | |
d281 | train | As mentioned under the Mass download of market data: section of the blog posted by the Author/Maintainer of the pip package: https://aroussi.com/post/python-yahoo-finance, You need to pass tickers within a single string, though space separated:
>>> import yfinance as yf
>>> data = yf.download("EURUSD=X GBPUSD=X", start="2020-01-01")
[*********************100%***********************] 2 of 2 completed
>>> data.keys()
MultiIndex([('Adj Close', 'EURUSD=X'),
('Adj Close', 'GBPUSD=X'),
( 'Close', 'EURUSD=X'),
( 'Close', 'GBPUSD=X'),
( 'High', 'EURUSD=X'),
( 'High', 'GBPUSD=X'),
( 'Low', 'EURUSD=X'),
( 'Low', 'GBPUSD=X'),
( 'Open', 'EURUSD=X'),
( 'Open', 'GBPUSD=X'),
( 'Volume', 'EURUSD=X'),
( 'Volume', 'GBPUSD=X')],
)
>>> data['Close']
EURUSD=X GBPUSD=X
Date
2019-12-31 1.120230 1.311303
2020-01-01 1.122083 1.326260
2020-01-02 1.122083 1.325030
2020-01-03 1.117144 1.315270
2020-01-06 1.116196 1.308010
... ... ...
2021-02-08 1.204877 1.373872
2021-02-09 1.205360 1.374570
2021-02-10 1.211999 1.381799
2021-02-11 1.212121 1.383260
2021-02-12 1.209482 1.382208
[294 rows x 2 columns]
You can also use group_by='ticker' in case you want to traverse over the ticker instead of the Closing price/Volume etc.
data = yf.download("EURUSD=X GBPUSD=X", start="2020-01-01", group_by='ticker')
A: You can download the stock prices of multiple assets at once by providing a list (such as ['TSLA', 'FB', 'MSFT']) as the tickers argument.
try like this :
data = yf.download(['EURUSD=X','GBPUSD=X'], start="2020-01-01") | unknown | |
d282 | train | http://code.google.com/apis/maps/index.html take a look at the google api its very easy to work with | unknown | |
d283 | train | You can put any downloadable files you like in a hello-world .war file and they'll be downloadable over HTTP. It's silly to use an application server without an application.
A: Liberty is not intended to be used as a generic file server. That said, there are MBean operations supporting file transfer capabilities via the Liberty REST Connector. Javadoc for these operations may be found at <liberty-install-root>/dev/api/ibm/javadoc/com.ibm.websphere.appserver.api.restConnector_1.3-javadoc.zip | unknown | |
d284 | train | With below snippest you can get a list of the matching index field from seconds dataframe.
import pandas as pd
df_ts = pd.DataFrame(data = {'index in df':[0,1,2,3,4,5,6,7,8,9,10,11,12],
"pid":[1,1,2,2,3,3,3,4,6,8,8,9,9],
})
df_cmexport = pd.DataFrame(data = {'index in df':[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20],
"pid":[1,1,1,2,3,3,3,3,3,4,4,4,5,5,6,7,8,8,9,9,9],
})
Create a new dataframe by merging the two:
result = pd.merge(df_ts, df_cmexport, left_on=["pid"], right_on=["pid"], how='left', indicator='True', sort=True)
Then identify the unique values in the "index in df_y" column:
index_list = result["index in df_y"].unique()
The result you get;
index_list
Out[9]:
array([ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 14, 16, 17, 18, 19,
20], dtype=int64) | unknown | |
d285 | train | By definition all operations on NA will yield NA, therefore x == NA always evaluates to NA. If you want to check if a value is NA, you must use the is.na function, for example:
> NA == NA
[1] NA
> is.na(NA)
[1] TRUE
The function you pass to sapply expects TRUE or FALSE as return values but it gets NA instead, hence the error message. You can fix that by rewriting your function like this:
bobpresent <- function(x) { ifelse(is.na(x), 0, 1) }
In any case, based on your original post I don't understand what you're trying to do. This change only fixes the error you get with sapply, but fixing the logic of your program is a different matter, and there is not enough information in your post. | unknown | |
d286 | train | "{\"source\": \"FOO.\", \"model\": ...
Is a JSON object inside a JSON string literal. To get at the inner JSON's properties, you'll have to decode it again.
data = json.loads(line)
if 'derivedFrom' in data:
dFin = json.loads(data['derivedFrom'])
if 'derivedIds' in dFin:
....
JSON-in-JSON is typically a mistake as there's rarely a need for it - what is producing this output, does it need fixing?
A: Use:
'derivedIds' in dFin
This works both on dictionaries and on unicode, even though with unicode it could give false positives.
A more robust approach could use Duck Typing:
try:
dFin = json.loads(data['derivedFrom']) #assume new format
except TypeError:
dFin = data['derivedFrom'] #it's already a dict
if 'derivedIds' in dFin: # or dFin.has_key('derivedIds')
#etc
A: You are changing the derivedFrom property from a JSON object to a string. Strings don't have an attribute named has_key.
A: If you want the exact same block of code to work, consider slightly adjusting your new format to following:
"{\"derivedFrom\": {\"source\": \"FOO.\", \"model\": \"BAR\", \"derivedIds\": [\"123456\"]}}" | unknown | |
d287 | train | Doing str_replace("\t", ',', $output) would probably work.
Here's how you'd get it into an associative array (not what you asked, but it could prove useful in helping you understand how the output is formatted):
$output = $ssh->exec('mysql -uMyUser -pMyPassword MyTable -e "SELECT * FROM users LIMIT"');
$output = explode("\n", $output);
$colNames = explode("\t", $output[0]);
$colValues = explode("\t", $output[1]);
$cols = array_combine($colNames, $colValues); | unknown | |
d288 | train | You can try the PayloadFactory Mediator
<payloadFactory media-type="json">
<format>{}</format>
<args/>
</payloadFactory> | unknown | |
d289 | train | Regex in C# cannot check for external conditions: the result of the match is only dependent on the input string.
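If you can wrap the expression in code, the usual workaround is to keep the external check outside the regex; a rough sketch (the pattern and condition are just placeholders):
using System;
using System.Text.RegularExpressions;

string input = "2024-01-01";                                  // placeholder input
bool externalConditionHolds = DateTime.Now.Hour < 12;         // placeholder external check
bool matches = Regex.IsMatch(input, @"^\d{4}-\d{2}-\d{2}$");  // the regex only ever sees the string
bool accepted = matches && externalConditionHolds;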
If you cannot add any other code and you are only able to change the expressions used then it cannot be done. | unknown | |
d290 | train | It's not clear from the question, but I guess you want something to appear when you click the checkbox? This should get you started.
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
<html>
<head>
<style>
#appear_div { display: none; }
</style>
<script type="text/javascript" src="http://ajax.googleapis.com/ajax/libs/jquery/1.3.2/jquery.min.js"></script>
<script>
$(document).ready(function() {
$('#appear').click(function() { $('#appear_div').show(); });
});
</script>
</head>
<body>
<input type="checkbox" id="appear">
<div id="appear_div">
<input type="checkbox" id="cb1">Check me <input type="text" id="text1">
</div>
</body>
</html> | unknown | |
d291 | train | You have to group by all fields that are not aggregated. So value needs to be summed up or grouped by.
Try:
var result = TestList
.GroupBy(t => t.id)
.Select(g => new { id = g.Key, g.OrderByDescending(c => c.dt).First().dt, g.OrderByDescending(c => c.dt).First().value });
A: Based on comments and the question, you want: for each distinct id, the instance with the maximum dt.
I would add a helper method: MaxBy, which allows a whole object to be selected based on the value of a function1:
public static T MaxBy<T,TValue>(this IEnumerable<T> input, Func<T,TValue> projector)
where TValue : IComparable<TValue> {
T found = default(T);
TValue max = default(TValue);
foreach (T t in input) {
TValue p = projector(t);
    if (max.CompareTo(p) < 0) {
found = t;
max = p;
}
}
return found;
}
And then the query becomes:
var q = from p in TestList
group p by p.id into g
select g.MaxBy(w => w.dt);
NB. this implementation of MaxBy will only work for objects where the value of the member being compared is greater than its type's default value (e.g. for int: greater than zero). A better implementation of MaxBy would use the enumerator manually and initialise both found and max variables directly from the first element of the input.
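A sketch of that improved version (seeding from the first element, so it also works with reference types and negative values):
public static T MaxBy<T, TValue>(this IEnumerable<T> input, Func<T, TValue> projector)
    where TValue : IComparable<TValue>
{
    using (var e = input.GetEnumerator())
    {
        if (!e.MoveNext())
            throw new InvalidOperationException("Sequence contains no elements");
        T found = e.Current;
        TValue max = projector(found);
        while (e.MoveNext())
        {
            TValue p = projector(e.Current);
            if (p.CompareTo(max) > 0)
            {
                found = e.Current;
                max = p;
            }
        }
        return found;
    }
}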
1 if you are using The Reactive Extensions (Rx) this is included in the System.Interactive assembly. | unknown | |
d292 | train | take a look at the following script (Adventure Works DW 2008 R2):
It will return the correlation of the [Internet Sales Amount] measure for two different product subcategories ("Mountain Bikes"/"Road Bikes") over the months of the current date member on rows (Calendar Year 2007 quarters and Calendar Year 2007). I have left other comparable members in comments.
with
member ActualMeasure AS [Measures].[Internet Sales Amount]
member m1 AS
(
[Product].[Product Categories].[Subcategory].&[1] -- Mountain Bikes
-- [Sales Territory].[Sales Territory].[Group].&[North America]
-- [Customer].[Gender].&[F]
,ActualMeasure
)
member m2 AS
(
[Product].[Product Categories].[Subcategory].&[2] -- Road Bikes
-- [Sales Territory].[Sales Territory].[Group].&[Europe]
-- [Customer].[Gender].&[M]
, ActualMeasure
)
member x as
Correlation
(
{Descendants([Date].[Calendar].CurrentMember,[Date].[Calendar].[Month]) } as dates
, m1
, m2
), Format_String="Standard"
select
{ x,m1,m2 } on 0,
{
Descendants
(
[Date].[Calendar].[Calendar Year].&[2007]
, [Date].[Calendar].[Calendar Quarter]
)
,[Date].[Calendar].[Calendar Year].&[2007]
} on 1
from [Adventure Works]
HTH,
Hrvoje Piasevoli | unknown | |
d293 | train | Instead of ajaxOptions, use params. If I remember correctly, test and its value will be included in your POST request by x-editable. Try something like this:
Html
<a id="other1" data-pk="1" data-name="test">First Name</a>
AJAX
$(document).ready(function() {
$('#other1').editable({
type: 'text',
url: '/create_post/',
params : function(params) {
params.csrfmiddlewaretoken = '{{ csrf_token }}';
return params;
},
placement: 'top',
title: 'New Expense',
success: function(response, newValue) {
if(response.status == 'error') return response.msg; //ms
},
});
}); | unknown | |
d294 | train | // check if the array is empty
if(empty($files)){
// add a default image to the empty array (the timestamp doesn't matter because it will not be used)
$files[] = array('default_image.png');
// because it's empty you could also use:
$files = array(array('default_image.png'));
} | unknown | |
d295 | train | I have solved this as follows.
*
*Remove the annotation from the action.
*Add the following code at the beginning of the action (news and submit being the relevant controller and action respectively).
if (!springSecurityService.isLoggedIn()) {
flash.message = "You must be logged in to submit a news story."
redirect(controller:"login", action: "auth", params:["spring-security-redirect" : "/news/submit"])
}
*Add the following to the login form.
<input type='hidden' name='spring-security-redirect' value='${params['spring-security-redirect']}'/>
A: Add, for example, this to your login view:
<sec:noAccess url="/admin/listUsers">
You must be logged in to view the list of users.
</sec:noAccess>
See the security taglib's documentation. | unknown | |
d296 | train | As @Thomas has pointed out in the comments, iterating over the whole Map would be wasteful. And solution he proposed probably is the cleanest one:
map.getOrDefault(name, Collections.emptyList()).stream()
Alternatively, you can make use of flatMap() without performing redundant iteration through the whole map like that:
public static <K, V> Stream<V> getStreamByKey(Map<K, Collection<V>> map,
K key) {
return Stream.ofNullable(map.get(key))
.flatMap(Collection::stream);
}
The same result can be achieved with Java 16 mapMulti():
public static <K, V> Stream<V> getStreamByKey(Map<K, Collection<V>> map,
K key) {
return Stream.ofNullable(map.get(key))
.mapMulti(Iterable::forEach);
}
A: As others pointed out in comments, use flatMap instead of map in the last step to reduce the double nesting:
return table.entrySet().stream()
.filter(map -> map.getKey().equals(name))
.flatMap(entry -> entry.getValue().stream()); | unknown | |
d297 | train | You can just use the ŧf.keras.Model API:
actor_model = tf.keras.Model(inputs=...,outputs=...)
Q_model = tf.keras.Model(inputs=actor_model.outputs, outputs=...) | unknown | |
d298 | train | scope :book_form_sort_order, -> { order("ranking IS NULL, ranking ASC, name ASC") } | unknown | |
d299 | train | Usually, this error means there is an error in the Django project somewhere. It could be hard to locate.
You can try multiple solutions like:
1. Restart Apache
2. Execute makemigrations and migrate Django commands
3. Modify your wsgi.py file so you can manage the exception
A little note: the line File "/home/abhadran/myenv/lib/python3.6/site-packages/django/apps/registry.py" indicates that you're using Python 3.6 and not Python 3.5 as you mentioned in your description.
Python 3.6 is only officially supported from Django 1.11 onwards, so you may need to upgrade your Django version or downgrade the Python in your venv.
| unknown | |
d300 | train | I solved this issue by increasing the default value(700) of Build process heap size on IntelliJ's compiler settings.
A:
I met the same problem
I solved it by changing the Target bytecode version from 1.5 to 8.
A: You have to disable the Javac option "Use compiler from module target JDK when possible".
A: I changed my compiler to Eclipse and ran my project, then changed back to Javac and the problem was solved. I don't know the exact cause, but it may help someone looking for a solution.
A: In my case, it was response type in restTemplate:
ResponseEntity<Map<String, Integer>> response = restTemplate.exchange(
eurl,
HttpMethod.POST,
requestEntity,
new ParameterizedTypeReference<>() { <---- this causes error
}
);
Should be like this:
ParameterizedTypeReference<Map<String, Integer>> responseType = new ParameterizedTypeReference<>() {};
ResponseEntity<Map<String, Integer>> response = restTemplate.exchange(
url,
HttpMethod.POST,
requestEntity,
responseType
);
A: It may not be relevant to this case, but:
I got this error when I changed the explicit type argument list of:
new ParameterizedTypeReference<List<SomeDtoObject>>()
to <> :
new ParameterizedTypeReference<>()
in a restTemplate call, after IntelliJ gave the warning to use <> instead.
It got fixed when I undid my changes and went back to the explicit type argument.
A: In my case, using Java 11, I had:
public List<String> foo() {
...
return response.readEntity(new GenericType<List<String>>() {});
and Intellij suggested I should use <> instead of GenericType<List<String>>, as such:
public List<String> foo() {
...
return response.readEntity(new GenericType<>() {});
I did that in four functions and the project stopped compiling with an internal compiler error, reverted and it compiled again. Looks like a bug with type inference.
A: In JIdea 2020.1.2 and above,
This may be because the language level set in Project Structure is not compatible with the target bytecode version.
You have to change the target bytecode version.
*
*Go to Settings [ Ctrl+Alt+S ]
*Select Java Compiler
*Select module in the table
*Change the byte-code version to map what you selected in the previous step for language-level
NOTE :
How to check the language-level
*
*Go to Project Structure [ Ctrl+Alt+Shift+S
]
*Select Modules sub section
*Select each module
*Under sources-section, check Language Level
A: In my case it was because of lombok library with intellij 2019.2 & java11.
According to this IDEA bug, after the workaround IDEA works again:
Disable all building from intelliJ and dedicate the build to Maven.
A: For me the module's target bytecode version was set to 5. I changed it to 8 and the error is gone:
A: Changing the Language Level in the Project Settings (Ctrl + Alt + Shift + S) to Java 8 solved the problem for me
A: *
*On Intellij IDEA Ctrl + Alt + S to open settings.
*Build, Execution, Deployment -> Compiler -> Java Compiler
*choose your java version from Project bytecode version
*Uncheck Use compiler from module target JDK when possible
*click apply and ok.
A: I had the same problem. I fixed it by changing my settings so that the module's Target bytecode version equals the Project bytecode version.
A: What worked for me is to update the Open JDK version
A: I got the same error with Community edition 2020.3 on Windows 10 with an older version of the JDK (openjdk version "11" 2018-09-25).
Updating the JDK to javac 11.0.10 fixed the issue.
Here's the stack trace that showed up with the error when using openjdk version "11" 2018-09-25:
java: compiler message file broken: key=compiler.misc.msg.bug arguments=11, {1}, {2}, {3}, {4}, {5}, {6}, {7}
java: java.lang.AssertionError
java: at jdk.compiler/com.sun.tools.javac.util.Assert.error(Assert.java:155)
java: at jdk.compiler/com.sun.tools.javac.util.Assert.check(Assert.java:46)
java: at jdk.compiler/com.sun.tools.javac.comp.DeferredAttr$2$1.setOverloadKind(DeferredAttr.java:172)
java: at jdk.compiler/com.sun.tools.javac.comp.ArgumentAttr.visitReference(ArgumentAttr.java:283)
java: at jdk.compiler/com.sun.tools.javac.tree.JCTree$JCMemberReference.accept(JCTree.java:2190)
java: at jdk.compiler/com.sun.tools.javac.comp.ArgumentAttr.attribArg(ArgumentAttr.java:197)
java: at jdk.compiler/com.sun.tools.javac.comp.Attr.attribTree(Attr.java:653)
java: at jdk.compiler/com.sun.tools.javac.comp.Attr.attribArgs(Attr.java:751)
java: at jdk.compiler/com.sun.tools.javac.comp.Attr.visitApply(Attr.java:1997)
java: at jdk.compiler/com.sun.tools.javac.tree.JCTree$JCMethodInvocation.accept(JCTree.java:1634)
java: at jdk.compiler/com.sun.tools.javac.comp.Attr.attribTree(Attr.java:655)
java: at jdk.compiler/com.sun.tools.javac.comp.Attr.visitSelect(Attr.java:3573)
java: at jdk.compiler/com.sun.tools.javac.tree.JCTree$JCFieldAccess.accept(JCTree.java:2110)
java: at jdk.compiler/com.sun.tools.javac.comp.Attr.visitApply(Attr.java:2006)
java: at jdk.compiler/com.sun.tools.javac.comp.Attr.visitReturn(Attr.java:1866)
java: at jdk.compiler/com.sun.tools.javac.tree.JCTree$JCReturn.accept(JCTree.java:1546)
java: at jdk.compiler/com.sun.tools.javac.comp.Attr.attribStat(Attr.java:724)
java: at jdk.compiler/com.sun.tools.javac.comp.Attr.attribStats(Attr.java:743)
java: at jdk.compiler/com.sun.tools.javac.comp.Attr.visitBlock(Attr.java:1294)
java: at jdk.compiler/com.sun.tools.javac.tree.JCTree$JCBlock.accept(JCTree.java:1020)
java: at jdk.compiler/com.sun.tools.javac.comp.DeferredAttr.attribSpeculative(DeferredAttr.java:498)
java: at jdk.compiler/com.sun.tools.javac.comp.DeferredAttr.attribSpeculative(DeferredAttr.java:481)
java: at jdk.compiler/com.sun.tools.javac.comp.DeferredAttr.attribSpeculativeLambda(DeferredAttr.java:456)
java: at jdk.compiler/com.sun.tools.javac.comp.DeferredAttr$DeferredAttrNode$StructuralStuckChecker.canLambdaBodyCompleteNormally(DeferredAttr.java:900)
java: at jdk.compiler/com.sun.tools.javac.comp.DeferredAttr$DeferredAttrNode$StructuralStuckChecker.visitLambda(DeferredAttr.java:878)
java: at jdk.compiler/com.sun.tools.javac.tree.JCTree$JCLambda.accept(JCTree.java:1807)
java: at jdk.compiler/com.sun.tools.javac.comp.DeferredAttr$DeferredAttrNode$StructuralStuckChecker.complete(DeferredAttr.java:832)
java: at jdk.compiler/com.sun.tools.javac.comp.DeferredAttr$DeferredType.check(DeferredAttr.java:335)
java: at jdk.compiler/com.sun.tools.javac.comp.DeferredAttr$DeferredAttrNode.process(DeferredAttr.java:779)
java: at jdk.compiler/com.sun.tools.javac.comp.DeferredAttr$DeferredAttrContext.complete(DeferredAttr.java:626)
java: at jdk.compiler/com.sun.tools.javac.comp.Infer.instantiateMethod(Infer.java:214)
java: at jdk.compiler/com.sun.tools.javac.comp.Resolve.rawInstantiate(Resolve.java:605)
java: at jdk.compiler/com.sun.tools.javac.comp.Resolve.selectBest(Resolve.java:1563)
java: at jdk.compiler/com.sun.tools.javac.comp.Resolve.findMethodInScope(Resolve.java:1733)
java: at jdk.compiler/com.sun.tools.javac.comp.Resolve.findMethod(Resolve.java:1802)
java: at jdk.compiler/com.sun.tools.javac.comp.Resolve.findMethod(Resolve.java:1776)
java: at jdk.compiler/com.sun.tools.javac.comp.Resolve$10.doLookup(Resolve.java:2654)
java: at jdk.compiler/com.sun.tools.javac.comp.Resolve$BasicLookupHelper.lookup(Resolve.java:3293)
java: at jdk.compiler/com.sun.tools.javac.comp.Resolve.lookupMethod(Resolve.java:3543)
java: at jdk.compiler/com.sun.tools.javac.comp.Resolve.resolveQualifiedMethod(Resolve.java:2651)
java: at jdk.compiler/com.sun.tools.javac.comp.Resolve.resolveQualifiedMethod(Resolve.java:2645)
java: at jdk.compiler/com.sun.tools.javac.comp.Attr.selectSym(Attr.java:3721)
java: at jdk.compiler/com.sun.tools.javac.comp.Attr.visitSelect(Attr.java:3601)
java: at jdk.compiler/com.sun.tools.javac.comp.Attr.visitLambda(Attr.java:2598)
java: at jdk.compiler/com.sun.tools.javac.comp.DeferredAttr$4.complete(DeferredAttr.java:374)
java: at jdk.compiler/com.sun.tools.javac.comp.DeferredAttr$DeferredType.check(DeferredAttr.java:321)
java: at jdk.compiler/com.sun.tools.javac.comp.Resolve$MethodResultInfo.check(Resolve.java:1060)
java: at jdk.compiler/com.sun.tools.javac.comp.Resolve$4.checkArg(Resolve.java:887)
java: at jdk.compiler/com.sun.tools.javac.comp.Resolve$AbstractMethodCheck.argumentsAcceptable(Resolve.java:775)
java: at jdk.compiler/com.sun.tools.javac.comp.Resolve$4.argumentsAcceptable(Resolve.java:896)
java: at jdk.compiler/com.sun.tools.javac.comp.Infer.instantiateMethod(Infer.java:181)
java: at jdk.compiler/com.sun.tools.javac.comp.Resolve.checkMethod(Resolve.java:644)
java: at jdk.compiler/com.sun.tools.javac.comp.Attr.checkMethod(Attr.java:4120)
java: at jdk.compiler/com.sun.tools.javac.comp.Attr.checkIdInternal(Attr.java:3913)
java: at jdk.compiler/com.sun.tools.javac.comp.Attr.checkMethodIdInternal(Attr.java:3814)
java: at jdk.compiler/com.sun.tools.javac.comp.Attr.checkId(Attr.java:3803)
java: at jdk.compiler/com.sun.tools.javac.comp.Attr.visitSelect(Attr.java:3696)
java: at jdk.compiler/com.sun.tools.javac.comp.Attr.visitLambda(Attr.java:2595)
java: at jdk.compiler/com.sun.tools.javac.comp.DeferredAttr$DeferredAttrNode.process(DeferredAttr.java:811)
java: at jdk.compiler/com.sun.tools.javac.comp.Attr.visitIdent(Attr.java:3553)
java: at jdk.compiler/com.sun.tools.javac.tree.JCTree$JCIdent.accept(JCTree.java:2243)
java: at jdk.compiler/com.sun.tools.javac.comp.Attr.attribExpr(Attr.java:702)
java: at jdk.compiler/com.sun.tools.javac.comp.Attr.visitExec(Attr.java:1773)
java: at jdk.compiler/com.sun.tools.javac.tree.JCTree$JCExpressionStatement.accept(JCTree.java:1452)
java: at jdk.compiler/com.sun.tools.javac.comp.Attr.visitMethodDef(Attr.java:1098)
java: at jdk.compiler/com.sun.tools.javac.tree.JCTree$JCMethodDecl.accept(JCTree.java:866)
java: at jdk.compiler/com.sun.tools.javac.comp.Attr.attribClassBody(Attr.java:4683)
java: at jdk.compiler/com.sun.tools.javac.comp.Attr.attribClass(Attr.java:4574)
java: at jdk.compiler/com.sun.tools.javac.comp.Attr.attribClass(Attr.java:4523)
java: at jdk.compiler/com.sun.tools.javac.comp.Attr.attribClass(Attr.java:4503)
java: at jdk.compiler/com.sun.tools.javac.comp.Attr.attrib(Attr.java:4448)
java: at jdk.compiler/com.sun.tools.javac.main.JavaCompiler.attribute(JavaCompiler.java:1341)
java: at jdk.compiler/com.sun.tools.javac.main.JavaCompiler.compile(JavaCompiler.java:973)
java: at jdk.compiler/com.sun.tools.javac.api.JavacTaskImpl.lambda$doCall$0(JavacTaskImpl.java:104)
java: at jdk.compiler/com.sun.tools.javac.api.JavacTaskImpl.handleExceptions(JavacTaskImpl.java:147)
java: at jdk.compiler/com.sun.tools.javac.api.JavacTaskImpl.doCall(JavacTaskImpl.java:100)
java: at jdk.compiler/com.sun.tools.javac.api.JavacTaskImpl.call(JavacTaskImpl.java:94)
java: at org.jetbrains.jps.javac.JavacMain.compile(JavacMain.java:231)
java: at org.jetbrains.jps.incremental.java.JavaBuilder.compileJava(JavaBuilder.java:501)
java: at org.jetbrains.jps.incremental.java.JavaBuilder.compile(JavaBuilder.java:353)
java: at org.jetbrains.jps.incremental.java.JavaBuilder.doBuild(JavaBuilder.java:277)
java: at org.jetbrains.jps.incremental.java.JavaBuilder.build(JavaBuilder.java:231)
java: at org.jetbrains.jps.incremental.IncProjectBuilder.runModuleLevelBuilders(IncProjectBuilder.java:1441)
java: at org.jetbrains.jps.incremental.IncProjectBuilder.runBuildersForChunk(IncProjectBuilder.java:1100)
java: at org.jetbrains.jps.incremental.IncProjectBuilder.buildTargetsChunk(IncProjectBuilder.java:1224)
java: at org.jetbrains.jps.incremental.IncProjectBuilder.buildChunkIfAffected(IncProjectBuilder.java:1066)
java: at org.jetbrains.jps.incremental.IncProjectBuilder.buildChunks(IncProjectBuilder.java:832)
java: at org.jetbrains.jps.incremental.IncProjectBuilder.runBuild(IncProjectBuilder.java:419)
java: at org.jetbrains.jps.incremental.IncProjectBuilder.build(IncProjectBuilder.java:183)
java: at org.jetbrains.jps.cmdline.BuildRunner.runBuild(BuildRunner.java:132)
java: at org.jetbrains.jps.cmdline.BuildSession.runBuild(BuildSession.java:302)
java: at org.jetbrains.jps.cmdline.BuildSession.run(BuildSession.java:132)
java: at org.jetbrains.jps.cmdline.BuildMain$MyMessageHandler.lambda$channelRead0$0(BuildMain.java:219)
java: at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
java: at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
java: at java.base/java.lang.Thread.run(Thread.java:834)
java: Compilation failed: internal java compiler error
java: Errors occurred while compiling module 'project'
javac 11 was used to compile java sources
Finished, saving caches...
Compilation failed: errors: 1; warnings: 100
A:
Setting -> Build -> Compiler -> Java Compiler
The Target bytecode version of the module is wrong. I set it to 1.8, then it worked.
A: In my case the error was Information:java: java.lang.OutOfMemoryError: GC overhead limit exceeded in IntelliJ.
I increased Compiler -> Build process heap size.
Ref: https://intellij-support.jetbrains.com/hc/en-us/community/posts/360003315120-GC-overhead-limit-exceeded
A: In my case I had to go to help > show logs in files which opens up the idea.log and build-log folders something like
/home/user/.cache/JetBrains/IntelliJIdea2021.2/log/build-log/ where I set the log level to DEBUG in the log4j.rootLogger=debug, file in build-log.properties
I then ran build again and saw
2021-11-27 19:59:39,808 [ 133595] DEBUG - s.incremental.java.JavaBuilder - Compiling chunk [module] with options: "-g -deprecation -encoding UTF-8 -source 11 -target 11 -s /home/user/project/target/generated-test-sources/test-annotations", mode=in-process
2021-11-27 19:59:41,082 [ 134869] DEBUG - s.incremental.java.JavaBuilder - java:ERROR:Compilation failed: internal java compiler error
which led me to see that this might be related to JUnit test compilation failing. It turned out I had an older/mismatching version of the vintage engine versus the Jupiter engine, which are likely to have different Java versions, resulting in the error above. Changing them to the same ${version.junit} removed the error.
<dependency>
<groupId>org.junit.platform</groupId>
<artifactId>junit-platform-launcher</artifactId>
<version>1.6.2</version>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.junit.platform</groupId>
<artifactId>junit-platform-commons</artifactId>
<version>1.7.1</version>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.junit.jupiter</groupId>
<artifactId>junit-jupiter-engine</artifactId>
<version>${version.junit}</version>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.junit.vintage</groupId>
<artifactId>junit-vintage-engine</artifactId>
<version>${version.junit}</version>
<scope>test</scope>
</dependency>
In short some of your dependency jars may have mismatching java versions.
A: Was facing the same issue with Java 11. Solved by changing language level
File -> Project Structure -> Project
Change "Language Level" to SDK Default
A: I updated the Java compiler's "Target bytecode version" to the correct value, which in my case is 8:
A: One reason may be that the JDK version does not match the minimal version required by your project.
A: Be aware of JDK-8177068 issue, which leads to internal error like
java.lang.NullPointerException
at jdk.compiler/com.sun.tools.javac.comp.Flow$FlowAnalyzer.visitApply(Flow.java:1233)
at jdk.compiler/com.sun.tools.javac.tree.JCTree$JCMethodInvocation.accept(JCTree.java:1628)
at jdk.compiler/com.sun.tools.javac.tree.TreeScanner.scan(TreeScanner.java:49)
at jdk.compiler/com.sun.tools.javac.comp.Flow$BaseAnalyzer.scan(Flow.java:393)
at jdk.compiler/com.sun.tools.javac.tree.TreeScanner.visitExec(TreeScanner.java:213)
...
It was fixed in JDK 11.0.12 and JDK 14 b14, so upgrade helped.
A: I switched across to the cmd line mvn compile build and it showed a more meaningful error.
Fatal error compiling: error: invalid target release: 17 -> [Help 1]
Checking my JAVA_HOME, it was set to 11. Once I adjusted my project to use 11 as well, I got past this and onto another error (which was solved separately).
A: Otherwise you can remove .m2 folder. Try to reload project.
A: In my case, I was using Spring Framework 6.0.0 and JDK 11 at the same time. This is not supported according to the Spring Framework wiki. After I downgraded the Spring Framework version to 5.3.24, it was solved.
You can check your spring framework version in this way.
spring framework version | unknown |