Dataset schema:
- qid: int64, values 1 to 3.11M
- question: string, 10 to 32.1k chars
- date: string, 10 chars
- metadata: sequence
- response_j: string, 0 to 33.7k chars
- response_k: string, 3 to 34.7k chars
2,714,404
I have a VBA form (in Excel if that matters) that contains text boxes. On three occasions, I have found myself pressing the tab key to navigate to the next control, but instead an actual TAB is being put in the text box. The form normally acts as it should, but it does concern me that this gremlin is showing up sporadically. It has shown up on Office 2003 as well as Office 2007 on two different computers. Has anyone else encountered this problem and, if so, how did you fix it?
2010/04/26
[ "https://Stackoverflow.com/questions/2714404", "https://Stackoverflow.com", "https://Stackoverflow.com/users/250385/" ]
I have also seen this behaviour on my coworker's computer for several years now, while mine works fine. I set the TabStop property of all the checkboxes to False, and it seems to work fine now.
This might solve the problem:

```
Public Sub MoveFocusToNextControl(xfrmFormName As UserForm, _
                                  xctlCurrentControl As Control)
    Dim xctl As Control
    Dim lngTab As Long, lngNewTab As Long
    On Error Resume Next
    ' Move focus to the next control in the tab order
    lngTab = xctlCurrentControl.TabIndex + 1
    For Each xctl In xfrmFormName.Controls
        lngNewTab = xctl.TabIndex
        ' An error will occur if the control does not have a TabIndex property;
        ' skip over those controls.
        If Err.Number = 0 Then
            If lngNewTab = lngTab Then
                xctl.SetFocus
                Exit For
            End If
        Else
            Err.Clear
        End If
    Next xctl
    Set xctl = Nothing
    Err.Clear
End Sub
```
2,714,404
I have a VBA form (in Excel if that matters) that contains text boxes. On three occasions, I have found myself pressing the tab key to navigate to the next control, but instead an actual TAB is being put in the text box. The form normally acts as it should, but it does concern me that this gremlin is showing up sporadically. It has shown up on Office 2003 as well as Office 2007 on two different computers. Has anyone else encountered this problem and, if so, how did you fix it?
2010/04/26
[ "https://Stackoverflow.com/questions/2714404", "https://Stackoverflow.com", "https://Stackoverflow.com/users/250385/" ]
As a quick work-around, use this code in the control's Exit event.

```
Private Sub TextBox1_Exit(ByVal Cancel As MSForms.ReturnBoolean)
    TextBox1.Text = VBA.Replace(TextBox1.Text, VBA.Chr(9), "")
End Sub
```
This might solve the problem:

```
Public Sub MoveFocusToNextControl(xfrmFormName As UserForm, _
                                  xctlCurrentControl As Control)
    Dim xctl As Control
    Dim lngTab As Long, lngNewTab As Long
    On Error Resume Next
    ' Move focus to the next control in the tab order
    lngTab = xctlCurrentControl.TabIndex + 1
    For Each xctl In xfrmFormName.Controls
        lngNewTab = xctl.TabIndex
        ' An error will occur if the control does not have a TabIndex property;
        ' skip over those controls.
        If Err.Number = 0 Then
            If lngNewTab = lngTab Then
                xctl.SetFocus
                Exit For
            End If
        Else
            Err.Clear
        End If
    Next xctl
    Set xctl = Nothing
    Err.Clear
End Sub
```
2,714,404
I have a VBA form (in Excel if that matters) that contains text boxes. On three occasions, I have found myself pressing the tab key to navigate to the next control, but instead an actual TAB is being put in the text box. The form normally acts as it should, but it does concern me that this gremlin is showing up sporadically. It has shown up on Office 2003 as well as Office 2007 on two different computers. Has anyone else encountered this problem and, if so, how did you fix it?
2010/04/26
[ "https://Stackoverflow.com/questions/2714404", "https://Stackoverflow.com", "https://Stackoverflow.com/users/250385/" ]
I was able to reproduce the problem 100% of the time by launching Excel, immediately pulling up the form, and holding down the tab key. If I change any code at all in the form and resave the workbook, the problem goes away. I'm going to chalk this up to a fluke compilation error within VBA.
Set the `TabKeyBehavior` property to `False` to get "Tab jumps to next field" behavior.
2,714,404
I have a VBA form (in Excel if that matters) that contains text boxes. On three occasions, I have found myself pressing the tab key to navigate to the next control, but instead an actual TAB is being put in the text box. The form normally acts as it should, but it does concern me that this gremlin is showing up sporadically. It has shown up on Office 2003 as well as Office 2007 on two different computers. Has anyone else encountered this problem and, if so, how did you fix it?
2010/04/26
[ "https://Stackoverflow.com/questions/2714404", "https://Stackoverflow.com", "https://Stackoverflow.com/users/250385/" ]
As a quick work-around, use this code in the control's Exit event.

```
Private Sub TextBox1_Exit(ByVal Cancel As MSForms.ReturnBoolean)
    TextBox1.Text = VBA.Replace(TextBox1.Text, VBA.Chr(9), "")
End Sub
```
Set the `TabKeyBehavior` property to `False` to get "Tab jumps to next field" behavior.
2,714,404
I have a VBA form (in Excel if that matters) that contains text boxes. On three occasions, I have found myself pressing the tab key to navigate to the next control, but instead an actual TAB is being put in the text box. The form normally acts as it should, but it does concern me that this gremlin is showing up sporadically. It has shown up on Office 2003 as well as Office 2007 on two different computers. Has anyone else encountered this problem and, if so, how did you fix it?
2010/04/26
[ "https://Stackoverflow.com/questions/2714404", "https://Stackoverflow.com", "https://Stackoverflow.com/users/250385/" ]
I was able to reproduce the problem 100% of the time by launching Excel, immediately pulling up the form, and holding down the tab key. If I change any code at all in the form and resave the workbook, the problem goes away. I'm going to chalk this up to a fluke compilation error within VBA.
As a quick work-around, use this code in the control's Exit event.

```
Private Sub TextBox1_Exit(ByVal Cancel As MSForms.ReturnBoolean)
    TextBox1.Text = VBA.Replace(TextBox1.Text, VBA.Chr(9), "")
End Sub
```
2,714,404
I have a VBA form (in Excel if that matters) that contains text boxes. On three occasions, I have found myself pressing the tab key to navigate to the next control, but instead an actual TAB is being put in the text box. The form normally acts as it should, but it does concern me that this gremlin is showing up sporadically. It has shown up on Office 2003 as well as Office 2007 on two different computers. Has anyone else encountered this problem and, if so, how did you fix it?
2010/04/26
[ "https://Stackoverflow.com/questions/2714404", "https://Stackoverflow.com", "https://Stackoverflow.com/users/250385/" ]
I was able to reproduce the problem 100% of the time by launching Excel, immediately pulling up the form, and holding down the tab key. If I change any code at all in the form and resave the workbook, the problem goes away. I'm going to chalk this up to a fluke compilation error within VBA.
I have also seen this behaviour on my coworker's computer for several years now, while mine works fine. I set the TabStop property of all the checkboxes to False, and it seems to work fine now.
2,714,404
I have a VBA form (in Excel if that matters) that contains text boxes. On three occasions, I have found myself pressing the tab key to navigate to the next control, but instead an actual TAB is being put in the text box. The form normally acts as it should, but it does concern me that this gremlin is showing up sporadically. It has shown up on Office 2003 as well as Office 2007 on two different computers. Has anyone else encountered this problem and, if so, how did you fix it?
2010/04/26
[ "https://Stackoverflow.com/questions/2714404", "https://Stackoverflow.com", "https://Stackoverflow.com/users/250385/" ]
I was able to reproduce the problem 100% of the time by launching Excel, immediately pulling up the form, and holding down the tab key. If I change any code at all in the form and resave the workbook, the problem goes away. I'm going to chalk this up to a fluke compilation error within VBA.
This might solve the problem:

```
Public Sub MoveFocusToNextControl(xfrmFormName As UserForm, _
                                  xctlCurrentControl As Control)
    Dim xctl As Control
    Dim lngTab As Long, lngNewTab As Long
    On Error Resume Next
    ' Move focus to the next control in the tab order
    lngTab = xctlCurrentControl.TabIndex + 1
    For Each xctl In xfrmFormName.Controls
        lngNewTab = xctl.TabIndex
        ' An error will occur if the control does not have a TabIndex property;
        ' skip over those controls.
        If Err.Number = 0 Then
            If lngNewTab = lngTab Then
                xctl.SetFocus
                Exit For
            End If
        Else
            Err.Clear
        End If
    Next xctl
    Set xctl = Nothing
    Err.Clear
End Sub
```
2,714,404
I have a VBA form (in Excel if that matters) that contains text boxes. On three occasions, I have found myself pressing the tab key to navigate to the next control, but instead an actual TAB is being put in the text box. The form normally acts as it should, but it does concern me that this gremlin is showing up sporadically. It has shown up on Office 2003 as well as Office 2007 on two different computers. Has anyone else encountered this problem and, if so, how did you fix it?
2010/04/26
[ "https://Stackoverflow.com/questions/2714404", "https://Stackoverflow.com", "https://Stackoverflow.com/users/250385/" ]
I created a form with three text boxes. I entered characters and tabbed on to the next for some time without being able to duplicate your problem. The only way I can get a tab into the text box is by entering Ctrl+Tab. This might be embarrassing, but backspace removes it, so it is not a major issue. Is it possible that you are accidentally pressing Ctrl at the same time? I find occasionally that if I mispress a key, the cursor jumps to somewhere else on the screen. I am not quite sure what I mean by "mispress"; it seems to be something to do with pressing two keys at once. This seems to be a feature of modern keyboards and how they detect which key has been pressed, because I have encountered it on many different computers. The implication is that by mispressing a key, a control character (perhaps Tab or Ctrl+Tab) is generated. I also tried the following, which worked and conceals the problem by removing the tab and moving on to the next control.

```
Private Sub TextBox1_Change()
    If InStr(1, TextBox1.Text, Chr(9)) <> 0 Then
        TextBox1.Text = Replace(TextBox1.Text, Chr(9), "")
        TextBox2.SetFocus
    End If
End Sub
```
Set the `TabKeyBehavior` property to `False` to get "Tab jumps to next field" behavior.
962,564
I'm evaluating Apache CXF for a project so I wrote a small demo application to try a few things out. Following the CXF user's guide, I was able to get my application up and running pretty quickly. One thing I wanted to test was how well CXF is able to handle a method that returns a large array of primitives. So I defined a method '`float[] getRandFloats(int count)`' which simply returns an array of the specified length filled with random numbers. Looking at the WSDL generated by `java2wsdl`, I see the method correctly indicates a return type of `float[]`. Inspecting the client side, I also see that I'm (ultimately) receiving a `float[]`. I noticed as I increase the number of elements in my array, the client performance suffers. I ran a profiler on the client-side and saw that there are `Float` objects being created for every element in the returned array. It seems this is happening when CXF invokes JAXB during the parsing of the response. I'm evaluating CXF for use with an application that potentially sends back millions of floating point numbers so this object creation is highly undesirable. In order to use CXF, I'd need to find a way to prevent this object creation from happening. I've scanned through the documentation and mailing list, but haven't come up with a way to work around this. Has anyone encountered a similar problem using CXF? If so how did you work around this?
2009/06/07
[ "https://Stackoverflow.com/questions/962564", "https://Stackoverflow.com", "https://Stackoverflow.com/users/109412/" ]
This definitely isn't anything CXF can do anything about. It's more of a JAXB issue. I believe that internally, JAXB handles all the "maxOccurs != 1" cases as a Java collection, not an array. It just converts to the array as the last step of the process if it needs to. Since Java collections cannot hold primitives, it would be `Float` objects being stored. In any case, this would have to be taken up with the JAXB folks. :-(
You say the client performance suffers as the number of elements in the array increases. This sounds reasonable to me: more data, less performance. What were you expecting there? As long as it's a linear degradation, it's behaving OK. As for the creation of millions of objects, a modern JVM will do this without breaking a sweat. I suspect the designers of CXF are well aware of this. Old JVMs had poor GC algorithms, and having millions of objects kicking around did indeed cause a problem, but this is no longer the case, particularly with very short-lived objects like you have here. So on the one hand, we have a performance degradation caused by lots of data, and on the other, the fact that millions of objects are created. However, there's no evidence that the two observations are related.
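To make the cost the answers are discussing concrete, here is a minimal, hypothetical sketch (plain Java, not CXF or JAXB code; all names are illustrative) contrasting a primitive `float[]` with the boxed `Float` elements a collection must hold:

```java
// Sketch of the boxing cost described above: a float[] stores raw 4-byte
// primitives inline, while a collection of Float (what JAXB reportedly
// uses internally for maxOccurs != 1 elements) needs one heap object per
// value, plus the backing Object[] array of references.
import java.util.ArrayList;
import java.util.List;

public class BoxingDemo {
    public static void main(String[] args) {
        int count = 1_000_000;

        // Primitive array: one contiguous allocation, no per-element objects.
        float[] primitives = new float[count];
        for (int i = 0; i < count; i++) primitives[i] = i * 0.5f;

        // Boxed list: autoboxing allocates a Float object per element
        // (Float has no autoboxing cache, unlike small Integer values).
        List<Float> boxed = new ArrayList<>(count);
        for (float f : primitives) boxed.add(f);

        // The values round-trip unchanged, but at the cost of 'count'
        // extra heap objects, which is what shows up in the profiler.
        System.out.println(boxed.get(2)); // 1.0
    }
}
```

Running this under a profiler should show roughly one `Float` allocation per element on the boxed path, matching the behaviour observed in the question.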
2,430,655
Consider the following block of code:

```
class CheckStore {
    private String displayText;
    private boolean state;
    private String meaningfulText;
    private URL url;

    public CheckStore(String text, boolean state) {
        this.displayText = text;
        this.state = state;
    }
    :
    :
}
```

When I initialize two variables (`displayText` and `state`) in the constructor, do the other two variables (`meaningfulText` and `url`) require space in memory to store a `null` value?

Q1. If they do require space, how much memory does a `null` value take? (For example, an `int` takes 4 bytes.)

Q2. How much space does a string take in memory? Does it depend on the length of the string?
2010/03/12
[ "https://Stackoverflow.com/questions/2430655", "https://Stackoverflow.com", "https://Stackoverflow.com/users/157027/" ]
In Java, `null` is just a value that a reference (which is basically a restricted pointer) can have. It means that the reference refers to nothing. In this case you still consume the space for the reference. This is 4 bytes on 32-bit systems or 8 bytes on 64-bit systems. However, you're not consuming any space for the class that the reference points to until you actually allocate an instance of that class to point the reference at. Edit: As for the String, a `String` in Java takes 16 bits (2 bytes) for each character, plus a small amount of book-keeping overhead, which is probably undocumented and implementation specific.
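As a concrete illustration of the answer above, here is a small, hypothetical sketch (modelled on the `CheckStore` class from the question) showing that unassigned reference fields default to `null`, with no `String` or `URL` instance existing for them until one is allocated:

```java
import java.net.URL;

public class NullFieldDemo {
    // Modelled on the CheckStore class from the question.
    static class CheckStore {
        String displayText;
        boolean state;
        String meaningfulText; // never assigned: defaults to null
        URL url;               // never assigned: defaults to null

        CheckStore(String text, boolean state) {
            this.displayText = text;
            this.state = state;
        }
    }

    public static void main(String[] args) {
        CheckStore cs = new CheckStore("hello", true);
        // Each reference field occupies one reference-sized slot (4 or 8
        // bytes) inside the object; the null ones point at no allocation.
        System.out.println(cs.meaningfulText == null); // true
        System.out.println(cs.url == null);            // true
        System.out.println(cs.displayText);            // hello
    }
}
```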
I'd like to add:

1. A variable of reference type will be initialized to the null value.
2. null is not an object, because `(null instanceof Object)` evaluates to false.
3. There is only one null value in the JVM, no matter how many variables refer to it.

```
Object s = (String) null;
Object i = (Integer) null;
System.out.println(s == i); // true
```
2,430,655
Consider the following block of code:

```
class CheckStore {
    private String displayText;
    private boolean state;
    private String meaningfulText;
    private URL url;

    public CheckStore(String text, boolean state) {
        this.displayText = text;
        this.state = state;
    }
    :
    :
}
```

When I initialize two variables (`displayText` and `state`) in the constructor, do the other two variables (`meaningfulText` and `url`) require space in memory to store a `null` value?

Q1. If they do require space, how much memory does a `null` value take? (For example, an `int` takes 4 bytes.)

Q2. How much space does a string take in memory? Does it depend on the length of the string?
2010/03/12
[ "https://Stackoverflow.com/questions/2430655", "https://Stackoverflow.com", "https://Stackoverflow.com/users/157027/" ]
In Java, `null` is just a value that a reference (which is basically a restricted pointer) can have. It means that the reference refers to nothing. In this case you still consume the space for the reference. This is 4 bytes on 32-bit systems or 8 bytes on 64-bit systems. However, you're not consuming any space for the class that the reference points to until you actually allocate an instance of that class to point the reference at. Edit: As for the String, a `String` in Java takes 16 bits (2 bytes) for each character, plus a small amount of book-keeping overhead, which is probably undocumented and implementation specific.
You can use [**jol**](http://openjdk.java.net/projects/code-tools/jol/) to get the layout of that class. (However, be careful: you might need a deeper understanding of the mechanics behind it; don't blindly trust the result, and be aware it is just an estimate for the currently used VM, 1.7.0_76 x64 Windows in my case.) I use the CLI version; I guess the proper method would be to include the library in your project, but anyway, it seems to work this way:

```
test>java -cp target\classes;jol-cli-0.3.1-full.jar org.openjdk.jol.Main internals test.CheckStore
Running 64-bit HotSpot VM.
Using compressed oop with 0-bit shift.
Using compressed klass with 0-bit shift.
Objects are 8 bytes aligned.
Field sizes by type: 4, 1, 1, 2, 2, 4, 4, 8, 8 [bytes]
Array element sizes: 4, 1, 1, 2, 2, 4, 4, 8, 8 [bytes]
VM fails to invoke the default constructor, falling back to class-only introspection.

test.CheckStore object internals:
 OFFSET  SIZE     TYPE DESCRIPTION                              VALUE
      0    12          (object header)                          N/A
     12     1  boolean CheckStore.state                         N/A
     13     3          (alignment/padding gap)                  N/A
     16     4   String CheckStore.displayText                   N/A
     20     4   String CheckStore.meaningfulText                N/A
     24     4      URL CheckStore.url                           N/A
     28     4          (loss due to the next object alignment)
Instance size: 32 bytes (estimated, the sample instance is not available)
Space losses: 3 bytes internal + 4 bytes external = 7 bytes total
```

and the same with automatic compressed oops off:

```
test>java -XX:-UseCompressedOops -cp target\classes;jol-cli-0.3.1-full.jar org.openjdk.jol.Main internals test.CheckStore
Running 64-bit HotSpot VM.
Objects are 8 bytes aligned.
Field sizes by type: 8, 1, 1, 2, 2, 4, 4, 8, 8 [bytes]
Array element sizes: 8, 1, 1, 2, 2, 4, 4, 8, 8 [bytes]
VM fails to invoke the default constructor, falling back to class-only introspection.

test.CheckStore object internals:
 OFFSET  SIZE     TYPE DESCRIPTION                              VALUE
      0    16          (object header)                          N/A
     16     1  boolean CheckStore.state                         N/A
     17     7          (alignment/padding gap)                  N/A
     24     8   String CheckStore.displayText                   N/A
     32     8   String CheckStore.meaningfulText                N/A
     40     8      URL CheckStore.url                           N/A
Instance size: 48 bytes (estimated, the sample instance is not available)
Space losses: 7 bytes internal + 0 bytes external = 7 bytes total
```

Those are only the layouts for the object itself. If your fields are null, the object will not point to more objects; otherwise you have to look at the target types (`URL` and `String`) as well. (And if you have multiple instances of all of them, it depends on whether you use the same instance multiple times or different ones.) A null field cannot be skipped in memory, as that would require the instance to be resized when the field is assigned. So the fields are all pre-allocated; they just do not reference objects somewhere else on the heap. NB: you get some more details if you implement a default constructor, but the sizing in this specific case would be the same. In case you wonder where the sequence and padding of fields comes from, you can check [this article](http://psy-lob-saw.blogspot.de/2013/05/know-thy-java-object-memory-layout.html) (basically it aligns objects on 8 bytes, sorts fields by size, groups same types together, references last; fields from super types come first, 4-byte aligned).
2,430,655
Consider the following block of code:

```
class CheckStore {
    private String displayText;
    private boolean state;
    private String meaningfulText;
    private URL url;

    public CheckStore(String text, boolean state) {
        this.displayText = text;
        this.state = state;
    }
    :
    :
}
```

When I initialize two variables (`displayText` and `state`) in the constructor, do the other two variables (`meaningfulText` and `url`) require space in memory to store a `null` value?

Q1. If they do require space, how much memory does a `null` value take? (For example, an `int` takes 4 bytes.)

Q2. How much space does a string take in memory? Does it depend on the length of the string?
2010/03/12
[ "https://Stackoverflow.com/questions/2430655", "https://Stackoverflow.com", "https://Stackoverflow.com/users/157027/" ]
In Java, `null` is just a value that a reference (which is basically a restricted pointer) can have. It means that the reference refers to nothing. In this case you still consume the space for the reference. This is 4 bytes on 32-bit systems or 8 bytes on 64-bit systems. However, you're not consuming any space for the class that the reference points to until you actually allocate an instance of that class to point the reference at. Edit: As for the String, a `String` in Java takes 16 bits (2 bytes) for each character, plus a small amount of book-keeping overhead, which is probably undocumented and implementation specific.
Null means 0. There is usually one "null" defined in memory, and whenever a reference in a programming language is null, it points to that same place. This means only one 4-byte slot is consumed for NULL itself; whatever points to it does not consume any more memory for the target. The definition of NULL is language specific, but defining it as `void *ptr = 0;` is common in C and C++, and Java must have defined it similarly. It is not possible to point to nothing, of course; you have to point to something. But we define a common "nothing", and everything that points to it consumes only the space of the reference.
2,430,655
Consider the following block of code:

```
class CheckStore {
    private String displayText;
    private boolean state;
    private String meaningfulText;
    private URL url;

    public CheckStore(String text, boolean state) {
        this.displayText = text;
        this.state = state;
    }
    :
    :
}
```

When I initialize two variables (`displayText` and `state`) in the constructor, do the other two variables (`meaningfulText` and `url`) require space in memory to store a `null` value?

Q1. If they do require space, how much memory does a `null` value take? (For example, an `int` takes 4 bytes.)

Q2. How much space does a string take in memory? Does it depend on the length of the string?
2010/03/12
[ "https://Stackoverflow.com/questions/2430655", "https://Stackoverflow.com", "https://Stackoverflow.com/users/157027/" ]
I'd like to add:

1. A variable of reference type will be initialized to the null value.
2. null is not an object, because `(null instanceof Object)` evaluates to false.
3. There is only one null value in the JVM, no matter how many variables refer to it.

```
Object s = (String) null;
Object i = (Integer) null;
System.out.println(s == i); // true
```
You can use [**jol**](http://openjdk.java.net/projects/code-tools/jol/) to get the layout of that class. (However, be careful: you might need a deeper understanding of the mechanics behind it; don't blindly trust the result, and be aware it is just an estimate for the currently used VM, 1.7.0_76 x64 Windows in my case.) I use the CLI version; I guess the proper method would be to include the library in your project, but anyway, it seems to work this way:

```
test>java -cp target\classes;jol-cli-0.3.1-full.jar org.openjdk.jol.Main internals test.CheckStore
Running 64-bit HotSpot VM.
Using compressed oop with 0-bit shift.
Using compressed klass with 0-bit shift.
Objects are 8 bytes aligned.
Field sizes by type: 4, 1, 1, 2, 2, 4, 4, 8, 8 [bytes]
Array element sizes: 4, 1, 1, 2, 2, 4, 4, 8, 8 [bytes]
VM fails to invoke the default constructor, falling back to class-only introspection.

test.CheckStore object internals:
 OFFSET  SIZE     TYPE DESCRIPTION                              VALUE
      0    12          (object header)                          N/A
     12     1  boolean CheckStore.state                         N/A
     13     3          (alignment/padding gap)                  N/A
     16     4   String CheckStore.displayText                   N/A
     20     4   String CheckStore.meaningfulText                N/A
     24     4      URL CheckStore.url                           N/A
     28     4          (loss due to the next object alignment)
Instance size: 32 bytes (estimated, the sample instance is not available)
Space losses: 3 bytes internal + 4 bytes external = 7 bytes total
```

and the same with automatic compressed oops off:

```
test>java -XX:-UseCompressedOops -cp target\classes;jol-cli-0.3.1-full.jar org.openjdk.jol.Main internals test.CheckStore
Running 64-bit HotSpot VM.
Objects are 8 bytes aligned.
Field sizes by type: 8, 1, 1, 2, 2, 4, 4, 8, 8 [bytes]
Array element sizes: 8, 1, 1, 2, 2, 4, 4, 8, 8 [bytes]
VM fails to invoke the default constructor, falling back to class-only introspection.

test.CheckStore object internals:
 OFFSET  SIZE     TYPE DESCRIPTION                              VALUE
      0    16          (object header)                          N/A
     16     1  boolean CheckStore.state                         N/A
     17     7          (alignment/padding gap)                  N/A
     24     8   String CheckStore.displayText                   N/A
     32     8   String CheckStore.meaningfulText                N/A
     40     8      URL CheckStore.url                           N/A
Instance size: 48 bytes (estimated, the sample instance is not available)
Space losses: 7 bytes internal + 0 bytes external = 7 bytes total
```

Those are only the layouts for the object itself. If your fields are null, the object will not point to more objects; otherwise you have to look at the target types (`URL` and `String`) as well. (And if you have multiple instances of all of them, it depends on whether you use the same instance multiple times or different ones.) A null field cannot be skipped in memory, as that would require the instance to be resized when the field is assigned. So the fields are all pre-allocated; they just do not reference objects somewhere else on the heap. NB: you get some more details if you implement a default constructor, but the sizing in this specific case would be the same. In case you wonder where the sequence and padding of fields comes from, you can check [this article](http://psy-lob-saw.blogspot.de/2013/05/know-thy-java-object-memory-layout.html) (basically it aligns objects on 8 bytes, sorts fields by size, groups same types together, references last; fields from super types come first, 4-byte aligned).
2,430,655
Consider the following block of code:

```
class CheckStore {
    private String displayText;
    private boolean state;
    private String meaningfulText;
    private URL url;

    public CheckStore(String text, boolean state) {
        this.displayText = text;
        this.state = state;
    }
    :
    :
}
```

When I initialize two variables (`displayText` and `state`) in the constructor, do the other two variables (`meaningfulText` and `url`) require space in memory to store a `null` value?

Q1. If they do require space, how much memory does a `null` value take? (For example, an `int` takes 4 bytes.)

Q2. How much space does a string take in memory? Does it depend on the length of the string?
2010/03/12
[ "https://Stackoverflow.com/questions/2430655", "https://Stackoverflow.com", "https://Stackoverflow.com/users/157027/" ]
I'd like to add:

1. A variable of reference type will be initialized to the null value.
2. null is not an object, because `(null instanceof Object)` evaluates to false.
3. There is only one null value in the JVM, no matter how many variables refer to it.

```
Object s = (String) null;
Object i = (Integer) null;
System.out.println(s == i); // true
```
Null means 0. There is usually one "null" defined in memory, and whenever a reference in a programming language is null, it points to that same place. This means only one 4-byte slot is consumed for NULL itself; whatever points to it does not consume any more memory for the target. The definition of NULL is language specific, but defining it as `void *ptr = 0;` is common in C and C++, and Java must have defined it similarly. It is not possible to point to nothing, of course; you have to point to something. But we define a common "nothing", and everything that points to it consumes only the space of the reference.
2,675,355
I am developing an application (in C#) where objects are active during a period of time; they have `from` and `to` properties of type `DateTime`. Now I want to speed up my search routine for queries like: are there other active objects in this time period / at this time? Is there an existing temporal index I can use, or can I use a QuadTree or other tree structure to search efficiently?
2010/04/20
[ "https://Stackoverflow.com/questions/2675355", "https://Stackoverflow.com", "https://Stackoverflow.com/users/321299/" ]
You should also take a look at the [interval tree](http://en.wikipedia.org/wiki/Interval_tree):

> In computer science, an interval tree is an ordered tree data structure to hold intervals. Specifically, it allows one to efficiently find all intervals that overlap with any given interval or point.

And that reminds me of this [SO question](https://stackoverflow.com/questions/2147505/a-dictionary-object-that-uses-ranges-of-values-for-keys).
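The query an interval tree accelerates is built on a simple predicate: two closed intervals overlap exactly when each starts no later than the other ends. A minimal sketch (a hypothetical helper, written in Java for brevity rather than the question's C#):

```java
// Sketch of the overlap predicate an interval tree is organized around:
// [aStart, aEnd] and [bStart, bEnd] overlap iff each one starts no later
// than the other ends. With DateTime-like values, replace long with ticks.
public class IntervalDemo {
    static boolean overlaps(long aStart, long aEnd, long bStart, long bEnd) {
        return aStart <= bEnd && bStart <= aEnd;
    }

    public static void main(String[] args) {
        System.out.println(overlaps(1, 5, 4, 9)); // true  (share [4, 5])
        System.out.println(overlaps(1, 3, 4, 9)); // false (disjoint)
        System.out.println(overlaps(1, 9, 3, 4)); // true  (containment)
    }
}
```

An interval tree stores the intervals so that all matches for this predicate can be found in O(log n + k) rather than by scanning every object.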
Why not just order your data in a list and then use a binary-search-like algorithm to limit the number of objects you consider?
2,675,355
I am developing an application (in C#) where objects are active during a period of time; they have `from` and `to` properties of type `DateTime`. Now I want to speed up my search routine for queries like: are there other active objects in this time period / at this time? Is there an existing temporal index I can use, or can I use a QuadTree or other tree structure to search efficiently?
2010/04/20
[ "https://Stackoverflow.com/questions/2675355", "https://Stackoverflow.com", "https://Stackoverflow.com/users/321299/" ]
Why not just order your data in a list and then use a binary-search-like algorithm to limit the number of objects you consider?
This is an interesting sorting problem, as you need to consider both the start and end date of each element. If you used a simple sorting algorithm, then you could sort by either start date or end date, but sorting by both wouldn't be very effective, as an element with an early start date could have a very late end date, or an element with a late start date could have an early end date, meaning that you can't really pre-sort this list based on the criteria "are any of these elements active right now?" If you're looking for a super-efficient mechanism to do this, I may not have an answer for you, but if you're just looking for something easy to do with existing C# data structures, I'd consider creating two sorted lists, one sorted by start date and the other sorted by end date. Search the start-date-sorted list for elements that start before right now and search the end-date-sorted list for elements that end after right now. Intersect those results to get your final answer. As I mentioned, I'm sure there is a more efficient mechanism out there to do this, but if I wanted to keep it simple and just use what I had available, I would consider doing that.
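The two-sorted-lists idea described above can be sketched in a few lines (written in Java rather than the question's C#; the `Span` type and all names are illustrative): scan the start-sorted list for elements starting before `now`, scan the end-sorted list for elements ending after `now`, and intersect the two result sets.

```java
import java.util.*;

public class ActiveAtDemo {
    // Hypothetical element with an active period [from, to].
    static class Span {
        final String id;
        final long from, to;
        Span(String id, long from, long to) {
            this.id = id; this.from = from; this.to = to;
        }
    }

    static Set<String> activeAt(List<Span> spans, long now) {
        // Sorted copies; a real index would maintain these incrementally.
        List<Span> byStart = new ArrayList<>(spans);
        byStart.sort(Comparator.comparingLong((Span s) -> s.from));
        List<Span> byEnd = new ArrayList<>(spans);
        byEnd.sort(Comparator.comparingLong((Span s) -> s.to));

        Set<String> startedBefore = new HashSet<>();
        for (Span s : byStart) {
            if (s.from > now) break;          // sorted: the rest start later
            startedBefore.add(s.id);
        }
        Set<String> endAfter = new HashSet<>();
        for (int i = byEnd.size() - 1; i >= 0; i--) {
            if (byEnd.get(i).to < now) break; // sorted: the rest end earlier
            endAfter.add(byEnd.get(i).id);
        }
        startedBefore.retainAll(endAfter);    // the intersection step
        return startedBefore;
    }

    public static void main(String[] args) {
        List<Span> spans = Arrays.asList(
                new Span("a", 0, 10), new Span("b", 5, 15), new Span("c", 12, 20));
        System.out.println(activeAt(spans, 7)); // contains a and b, not c
    }
}
```

Both scans can stop early thanks to the sorting, which is the limiting effect the answer describes; an interval tree achieves the same result without materializing two full candidate sets.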
2,675,355
I am developing an application (in C#) where objects are active during a period of time; they have `from` and `to` properties of type `DateTime`. Now I want to speed up my search routine for queries like: are there other active objects in this time period / at this time? Is there an existing temporal index I can use, or can I use a QuadTree or other tree structure to search efficiently?
2010/04/20
[ "https://Stackoverflow.com/questions/2675355", "https://Stackoverflow.com", "https://Stackoverflow.com/users/321299/" ]
Why not just order your data in a list and then use a binary-search-like algorithm to limit the number of objects you consider?
There is an [i4o](http://i4o.codeplex.com/) (indexes for objects) library. Maybe it would be useful.
2,675,355
I am developing an application (in C#) where objects are active during a period of time; they have `from` and `to` properties of type `DateTime`. Now I want to speed up my search routine for queries like: are there other active objects in this time period / at this time? Is there an existing temporal index I can use, or can I use a QuadTree or other tree structure to search efficiently?
2010/04/20
[ "https://Stackoverflow.com/questions/2675355", "https://Stackoverflow.com", "https://Stackoverflow.com/users/321299/" ]
You should also take a look at the [interval tree](http://en.wikipedia.org/wiki/Interval_tree):

> In computer science, an interval tree is an ordered tree data structure to hold intervals. Specifically, it allows one to efficiently find all intervals that overlap with any given interval or point.

And that reminds me of this [SO question](https://stackoverflow.com/questions/2147505/a-dictionary-object-that-uses-ranges-of-values-for-keys).
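For illustration, here is a minimal centered interval tree, sketched in Python rather than C#. It is a toy version of the structure the linked article describes, supporting only stabbing queries ("which intervals contain this point?"), not general interval overlap:

```python
def build(intervals):
    """Centered interval tree: each node stores the intervals that
    cross its center point, sorted both by start and by end."""
    if not intervals:
        return None
    center = sorted(s for s, e in intervals)[len(intervals) // 2]
    overlap = [(s, e) for s, e in intervals if s <= center <= e]
    return {
        "center": center,
        "left": build([(s, e) for s, e in intervals if e < center]),
        "right": build([(s, e) for s, e in intervals if s > center]),
        "by_start": sorted(overlap),
        "by_end": sorted(overlap, key=lambda iv: iv[1]),
    }

def stab(node, point):
    """Return all intervals (s, e) with s <= point <= e."""
    if node is None:
        return []
    hits = []
    if point < node["center"]:
        # Crossing intervals all end at or after center, so only starts matter.
        for s, e in node["by_start"]:
            if s > point:
                break
            hits.append((s, e))
        hits += stab(node["left"], point)
    else:
        # Crossing intervals all start at or before center, so only ends matter.
        for s, e in reversed(node["by_end"]):
            if e < point:
                break
            hits.append((s, e))
        hits += stab(node["right"], point)
    return hits

tree = build([(1, 5), (3, 4), (6, 9)])
print(stab(tree, 4))  # -> [(1, 5), (3, 4)]
```

Each query visits one root-to-leaf path plus only the intervals it reports, which is what makes the real data structure efficient for "what is active at time t?" lookups.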
This is an interesting sorting problem, as you need to consider both the start and end date of each element. If you used a simple sorting algorithm, then you could sort by either start date or end date, but sorting by both wouldn't be very effective, as an element with an early start date could have a very late end date, or an element with a late start date could have an early end date, meaning that you can't really pre-sort this list based on the criteria "are any of these elements active right now?"

If you're looking for a super-efficient mechanism to do this, I may not have an answer for you, but if you're just looking for something easy to do with existing C# data structures, I'd consider creating two sorted lists, one sorted by start date and the other sorted by end date. Search the start-date-sorted list for elements that start before right now and search the end-date-sorted list for elements that end after right now. Intersect those results to get your final answer.

As I mentioned, I'm sure there is a more efficient mechanism out there to do this, but if I wanted to keep it simple and just use what I had available, I would consider doing that.
2,675,355
I'm developing an application (in C#) where objects are active during a period of time; they have from and to properties of DateTime type. Now I want to speed up my search routine for queries like: are there other active objects in this time period / at this time? Is there an existing temporal index I can use, or can I use a QuadTree or other tree structures to search in an efficient way?
2010/04/20
[ "https://Stackoverflow.com/questions/2675355", "https://Stackoverflow.com", "https://Stackoverflow.com/users/321299/" ]
You should also take a look at the [interval tree](http://en.wikipedia.org/wiki/Interval_tree):

> In computer science, an interval tree is an ordered tree data structure to hold intervals. Specifically, it allows one to efficiently find all intervals that overlap with any given interval or point.

And that reminds me of this [SO question](https://stackoverflow.com/questions/2147505/a-dictionary-object-that-uses-ranges-of-values-for-keys).
There is an [i4o](http://i4o.codeplex.com/) (indexes for objects) library. Maybe it would be useful.
2,785,624
I have a problem with my site. I can't make the table appear on top of the image; it appears below or above the image. I need some help with the code. I also don't want the image to repeat, and I want it to fit the user's window every time. The code to insert the image is this:

```
<body oncontextmenu="return false;" background="bg_body.jpg">
```

And the code that actually helped me, but didn't solve the problem 100% because the table doesn't appear over the image, is this:

```
<style>
<!--
body { margin: 0px; }
-->
</style>
<img src='whatever' style='width: 100%; height: 100%;' />
```
2010/05/07
[ "https://Stackoverflow.com/questions/2785624", "https://Stackoverflow.com", "https://Stackoverflow.com/users/333969/" ]
If you want a background image to fit the size of the browser (which I'm guessing at, but if you have 100% height and width on your image, that seems to be what you're after), you could do something like this:

```
<style type="text/css">
*{margin:0;padding:0;}
html,body{height:100%;}
.backgroundlayer { position:absolute;top:0;left:0;z-index:1; }
.toplayer { position:absolute;top:0;left:0;z-index:2; }
</style>
```

and then in the body of your code...

```
<body>
<img src="someimage.png" style="height:100%;width:100%;" class="backgroundlayer" />
<div class="toplayer">
    my content above the image...it doesn't have to be a div...use a table if you want
</div>
</body>
```
Consider using CSS background properties. HTML (something like this, un-tested): ``` <body ... style="background-image:url('bg_body.jpg'); background-repeat: no-repeat;"> ``` If you want your background image to "resize" to the browser, you will have to hack it to work. One common way is probably to use two div tags; the first one will contain the image at 100% size, with absolute positioning. The second one contains your actual body content, with a higher z-value. It is a lot more work than you might think. For detailed discussion on this, see this thread: <http://www.htmlcodetutorial.com/help/ftopic4503.html>
2,785,624
I have a problem with my site. I can't make the table appear on top of the image; it appears below or above the image. I need some help with the code. I also don't want the image to repeat, and I want it to fit the user's window every time. The code to insert the image is this:

```
<body oncontextmenu="return false;" background="bg_body.jpg">
```

And the code that actually helped me, but didn't solve the problem 100% because the table doesn't appear over the image, is this:

```
<style>
<!--
body { margin: 0px; }
-->
</style>
<img src='whatever' style='width: 100%; height: 100%;' />
```
2010/05/07
[ "https://Stackoverflow.com/questions/2785624", "https://Stackoverflow.com", "https://Stackoverflow.com/users/333969/" ]
If you want a background image to fit the size of the browser (which I'm guessing at, but if you have 100% height and width on your image, that seems to be what you're after), you could do something like this:

```
<style type="text/css">
*{margin:0;padding:0;}
html,body{height:100%;}
.backgroundlayer { position:absolute;top:0;left:0;z-index:1; }
.toplayer { position:absolute;top:0;left:0;z-index:2; }
</style>
```

and then in the body of your code...

```
<body>
<img src="someimage.png" style="height:100%;width:100%;" class="backgroundlayer" />
<div class="toplayer">
    my content above the image...it doesn't have to be a div...use a table if you want
</div>
</body>
```
There are a couple of things here that don't make much sense. With `oncontextmenu="return false;"`, are you trying to run some sort of JavaScript? If so, you need to call a function before the "return false", like so:

```
<body onload="someFunction() return false;">
```

Also, I don't think you can set a background for an element the way you did it; it would be more like this:

```
<table style="background:path/to/my/image/...">
```

I'd love to help some more, but please explain yourself a little better.

OK, I'd suggest you do something like this: whether it is on an external style sheet or embedded inside the head tags, you can set the image size with some simple CSS, like so:

```
<style type="text/css">
body{
    background-image:url(../path/to/image);
    background-repeat:no-repeat;
    height:100%;
    width:100%;
}
</style>
```

Try this to see if it works; I'll help you more if it doesn't.
3,083,904
I have had an attack on my web server where .html files were copied by FTP into a public html directory. The FTP password was very strong. I'm trying to determine whether PHP initiated the FTP transfer. Is there an Apache or *nix log file that can give me this information?

**Additional information**

I have FTP log entries which seem to show that different IPs were used to log in and copy the files. I'm not sure, but does the ? before the IP indicate that it is not the account user (which in this case is kingdom)? It looks like several different IPs logged in, each one copying a different file, all in the space of less than 30 seconds. The offending files are "mickey66.html", "mickey66.jpg", and "canopy37.html".

> 2010-06-17T21:24:02.073070+01:00 webserver pure-ftpd: (?@190.20.76.74) [INFO] kingdom is now logged in
> 2010-06-17T21:24:06.632472+01:00 webserver pure-ftpd: (?@77.250.141.158) [INFO] kingdom is now logged in
> 2010-06-17T21:24:07.216924+01:00 webserver pure-ftpd: (kingdom@77.250.141.158) [NOTICE] /home/kingdom//public\_html/mickey66.html uploaded (80 bytes, 0.26KB/sec)
> 2010-06-17T21:24:07.364313+01:00 webserver pure-ftpd: (kingdom@77.250.141.158) [INFO] Logout.
> 2010-06-17T21:24:08.711231+01:00 webserver pure-ftpd: (?@78.88.175.77) [INFO] kingdom is now logged in
> 2010-06-17T21:24:10.720315+01:00 webserver pure-ftpd: (kingdom@78.88.175.77) [NOTICE] /home/kingdom//public\_html/mickey66.jpg uploaded (40835 bytes, 35.90KB/sec)
> 2010-06-17T21:24:10.848782+01:00 webserver pure-ftpd: (kingdom@78.88.175.77) [INFO] Logout.
> 2010-06-17T21:24:18.528074+01:00 webserver pure-ftpd: (kingdom@190.20.76.74) [INFO] Logout.
> 2010-06-17T21:24:22.023673+01:00 webserver pure-ftpd: (?@85.130.254.227) [INFO] kingdom is now logged in
> 2010-06-17T21:24:23.470817+01:00 webserver pure-ftpd: (kingdom@85.130.254.227) [NOTICE] /home/kingdom//public\_html/mickey66.html uploaded (80 bytes, 0.38KB/sec)
> 2010-06-17T21:24:23.655023+01:00 webserver pure-ftpd: (kingdom@85.130.254.227) [INFO] Logout.
> 2010-06-17T21:24:26.249887+01:00 webserver pure-ftpd: (?@95.209.254.137) [INFO] kingdom is now logged in
> 2010-06-17T21:24:28.461310+01:00 webserver pure-ftpd: (kingdom@95.209.254.137) [NOTICE] /home/kingdom//public\_html/canopy37.html uploaded (80 bytes, 0.26KB/sec)
> 2010-06-17T21:24:28.760513+01:00 webserver pure-ftpd: (kingdom@95.209.254.137) [INFO] Logout.
2010/06/21
[ "https://Stackoverflow.com/questions/3083904", "https://Stackoverflow.com", "https://Stackoverflow.com/users/328765/" ]
You might have malware on your workstation that runs your FTP client. The malware steals passwords from your FTP client and sends them to some third party. This happened to us: all our landing pages were injected with malicious code (iframe-URL code) that downloads the malware onto every machine that opens the page in a browser.
As far as I know, the FTP protocol does not have a User-Agent header or anything similar. Even if it did, why would malware writers add code to actively identify their software as malware? And why would you want to prevent legitimate use of scripting tools like PHP?

These kinds of attacks normally come from two sources:

* Vulnerable scripts hosted on a public web server
* Hosting clients that got their PCs compromised

If, as you seem to suggest, you actually have FTP logs to prove that those files were uploaded via FTP using your credentials, you probably have the IP address the files came from. Check whether it's *your* address and, in any case, grab a good virus scanner.
3,083,904
I have had an attack on my web server where .html files were copied by FTP into a public html directory. The FTP password was very strong. I'm trying to determine whether PHP initiated the FTP transfer. Is there an Apache or *nix log file that can give me this information?

**Additional information**

I have FTP log entries which seem to show that different IPs were used to log in and copy the files. I'm not sure, but does the ? before the IP indicate that it is not the account user (which in this case is kingdom)? It looks like several different IPs logged in, each one copying a different file, all in the space of less than 30 seconds. The offending files are "mickey66.html", "mickey66.jpg", and "canopy37.html".

> 2010-06-17T21:24:02.073070+01:00 webserver pure-ftpd: (?@190.20.76.74) [INFO] kingdom is now logged in
> 2010-06-17T21:24:06.632472+01:00 webserver pure-ftpd: (?@77.250.141.158) [INFO] kingdom is now logged in
> 2010-06-17T21:24:07.216924+01:00 webserver pure-ftpd: (kingdom@77.250.141.158) [NOTICE] /home/kingdom//public\_html/mickey66.html uploaded (80 bytes, 0.26KB/sec)
> 2010-06-17T21:24:07.364313+01:00 webserver pure-ftpd: (kingdom@77.250.141.158) [INFO] Logout.
> 2010-06-17T21:24:08.711231+01:00 webserver pure-ftpd: (?@78.88.175.77) [INFO] kingdom is now logged in
> 2010-06-17T21:24:10.720315+01:00 webserver pure-ftpd: (kingdom@78.88.175.77) [NOTICE] /home/kingdom//public\_html/mickey66.jpg uploaded (40835 bytes, 35.90KB/sec)
> 2010-06-17T21:24:10.848782+01:00 webserver pure-ftpd: (kingdom@78.88.175.77) [INFO] Logout.
> 2010-06-17T21:24:18.528074+01:00 webserver pure-ftpd: (kingdom@190.20.76.74) [INFO] Logout.
> 2010-06-17T21:24:22.023673+01:00 webserver pure-ftpd: (?@85.130.254.227) [INFO] kingdom is now logged in
> 2010-06-17T21:24:23.470817+01:00 webserver pure-ftpd: (kingdom@85.130.254.227) [NOTICE] /home/kingdom//public\_html/mickey66.html uploaded (80 bytes, 0.38KB/sec)
> 2010-06-17T21:24:23.655023+01:00 webserver pure-ftpd: (kingdom@85.130.254.227) [INFO] Logout.
> 2010-06-17T21:24:26.249887+01:00 webserver pure-ftpd: (?@95.209.254.137) [INFO] kingdom is now logged in
> 2010-06-17T21:24:28.461310+01:00 webserver pure-ftpd: (kingdom@95.209.254.137) [NOTICE] /home/kingdom//public\_html/canopy37.html uploaded (80 bytes, 0.26KB/sec)
> 2010-06-17T21:24:28.760513+01:00 webserver pure-ftpd: (kingdom@95.209.254.137) [INFO] Logout.
2010/06/21
[ "https://Stackoverflow.com/questions/3083904", "https://Stackoverflow.com", "https://Stackoverflow.com/users/328765/" ]
> I have had an attack on my web server where .html files were copied by FTP into a public html directory.

How do you know they were copied via FTP?

> The FTP password was very strong.

Not really very relevant. FTP sends passwords unencrypted, so even assuming that the files were delivered via FTP, if the password was sniffed it's irrelevant how much entropy it has.

> I'm trying to determine whether PHP initiated the FTP transfer

You can't tell what the client was. Even if, like HTTP, the protocol provided for collecting information about the user agent, there would be no way of determining the accuracy of this information (it's sent by the client, therefore it can be manipulated by the client).

Your FTP server log should have recorded details of which IP address / user account uploaded which files and when. But don't be surprised if there's nothing relevant in there.

C.
As far as I know, the FTP protocol does not have a User-Agent header or anything similar. Even if it did, why would malware writers add code to actively identify their software as malware? And why would you want to prevent legitimate use of scripting tools like PHP?

These kinds of attacks normally come from two sources:

* Vulnerable scripts hosted on a public web server
* Hosting clients that got their PCs compromised

If, as you seem to suggest, you actually have FTP logs to prove that those files were uploaded via FTP using your credentials, you probably have the IP address the files came from. Check whether it's *your* address and, in any case, grab a good virus scanner.
2,510,076
I am *loving* ASP.NET MVC, keeping up with the releases/docs can sometimes be tricky, so maybe I'm just not getting something... I want to use a TextBoxFor(), and working with LabelFor() etc. is fine, all the magic happens for me. But if I create... ``` <%=Html.TextBoxFor(x => x.LastName) %> ``` And wanted to do something nice with jQuery, how would I get the ID of the control that was created? I could add a CSS class and use that to attach my jQuery, but for something I am doing I would like the ID... so I could do something like: ``` $('#LastName').(...) ``` I know I could work it out in this case, and hack it in manually, but is there a neater way?
2010/03/24
[ "https://Stackoverflow.com/questions/2510076", "https://Stackoverflow.com", "https://Stackoverflow.com/users/49843/" ]
I think you can do something like: ``` <%=Html.TextBoxFor(x => x.LastName, new { id = "LastName" })%> ``` [Overloads of TextBoxFor](http://msdn.microsoft.com/en-us/library/system.web.mvc.html.inputextensions.textboxfor%28VS.100%29.aspx)
As a point of interest, it appears that the Html.Textbox() code will generate an id duplicating the control name for anything that begins with a letter (a-z). If, however, your 'name' begins with a number, it will simply not bother. This is a fantastic 'feature' that has caused me grief for the past hour or so.
2,510,076
I am *loving* ASP.NET MVC, keeping up with the releases/docs can sometimes be tricky, so maybe I'm just not getting something... I want to use a TextBoxFor(), and working with LabelFor() etc. is fine, all the magic happens for me. But if I create... ``` <%=Html.TextBoxFor(x => x.LastName) %> ``` And wanted to do something nice with jQuery, how would I get the ID of the control that was created? I could add a CSS class and use that to attach my jQuery, but for something I am doing I would like the ID... so I could do something like: ``` $('#LastName').(...) ``` I know I could work it out in this case, and hack it in manually, but is there a neater way?
2010/03/24
[ "https://Stackoverflow.com/questions/2510076", "https://Stackoverflow.com", "https://Stackoverflow.com/users/49843/" ]
I think you can do something like: ``` <%=Html.TextBoxFor(x => x.LastName, new { id = "LastName" })%> ``` [Overloads of TextBoxFor](http://msdn.microsoft.com/en-us/library/system.web.mvc.html.inputextensions.textboxfor%28VS.100%29.aspx)
By default your control's id is your model binding value. You can also just use Firebug: select the control and read off the generated id.
2,510,076
I am *loving* ASP.NET MVC, keeping up with the releases/docs can sometimes be tricky, so maybe I'm just not getting something... I want to use a TextBoxFor(), and working with LabelFor() etc. is fine, all the magic happens for me. But if I create... ``` <%=Html.TextBoxFor(x => x.LastName) %> ``` And wanted to do something nice with jQuery, how would I get the ID of the control that was created? I could add a CSS class and use that to attach my jQuery, but for something I am doing I would like the ID... so I could do something like: ``` $('#LastName').(...) ``` I know I could work it out in this case, and hack it in manually, but is there a neater way?
2010/03/24
[ "https://Stackoverflow.com/questions/2510076", "https://Stackoverflow.com", "https://Stackoverflow.com/users/49843/" ]
As a point of interest, it appears that the Html.Textbox() code will generate an id duplicating the control name for anything that begins with a letter (a-z). If, however, your 'name' begins with a number, it will simply not bother. This is a fantastic 'feature' that has caused me grief for the past hour or so.
By default your control's id is your model binding value. You can also just use Firebug: select the control and read off the generated id.
2,510,076
I am *loving* ASP.NET MVC, keeping up with the releases/docs can sometimes be tricky, so maybe I'm just not getting something... I want to use a TextBoxFor(), and working with LabelFor() etc. is fine, all the magic happens for me. But if I create... ``` <%=Html.TextBoxFor(x => x.LastName) %> ``` And wanted to do something nice with jQuery, how would I get the ID of the control that was created? I could add a CSS class and use that to attach my jQuery, but for something I am doing I would like the ID... so I could do something like: ``` $('#LastName').(...) ``` I know I could work it out in this case, and hack it in manually, but is there a neater way?
2010/03/24
[ "https://Stackoverflow.com/questions/2510076", "https://Stackoverflow.com", "https://Stackoverflow.com/users/49843/" ]
Since MVC4 there is a built-in way to do it - [@Html.IdFor()](http://msdn.microsoft.com/en-us/library/hh833709%28v=vs.108%29.aspx). [Here is a sample](http://www.nickriggs.com/posts/getting-the-id-and-name-attribute-generated-by-typed-html-helpers/) of using it: ``` @Html.IdFor(m => m.Filters.Occurred.From) ``` and the result is like ``` Filters_Occurred_From ```
As a point of interest, it appears that the Html.Textbox() code will generate an id duplicating the control name for anything that begins with a letter (a-z). If, however, your 'name' begins with a number, it will simply not bother. This is a fantastic 'feature' that has caused me grief for the past hour or so.
2,510,076
I am *loving* ASP.NET MVC, keeping up with the releases/docs can sometimes be tricky, so maybe I'm just not getting something... I want to use a TextBoxFor(), and working with LabelFor() etc. is fine, all the magic happens for me. But if I create... ``` <%=Html.TextBoxFor(x => x.LastName) %> ``` And wanted to do something nice with jQuery, how would I get the ID of the control that was created? I could add a CSS class and use that to attach my jQuery, but for something I am doing I would like the ID... so I could do something like: ``` $('#LastName').(...) ``` I know I could work it out in this case, and hack it in manually, but is there a neater way?
2010/03/24
[ "https://Stackoverflow.com/questions/2510076", "https://Stackoverflow.com", "https://Stackoverflow.com/users/49843/" ]
Since MVC4 there is a built-in way to do it - [@Html.IdFor()](http://msdn.microsoft.com/en-us/library/hh833709%28v=vs.108%29.aspx). [Here is a sample](http://www.nickriggs.com/posts/getting-the-id-and-name-attribute-generated-by-typed-html-helpers/) of using it: ``` @Html.IdFor(m => m.Filters.Occurred.From) ``` and the result is like ``` Filters_Occurred_From ```
By default your control's id is your model binding value. You can also just use Firebug: select the control and read off the generated id.
2,037,234
I am calling the Google Analytics \_trackEvent() function on a web page, and get back an error from the obfuscated Google code. In Firebug, it comes back as "q is undefined". In the Safari developer console: "TypeError: Result of expression 'q' [undefined] is not an object." As a test, I have reduced the page to only this call, and still get the error back. Besides the necessary elements and the standard Google tracking code, my page is:

```
<script>
pageTracker._trackEvent('Survey', 'Checkout - Survey', 'Rating', 3);
</script>
```

The result is that error. What's going on here?
2010/01/10
[ "https://Stackoverflow.com/questions/2037234", "https://Stackoverflow.com", "https://Stackoverflow.com/users/213042/" ]
This problem seems to occur when the page is not fully loaded yet: <http://www.google.com/support/forum/p/Google+Analytics/thread?tid=4596554b1e9a1545&hl=en>

The provided solution is to wait for pageTracker.cb:

```
function trackEvent(target, action, opt_label, opt_value) {
    if (pageTracker && !pageTracker.cb) {
        setTimeout(function() {
            trackEvent(target, action, opt_label, opt_value);
        }, 200);
        return;
    }
    pageTracker._trackEvent(target, action, opt_label, opt_value);
}
```
Actually answer no. 1 is not correct. That's because pageTracker.cb never gets set (it's an obfuscated property name) in other versions of GA. Upon initialization you should call: `pageTracker._initData()`
2,037,234
I am calling the Google Analytics \_trackEvent() function on a web page, and get back an error from the obfuscated Google code. In Firebug, it comes back as "q is undefined". In the Safari developer console: "TypeError: Result of expression 'q' [undefined] is not an object." As a test, I have reduced the page to only this call, and still get the error back. Besides the necessary elements and the standard Google tracking code, my page is:

```
<script>
pageTracker._trackEvent('Survey', 'Checkout - Survey', 'Rating', 3);
</script>
```

The result is that error. What's going on here?
2010/01/10
[ "https://Stackoverflow.com/questions/2037234", "https://Stackoverflow.com", "https://Stackoverflow.com/users/213042/" ]
This problem seems to occur when the page is not fully loaded yet: <http://www.google.com/support/forum/p/Google+Analytics/thread?tid=4596554b1e9a1545&hl=en>

The provided solution is to wait for pageTracker.cb:

```
function trackEvent(target, action, opt_label, opt_value) {
    if (pageTracker && !pageTracker.cb) {
        setTimeout(function() {
            trackEvent(target, action, opt_label, opt_value);
        }, 200);
        return;
    }
    pageTracker._trackEvent(target, action, opt_label, opt_value);
}
```
This looks like a bug in ga.js introduced when they added `_initData()` functionality to `_trackPageview()`. Unfortunately `_initData()` isn't actually called after the conditional. Hope they fix it before they deprecate `_initData()` for good. For example, this page suggests the above should work without calling `_initData()`: <http://www.google.com/support/googleanalytics/bin/answer.py?hl=en&answer=55527>
2,037,234
I am calling the Google Analytics \_trackEvent() function on a web page, and get back an error from the obfuscated Google code. In Firebug, it comes back as "q is undefined". In the Safari developer console: "TypeError: Result of expression 'q' [undefined] is not an object." As a test, I have reduced the page to only this call, and still get the error back. Besides the necessary elements and the standard Google tracking code, my page is:

```
<script>
pageTracker._trackEvent('Survey', 'Checkout - Survey', 'Rating', 3);
</script>
```

The result is that error. What's going on here?
2010/01/10
[ "https://Stackoverflow.com/questions/2037234", "https://Stackoverflow.com", "https://Stackoverflow.com/users/213042/" ]
Actually answer no. 1 is not correct. That's because pageTracker.cb never gets set (it's an obfuscated property name) in other versions of GA. Upon initialization you should call: `pageTracker._initData()`
This looks like a bug in ga.js introduced when they added `_initData()` functionality to `_trackPageview()`. Unfortunately `_initData()` isn't actually called after the conditional. Hope they fix it before they deprecate `_initData()` for good. For example, this page suggests the above should work without calling `_initData()`: <http://www.google.com/support/googleanalytics/bin/answer.py?hl=en&answer=55527>
1,044,679
How do I count double taps in the simulator?
2009/06/25
[ "https://Stackoverflow.com/questions/1044679", "https://Stackoverflow.com", "https://Stackoverflow.com/users/83905/" ]
```
- (void) touchesBegan:(NSSet *) touches withEvent:(UIEvent *) event
{
    UITouch *touch = [touches anyObject];
    if (touch.tapCount == 2) {
        // do your stuff here
    }
}
```
Implement one of the touch methods defined in UIResponder (touchesBegan, touchesEnded, etc.). When you get the touches set, you can get a UITouch's tap count using code like this:

```
UITouch * t = [touches anyObject];
NSLog(@"%d", [t tapCount]);
```
2,215,363
I'm writing a program, some kind of database. While I was reading the manual of `fclose(3)` I found that it calls `fflush(3)` to flush `FILE*` buffers to disk (actually to an OS buffer, but that doesn't matter right now; we can always call `fsync(2)`). Because I'm writing a DB, it is obvious that I want to prevent data loss. If there is no disk space and the `fflush(3)` inside `fclose(3)` fails, we will lose our data, because

> using `FILE*` after an error in `fclose()` will cause undefined behavior

So I thought about an explicit `fflush(3)` before `fclose(3)`: warn the user about low disk space and retry `fflush(3)` after a while. I've read the **C** standard and thought this was a good idea. In practice, after a failed `fflush`, the second call would always return 0 (no error) but would actually do nothing. `fsync` didn't help me either (I thought the data might still be sitting in RAM). How can I prevent data loss in such a situation? Maybe there are some rules of thumb.

Here is my test code:

```
#include <stdio.h>

int main()
{
    FILE *a = fopen("/tmp/1", "wb");
    if ( !a )
        perror("fopen");

    if ( fwrite("test", 1, 4, a) != 4 )
        perror("fwrite");   // always OK, cause data is buffered

    while( fflush(a) )      // ...second call will always return 0!
    {
        perror("fflush");   // if there is no disk space, I will get this perror, but ...
    }

    if ( fclose(a) )        // always ok, because calls only close(2)
        perror("fclose");

    return 0;
}
```
2010/02/07
[ "https://Stackoverflow.com/questions/2215363", "https://Stackoverflow.com", "https://Stackoverflow.com/users/129413/" ]
fflush will only flush the C library's internal buffers to the OS, so an fflush won't guarantee that there won't be data loss. Calling fflush repeatedly (without intermediate fwrites) won't help, as you have already flushed the data to the OS once. The second fflush call will return SUCCESS as there is *nothing* left to flush to the OS. If fflush fails because the hard disk is full, you have already lost some data.

To flush the data to the disk, you *need* to use fsync. If the hard disk is full, you are out of luck. The only way to prevent data loss is to keep your process alive (and the data in memory, either in user space or in kernel file buffers) till you find some space on the disk to fsync to. Now if the power goes out, you *will* lose the data.

In short, there is no way you can guarantee no data loss if your hard disk is full.
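To make the fflush-vs-fsync distinction concrete, here is the same two-step flush sketched in Python rather than C (the function name and file path are made up for the example):

```python
import os
import tempfile

def durable_write(path, data):
    """Write data and push it all the way to the physical device."""
    with open(path, "wb") as f:
        f.write(data)         # lands in the user-space (stdio-style) buffer
        f.flush()             # like fflush(3): user-space buffer -> kernel
        os.fsync(f.fileno())  # like fsync(2): kernel page cache -> device

path = os.path.join(tempfile.mkdtemp(), "record.bin")
durable_write(path, b"test")
```

Only after the fsync-equivalent step can you treat the write as durable; a flush alone just moves the problem from one buffer to another.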
You could fseek(3) to the end of the file (assuming you'd know the length) before doing anything. That way you'd eliminate the possibility of failure due to insufficient disk space.
2,215,363
I'm writing a program, some kind of database. While I was reading the manual of `fclose(3)` I found that it calls `fflush(3)` to flush `FILE*` buffers to disk (actually to an OS buffer, but that doesn't matter right now; we can always call `fsync(2)`). Because I'm writing a DB, it is obvious that I want to prevent data loss. If there is no disk space and the `fflush(3)` inside `fclose(3)` fails, we will lose our data, because

> using `FILE*` after an error in `fclose()` will cause undefined behavior

So I thought about an explicit `fflush(3)` before `fclose(3)`: warn the user about low disk space and retry `fflush(3)` after a while. I've read the **C** standard and thought this was a good idea. In practice, after a failed `fflush`, the second call would always return 0 (no error) but would actually do nothing. `fsync` didn't help me either (I thought the data might still be sitting in RAM). How can I prevent data loss in such a situation? Maybe there are some rules of thumb.

Here is my test code:

```
#include <stdio.h>

int main()
{
    FILE *a = fopen("/tmp/1", "wb");
    if ( !a )
        perror("fopen");

    if ( fwrite("test", 1, 4, a) != 4 )
        perror("fwrite");   // always OK, cause data is buffered

    while( fflush(a) )      // ...second call will always return 0!
    {
        perror("fflush");   // if there is no disk space, I will get this perror, but ...
    }

    if ( fclose(a) )        // always ok, because calls only close(2)
        perror("fclose");

    return 0;
}
```
2010/02/07
[ "https://Stackoverflow.com/questions/2215363", "https://Stackoverflow.com", "https://Stackoverflow.com/users/129413/" ]
You could preallocate some reasonable amount of disk space. Write, flush, and fsync some binary zeros (or whatever) and then seek back to where you were. Rinse and repeat when necessary. And remember to truncate if necessary. A bit of a pain but it should work.
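A rough sketch of that preallocation idea, in Python rather than C (the 4096-byte reserve below is an arbitrary example figure):

```python
import os
import tempfile

def preallocate(f, extra):
    """Reserve `extra` bytes of disk space past the end of the file.

    Writing real zeros (not just seeking) forces the filesystem to
    allocate blocks now, so an out-of-space error surfaces here rather
    than during a later, harder-to-recover write."""
    pos = f.tell()
    f.seek(0, os.SEEK_END)
    f.write(b"\0" * extra)
    f.flush()
    os.fsync(f.fileno())  # fail now if the disk is already full
    f.seek(pos)           # back to where the caller was

f = open(os.path.join(tempfile.mkdtemp(), "db.bin"), "w+b")
f.write(b"header")
preallocate(f, 4096)
```

Later real writes then land in already-allocated blocks, and the file can be truncated back down when the reserve is no longer needed.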
You could fseek(3) to the end of the file (assuming you'd know the length) before doing anything. That way you'd eliminate the possibility of failure due to insufficient disk space.
2,215,363
I'm writing a program, some kind of database. While I was reading the manual of `fclose(3)` I found that it calls `fflush(3)` to flush `FILE*` buffers to disk (actually to an OS buffer, but that doesn't matter right now; we can always call `fsync(2)`). Because I'm writing a DB, it is obvious that I want to prevent data loss. If there is no disk space and the `fflush(3)` inside `fclose(3)` fails, we will lose our data, because

> using `FILE*` after an error in `fclose()` will cause undefined behavior

So I thought about an explicit `fflush(3)` before `fclose(3)`: warn the user about low disk space and retry `fflush(3)` after a while. I've read the **C** standard and thought this was a good idea. In practice, after a failed `fflush`, the second call would always return 0 (no error) but would actually do nothing. `fsync` didn't help me either (I thought the data might still be sitting in RAM). How can I prevent data loss in such a situation? Maybe there are some rules of thumb.

Here is my test code:

```
#include <stdio.h>

int main()
{
    FILE *a = fopen("/tmp/1", "wb");
    if ( !a )
        perror("fopen");

    if ( fwrite("test", 1, 4, a) != 4 )
        perror("fwrite");   // always OK, cause data is buffered

    while( fflush(a) )      // ...second call will always return 0!
    {
        perror("fflush");   // if there is no disk space, I will get this perror, but ...
    }

    if ( fclose(a) )        // always ok, because calls only close(2)
        perror("fclose");

    return 0;
}
```
2010/02/07
[ "https://Stackoverflow.com/questions/2215363", "https://Stackoverflow.com", "https://Stackoverflow.com/users/129413/" ]
The reason the subsequent fflush() operations succeed is that there is no (new) data to write to disk. The first fflush() failed; that is tragic but history. The subsequent fflush() has nothing to do, so it does so successfully. If you are writing to a database, you have to be careful about each write - not just dealing with problems at the end. Depending on how critical your data is, you may need to go through all sorts of gyrations to deal with problems - there are reasons why DBMS are complex, and failed writes are one of them. One way of dealing with the problem is to pre-allocate the space for the data. As others have noted, classic Unix file systems allow for sparse files (files where there are empty blocks with no disk space allocated for them), so you actually have to write some data onto each page that you need allocated. Then you only have to worry about 'disk full' problems when you extend the space - and you know when you do that and you can deal with that failure carefully. On Unix-based systems, there are a variety of system calls that can help you synchronize your data on disk, and options to 'open' etc. These include the 'O\_DSYNC' and related values. However, if you are extending a file, they can still cause failures for 'out of space', even with the fancy synchronizing options. And when you do run into that failure, you have to wait for space to become available (because you asked the user to tell you when it is available, perhaps), and then try the write again.
You could fseek(3) to the end of the file (assuming you'd know the length) before doing anything. That way you'd eliminate the possibility of failure due to insufficient disk space.
2,215,363
I'm writing a program, some kind of database. While reading the manual for `fclose(3)` I found that it calls `fflush(3)` to flush the `FILE*` buffers to disk (actually to the OS buffer, but that doesn't matter right now; we can always call `fsync(2)`). Because I'm writing a DB, I obviously want to prevent data loss. If there is no disk space and the `fflush(3)` inside `fclose(3)` fails, we lose our data, because > > using `FILE*` after an error in `fclose()` will cause undefined behavior > > > So I thought about calling `fflush(3)` explicitly before `fclose(3)`, warning the user about low disk space, and calling `fflush(3)` again after a while. I've read the **C** standard and thought this was a good idea. In practice, after a failed `fflush` the second call always returns 0 (no error) but actually does nothing. `fsync` didn't help me either (I thought the data might still be sitting in RAM). How can I prevent data loss in such a situation? Maybe there are some rules of thumb. Here is my test code: ``` #include <stdio.h> int main() { FILE *a = fopen("/tmp/1", "wb"); if ( !a ) perror("fopen"); if ( fwrite("test", 1, 4, a) != 4 ) perror("fwrite"); // always OK, because the data is buffered while( fflush(a) ) // ...second call will always return 0! { perror("fflush"); // if there is no disk space, I will get this perror, but ... } if ( fclose(a) ) // always ok, because it only calls close(2) perror("fclose"); return 0; } ```
2010/02/07
[ "https://Stackoverflow.com/questions/2215363", "https://Stackoverflow.com", "https://Stackoverflow.com/users/129413/" ]
The reason the subsequent fflush() operations succeed is that there is no (new) data to write to disk. The first fflush() failed; that is tragic but history. The subsequent fflush() has nothing to do, so it does so successfully. If you are writing to a database, you have to be careful about each write - not just dealing with problems at the end. Depending on how critical your data is, you may need to go through all sorts of gyrations to deal with problems - there are reasons why DBMS are complex, and failed writes are one of them. One way of dealing with the problem is to pre-allocate the space for the data. As others have noted, classic Unix file systems allow for sparse files (files where there are empty blocks with no disk space allocated for them), so you actually have to write some data onto each page that you need allocated. Then you only have to worry about 'disk full' problems when you extend the space - and you know when you do that and you can deal with that failure carefully. On Unix-based systems, there are a variety of system calls that can help you synchronize your data on disk, and options to 'open' etc. These include the 'O\_DSYNC' and related values. However, if you are extending a file, they can still cause failures for 'out of space', even with the fancy synchronizing options. And when you do run into that failure, you have to wait for space to become available (because you asked the user to tell you when it is available, perhaps), and then try the write again.
fflush will only flush the C library's internal buffers to the OS, so an fflush won't guarantee that there won't be data loss. Calling fflush repeatedly (without intermediate fwrites) won't help, as you have already flushed the data to the OS once. The second fflush call will return SUCCESS as there is *nothing* to flush to the OS. If fflush fails because the hard disk is full, you have already lost some data. To flush the data to the disk, you *need* to use fsync. If the hard disk is full, you are out of luck. The only way to prevent data loss is to keep your process alive (and the data in memory: either in user-space or kernel file buffers) till you find some space on the disk to fsync to. Even then, if the power goes out, you *will* lose the data. In short, there is no way you can guarantee no data loss if your hard disk is full.
2,215,363
I'm writing a program, some kind of database. While reading the manual for `fclose(3)` I found that it calls `fflush(3)` to flush the `FILE*` buffers to disk (actually to the OS buffer, but that doesn't matter right now; we can always call `fsync(2)`). Because I'm writing a DB, I obviously want to prevent data loss. If there is no disk space and the `fflush(3)` inside `fclose(3)` fails, we lose our data, because > > using `FILE*` after an error in `fclose()` will cause undefined behavior > > > So I thought about calling `fflush(3)` explicitly before `fclose(3)`, warning the user about low disk space, and calling `fflush(3)` again after a while. I've read the **C** standard and thought this was a good idea. In practice, after a failed `fflush` the second call always returns 0 (no error) but actually does nothing. `fsync` didn't help me either (I thought the data might still be sitting in RAM). How can I prevent data loss in such a situation? Maybe there are some rules of thumb. Here is my test code: ``` #include <stdio.h> int main() { FILE *a = fopen("/tmp/1", "wb"); if ( !a ) perror("fopen"); if ( fwrite("test", 1, 4, a) != 4 ) perror("fwrite"); // always OK, because the data is buffered while( fflush(a) ) // ...second call will always return 0! { perror("fflush"); // if there is no disk space, I will get this perror, but ... } if ( fclose(a) ) // always ok, because it only calls close(2) perror("fclose"); return 0; } ```
2010/02/07
[ "https://Stackoverflow.com/questions/2215363", "https://Stackoverflow.com", "https://Stackoverflow.com/users/129413/" ]
The reason the subsequent fflush() operations succeed is that there is no (new) data to write to disk. The first fflush() failed; that is tragic but history. The subsequent fflush() has nothing to do, so it does so successfully. If you are writing to a database, you have to be careful about each write - not just dealing with problems at the end. Depending on how critical your data is, you may need to go through all sorts of gyrations to deal with problems - there are reasons why DBMS are complex, and failed writes are one of them. One way of dealing with the problem is to pre-allocate the space for the data. As others have noted, classic Unix file systems allow for sparse files (files where there are empty blocks with no disk space allocated for them), so you actually have to write some data onto each page that you need allocated. Then you only have to worry about 'disk full' problems when you extend the space - and you know when you do that and you can deal with that failure carefully. On Unix-based systems, there are a variety of system calls that can help you synchronize your data on disk, and options to 'open' etc. These include the 'O\_DSYNC' and related values. However, if you are extending a file, they can still cause failures for 'out of space', even with the fancy synchronizing options. And when you do run into that failure, you have to wait for space to become available (because you asked the user to tell you when it is available, perhaps), and then try the write again.
You could preallocate some reasonable amount of disk space. Write, flush, and fsync some binary zeros (or whatever) and then seek back to where you were. Rinse and repeat when necessary. And remember to truncate if necessary. A bit of a pain but it should work.
1,181,561
I am using Enterprise Library 4 on one of my projects for logging (and other purposes). I've noticed that there is some cost to the logging that I am doing that I can mitigate by doing the logging on a separate thread. The way I am doing this now is that I create a LogEntry object and then I call BeginInvoke on a delegate that calls Logger.Write. ``` new Action<LogEntry>(Logger.Write).BeginInvoke(le, null, null); ``` What I'd really like to do is add the log message to a queue and then have a single thread pulling LogEntry instances off the queue and performing the log operation. The benefit of this would be that logging is not interfering with the executing operation and not every logging operation results in a job getting thrown on the thread pool. How can I create a shared queue that supports many writers and one reader in a thread safe way? Some examples of a queue implementation that is designed to support many writers (without causing synchronization/blocking) and a single reader would be really appreciated. Recommendation regarding alternative approaches would also be appreciated, I am not interested in changing logging frameworks though.
2009/07/25
[ "https://Stackoverflow.com/questions/1181561", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3957/" ]
I wrote this code a while back, feel free to use it. ``` using System; using System.Collections.Generic; using System.Linq; using System.Text; using System.Threading; namespace MediaBrowser.Library.Logging { public abstract class ThreadedLogger : LoggerBase { Queue<Action> queue = new Queue<Action>(); AutoResetEvent hasNewItems = new AutoResetEvent(false); volatile bool waiting = false; public ThreadedLogger() : base() { Thread loggingThread = new Thread(new ThreadStart(ProcessQueue)); loggingThread.IsBackground = true; loggingThread.Start(); } void ProcessQueue() { while (true) { waiting = true; hasNewItems.WaitOne(10000,true); waiting = false; Queue<Action> queueCopy; lock (queue) { queueCopy = new Queue<Action>(queue); queue.Clear(); } foreach (var log in queueCopy) { log(); } } } public override void LogMessage(LogRow row) { lock (queue) { queue.Enqueue(() => AsyncLogMessage(row)); } hasNewItems.Set(); } protected abstract void AsyncLogMessage(LogRow row); public override void Flush() { while (!waiting) { Thread.Sleep(1); } } } } ``` Some advantages: * It keeps the background logger alive, so it does not need to spin up and spin down threads. * It uses a single thread to service the queue, which means there will never be a situation where 100 threads are servicing the queue. * It copies the queues to ensure the queue is not blocked while the log operation is performed * It uses an AutoResetEvent to ensure the bg thread is in a wait state * It is, IMHO, very easy to follow Here is a slightly improved version, keep in mind I performed very little testing on it, but it does address a few minor issues. 
``` public abstract class ThreadedLogger : IDisposable { Queue<Action> queue = new Queue<Action>(); ManualResetEvent hasNewItems = new ManualResetEvent(false); ManualResetEvent terminate = new ManualResetEvent(false); ManualResetEvent waiting = new ManualResetEvent(false); Thread loggingThread; public ThreadedLogger() { loggingThread = new Thread(new ThreadStart(ProcessQueue)); loggingThread.IsBackground = true; // this is performed from a bg thread, to ensure the queue is serviced from a single thread loggingThread.Start(); } void ProcessQueue() { while (true) { waiting.Set(); int i = ManualResetEvent.WaitAny(new WaitHandle[] { hasNewItems, terminate }); // terminate was signaled if (i == 1) return; hasNewItems.Reset(); waiting.Reset(); Queue<Action> queueCopy; lock (queue) { queueCopy = new Queue<Action>(queue); queue.Clear(); } foreach (var log in queueCopy) { log(); } } } public void LogMessage(LogRow row) { lock (queue) { queue.Enqueue(() => AsyncLogMessage(row)); } hasNewItems.Set(); } protected abstract void AsyncLogMessage(LogRow row); public void Flush() { waiting.WaitOne(); } public void Dispose() { terminate.Set(); loggingThread.Join(); } } ``` Advantages over the original: * It's disposable, so you can get rid of the async logger * The flush semantics are improved * It will respond slightly better to a burst followed by silence
An extra level of indirection may help here. Your first async method call can put messages onto a synchronized Queue and set an event -- so the locks are happening in the thread-pool, not on your worker threads -- and then have yet another thread pulling messages off the queue when the event is raised.
1,181,561
I am using Enterprise Library 4 on one of my projects for logging (and other purposes). I've noticed that there is some cost to the logging that I am doing that I can mitigate by doing the logging on a separate thread. The way I am doing this now is that I create a LogEntry object and then I call BeginInvoke on a delegate that calls Logger.Write. ``` new Action<LogEntry>(Logger.Write).BeginInvoke(le, null, null); ``` What I'd really like to do is add the log message to a queue and then have a single thread pulling LogEntry instances off the queue and performing the log operation. The benefit of this would be that logging is not interfering with the executing operation and not every logging operation results in a job getting thrown on the thread pool. How can I create a shared queue that supports many writers and one reader in a thread safe way? Some examples of a queue implementation that is designed to support many writers (without causing synchronization/blocking) and a single reader would be really appreciated. Recommendation regarding alternative approaches would also be appreciated, I am not interested in changing logging frameworks though.
2009/07/25
[ "https://Stackoverflow.com/questions/1181561", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3957/" ]
In response to Sam Saffron's post, I wanted to call flush and make sure everything was really finished writing. In my case, I am writing to a database in the queue thread, and all my log events were getting queued up, but sometimes the application stopped before everything was finished writing, which is not acceptable in my situation. I changed several chunks of your code, but the main thing I wanted to share was the flush: ``` public static void FlushLogs() { bool queueHasValues = true; while (queueHasValues) { //wait for the current iteration to complete m_waitingThreadEvent.WaitOne(); lock (m_loggerQueueSync) { queueHasValues = m_loggerQueue.Count > 0; } } //force MEL to flush all its listeners foreach (MEL.LogSource logSource in MEL.Logger.Writer.TraceSources.Values) { foreach (TraceListener listener in logSource.Listeners) { listener.Flush(); } } } ``` I hope that saves someone some frustration. It is especially apparent in parallel processes logging lots of data. Thanks for sharing your solution, it set me in a good direction! --Johnny S
If what you have in mind is a SHARED queue, then I think you are going to have to synchronize the writes to it, the pushes and the pops. But, I still think it's worth aiming at the shared queue design. In comparison to the IO of logging and probably in comparison to the other work your app is doing, the brief amount of blocking for the pushes and the pops will probably not be significant.
1,181,561
I am using Enterprise Library 4 on one of my projects for logging (and other purposes). I've noticed that there is some cost to the logging that I am doing that I can mitigate by doing the logging on a separate thread. The way I am doing this now is that I create a LogEntry object and then I call BeginInvoke on a delegate that calls Logger.Write. ``` new Action<LogEntry>(Logger.Write).BeginInvoke(le, null, null); ``` What I'd really like to do is add the log message to a queue and then have a single thread pulling LogEntry instances off the queue and performing the log operation. The benefit of this would be that logging is not interfering with the executing operation and not every logging operation results in a job getting thrown on the thread pool. How can I create a shared queue that supports many writers and one reader in a thread safe way? Some examples of a queue implementation that is designed to support many writers (without causing synchronization/blocking) and a single reader would be really appreciated. Recommendation regarding alternative approaches would also be appreciated, I am not interested in changing logging frameworks though.
2009/07/25
[ "https://Stackoverflow.com/questions/1181561", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3957/" ]
Here is what I came up with... also see Sam Saffron's answer. This answer is community wiki in case there are any problems that people see in the code and want to update. ```
/// <summary>
/// A singleton queue that manages writing log entries to the different logging sources (Enterprise Library Logging) off the executing thread.
/// This queue ensures that log entries are written in the order that they were executed and that logging is only utilizing one thread (backgroundworker) at any given time.
/// </summary>
public class AsyncLoggerQueue
{
    //create singleton instance of logger queue
    public static AsyncLoggerQueue Current = new AsyncLoggerQueue();

    private static readonly object logEntryQueueLock = new object();
    private Queue<LogEntry> _LogEntryQueue = new Queue<LogEntry>();
    private BackgroundWorker _Logger = new BackgroundWorker();

    private AsyncLoggerQueue()
    {
        //configure background worker
        _Logger.WorkerSupportsCancellation = false;
        _Logger.DoWork += new DoWorkEventHandler(_Logger_DoWork);
    }

    public void Enqueue(LogEntry le)
    {
        //lock during write
        lock (logEntryQueueLock)
        {
            _LogEntryQueue.Enqueue(le);

            //while locked check to see if the BW is running, if not start it
            if (!_Logger.IsBusy)
                _Logger.RunWorkerAsync();
        }
    }

    private void _Logger_DoWork(object sender, DoWorkEventArgs e)
    {
        while (true)
        {
            LogEntry le = null;
            bool skipEmptyCheck = false;

            lock (logEntryQueueLock)
            {
                if (_LogEntryQueue.Count <= 0) //if queue is empty then BW is done
                    return;
                else if (_LogEntryQueue.Count > 1) //if greater than 1 we can skip checking to see if anything has been enqueued during the logging operation
                    skipEmptyCheck = true;

                //dequeue the LogEntry that will be written to the log
                le = _LogEntryQueue.Dequeue();
            }

            //pass LogEntry to Enterprise Library
            Logger.Write(le);

            if (skipEmptyCheck) //if LogEntryQueue.Count was > 1 before we wrote the last LogEntry we know to continue without double checking
            {
                lock (logEntryQueueLock)
                {
                    if (_LogEntryQueue.Count <= 0) //if queue is still empty then BW is done
                        return;
                }
            }
        }
    }
}
```
Just an update: using Enterprise Library 5.0 with .NET 4.0 it can easily be done by: ``` static public void LogMessageAsync(LogEntry logEntry) { Task.Factory.StartNew(() => LogMessage(logEntry)); } ``` See: <http://randypaulo.wordpress.com/2011/07/28/c-enterprise-library-asynchronous-logging/>
1,181,561
I am using Enterprise Library 4 on one of my projects for logging (and other purposes). I've noticed that there is some cost to the logging that I am doing that I can mitigate by doing the logging on a separate thread. The way I am doing this now is that I create a LogEntry object and then I call BeginInvoke on a delegate that calls Logger.Write. ``` new Action<LogEntry>(Logger.Write).BeginInvoke(le, null, null); ``` What I'd really like to do is add the log message to a queue and then have a single thread pulling LogEntry instances off the queue and performing the log operation. The benefit of this would be that logging is not interfering with the executing operation and not every logging operation results in a job getting thrown on the thread pool. How can I create a shared queue that supports many writers and one reader in a thread safe way? Some examples of a queue implementation that is designed to support many writers (without causing synchronization/blocking) and a single reader would be really appreciated. Recommendation regarding alternative approaches would also be appreciated, I am not interested in changing logging frameworks though.
2009/07/25
[ "https://Stackoverflow.com/questions/1181561", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3957/" ]
I suggest you *start by measuring the actual performance impact of logging* on the overall system (e.g. by running a profiler) and optionally switching to something faster like [log4net](http://logging.apache.org/log4net/index.html) (I personally migrated to it from EntLib logging a long time ago). If this does not work, you can try using this simple method from the .NET Framework: ``` ThreadPool.QueueUserWorkItem ``` > > *Queues a method for execution. The method executes when a thread pool thread becomes available.* > > > [MSDN Details](http://msdn.microsoft.com/en-us/library/system.threading.threadpool.queueuserworkitem.aspx) If this does not work either, then you can resort to something like what Jon Skeet has offered and actually code the async logging framework yourself.
If what you have in mind is a SHARED queue, then I think you are going to have to synchronize the writes to it, the pushes and the pops. But, I still think it's worth aiming at the shared queue design. In comparison to the IO of logging and probably in comparison to the other work your app is doing, the brief amount of blocking for the pushes and the pops will probably not be significant.
1,181,561
I am using Enterprise Library 4 on one of my projects for logging (and other purposes). I've noticed that there is some cost to the logging that I am doing that I can mitigate by doing the logging on a separate thread. The way I am doing this now is that I create a LogEntry object and then I call BeginInvoke on a delegate that calls Logger.Write. ``` new Action<LogEntry>(Logger.Write).BeginInvoke(le, null, null); ``` What I'd really like to do is add the log message to a queue and then have a single thread pulling LogEntry instances off the queue and performing the log operation. The benefit of this would be that logging is not interfering with the executing operation and not every logging operation results in a job getting thrown on the thread pool. How can I create a shared queue that supports many writers and one reader in a thread safe way? Some examples of a queue implementation that is designed to support many writers (without causing synchronization/blocking) and a single reader would be really appreciated. Recommendation regarding alternative approaches would also be appreciated, I am not interested in changing logging frameworks though.
2009/07/25
[ "https://Stackoverflow.com/questions/1181561", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3957/" ]
I wrote this code a while back, feel free to use it. ``` using System; using System.Collections.Generic; using System.Linq; using System.Text; using System.Threading; namespace MediaBrowser.Library.Logging { public abstract class ThreadedLogger : LoggerBase { Queue<Action> queue = new Queue<Action>(); AutoResetEvent hasNewItems = new AutoResetEvent(false); volatile bool waiting = false; public ThreadedLogger() : base() { Thread loggingThread = new Thread(new ThreadStart(ProcessQueue)); loggingThread.IsBackground = true; loggingThread.Start(); } void ProcessQueue() { while (true) { waiting = true; hasNewItems.WaitOne(10000,true); waiting = false; Queue<Action> queueCopy; lock (queue) { queueCopy = new Queue<Action>(queue); queue.Clear(); } foreach (var log in queueCopy) { log(); } } } public override void LogMessage(LogRow row) { lock (queue) { queue.Enqueue(() => AsyncLogMessage(row)); } hasNewItems.Set(); } protected abstract void AsyncLogMessage(LogRow row); public override void Flush() { while (!waiting) { Thread.Sleep(1); } } } } ``` Some advantages: * It keeps the background logger alive, so it does not need to spin up and spin down threads. * It uses a single thread to service the queue, which means there will never be a situation where 100 threads are servicing the queue. * It copies the queues to ensure the queue is not blocked while the log operation is performed * It uses an AutoResetEvent to ensure the bg thread is in a wait state * It is, IMHO, very easy to follow Here is a slightly improved version, keep in mind I performed very little testing on it, but it does address a few minor issues. 
``` public abstract class ThreadedLogger : IDisposable { Queue<Action> queue = new Queue<Action>(); ManualResetEvent hasNewItems = new ManualResetEvent(false); ManualResetEvent terminate = new ManualResetEvent(false); ManualResetEvent waiting = new ManualResetEvent(false); Thread loggingThread; public ThreadedLogger() { loggingThread = new Thread(new ThreadStart(ProcessQueue)); loggingThread.IsBackground = true; // this is performed from a bg thread, to ensure the queue is serviced from a single thread loggingThread.Start(); } void ProcessQueue() { while (true) { waiting.Set(); int i = ManualResetEvent.WaitAny(new WaitHandle[] { hasNewItems, terminate }); // terminate was signaled if (i == 1) return; hasNewItems.Reset(); waiting.Reset(); Queue<Action> queueCopy; lock (queue) { queueCopy = new Queue<Action>(queue); queue.Clear(); } foreach (var log in queueCopy) { log(); } } } public void LogMessage(LogRow row) { lock (queue) { queue.Enqueue(() => AsyncLogMessage(row)); } hasNewItems.Set(); } protected abstract void AsyncLogMessage(LogRow row); public void Flush() { waiting.WaitOne(); } public void Dispose() { terminate.Set(); loggingThread.Join(); } } ``` Advantages over the original: * It's disposable, so you can get rid of the async logger * The flush semantics are improved * It will respond slightly better to a burst followed by silence
In response to Sam Saffron's post, I wanted to call flush and make sure everything was really finished writing. In my case, I am writing to a database in the queue thread, and all my log events were getting queued up, but sometimes the application stopped before everything was finished writing, which is not acceptable in my situation. I changed several chunks of your code, but the main thing I wanted to share was the flush: ``` public static void FlushLogs() { bool queueHasValues = true; while (queueHasValues) { //wait for the current iteration to complete m_waitingThreadEvent.WaitOne(); lock (m_loggerQueueSync) { queueHasValues = m_loggerQueue.Count > 0; } } //force MEL to flush all its listeners foreach (MEL.LogSource logSource in MEL.Logger.Writer.TraceSources.Values) { foreach (TraceListener listener in logSource.Listeners) { listener.Flush(); } } } ``` I hope that saves someone some frustration. It is especially apparent in parallel processes logging lots of data. Thanks for sharing your solution, it set me in a good direction! --Johnny S
1,181,561
I am using Enterprise Library 4 on one of my projects for logging (and other purposes). I've noticed that there is some cost to the logging that I am doing that I can mitigate by doing the logging on a separate thread. The way I am doing this now is that I create a LogEntry object and then I call BeginInvoke on a delegate that calls Logger.Write. ``` new Action<LogEntry>(Logger.Write).BeginInvoke(le, null, null); ``` What I'd really like to do is add the log message to a queue and then have a single thread pulling LogEntry instances off the queue and performing the log operation. The benefit of this would be that logging is not interfering with the executing operation and not every logging operation results in a job getting thrown on the thread pool. How can I create a shared queue that supports many writers and one reader in a thread safe way? Some examples of a queue implementation that is designed to support many writers (without causing synchronization/blocking) and a single reader would be really appreciated. Recommendation regarding alternative approaches would also be appreciated, I am not interested in changing logging frameworks though.
2009/07/25
[ "https://Stackoverflow.com/questions/1181561", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3957/" ]
I wrote this code a while back, feel free to use it. ``` using System; using System.Collections.Generic; using System.Linq; using System.Text; using System.Threading; namespace MediaBrowser.Library.Logging { public abstract class ThreadedLogger : LoggerBase { Queue<Action> queue = new Queue<Action>(); AutoResetEvent hasNewItems = new AutoResetEvent(false); volatile bool waiting = false; public ThreadedLogger() : base() { Thread loggingThread = new Thread(new ThreadStart(ProcessQueue)); loggingThread.IsBackground = true; loggingThread.Start(); } void ProcessQueue() { while (true) { waiting = true; hasNewItems.WaitOne(10000,true); waiting = false; Queue<Action> queueCopy; lock (queue) { queueCopy = new Queue<Action>(queue); queue.Clear(); } foreach (var log in queueCopy) { log(); } } } public override void LogMessage(LogRow row) { lock (queue) { queue.Enqueue(() => AsyncLogMessage(row)); } hasNewItems.Set(); } protected abstract void AsyncLogMessage(LogRow row); public override void Flush() { while (!waiting) { Thread.Sleep(1); } } } } ``` Some advantages: * It keeps the background logger alive, so it does not need to spin up and spin down threads. * It uses a single thread to service the queue, which means there will never be a situation where 100 threads are servicing the queue. * It copies the queues to ensure the queue is not blocked while the log operation is performed * It uses an AutoResetEvent to ensure the bg thread is in a wait state * It is, IMHO, very easy to follow Here is a slightly improved version, keep in mind I performed very little testing on it, but it does address a few minor issues. 
``` public abstract class ThreadedLogger : IDisposable { Queue<Action> queue = new Queue<Action>(); ManualResetEvent hasNewItems = new ManualResetEvent(false); ManualResetEvent terminate = new ManualResetEvent(false); ManualResetEvent waiting = new ManualResetEvent(false); Thread loggingThread; public ThreadedLogger() { loggingThread = new Thread(new ThreadStart(ProcessQueue)); loggingThread.IsBackground = true; // this is performed from a bg thread, to ensure the queue is serviced from a single thread loggingThread.Start(); } void ProcessQueue() { while (true) { waiting.Set(); int i = ManualResetEvent.WaitAny(new WaitHandle[] { hasNewItems, terminate }); // terminate was signaled if (i == 1) return; hasNewItems.Reset(); waiting.Reset(); Queue<Action> queueCopy; lock (queue) { queueCopy = new Queue<Action>(queue); queue.Clear(); } foreach (var log in queueCopy) { log(); } } } public void LogMessage(LogRow row) { lock (queue) { queue.Enqueue(() => AsyncLogMessage(row)); } hasNewItems.Set(); } protected abstract void AsyncLogMessage(LogRow row); public void Flush() { waiting.WaitOne(); } public void Dispose() { terminate.Set(); loggingThread.Join(); } } ``` Advantages over the original: * It's disposable, so you can get rid of the async logger * The flush semantics are improved * It will respond slightly better to a burst followed by silence
I wanted to say that my previous post was kind of useless. You can simply set AutoFlush to true and you will not have to loop through all the listeners. However, I still had a crazy problem with parallel threads trying to flush the logger. I had to create another boolean that was set to true during the copying of the queue and executing the LogEntry writes, and then in the flush routine I had to check that boolean to make sure nothing was still in the queue and nothing was getting processed before returning. Now multiple threads in parallel can hit this thing and when I call flush I know it is really flushed. ``` public static void FlushLogs() { int queueCount; bool isProcessingLogs; while (true) { //wait for the current iteration to complete m_waitingThreadEvent.WaitOne(); //check to see if we are currently processing logs lock (m_isProcessingLogsSync) { isProcessingLogs = m_isProcessingLogs; } //check to see if more events were added while the logger was processing the last batch lock (m_loggerQueueSync) { queueCount = m_loggerQueue.Count; } if (queueCount == 0 && !isProcessingLogs) break; //since something is in the queue, reset the signal so we will not keep looping Thread.Sleep(400); } } ```
1,181,561
I am using Enterprise Library 4 on one of my projects for logging (and other purposes). I've noticed that there is some cost to the logging that I am doing that I can mitigate by doing the logging on a separate thread. The way I am doing this now is that I create a LogEntry object and then I call BeginInvoke on a delegate that calls Logger.Write. ``` new Action<LogEntry>(Logger.Write).BeginInvoke(le, null, null); ``` What I'd really like to do is add the log message to a queue and then have a single thread pulling LogEntry instances off the queue and performing the log operation. The benefit of this would be that logging is not interfering with the executing operation and not every logging operation results in a job getting thrown on the thread pool. How can I create a shared queue that supports many writers and one reader in a thread safe way? Some examples of a queue implementation that is designed to support many writers (without causing synchronization/blocking) and a single reader would be really appreciated. Recommendation regarding alternative approaches would also be appreciated, I am not interested in changing logging frameworks though.
2009/07/25
[ "https://Stackoverflow.com/questions/1181561", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3957/" ]
I suggest to *start with measuring actual performance impact of logging* on the overall system (i.e. by running profiler) and optionally switching to something faster like [log4net](http://logging.apache.org/log4net/index.html) (I've personally migrated to it from EntLib logging a long time ago). If this does not work, you can try using this simple method from .NET Framework: ``` ThreadPool.QueueUserWorkItem ``` > > *Queues a method for execution. The method executes when a thread pool thread becomes available.* > > > [MSDN Details](http://msdn.microsoft.com/en-us/library/system.threading.threadpool.queueuserworkitem.aspx) If this does not work either then you can resort to something like John Skeet has offered and actually code the async logging framework yourself.
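For readers working in another stack, the same queue-a-work-item idea can be sketched in Python with `concurrent.futures` (an illustrative analogue only, not the .NET `ThreadPool.QueueUserWorkItem` API the answer refers to; `write_log` and `log_async` are hypothetical names for this sketch):

```python
# Hand each log write to a pool thread so the caller returns immediately.
# Minimal sketch, assuming the real write is the slow part.
from concurrent.futures import ThreadPoolExecutor

written = []

def write_log(entry):
    # Stand-in for the real (slow) log write.
    written.append(entry)

executor = ThreadPoolExecutor(max_workers=1)

def log_async(entry):
    # Returns a Future immediately; the pool thread performs the write.
    return executor.submit(write_log, entry)

futures = [log_async("message {}".format(i)) for i in range(5)]
for f in futures:
    f.result()  # wait here only so we can observe the writes
print(len(written))  # 5
```

With `max_workers=1` the writes are also serviced in submission order, which mirrors the single-reader design the question asks for.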
An extra level of indirection may help here. Your first async method call can put messages onto a synchronized Queue and set an event -- so the locks are happening in the thread-pool, not on your worker threads -- and then have yet another thread pulling messages off the queue when the event is raised.
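The mechanism described above -- producers briefly lock a shared queue and set an event, while one dedicated thread drains the queue when the event fires -- can be sketched in Python like this (illustrative only; the original answer is about .NET, and the `log`/`consumer` names are invented for the sketch):

```python
# Producers: short lock + event set. Consumer: wait, clear, drain a copy.
import threading
from collections import deque

log_queue = deque()
lock = threading.Lock()
has_items = threading.Event()
done = threading.Event()
logged = []

def consumer():
    # Drain the shared queue whenever the event fires; exit once the
    # producers are finished and the queue is empty.
    while not (done.is_set() and not log_queue):
        has_items.wait(timeout=0.1)
        has_items.clear()
        with lock:
            batch = list(log_queue)
            log_queue.clear()
        for entry in batch:
            logged.append(entry)  # stand-in for the real log write

def log(entry):
    # Called from worker threads: brief lock, then signal the consumer.
    with lock:
        log_queue.append(entry)
    has_items.set()

t = threading.Thread(target=consumer)
t.start()
workers = [threading.Thread(target=lambda base=b: [log(base + i) for i in range(25)])
           for b in (0, 25, 50, 75)]
for w in workers:
    w.start()
for w in workers:
    w.join()
done.set()
t.join()
print(len(logged))  # 100
```

Note that the workers only ever hold the lock long enough to append one item; the drain copies the queue under the lock and does the slow "writes" outside it, which is the point of the indirection.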
1,181,561
I am using Enterprise Library 4 on one of my projects for logging (and other purposes). I've noticed that there is some cost to the logging that I am doing that I can mitigate by doing the logging on a separate thread. The way I am doing this now is that I create a LogEntry object and then I call BeginInvoke on a delegate that calls Logger.Write. ``` new Action<LogEntry>(Logger.Write).BeginInvoke(le, null, null); ``` What I'd really like to do is add the log message to a queue and then have a single thread pulling LogEntry instances off the queue and performing the log operation. The benefit of this would be that logging is not interfering with the executing operation and not every logging operation results in a job getting thrown on the thread pool. How can I create a shared queue that supports many writers and one reader in a thread safe way? Some examples of a queue implementation that is designed to support many writers (without causing synchronization/blocking) and a single reader would be really appreciated. Recommendation regarding alternative approaches would also be appreciated, I am not interested in changing logging frameworks though.
2009/07/25
[ "https://Stackoverflow.com/questions/1181561", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3957/" ]
Yes, you need a producer/consumer queue. I have one example of this in my threading tutorial - if you look my ["deadlocks / monitor methods"](http://pobox.com/~skeet/csharp/threads/deadlocks.shtml) page you'll find the code in the second half. There are plenty of other examples online, of course - and .NET 4.0 will ship with one in the framework too (rather more fully featured than mine!). In .NET 4.0 you'd probably wrap a [`ConcurrentQueue<T>`](http://msdn.microsoft.com/en-us/library/dd267265(VS.100).aspx) in a [`BlockingCollection<T>`](http://msdn.microsoft.com/en-us/library/dd267312(VS.100).aspx). The version on that page is non-generic (it was written a *long* time ago) but you'd probably want to make it generic - it would be trivial to do. You would call `Produce` from each "normal" thread, and `Consume` from one thread, just looping round and logging whatever it consumes. It's probably easiest just to make the consumer thread a background thread, so you don't need to worry about "stopping" the queue when your app exits. That does mean there's a remote possibility of missing the final log entry though (if it's half way through writing it when the app exits) - or even more if you're producing faster than it can consume/log.
I wanted to say that my previous post was kind of useless. You can simply set AutoFlush to true and you will not have to loop through all the listeners. However, I still had a crazy problem with parallel threads trying to flush the logger. I had to create another boolean that was set to true during the copying of the queue and executing the LogEntry writes, and then in the flush routine I had to check that boolean to make sure nothing was still in the queue and nothing was getting processed before returning. Now multiple threads in parallel can hit this thing and when I call flush I know it is really flushed. ``` public static void FlushLogs() { int queueCount; bool isProcessingLogs; while (true) { //wait for the current iteration to complete m_waitingThreadEvent.WaitOne(); //check to see if we are currently processing logs lock (m_isProcessingLogsSync) { isProcessingLogs = m_isProcessingLogs; } //check to see if more events were added while the logger was processing the last batch lock (m_loggerQueueSync) { queueCount = m_loggerQueue.Count; } if (queueCount == 0 && !isProcessingLogs) break; //since something is in the queue, reset the signal so we will not keep looping Thread.Sleep(400); } } ```
1,181,561
I am using Enterprise Library 4 on one of my projects for logging (and other purposes). I've noticed that there is some cost to the logging that I am doing that I can mitigate by doing the logging on a separate thread. The way I am doing this now is that I create a LogEntry object and then I call BeginInvoke on a delegate that calls Logger.Write. ``` new Action<LogEntry>(Logger.Write).BeginInvoke(le, null, null); ``` What I'd really like to do is add the log message to a queue and then have a single thread pulling LogEntry instances off the queue and performing the log operation. The benefit of this would be that logging is not interfering with the executing operation and not every logging operation results in a job getting thrown on the thread pool. How can I create a shared queue that supports many writers and one reader in a thread safe way? Some examples of a queue implementation that is designed to support many writers (without causing synchronization/blocking) and a single reader would be really appreciated. Recommendation regarding alternative approaches would also be appreciated, I am not interested in changing logging frameworks though.
2009/07/25
[ "https://Stackoverflow.com/questions/1181561", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3957/" ]
Yes, you need a producer/consumer queue. I have one example of this in my threading tutorial - if you look my ["deadlocks / monitor methods"](http://pobox.com/~skeet/csharp/threads/deadlocks.shtml) page you'll find the code in the second half. There are plenty of other examples online, of course - and .NET 4.0 will ship with one in the framework too (rather more fully featured than mine!). In .NET 4.0 you'd probably wrap a [`ConcurrentQueue<T>`](http://msdn.microsoft.com/en-us/library/dd267265(VS.100).aspx) in a [`BlockingCollection<T>`](http://msdn.microsoft.com/en-us/library/dd267312(VS.100).aspx). The version on that page is non-generic (it was written a *long* time ago) but you'd probably want to make it generic - it would be trivial to do. You would call `Produce` from each "normal" thread, and `Consume` from one thread, just looping round and logging whatever it consumes. It's probably easiest just to make the consumer thread a background thread, so you don't need to worry about "stopping" the queue when your app exits. That does mean there's a remote possibility of missing the final log entry though (if it's half way through writing it when the app exits) - or even more if you're producing faster than it can consume/log.
If what you have in mind is a SHARED queue, then I think you are going to have to synchronize the writes to it, the pushes and the pops. But, I still think it's worth aiming at the shared queue design. In comparison to the IO of logging and probably in comparison to the other work your app is doing, the brief amount of blocking for the pushes and the pops will probably not be significant.
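A quick sketch of the shared-queue design discussed above: several writer threads push onto one synchronized queue while a single reader drains it. In Python, `queue.Queue` does the push/pop locking internally, so the brief blocking the answer mentions is all hidden inside `put` and `get` (illustrative sketch; the `writer`/`reader` names are invented here):

```python
# Many writers, one reader, one synchronized queue.
import queue
import threading

q = queue.Queue()
SENTINEL = object()  # tells the reader the writers are done
consumed = []

def writer(n):
    for i in range(n):
        q.put((threading.get_ident(), i))  # brief internal lock per push

def reader():
    while True:
        item = q.get()  # blocks until an item is available
        if item is SENTINEL:
            break
        consumed.append(item)  # stand-in for the logging IO

writers = [threading.Thread(target=writer, args=(50,)) for _ in range(4)]
r = threading.Thread(target=reader)
r.start()
for w in writers:
    w.start()
for w in writers:
    w.join()
q.put(SENTINEL)
r.join()
print(len(consumed))  # 200
```

As the answer says, the per-push lock is tiny compared to the logging IO itself, so the shared queue rarely becomes the bottleneck.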
1,181,561
I am using Enterprise Library 4 on one of my projects for logging (and other purposes). I've noticed that there is some cost to the logging that I am doing that I can mitigate by doing the logging on a separate thread. The way I am doing this now is that I create a LogEntry object and then I call BeginInvoke on a delegate that calls Logger.Write. ``` new Action<LogEntry>(Logger.Write).BeginInvoke(le, null, null); ``` What I'd really like to do is add the log message to a queue and then have a single thread pulling LogEntry instances off the queue and performing the log operation. The benefit of this would be that logging is not interfering with the executing operation and not every logging operation results in a job getting thrown on the thread pool. How can I create a shared queue that supports many writers and one reader in a thread safe way? Some examples of a queue implementation that is designed to support many writers (without causing synchronization/blocking) and a single reader would be really appreciated. Recommendation regarding alternative approaches would also be appreciated, I am not interested in changing logging frameworks though.
2009/07/25
[ "https://Stackoverflow.com/questions/1181561", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3957/" ]
I wanted to say that my previous post was kind of useless. You can simply set AutoFlush to true and you will not have to loop through all the listeners. However, I still had a crazy problem with parallel threads trying to flush the logger. I had to create another boolean that was set to true during the copying of the queue and executing the LogEntry writes, and then in the flush routine I had to check that boolean to make sure nothing was still in the queue and nothing was getting processed before returning. Now multiple threads in parallel can hit this thing and when I call flush I know it is really flushed. ``` public static void FlushLogs() { int queueCount; bool isProcessingLogs; while (true) { //wait for the current iteration to complete m_waitingThreadEvent.WaitOne(); //check to see if we are currently processing logs lock (m_isProcessingLogsSync) { isProcessingLogs = m_isProcessingLogs; } //check to see if more events were added while the logger was processing the last batch lock (m_loggerQueueSync) { queueCount = m_loggerQueue.Count; } if (queueCount == 0 && !isProcessingLogs) break; //since something is in the queue, reset the signal so we will not keep looping Thread.Sleep(400); } } ```
An extra level of indirection may help here. Your first async method call can put messages onto a synchronized Queue and set an event -- so the locks are happening in the thread-pool, not on your worker threads -- and then have yet another thread pulling messages off the queue when the event is raised.
2,458,446
I am not an expert web-dev so please bear with me here. I would like to display a banner style header for a page with the top part taken up by an image that is 275x116 and then a horizontal menu bar (styled using ul items) appearing at 70% from the top of the banner. How would I set this up so that the banner appears underneath my navigation? Currently, a portion of the left side of my menu bar sits underneath the image but I'd like it to be the opposite so the menu bar is above the image, some thing like this: ``` ============= <start of header> =========== -------- | img | | | | Horizontal menu | | -------- ============= <end of header> =========== ``` My css: ``` #header { background-color: green; border: 0; margin: 0; padding: 0; overflow: hidden; width: 100%; height: 120px; } #logo { background: green url(images/logo.png) no-repeat scroll 0 0; margin: 0px 0px; border: 1px solid white; left: 20px; top: 20px; width: 275px; height: 116px; position: absolute; z-index: -1000; } .container { border:1px solid grey; margin-left:auto; margin-right:auto; width:960px; } ``` My Html: ``` <body> <div id="header"> <div id="logo"> </div> <div class="container" id="primaryNavbar"> <ul> <li><a href="#">Home</a></li> <li><a href="#">Books</a></li> <li><a href="#">Shows</a></li> <li><a href="#">Movies</a></li> </ul> <div class="clear">&nbsp;</div> </div> <!-- end of container --> </div> <!-- end of header --> </body> ``` I thought that setting the position to "absolute" for the logo element and adding in a very low z-index would achieve this but that isn't the case here. Any suggestions?
2010/03/16
[ "https://Stackoverflow.com/questions/2458446", "https://Stackoverflow.com", "https://Stackoverflow.com/users/265838/" ]
Append these items to your existing styles: ``` #header { position: relative; /* this will keep your absolute items inside of this container */ } #logo { z-index: 1; } .container { z-index: 2; position: absolute; } .container ul li { float: left; } ```
Make the `.container` float left, position relative and move it around until it's where you need it.
1,776,241
I try to compile a ([LWJGL](http://en.wikipedia.org/wiki/Lightweight_Java_Game_Library)) Java project using [NetBeans](http://en.wikipedia.org/wiki/NetBeans). I clicked on the project -> properties and under Libraries -> Compile. I added the Jars location, the source files location and javadoc location. Still when I try to build the project I get the error: > > package org.lwjgl does not exist. > > > What can I do to resolve this error?
2009/11/21
[ "https://Stackoverflow.com/questions/1776241", "https://Stackoverflow.com", "https://Stackoverflow.com/users/80932/" ]
Try removing all the libraries and doing a "build and clean" (obviously the compile will give 100% errors...). Then add all the libraries back and press "OK". After that, wait for "scanning the projects" to finish successfully, and then retry the "build and clean".
I had added the directory location of the jar files, but I needed to add each jar individually.
845,058
How do I get a line count of a large file in the most memory- and time-efficient manner? ``` def file_len(filename): with open(filename) as f: for i, _ in enumerate(f): pass return i + 1 ```
2009/05/10
[ "https://Stackoverflow.com/questions/845058", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12855/" ]
I had to post this on a similar question until my reputation score jumped a bit (thanks to whoever bumped me!). All of these solutions ignore one way to make this run considerably faster, namely by using the unbuffered (raw) interface, using bytearrays, and doing your own buffering. (This only applies in Python 3. In Python 2, the raw interface may or may not be used by default, but in Python 3, you'll default into Unicode.) Using a modified version of the timing tool, I believe the following code is faster (and marginally more pythonic) than any of the solutions offered: ``` def rawcount(filename): f = open(filename, 'rb') lines = 0 buf_size = 1024 * 1024 read_f = f.raw.read buf = read_f(buf_size) while buf: lines += buf.count(b'\n') buf = read_f(buf_size) return lines ``` Using a separate generator function, this runs a smidge faster: ``` def _make_gen(reader): b = reader(1024 * 1024) while b: yield b b = reader(1024*1024) def rawgencount(filename): f = open(filename, 'rb') f_gen = _make_gen(f.raw.read) return sum( buf.count(b'\n') for buf in f_gen ) ``` This can be done completely with generators expressions in-line using itertools, but it gets pretty weird looking: ``` from itertools import (takewhile,repeat) def rawincount(filename): f = open(filename, 'rb') bufgen = takewhile(lambda x: x, (f.raw.read(1024*1024) for _ in repeat(None))) return sum( buf.count(b'\n') for buf in bufgen ) ``` Here are my timings: ``` function average, s min, s ratio rawincount 0.0043 0.0041 1.00 rawgencount 0.0044 0.0042 1.01 rawcount 0.0048 0.0045 1.09 bufcount 0.008 0.0068 1.64 wccount 0.01 0.0097 2.35 itercount 0.014 0.014 3.41 opcount 0.02 0.02 4.83 kylecount 0.021 0.021 5.05 simplecount 0.022 0.022 5.25 mapcount 0.037 0.031 7.46 ```
This code is shorter and clearer. It's probably the best way: ``` num_lines = open('yourfile.ext').read().count('\n') ```
845,058
How do I get a line count of a large file in the most memory- and time-efficient manner? ``` def file_len(filename): with open(filename) as f: for i, _ in enumerate(f): pass return i + 1 ```
2009/05/10
[ "https://Stackoverflow.com/questions/845058", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12855/" ]
``` def file_len(full_path): """ Count number of lines in a file.""" f = open(full_path) nr_of_lines = sum(1 for line in f) f.close() return nr_of_lines ```
An alternative for big files is using [`xreadlines():`](https://python-reference.readthedocs.io/en/latest/docs/file/xreadlines.html) ``` count = 0 for line in open(thefilepath).xreadlines( ): count += 1 ``` For Python 3 please see: [What substitutes xreadlines() in Python 3?](https://stackoverflow.com/questions/3541274/what-substitutes-xreadlines-in-python-3)
845,058
How do I get a line count of a large file in the most memory- and time-efficient manner? ``` def file_len(filename): with open(filename) as f: for i, _ in enumerate(f): pass return i + 1 ```
2009/05/10
[ "https://Stackoverflow.com/questions/845058", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12855/" ]
I had to post this on a similar question until my reputation score jumped a bit (thanks to whoever bumped me!). All of these solutions ignore one way to make this run considerably faster, namely by using the unbuffered (raw) interface, using bytearrays, and doing your own buffering. (This only applies in Python 3. In Python 2, the raw interface may or may not be used by default, but in Python 3, you'll default into Unicode.) Using a modified version of the timing tool, I believe the following code is faster (and marginally more pythonic) than any of the solutions offered: ``` def rawcount(filename): f = open(filename, 'rb') lines = 0 buf_size = 1024 * 1024 read_f = f.raw.read buf = read_f(buf_size) while buf: lines += buf.count(b'\n') buf = read_f(buf_size) return lines ``` Using a separate generator function, this runs a smidge faster: ``` def _make_gen(reader): b = reader(1024 * 1024) while b: yield b b = reader(1024*1024) def rawgencount(filename): f = open(filename, 'rb') f_gen = _make_gen(f.raw.read) return sum( buf.count(b'\n') for buf in f_gen ) ``` This can be done completely with generators expressions in-line using itertools, but it gets pretty weird looking: ``` from itertools import (takewhile,repeat) def rawincount(filename): f = open(filename, 'rb') bufgen = takewhile(lambda x: x, (f.raw.read(1024*1024) for _ in repeat(None))) return sum( buf.count(b'\n') for buf in bufgen ) ``` Here are my timings: ``` function average, s min, s ratio rawincount 0.0043 0.0041 1.00 rawgencount 0.0044 0.0042 1.01 rawcount 0.0048 0.0045 1.09 bufcount 0.008 0.0068 1.64 wccount 0.01 0.0097 2.35 itercount 0.014 0.014 3.41 opcount 0.02 0.02 4.83 kylecount 0.021 0.021 5.05 simplecount 0.022 0.022 5.25 mapcount 0.037 0.031 7.46 ```
Just to complete the above methods I tried a variant with the fileinput module: ``` import fileinput as fi def filecount(fname): for line in fi.input(fname): pass return fi.lineno() ``` And passed a 60mil lines file to all the above stated methods: ``` mapcount : 6.1331050396 simplecount : 4.588793993 opcount : 4.42918205261 filecount : 43.2780818939 bufcount : 0.170812129974 ``` It's a little surprise to me that fileinput is that bad and scales far worse than all the other methods...
845,058
How do I get a line count of a large file in the most memory- and time-efficient manner? ``` def file_len(filename): with open(filename) as f: for i, _ in enumerate(f): pass return i + 1 ```
2009/05/10
[ "https://Stackoverflow.com/questions/845058", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12855/" ]
Here is a python program to use the multiprocessing library to distribute the line counting across machines/cores. My test improves counting a 20million line file from 26 seconds to 7 seconds using an 8 core windows 64 server. Note: not using memory mapping makes things much slower. ``` import multiprocessing, sys, time, os, mmap import logging, logging.handlers def init_logger(pid): console_format = 'P{0} %(levelname)s %(message)s'.format(pid) logger = logging.getLogger() # New logger at root level logger.setLevel( logging.INFO ) logger.handlers.append( logging.StreamHandler() ) logger.handlers[0].setFormatter( logging.Formatter( console_format, '%d/%m/%y %H:%M:%S' ) ) def getFileLineCount( queues, pid, processes, file1 ): init_logger(pid) logging.info( 'start' ) physical_file = open(file1, "r") # mmap.mmap(fileno, length[, tagname[, access[, offset]]] m1 = mmap.mmap( physical_file.fileno(), 0, access=mmap.ACCESS_READ ) #work out file size to divide up line counting fSize = os.stat(file1).st_size chunk = (fSize / processes) + 1 lines = 0 #get where I start and stop _seedStart = chunk * (pid) _seekEnd = chunk * (pid+1) seekStart = int(_seedStart) seekEnd = int(_seekEnd) if seekEnd < int(_seekEnd + 1): seekEnd += 1 if _seedStart < int(seekStart + 1): seekStart += 1 if seekEnd > fSize: seekEnd = fSize #find where to start if pid > 0: m1.seek( seekStart ) #read next line l1 = m1.readline() # need to use readline with memory mapped files seekStart = m1.tell() #tell previous rank my seek start to make their seek end if pid > 0: queues[pid-1].put( seekStart ) if pid < processes-1: seekEnd = queues[pid].get() m1.seek( seekStart ) l1 = m1.readline() while len(l1) > 0: lines += 1 l1 = m1.readline() if m1.tell() > seekEnd or len(l1) == 0: break logging.info( 'done' ) # add up the results if pid == 0: for p in range(1,processes): lines += queues[0].get() queues[0].put(lines) # the total lines counted else: queues[0].put(lines) m1.close() physical_file.close() if __name__ == '__main__': init_logger( 'main' ) if len(sys.argv) > 1: file_name = sys.argv[1] else: logging.fatal( 'parameters required: file-name [processes]' ) exit() t = time.time() processes = multiprocessing.cpu_count() if len(sys.argv) > 2: processes = int(sys.argv[2]) queues=[] # a queue for each process for pid in range(processes): queues.append( multiprocessing.Queue() ) jobs=[] prev_pipe = 0 for pid in range(processes): p = multiprocessing.Process( target = getFileLineCount, args=(queues, pid, processes, file_name,) ) p.start() jobs.append(p) jobs[0].join() #wait for counting to finish lines = queues[0].get() logging.info( 'finished {} Lines:{}'.format( time.time() - t, lines ) ) ```
If one wants to get the line count cheaply in Python in Linux, I recommend this method: ``` import os print os.popen("wc -l file_path").readline().split()[0] ``` file\_path can be either an absolute or a relative path. Hope this may help.
845,058
How do I get a line count of a large file in the most memory- and time-efficient manner? ``` def file_len(filename): with open(filename) as f: for i, _ in enumerate(f): pass return i + 1 ```
2009/05/10
[ "https://Stackoverflow.com/questions/845058", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12855/" ]
``` def file_len(full_path): """ Count number of lines in a file.""" f = open(full_path) nr_of_lines = sum(1 for line in f) f.close() return nr_of_lines ```
[Kyle's answer](https://stackoverflow.com/a/1019572/2698552) ``` num_lines = sum(1 for line in open('my_file.txt')) ``` is probably best, an alternative for this is ``` num_lines = len(open('my_file.txt').read().splitlines()) ``` Here is the comparision of performance of both ``` In [20]: timeit sum(1 for line in open('Charts.ipynb')) 100000 loops, best of 3: 9.79 µs per loop In [21]: timeit len(open('Charts.ipynb').read().splitlines()) 100000 loops, best of 3: 12 µs per loop ```
845,058
How do I get a line count of a large file in the most memory- and time-efficient manner? ``` def file_len(filename): with open(filename) as f: for i, _ in enumerate(f): pass return i + 1 ```
2009/05/10
[ "https://Stackoverflow.com/questions/845058", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12855/" ]
You could execute a subprocess and run `wc -l filename` ``` import subprocess def file_len(fname): p = subprocess.Popen(['wc', '-l', fname], stdout=subprocess.PIPE, stderr=subprocess.PIPE) result, err = p.communicate() if p.returncode != 0: raise IOError(err) return int(result.strip().split()[0]) ```
``` def line_count(path): count = 0 with open(path) as lines: for count, l in enumerate(lines, start=1): pass return count ```
845,058
How do I get a line count of a large file in the most memory- and time-efficient manner? ``` def file_len(filename): with open(filename) as f: for i, _ in enumerate(f): pass return i + 1 ```
2009/05/10
[ "https://Stackoverflow.com/questions/845058", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12855/" ]
the result of opening a file is an iterator, which can be converted to a sequence, which has a length: ``` with open(filename) as f: return len(list(f)) ``` this is more concise than your explicit loop, and avoids the `enumerate`.
Similarly: ``` lines = 0 with open(path) as f: for line in f: lines += 1 ```
845,058
How do I get a line count of a large file in the most memory- and time-efficient manner? ``` def file_len(filename): with open(filename) as f: for i, _ in enumerate(f): pass return i + 1 ```
2009/05/10
[ "https://Stackoverflow.com/questions/845058", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12855/" ]
How about this one-liner: ``` file_length = len(open('myfile.txt','r').read().split('\n')) ``` Takes 0.003 sec using this method to time it on a 3900 line file ``` def c(): import time s = time.time() file_length = len(open('myfile.txt','r').read().split('\n')) print time.time() - s ```
You can use the `subprocess` module in the following way: ``` import subprocess Number_lines = int( (subprocess.Popen( 'wc -l {0}'.format( Filename ), shell=True, stdout=subprocess.PIPE).stdout).readlines()[0].split()[0] ) ``` , where `Filename` is the absolute path of the file.
845,058
How do I get a line count of a large file in the most memory- and time-efficient manner? ``` def file_len(filename): with open(filename) as f: for i, _ in enumerate(f): pass return i + 1 ```
2009/05/10
[ "https://Stackoverflow.com/questions/845058", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12855/" ]
Just to complete the above methods I tried a variant with the fileinput module: ``` import fileinput as fi def filecount(fname): for line in fi.input(fname): pass return fi.lineno() ``` And passed a 60mil lines file to all the above stated methods: ``` mapcount : 6.1331050396 simplecount : 4.588793993 opcount : 4.42918205261 filecount : 43.2780818939 bufcount : 0.170812129974 ``` It's a little surprise to me that fileinput is that bad and scales far worse than all the other methods...
I found that you can just do: ``` f = open("data.txt") linecount = len(f.readlines()) ``` which will give you the answer
845,058
How do I get a line count of a large file in the most memory- and time-efficient manner? ``` def file_len(filename): with open(filename) as f: for i, _ in enumerate(f): pass return i + 1 ```
2009/05/10
[ "https://Stackoverflow.com/questions/845058", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12855/" ]
A one-line bash solution similar to [this answer](https://stackoverflow.com/a/845069/1397061), using the modern `subprocess.check_output` function: ``` def line_count(filename): return int(subprocess.check_output(['wc', '-l', filename]).split()[0]) ```
Another possibility: ``` import subprocess def num_lines_in_file(fpath): return int(subprocess.check_output('wc -l %s' % fpath, shell=True).strip().split()[0]) ```
193,095
.NET throws an exception trying to parse a datetime string in this format: **Fri, 10 Oct 2008 00:05:51 CST** Convert.ToDateTime("Fri, 10 Oct 2008 00:05:51 CST") results in an exception: > > The string was not recognized as a > valid DateTime. There is a unknown > word starting at index 26 > > > Character 26 obviously being the start of "CST" In a quick test, PHP and javascript can both parse this string into a date with no problem. Is .NET just full of fail or is there a reasonable explanation?
2008/10/10
[ "https://Stackoverflow.com/questions/193095", "https://Stackoverflow.com", "https://Stackoverflow.com/users/26550/" ]
<http://msdn.microsoft.com/en-us/library/ey1cdcx8.aspx> You need to use the overloaded DateTime.Parse to accurately parse timezones.
If a specific date and time format will be parsed across different locales, use one of the overloads of the ParseExact method and provide a format specifier.
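The same idea -- supply an explicit format instead of relying on a general parser -- looks like this in Python with `datetime.strptime` (an illustrative analogue, not the .NET `ParseExact` API; the zone table and `parse_with_zone` helper are assumptions of this sketch, since abbreviations like "CST" are ambiguous and must be mapped to offsets by hand):

```python
# Parse the question's string with an explicit format specifier,
# handling the trailing timezone abbreviation ourselves.
from datetime import datetime, timedelta, timezone

ZONES = {"CST": timezone(timedelta(hours=-6))}  # US Central, assumed

def parse_with_zone(s):
    body, _, abbrev = s.rpartition(" ")
    dt = datetime.strptime(body, "%a, %d %b %Y %H:%M:%S")
    return dt.replace(tzinfo=ZONES[abbrev])

dt = parse_with_zone("Fri, 10 Oct 2008 00:05:51 CST")
print(dt.isoformat())  # 2008-10-10T00:05:51-06:00
```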
1,991,776
How can I parse the following string: Phone + 300 mins & unlimited texts - 24 month plan $25 to obtain the bracketed values, i.e. Phone + [300] mins & [unlimited] texts - [24] month plan $[25]
2010/01/02
[ "https://Stackoverflow.com/questions/1991776", "https://Stackoverflow.com", "https://Stackoverflow.com/users/242319/" ]
Depends, if they all look like that, then: `/Phone \+ (\w+) mins & (\w+) texts - (\d+) month plan \$(\w+)/` That assumes that a plan may contain unlimited minutes. You can use the regex like this: ``` str = "Phone + 300 mins & unlimited texts - 24 month plan $25" regex = /Phone \+ (\w+) mins & (\w+) texts - (\d+) month plan \$(\w+)/ match = regex.match(str).to_a ``` now match is `["Phone + 300 mins & unlimited texts - 24 month plan $25", "300", "unlimited", "24", "25"]`
Match can also be abbreviated with the `=~` so: ``` string =~ /Phone\s*\+\s*(\w*)\s*mins\s*&\s*(\w*)\s*texts\s*-\s*(\w*)\s*month\s*plan\s*\$(\w*)/ ``` performs a match on the string with the regex on the right hand side. You can also directly access the value of a group (the parts of the regex within parens) utilizing $1 etc so in this case ``` minutes = $1 texts = $2 months = $3 cost = $4 ```
2,013,676
My problem is that I need to dynamically include a javascript file from another external javascript file. I'm trying to do it by using this function: ``` function addCustomScriptTag(url) { var scriptTag=document.createElement('script'); scriptTag.type = 'text/javascript'; scriptTag.src=url; var myElement = document.getElementsByTagName("head")[0]; myElement.appendChild(scriptTag); } ``` The problem happens only in IE6 where trying to append to the head element causes an 'operation aborted' error. Any help would be appreciated
2010/01/06
[ "https://Stackoverflow.com/questions/2013676", "https://Stackoverflow.com", "https://Stackoverflow.com/users/244783/" ]
Append it to the body then. Javascript doesn't have to go exclusively in the <head> of your document.
I think it's because IE6 doesn't support `getElementsByTagName()`, try replacing it with `document.body`.
2,013,676
My problem is that I need to dynamically include a javascript file from another external javascript file. I'm trying to do it by using this function: ``` function addCustomScriptTag(url) { var scriptTag=document.createElement('script'); scriptTag.type = 'text/javascript'; scriptTag.src=url; var myElement = document.getElementsByTagName("head")[0]; myElement.appendChild(scriptTag); } ``` The problem happens only in IE6 where trying to append to the head element causes an 'operation aborted' error. Any help would be appreciated
2010/01/06
[ "https://Stackoverflow.com/questions/2013676", "https://Stackoverflow.com", "https://Stackoverflow.com/users/244783/" ]
It depends when you add it to the head DOM element. Operation aborted occurs in all versions of IE because you're trying to modify a DOM element via JavaScript before that DOM element has finished loading, <http://support.microsoft.com/default.aspx/kb/927917>. If you need this script loaded right away, you could do an old school document.write to add the script tag, e.g. ``` <head> <script>document.write('<script src="yourUrl.js"><\/scr'+'ipt>');</script> </head> ``` Otherwise call your function in the body onload via plain old JavaScript or via a framework like jQuery a la document.ready.
Append it to the body then. Javascript doesn't have to go exclusively in the <head> of your document.
2,013,676
My problem is that I need to dynamically include a javascript file from another external javascript file. I'm trying to do it by using this function: ``` function addCustomScriptTag(url) { var scriptTag=document.createElement('script'); scriptTag.type = 'text/javascript'; scriptTag.src=url; var myElement = document.getElementsByTagName("head")[0]; myElement.appendChild(scriptTag); } ``` The problem happens only in IE6 where trying to append to the head element causes an 'operation aborted' error. Any help would be appreciated
2010/01/06
[ "https://Stackoverflow.com/questions/2013676", "https://Stackoverflow.com", "https://Stackoverflow.com/users/244783/" ]
Consider using a library like jQuery and then just use the equivalent (if not using jQuery) of [`getScript`](http://docs.jquery.com/Ajax/jQuery.getScript). This will handle cross-browser quirks and inconsistencies for the most part.
I think it's because IE6 doesn't support `getElementsByTagName()`, try replacing it with `document.body`.
2,013,676
My problem is that I need to dynamically include a javascript file from another external javascript file. I'm trying to do it by using this function: ``` function addCustomScriptTag(url) { var scriptTag=document.createElement('script'); scriptTag.type = 'text/javascript'; scriptTag.src=url; var myElement = document.getElementsByTagName("head")[0]; myElement.appendChild(scriptTag); } ``` The problem happens only in IE6 where trying to append to the head element causes an 'operation aborted' error. Any help would be appreciated
2010/01/06
[ "https://Stackoverflow.com/questions/2013676", "https://Stackoverflow.com", "https://Stackoverflow.com/users/244783/" ]
It depends when you add it to the head DOM element. Operation aborted occurs in all versions of IE because you're trying to modify a DOM element via JavaScript before that DOM element has finished loading, <http://support.microsoft.com/default.aspx/kb/927917>. If you need this script loaded right away, you could do an old school document.write to add the script tag, e.g. ``` <head> <script>document.write('<script src="yourUrl.js"><\/scr'+'ipt>');</script> </head> ``` Otherwise call your function in the body onload via plain old JavaScript or via a framework like jQuery a la document.ready.
Consider using a library like jQuery and then just use the equivalent (if not using jQuery) of [`getScript`](http://docs.jquery.com/Ajax/jQuery.getScript). This will handle cross-browser quirks and inconsistencies for the most part.
2,013,676
My problem is that I need to dynamically include a javascript file from another external javascript file. I'm trying to do it by using this function: ``` function addCustomScriptTag(url) { var scriptTag=document.createElement('script'); scriptTag.type = 'text/javascript'; scriptTag.src=url; var myElement = document.getElementsByTagName("head")[0]; myElement.appendChild(scriptTag); } ``` The problem happens only in IE6 where trying to append to the head element causes an 'operation aborted' error. Any help would be appreciated
2010/01/06
[ "https://Stackoverflow.com/questions/2013676", "https://Stackoverflow.com", "https://Stackoverflow.com/users/244783/" ]
It depends when you add it to the head DOM element. Operation aborted occurs in all versions of IE because you're trying to modify a DOM element via JavaScript before that DOM element has finished loading, <http://support.microsoft.com/default.aspx/kb/927917>. If you need this script loaded right away, you could do an old school document.write to add the script tag, e.g. ``` <head> <script>document.write('<script src="yourUrl.js"><\/scr'+'ipt>');</script> </head> ``` Otherwise call your function in the body onload via plain old JavaScript or via a framework like jQuery a la document.ready.
I think it's because IE6 doesn't support `getElementsByTagName()`, try replacing it with `document.body`.
2,013,676
My problem is that I need to dynamically include a javascript file from another external javascript file. I'm trying to do it by using this function: ``` function addCustomScriptTag(url) { var scriptTag=document.createElement('script'); scriptTag.type = 'text/javascript'; scriptTag.src=url; var myElement = document.getElementsByTagName("head")[0]; myElement.appendChild(scriptTag); } ``` The problem happens only in IE6 where trying to append to the head element causes an 'operation aborted' error. Any help would be appreciated
2010/01/06
[ "https://Stackoverflow.com/questions/2013676", "https://Stackoverflow.com", "https://Stackoverflow.com/users/244783/" ]
I steal from the jQuery source (excerpted; in jQuery, `s.url`, `success()` and `complete()` come from the enclosing ajax call, and `done` starts out false): ``` var done = false; var head = document.getElementsByTagName("head")[0]; var script = document.createElement("script"); script.src = s.url; // Attach handlers for all browsers script.onload = script.onreadystatechange = function(){ if ( !done && (!this.readyState || this.readyState == "loaded" || this.readyState == "complete") ) { done = true; success(); complete(); // Handle memory leak in IE script.onload = script.onreadystatechange = null; head.removeChild( script ); } }; head.appendChild(script); ```
I think it's because IE6 doesn't support `getElementsByTagName()`, try replacing it with `document.body`.
2,013,676
My problem is that I need to dynamically include a javascript file from another external javascript file. I'm trying to do it by using this function: ``` function addCustomScriptTag(url) { var scriptTag=document.createElement('script'); scriptTag.type = 'text/javascript'; scriptTag.src=url; var myElement = document.getElementsByTagName("head")[0]; myElement.appendChild(scriptTag); } ``` The problem happens only in IE6 where trying to append to the head element causes an 'operation aborted' error. Any help would be appreciated
2010/01/06
[ "https://Stackoverflow.com/questions/2013676", "https://Stackoverflow.com", "https://Stackoverflow.com/users/244783/" ]
It depends when you add it to the head DOM element. Operation aborted occurs in all versions of IE because you're trying to modify a DOM element via JavaScript before that DOM element has finished loading, <http://support.microsoft.com/default.aspx/kb/927917>. If you need this script loaded right away, you could do an old school document.write to add the script tag, e.g. ``` <head> <script>document.write('<script src="yourUrl.js"><\/scr'+'ipt>');</script> </head> ``` Otherwise call your function in the body onload via plain old JavaScript or via a framework like jQuery a la document.ready.
I steal from the jQuery source (excerpted; in jQuery, `s.url`, `success()` and `complete()` come from the enclosing ajax call, and `done` starts out false): ``` var done = false; var head = document.getElementsByTagName("head")[0]; var script = document.createElement("script"); script.src = s.url; // Attach handlers for all browsers script.onload = script.onreadystatechange = function(){ if ( !done && (!this.readyState || this.readyState == "loaded" || this.readyState == "complete") ) { done = true; success(); complete(); // Handle memory leak in IE script.onload = script.onreadystatechange = null; head.removeChild( script ); } }; head.appendChild(script); ```
1,344,576
**Is it possible for there to be any type of value in `$_GET` or `$_POST` which is *not* an array or string?** For those who read code better, is it at all possible to run this simple script on a *web server* and get it to throw the exception? ``` // crash-me.php <?php function must_be_array_or_string($value) { if(is_string($value)) return; if(is_array($value)) { foreach($value as $subValue) must_be_array_or_string($subValue); return; } throw new Exception("Value is " . gettype($value)); } if(isset($_GET)) must_be_array_or_string($_GET); if(isset($_POST)) must_be_array_or_string($_POST); ```
2009/08/28
[ "https://Stackoverflow.com/questions/1344576", "https://Stackoverflow.com", "https://Stackoverflow.com/users/28835/" ]
Except for file uploads, values are always strings or arrays.
I believe in the case of file uploads, the `'error'` and `'size'` fields would be `ints`.
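The recursive guard in the question translates directly to other languages; here is a Python analogue of the same check (dict/list standing in for PHP's arrays is an assumption about the mapping):

```python
def must_be_array_or_string(value):
    # Strings are fine as-is.
    if isinstance(value, str):
        return
    # PHP arrays map to Python lists or dicts; recurse into the values.
    if isinstance(value, (list, dict)):
        sub_values = value.values() if isinstance(value, dict) else value
        for sub in sub_values:
            must_be_array_or_string(sub)
        return
    raise TypeError(f"Value is {type(value).__name__}")

must_be_array_or_string({"a": "1", "b": ["2", {"c": "3"}]})  # passes silently
```

As the answers note, only file-upload metadata (`error`, `size`) would trip a check like this, since those fields are integers.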
3,054,059
EDITED FOR BETTER UNDERSTANDING I made a custom control with properties for some global variables. ``` private string[] LBTitles = new string[1] { "Rien" }; //... [CategoryAttribute("WildData"), DescriptionAttribute("Tableau des noms des titres au dessus de le listbox")] public string[] Titles { get { return LBTitles; } set { LBTitles = value; OnColumns(EventArgs.Empty); } } ``` OnColumns does many things to format the control. One among others is: ``` int vLongest = 0; //... // If the longest is smaller than the column title if (vLongest < LBTitles[i].Length) { vLongest = LBTitles[i].Length; } ``` The above is for my control itself. Everything works fine, it's a wonderful day, etc. Now when it comes to adding it to a form: - I add it, everything is ok - I modify the properties via the designer, everything is ok - I try to run it... there is the problem. When I build, it adds into InitializeComponent() the following code: ``` this.wildList1 = new WildList.WildList(); // Which is OK, but it also adds, every time I build, this: // // wildList1 // this.wildList1.Colors = new string[] { null}; this.wildList1.Font = new System.Drawing.Font("Courier New", 8F); this.wildList1.Location = new System.Drawing.Point(211, 33); this.wildList1.Name = "wildList1"; this.wildList1.Positions = new int[] { 0}; this.wildList1.Size = new System.Drawing.Size(238, 224); this.wildList1.TabIndex = 16; this.wildList1.Titles = new string[] { null}; ``` It adds lines of code that reset my arrays. Why? How can I get rid of them? Or at least, make them use the values entered by the programmer (aka me) in the designer? Because when it goes through the lines that reset them, it also calls the property "set", which calls OnColumns, which then tries to do stuff with empty arrays, which causes a crash.
2010/06/16
[ "https://Stackoverflow.com/questions/3054059", "https://Stackoverflow.com", "https://Stackoverflow.com/users/313101/" ]
It's "thread-safe" in the sense that the call to `Change` won't actually corrupt the timer. However, it's not "thread-safe" in the sense that you definitely have a race condition (it's not possible to ensure that `timerCallback2` isn't running when you're in `DoStuff`).
Per MSDN documentation the Timer type is thread safe, so the only place you have to be careful is where you call `DoStuff();.`
2,422,185
As far as I can tell Adobe AIR does not support persistent HTTP connections via keep-alives. Does anyone know any different? Thanks
2010/03/11
[ "https://Stackoverflow.com/questions/2422185", "https://Stackoverflow.com", "https://Stackoverflow.com/users/239472/" ]
Here's basically how I ended up solving this. While Amorya's and MHarrison's answers were valid, they had one assumption: that once created, not only the tables but each row in each table would always be the same. The problem is that my process to pre-populate the "Animals" database, using existing data (that is updated periodically), creates a new database file each time. In other words, I can't rely on creating a relationship between the (static) Animal entity and a (dynamic) Rating entity in Core Data, since that entity may not exist the next time I regenerate the application. Why not? Because I have no control over how Core Data stores that relationship behind the scenes. Since it's an SQLite backing store, it's likely that it's using a table with foreign key relations. But when you regenerate the database, you can't assume anything about what values each row gets for a key. The primary key for Lion may be different the second time around, if I've added a Lemur to the list. The only way to avoid this problem would be to pre-populate the database only once, and then manually update rows each time there's an update. However, that kind of process isn't really possible in my case. So, what's the solution? Well, since I can't rely on the foreign key relations that Core Data makes, I have to make up my own. What I do is add an intermediate step to my database generation process: instead of taking my raw data (which happens to be UTF-8 text but is actually MS Word files) and creating the SQLite database with Core Data directly, I first convert the .txt to .xml. Why XML? Well, not because it's a silver bullet, but simply because it's a data format I can parse very easily. So what is different about this XML file? It contains a hash value that I generate for each Animal, using MD5, which I'll assume is unique. What is the hash value for?
Well, now I can create two databases: one for the "static" Animal data (for which I have a process already), and one for the "dynamic" Ratings database, which the iPhone app creates and which lives in the application's Documents directory. For each Rating, I create a pseudo-relationship with the Animal by saving the Animal entity's hash value. So every time the user brings up an Animal detail view on the iPhone, I query the "dynamic" database to find if a Rating entity exists that matches the Animal.md5Hash value. Since I'm saving this intermediate XML data file, the next time there's an update, I can diff it against the last XML file I used to see what's changed. Now, if the name of an animal was changed -- let's say a typo was corrected -- I revert the hash value for that Animal in situ. This means that even if an Animal name is changed, I'll still be able to find a matching Rating, if it exists, in the "dynamic" database. This solution has another nice side effect: I don't need to handle any migration issues. The "static" Animal database that ships with the app can stay embedded as an app resource. It can change all it wants. The "dynamic" Ratings database may need migration at some point, if I modify its data model to add more entities, but in effect the two data models stay totally independent.
The way I'm doing this is: ship a database of the static stuff as part of your app bundle. On app launch, check if there is a database file in Documents. If not, copy the one from the app bundle to Documents. Then open the database from Documents: this is the only one you read from and edit. When an upgrade has happened, the new static content will need to be merged with the user's editable database. Each static item (Animal, in your case) has a field called factoryID, which is a unique identifier. On the first launch after an update, load the database from the app bundle, and iterate through each Animal. For each one, find the appropriate record in the working database, and update any fields as necessary. There may be a quicker solution, but since the upgrade process doesn't happen too often then the time taken shouldn't be too problematic.
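The hash-as-stable-key idea in the first answer can be sketched independently of Core Data. A minimal Python illustration (the field names and the choice of hashing only the animal's name are assumptions for the sketch):

```python
import hashlib

def stable_key(animal_name):
    # MD5 of the canonical name: survives database regeneration,
    # unlike an auto-assigned SQLite primary key.
    return hashlib.md5(animal_name.encode("utf-8")).hexdigest()

# "Dynamic" ratings store, keyed by hash instead of by row id.
ratings = {stable_key("Lion"): 5}

# Regenerating the "static" database (even with new rows, in a
# different order) still yields the same key for Lion.
regenerated = ["Lemur", "Lion"]
keys = {name: stable_key(name) for name in regenerated}
lion_rating = ratings.get(keys["Lion"])  # still found: 5
```

This is exactly why the two data stores can stay independent: the pseudo-relationship lives in the hash, not in a foreign key managed by the persistence layer.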
2,422,185
As far as I can tell Adobe AIR does not support persistent HTTP connections via keep-alives. Does anyone know any different? Thanks
2010/03/11
[ "https://Stackoverflow.com/questions/2422185", "https://Stackoverflow.com", "https://Stackoverflow.com/users/239472/" ]
Here's basically how I ended up solving this. While Amorya's and MHarrison's answers were valid, they had one assumption: that once created, not only the tables but each row in each table would always be the same. The problem is that my process to pre-populate the "Animals" database, using existing data (that is updated periodically), creates a new database file each time. In other words, I can't rely on creating a relationship between the (static) Animal entity and a (dynamic) Rating entity in Core Data, since that entity may not exist the next time I regenerate the application. Why not? Because I have no control over how Core Data stores that relationship behind the scenes. Since it's an SQLite backing store, it's likely that it's using a table with foreign key relations. But when you regenerate the database, you can't assume anything about what values each row gets for a key. The primary key for Lion may be different the second time around, if I've added a Lemur to the list. The only way to avoid this problem would be to pre-populate the database only once, and then manually update rows each time there's an update. However, that kind of process isn't really possible in my case. So, what's the solution? Well, since I can't rely on the foreign key relations that Core Data makes, I have to make up my own. What I do is add an intermediate step to my database generation process: instead of taking my raw data (which happens to be UTF-8 text but is actually MS Word files) and creating the SQLite database with Core Data directly, I first convert the .txt to .xml. Why XML? Well, not because it's a silver bullet, but simply because it's a data format I can parse very easily. So what is different about this XML file? It contains a hash value that I generate for each Animal, using MD5, which I'll assume is unique. What is the hash value for?
Well, now I can create two databases: one for the "static" Animal data (for which I have a process already), and one for the "dynamic" Ratings database, which the iPhone app creates and which lives in the application's Documents directory. For each Rating, I create a pseudo-relationship with the Animal by saving the Animal entity's hash value. So every time the user brings up an Animal detail view on the iPhone, I query the "dynamic" database to find if a Rating entity exists that matches the Animal.md5Hash value. Since I'm saving this intermediate XML data file, the next time there's an update, I can diff it against the last XML file I used to see what's changed. Now, if the name of an animal was changed -- let's say a typo was corrected -- I revert the hash value for that Animal in situ. This means that even if an Animal name is changed, I'll still be able to find a matching Rating, if it exists, in the "dynamic" database. This solution has another nice side effect: I don't need to handle any migration issues. The "static" Animal database that ships with the app can stay embedded as an app resource. It can change all it wants. The "dynamic" Ratings database may need migration at some point, if I modify its data model to add more entities, but in effect the two data models stay totally independent.
Storing your SQLite database in the Documents directory (NSDocumentDirectory) is certainly the way to go. In general, you should avoid application changes that modify or delete SQL tables as much as possible (adding is ok). However, when you absolutely have to make a change in an update, something like what Amorya said would work - open up the old DB, import whatever you need into the new DB, and delete the old one. Since it sounds like you want a static database with an "Animal" table that can't be modified, then simply replacing this table with upgrades shouldn't be an issue - as long as the ID of the entries doesn't change. The way you should store user data about animals is to create a relation with a foreign key to an animal ID for each entry the user creates. This is what you would need to migrate when an upgrade changes it.
2,614,101
If I want to use a variable as the name of the new column, is this possible in MS SQL? Example that doesn't work: ``` ALTER TABLE my_table ADD @column INT ``` **This worked great for me:** ``` EXEC ('ALTER TABLE my_table ADD ' + @column + ' INT') ```
2010/04/10
[ "https://Stackoverflow.com/questions/2614101", "https://Stackoverflow.com", "https://Stackoverflow.com/users/285447/" ]
This is possible using dynamic sql to build your DDL and using the `EXEC` command to execute the string. ``` Declare @SQL VarChar(1000) SELECT @SQL = 'ALTER TABLE my_table ADD ' + @column + ' INT' Exec (@SQL) ``` See [this](http://www.sqlteam.com/article/introduction-to-dynamic-sql-part-2) article. I will also add that the moment you venture to the land of dynamic sql, you need to take care to not expose yourself to [SQL Injection attacks](http://en.wikipedia.org/wiki/SQL_injection). Always clean up the parameters coming in. As Philip mentions - think long and hard before doing this. The fact that it is possible does not make it a good thing... Erland Sommarskog wrote an extensive article about using dynamic sql - [The curse and blessings of dynamic SQL](http://www.sommarskog.se/dynamic_sql.html) which I recommend reading fully.
Have a look at ([EXECUTE (Transact-SQL)](http://msdn.microsoft.com/en-us/library/ms188332.aspx)) ``` CREATE TABLE MyTable( ID INT ) GO SELECT * FROM MyTable GO DECLARE @column VARCHAR(100) SET @column = 'MyNewCol' EXEC('ALTER TABLE MyTable ADD ' + @column + ' INT') GO SELECT * FROM MyTable GO DROP TABLE MyTable ```
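The injection warning in the first answer is worth making concrete. One defensive pattern is to whitelist the identifier before splicing it into the DDL string; a Python sketch of that check (the allowed-character rule is an assumption here -- real T-SQL identifiers can be broader, and server-side `QUOTENAME` is the more idiomatic guard):

```python
import re

def safe_identifier(name):
    """Accept only plain identifiers: a letter or underscore followed
    by letters, digits, or underscores. Reject everything else."""
    if not re.fullmatch(r"[A-Za-z_][A-Za-z0-9_]*", name):
        raise ValueError(f"unsafe identifier: {name!r}")
    return name

column = "MyNewCol"
ddl = f"ALTER TABLE my_table ADD {safe_identifier(column)} INT"
# ddl == "ALTER TABLE my_table ADD MyNewCol INT"
```

A payload such as `x; DROP TABLE my_table--` fails the check instead of reaching the dynamic `EXEC`.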
2,614,101
If I want to use a variable as the name of the new column, is this possible in MS SQL? Example that doesn't work: ``` ALTER TABLE my_table ADD @column INT ``` **This worked great for me:** ``` EXEC ('ALTER TABLE my_table ADD ' + @column + ' INT') ```
2010/04/10
[ "https://Stackoverflow.com/questions/2614101", "https://Stackoverflow.com", "https://Stackoverflow.com/users/285447/" ]
This is possible using dynamic sql to build your DDL and using the `EXEC` command to execute the string. ``` Declare @SQL VarChar(1000) SELECT @SQL = 'ALTER TABLE my_table ADD ' + @column + ' INT' Exec (@SQL) ``` See [this](http://www.sqlteam.com/article/introduction-to-dynamic-sql-part-2) article. I will also add that the moment you venture to the land of dynamic sql, you need to take care to not expose yourself to [SQL Injection attacks](http://en.wikipedia.org/wiki/SQL_injection). Always clean up the parameters coming in. As Philip mentions - think long and hard before doing this. The fact that it is possible does not make it a good thing... Erland Sommarskog wrote an extensive article about using dynamic sql - [The curse and blessings of dynamic SQL](http://www.sommarskog.se/dynamic_sql.html) which I recommend reading fully.
``` alter procedure sp_check_table_column ( @field_name varchar(max), @data_type varchar(max), @mandatory varchar(max) ) as if not exists (select COLUMN_NAME from INFORMATION_SCHEMA.COLUMNS where TABLE_NAME = '<table_name>' and COLUMN_NAME = @field_name) begin declare @sql varchar(max) set @sql = ('ALTER TABLE <table_name> ADD ' + @field_name + ' ' + @data_type + ' ' + @mandatory) exec (@sql) end ```
2,614,101
If I want to use a variable as the name of the new column, is this possible in MS SQL? Example that doesn't work: ``` ALTER TABLE my_table ADD @column INT ``` **This worked great for me:** ``` EXEC ('ALTER TABLE my_table ADD ' + @column + ' INT') ```
2010/04/10
[ "https://Stackoverflow.com/questions/2614101", "https://Stackoverflow.com", "https://Stackoverflow.com/users/285447/" ]
Have a look at ([EXECUTE (Transact-SQL)](http://msdn.microsoft.com/en-us/library/ms188332.aspx)) ``` CREATE TABLE MyTable( ID INT ) GO SELECT * FROM MyTable GO DECLARE @column VARCHAR(100) SET @column = 'MyNewCol' EXEC('ALTER TABLE MyTable ADD ' + @column + ' INT') GO SELECT * FROM MyTable GO DROP TABLE MyTable ```
``` alter procedure sp_check_table_column ( @field_name varchar(max), @data_type varchar(max), @mandatory varchar(max) ) as if not exists (select COLUMN_NAME from INFORMATION_SCHEMA.COLUMNS where TABLE_NAME = '<table_name>' and COLUMN_NAME = @field_name) begin declare @sql varchar(max) set @sql = ('ALTER TABLE <table_name> ADD ' + @field_name + ' ' + @data_type + ' ' + @mandatory) exec (@sql) end ```
2,098,751
I have encountered the following statement in Fortran: ``` integer iparam(11), ipntr(14) logical select(maxncv) Double precision & ax(maxn), d(maxncv,3), resid(maxn), & v(ldv,maxncv), workd(3*maxn), & workev(3*maxncv), & workl(3*maxncv*maxncv+6*maxncv) ``` Well, I can understand what `integer` and `Double precision` are. But what about `logical select`? What does it mean?
2010/01/20
[ "https://Stackoverflow.com/questions/2098751", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3834/" ]
"logical" is a boolean type, which takes on only the values .TRUE. or .FALSE. The declaration creates a 1D array of name "select" of length "maxncv", just as the previous declaration creates an integer 1D array "iparam" of length "11". The layout (e.g., the continuation symbol on the start of continued lines) and the use of Double Precision suggest Fortran 77. For new code I recommend Fortran 95/2003.
logical is a datatype just like double precision is. select is a variable just like d is. maxncv is an array bound just like maxn is.
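To make the declaration concrete: `logical select(maxncv)` is just a fixed-size array of booleans. A rough Python analogue (a list stands in for the Fortran array; Fortran's 1-based indexing is not modeled, and the value of `maxncv` is assumed):

```python
maxncv = 25  # assumed value; in the Fortran code it is set elsewhere

# logical select(maxncv)  ->  an array of maxncv booleans,
# conventionally initialized before use
select = [False] * maxncv

# e.g. mark the first entry as selected
select[0] = True
```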
2,146,117
I have compiled my own Kernel module and now I would like to be able to load it into the GNU Debugger GDB. I did this once, a year ago or so to have a look at the memory layout. It worked fine then, but of course I was too silly to write down the single steps I took to accomplish this... Can anyone enlighten me or point me to a good tutorial? Thank you so much
2010/01/27
[ "https://Stackoverflow.com/questions/2146117", "https://Stackoverflow.com", "https://Stackoverflow.com/users/259964/" ]
It has been a while since I was actively developing drivers for Linux, so maybe my answer is a bit out of date. I would say you cannot use GDB. If at all, only to debug post mortem on dump files. To debug you should rather use a kernel debugger. Build the kernel with a kernel debugger enabled (there is an out-of-the-box debugger for 2.6, which was lacking at the time I was active). I used the kernel patches for KDB from Sun <ftp://oss.sgi.com/www/projects/kdb/download/>, which I was quite happy with. A user space tool won't be of much use unless the new gdb communicates somehow with the internal kernel debugger (which anyway you would have to activate). I hope this gives you at least some hints, while not being a detailed answer. Better than no answer at all. Regards.
I suspect what you did was ``` gdb /boot/vmlinux /proc/kcore ``` Of course you can't actually do any debugging, but it's certainly good enough to have a poke around the kernel.
2,146,117
I have compiled my own Kernel module and now I would like to be able to load it into the GNU Debugger GDB. I did this once, a year ago or so to have a look at the memory layout. It worked fine then, but of course I was too silly to write down the single steps I took to accomplish this... Can anyone enlighten me or point me to a good tutorial? Thank you so much
2010/01/27
[ "https://Stackoverflow.com/questions/2146117", "https://Stackoverflow.com", "https://Stackoverflow.com/users/259964/" ]
For kernels > 2.6.26 (i.e. after May 2008), the preferred way is probably to use "kgdb light" (not to be confused with its ancestor kgdb, available as a set of kernel patches). "kgdb light" is now part of the kernel (enabled by default in current Ubuntu kernels, for instance), and its capabilities are improving fast (Jason Wessel is working on it; a possible Google search key). Drawback: You need two machines, the one you're debugging and the development machine (host) where gdb runs. Currently, those two machines can only be linked through a serial link. kgdb runs in the target machine where it handles the breakpoints, stepping, etc. and the remote debugging protocol used to talk with the development machine. gdb runs in the development machine where it handles the user interface. A USB-to-serial adapter works OK on the development machine, but currently, you need a real UART on the target machine - and that's not so frequent anymore on recent hardware. The (terse) kgdb documentation is in the kernel sources, in Documentation/DocBook. I suggest you google around for "kgdb light" for the complete story. Again, don't confuse kgdb and kgdb light, they come together in google searches but are mostly different animals. In particular, info from linsyssoft.com relates to the "ancestor" kgdb, so try queries like: ``` kgdb module debugging -"linsyssoft.com" -site:linsyssoft.com ``` and discard articles prior to May 2008 / 2.6.26 kernel. Finally, for module debugging, you need to manually load the module symbols in the dev machine for all the code and sections you are interested in. That's a bit too long to address here, but some clues [there](http://fotis.loukos.me/blog/?p=74), [there](http://www.linuxhelp.net/forums/index.php?showtopic=9364) and [there.](http://www.linuxquestions.org/questions/programming-9/kgdb-module-debugging-question-657611/) Bottom line is, kgdb is a very welcome improvement but don't expect this trip to be as easy as running gdb in user mode. Yet. :)
I suspect what you did was ``` gdb /boot/vmlinux /proc/kcore ``` Of course you can't actually do any debugging, but it's certainly good enough to have a poke around the kernel.
1,388,061
Is this the right way to do a nested navigation? ``` <dl> <dt>Struktur</dt> <dd> <ul id="structure"> <li><a href="/module/structure/add">Hinzufügen</a></li> <li><a href="/module/structure/index">Auflisten</a></li> </ul> </dd> <dt>Nachrichten</dt> <dd> <ul id="messages"> <li><a href="/module/messages/add">Schreiben</a></li> <li><a href="/module/messages/directory">Ordner</a></li> <li><a href="/module/messages/index">Auflisten</a></li> </ul> </dd> </dl> ```
2009/09/07
[ "https://Stackoverflow.com/questions/1388061", "https://Stackoverflow.com", "https://Stackoverflow.com/users/150325/" ]
I agree with n1313, it really depends what you mean by "right way". If you do want a nit-picky answer: Strictly speaking, "Hinzufügen" and "Auflisten" are not the *definition* of "Struktur", so using a `<dl>` list to structure those elements is probably not The Right Way™. A simple nested `<ul>` list might be better. ``` <ul> <li> <div class="parent">Struktur</div> <ul> <li> ... ```
Semantically, I don't think using a dt tag is correct. Use an h2 or h3 tag instead. ``` <h2>Nachrichten</h2> <ul id="messages"> <li><a href="/module/messages/add">Schreiben</a></li> <li><a href="/module/messages/directory">Ordner</a></li> <li><a href="/module/messages/index">Auflisten</a></li> </ul> ``` Looking at your code, it doesn't seem like you're nesting any of the ul/li items, but the method deceze posted for doing so is correct: ``` <ul> <li>Item 1</li> <li>Item 2 <ul> <li>subitem</li> <li>subitem 2</li> </ul> </li> </ul> ```
812,192
I'm using pdb to debug Python programs and am unhappy with its behaviour. I have the screen divided into multiple emacs windows, and when I execute pdb, it (randomly?) replaces one of the windows with the output of the \*gud\* debugger. Also, when a breakpoint is encountered, even if the debugging buffer is already visible in a window, it usually puts this buffer into *another* window, and replaces another of my windows with the contents of the source file. (incidentally I like that it jumps to the correct line in the source file) How can I disable gud/pdb from managing my windows for me? Is it possible in emacs to prevent all programmatic manipulation of windows & screen layout? Edit: I found the answer that partially solves this in another post: [toggle dedicated windows](https://stackoverflow.com/questions/43765/pin-emacs-buffers-to-windows-for-cscope/65992#65992)
2009/05/01
[ "https://Stackoverflow.com/questions/812192", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6691/" ]
Look into [sticky windows](http://www.emacswiki.org/emacs/StickyWindows).
You should use [Sticky Windows](http://www.emacswiki.org/emacs/StickyWindows) to make your windows and buffers stick where they are but Sticky Windows won't stop gud/pdb from trying to steal your windows. When gud/pdb can't steal your source code window, it opens a new Emacs Frame even if there is another window on the current frame. This comes from the fact that the function that tries to jump to the gud-pdb buffer ([`py-pdbtrack-track-stack-file`](http://bazaar.launchpad.net/~python-mode-devs/python-mode/python-mode/view/head:/python-mode.el#L1649)) calls function `pop-to-buffer` with argument OTHER-WINDOW set to `t`. To circumvent this behavior for all libraries that calls pop-to-buffer, you could cancel the role of OTHER-WINDOW by defining an advice on `pop-to-buffer` (in your .emacs) : ``` (defadvice pop-to-buffer (before cancel-other-window first) (ad-set-arg 1 nil)) (ad-activate 'pop-to-buffer) ``` You should also customize variable `pop-up-windows` to nil in order to force `display-buffer` (the low-level routine used to display a particular buffer on windows and frames) to not create a new window.
812,192
I'm using pdb to debug Python programs and am unhappy with its behaviour. I have the screen divided into multiple emacs windows, and when I execute pdb, it (randomly?) replaces one of the windows with the output of the \*gud\* debugger. Also, when a breakpoint is encountered, even if the debugging buffer is already visible in a window, it usually puts this buffer into *another* window, and replaces another of my windows with the contents of the source file. (incidentally I like that it jumps to the correct line in the source file) How can I disable gud/pdb from managing my windows for me? Is it possible in emacs to prevent all programmatic manipulation of windows & screen layout? Edit: I found the answer that partially solves this in another post: [toggle dedicated windows](https://stackoverflow.com/questions/43765/pin-emacs-buffers-to-windows-for-cscope/65992#65992)
2009/05/01
[ "https://Stackoverflow.com/questions/812192", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6691/" ]
Look into [sticky windows](http://www.emacswiki.org/emacs/StickyWindows).
I tried all these approaches without success on Emacs 24. If you are still interested, I reverted to the old gdb behavior using 'gud-gdb', which implements the old style of gdb/emacs interaction (no dedicated windows and no I/O buffer). If you don't want to call M-x gud-gdb each time you use it, you can define an alias for M-x gdb.
812,192
I'm using pdb to debug Python programs and am unhappy with its behaviour. I have the screen divided into multiple emacs windows, and when I execute pdb, it (randomly?) replaces one of the windows with the output of the \*gud\* debugger. Also, when a breakpoint is encountered, even if the debugging buffer is already visible in a window, it usually puts this buffer into *another* window, and replaces another of my windows with the contents of the source file. (incidentally I like that it jumps to the correct line in the source file) How can I disable gud/pdb from managing my windows for me? Is it possible in emacs to prevent all programmatic manipulation of windows & screen layout? Edit: I found the answer that partially solves this in another post: [toggle dedicated windows](https://stackoverflow.com/questions/43765/pin-emacs-buffers-to-windows-for-cscope/65992#65992)
2009/05/01
[ "https://Stackoverflow.com/questions/812192", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6691/" ]
Look into [sticky windows](http://www.emacswiki.org/emacs/StickyWindows).
I have a solution that prevents the gdb from stealing windows. It works with Emacs 24.4 (2014-07-18 snapshot) and does not require dedicating buffers. The benefit over other answers is you won't have to bother dedicating and undedicating buffers whenever you change buffers, which quickly becomes tedious. Place this advice in your .emacs: ``` (defadvice gdb-inferior-filter (around gdb-inferior-filter-without-stealing) (with-current-buffer (gdb-get-buffer-create 'gdb-inferior-io) (comint-output-filter proc string))) (ad-activate 'gdb-inferior-filter) ``` This effectively replaces this function as defined in gdb-mi.el and removes the branch that calls gdb-display-buffer, which is the cause of the window thievery.
138,884
I would like to know when I should include external scripts or write them inline with the html code, in terms of performance and ease of maintenance. What is the general practice for this? Real-world scenario - I have several html pages that need client-side form validation. For this I use a jQuery plugin that I include on all these pages. But the question is, do I: * write the bits of code that configure this script inline? * include all bits in one file that's shared among all these html pages? * include each bit in a separate external file, one for each html page? Thanks.
2008/09/26
[ "https://Stackoverflow.com/questions/138884", "https://Stackoverflow.com", "https://Stackoverflow.com/users/21613/" ]
The only defense I can offer for inline javascript is that when using strongly typed views with .NET MVC you can refer to C# variables mid-javascript, which I've found useful.
Google has included load times in its page ranking measurements; if you inline a lot, it will take longer for the spiders to crawl through your page, which may influence your page ranking if you have too much included. In any case, different strategies may have an influence on your ranking.
138,884
I would like to know when I should include external scripts or write them inline with the html code, in terms of performance and ease of maintenance. What is the general practice for this? Real-world scenario - I have several html pages that need client-side form validation. For this I use a jQuery plugin that I include on all these pages. But the question is, do I: * write the bits of code that configure this script inline? * include all bits in one file that's shared among all these html pages? * include each bit in a separate external file, one for each html page? Thanks.
2008/09/26
[ "https://Stackoverflow.com/questions/138884", "https://Stackoverflow.com", "https://Stackoverflow.com/users/21613/" ]
If you only care about performance, most of the advice in this thread is flat out wrong, and is becoming more and more wrong in the SPA era, where we can assume that the page is useless without the JS code. I've spent countless hours optimizing SPA page load times, and verifying these results with different browsers. Across the board the performance increase by re-orchestrating your html can be quite dramatic. To get the best performance, you have to think of pages as two-stage rockets. These two stages roughly correspond to `<head>` and `<body>` phases, but think of them instead as `<static>` and `<dynamic>`. The static portion is basically a string constant which you shove down the response pipe as fast as you possibly can. This can be a little tricky if you use a lot of middleware that sets cookies (these need to be set before sending http content), but in principle it's just flushing the response buffer, hopefully before jumping into some templating code (razor, php, etc) on the server. This may sound difficult, but then I'm just explaining it wrong, because it's near trivial. As you may have guessed, this static portion should contain all javascript inlined and minified. It would look something like ``` <!DOCTYPE html> <html> <head> <script>/*...inlined jquery, angular, your code*/</script> <style>/* ditto css */</style> </head> <body> <!-- inline all your templates, if applicable --> <script type='template-mime' id='1'></script> <script type='template-mime' id='2'></script> <script type='template-mime' id='3'></script> ``` Since it costs you next to nothing to send this portion down the wire, you can expect that the client will start receiving this somewhere around 5ms + latency after connecting to your server. Assuming the server is reasonably close, this latency could be between 20ms and 60ms.
Browsers will start processing this section as soon as they get it, and the processing time will normally dominate transfer time by factor 20 or more, which is now your amortized window for server-side processing of the `<dynamic>` portion. It takes about 50ms for the browser (chrome, rest maybe 20% slower) to process inline jquery + signalr + angular + ng animate + ng touch + ng routes + lodash. That's pretty amazing in and of itself. Most web apps have less code than all those popular libraries put together, but let's say you have just as much, so we would win latency+100ms of processing on the client (this latency win comes from the second transfer chunk). By the time the second chunk arrives, we've processed all js code and templates and we can start executing dom transforms. You may object that this method is orthogonal to the inlining concept, but it isn't. If you, instead of inlining, link to cdns or your own servers the browser would have to open another connection(s) and delay execution. Since this execution is basically free (as the server side is talking to the database) it must be clear that all of these jumps would cost more than doing no jumps at all. If there were a browser quirk that said external js executes faster we could measure which factor dominates. My measurements indicate that extra requests kill performance at this stage. I work a lot with optimization of SPA apps. It's common for people to think that data volume is a big deal, while in truth latency, and execution often dominate. The minified libraries I listed add up to 300kb of data, and that's just 68 kb gzipped, or 200ms download on a 2mbit 3g/4g phone, which is exactly the latency it would take on the same phone to check IF it had the same data in its cache already, even if it was proxy cached, because the mobile latency tax (phone-to-tower-latency) still applies. Meanwhile, desktop connections that have lower first-hop latency typically have higher bandwidth anyway. 
In short, right now (2014), it's best to inline all scripts, styles and templates. **EDIT (MAY 2016)** As JS applications continue to grow, and some of my payloads now stack up to 3+ megabytes of minified code, it's becoming obvious that at the very least common libraries should no longer be inlined.
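Framework aside, the two-stage idea boils down to: emit the static shell immediately, then compute the dynamic part while the client is already parsing it. A minimal, framework-neutral sketch (all names here are illustrative, not from any particular framework):

```python
def render_page(load_dynamic):
    # Stage 1: the <static> portion, a string constant with everything
    # inlined; flush it to the client before any server-side work starts.
    yield ("<!DOCTYPE html><html><head>"
           "<script>/* inlined, minified app code */</script>"
           "<style>/* inlined css */</style>"
           "</head><body>")
    # Stage 2: the <dynamic> portion, produced (e.g. from the database)
    # while the client is busy parsing and executing stage 1.
    yield load_dynamic()
    yield "</body></html>"

# Each yielded chunk would be flushed down the response pipe as it is produced.
html = "".join(render_page(lambda: "<div id='app'>data</div>"))
```

In a real server you would hand the generator to a streaming response so the first chunk leaves before the second is computed.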
Always try to use external JS, as inline JS is always difficult to maintain. Moreover, using external JS is widely considered good professional practice, and the majority of developers recommend it. I use external JS myself.
138,884
I would like to know when I should include external scripts or write them inline with the html code, in terms of performance and ease of maintenance. What is the general practice for this? Real-world scenario - I have several html pages that need client-side form validation. For this I use a jQuery plugin that I include on all these pages. But the question is, do I: * write the bits of code that configure this script inline? * include all bits in one file that's shared among all these html pages? * include each bit in a separate external file, one for each html page? Thanks.
2008/09/26
[ "https://Stackoverflow.com/questions/138884", "https://Stackoverflow.com", "https://Stackoverflow.com/users/21613/" ]
Externalizing javascript is one of the yahoo performance rules: <http://developer.yahoo.com/performance/rules.html#external> While the hard-and-fast rule that you should always externalize scripts will generally be a good bet, in some cases you may want to inline some of the scripts and styles. You should however only inline things that you know will improve performance (because you've measured this).
During early prototyping keep your code inline for the benefit of fast iteration, but be sure to make it all external by the time you reach production. I'd even dare to say that if you can't place all your Javascript externally, then you have a bad design under your hands, and you should refactor your data and scripts.
138,884
I would like to know when I should include external scripts or write them inline with the html code, in terms of performance and ease of maintenance. What is the general practice for this? Real-world scenario - I have several html pages that need client-side form validation. For this I use a jQuery plugin that I include on all these pages. But the question is, do I: * write the bits of code that configure this script inline? * include all bits in one file that's shared among all these html pages? * include each bit in a separate external file, one for each html page? Thanks.
2008/09/26
[ "https://Stackoverflow.com/questions/138884", "https://Stackoverflow.com", "https://Stackoverflow.com/users/21613/" ]
Externalizing javascript is one of the yahoo performance rules: <http://developer.yahoo.com/performance/rules.html#external> While the hard-and-fast rule that you should always externalize scripts will generally be a good bet, in some cases you may want to inline some of the scripts and styles. You should however only inline things that you know will improve performance (because you've measured this).
The only defense I can offer for inline javascript is that when using strongly typed views with .NET MVC you can refer to C# variables mid-javascript, which I've found useful.
138,884
I would like to know when I should include external scripts or write them inline with the html code, in terms of performance and ease of maintenance. What is the general practice for this? Real-world scenario - I have several html pages that need client-side form validation. For this I use a jQuery plugin that I include on all these pages. But the question is, do I: * write the bits of code that configure this script inline? * include all bits in one file that's shared among all these html pages? * include each bit in a separate external file, one for each html page? Thanks.
2008/09/26
[ "https://Stackoverflow.com/questions/138884", "https://Stackoverflow.com", "https://Stackoverflow.com/users/21613/" ]
Another reason why you should always use external scripts is for easier transition to [Content Security Policy (CSP)](http://developer.chrome.com/extensions/contentSecurityPolicy.html). CSP defaults forbid all inline script, making your site more resistant to XSS attacks.
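For example, a minimal policy like this one (an illustration, not a complete production policy) only allows scripts served from your own origin, and inline `<script>` blocks are refused by default:

```
Content-Security-Policy: script-src 'self'
```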
Three considerations: * How much code do you need (sometimes libraries are a first-class consumer)? * Specificity: is this code only functional in the context of this specific document or element? * Every piece of code inside the document tends to make it longer and thus slower. Besides that, SEO considerations make it obvious that you should minimize internal scripting ...
138,884
I would like to know when I should include external scripts or write them inline with the html code, in terms of performance and ease of maintenance. What is the general practice for this? Real-world scenario - I have several html pages that need client-side form validation. For this I use a jQuery plugin that I include on all these pages. But the question is, do I: * write the bits of code that configure this script inline? * include all bits in one file that's shared among all these html pages? * include each bit in a separate external file, one for each html page? Thanks.
2008/09/26
[ "https://Stackoverflow.com/questions/138884", "https://Stackoverflow.com", "https://Stackoverflow.com/users/21613/" ]
I think the [specific to one page, short script case](http://www.thefutureoftheweb.com/blog/when-to-use-inline-javascript) is the only defensible case for inline script.
The only defense I can offer for inline javascript is that when using strongly typed views with .NET MVC you can refer to C# variables mid-javascript, which I've found useful.
138,884
I would like to know when I should include external scripts or write them inline with the html code, in terms of performance and ease of maintenance. What is the general practice for this? Real-world scenario - I have several html pages that need client-side form validation. For this I use a jQuery plugin that I include on all these pages. But the question is, do I: * write the bits of code that configure this script inline? * include all bits in one file that's shared among all these html pages? * include each bit in a separate external file, one for each html page? Thanks.
2008/09/26
[ "https://Stackoverflow.com/questions/138884", "https://Stackoverflow.com", "https://Stackoverflow.com/users/21613/" ]
Actually, there's a pretty solid case to use inline javascript. **If the js is small enough** (one-liner), I tend to prefer the javascript inline because of two factors: * **Locality**. There's no need to navigate an external file to validate the behaviour of some javascript * **AJAX**. If you're refreshing some section of the page via AJAX, you **may** lose all of your DOM handlers (onclick, etc) for that section, depending on how you bound them. For example, using `jQuery` you can either use the [`live`](http://api.jquery.com/live/) or [`delegate`](http://api.jquery.com/delegate/) methods to circumvent this, but I find that if the js is small enough it is preferable to just put it inline.
On the point of keeping JavaScript external: ASP.NET 3.5 SP1 recently introduced functionality to create a Composite script resource (merge a bunch of js files into one). Another benefit to this is that when web server compression is turned on, downloading one slightly larger file will have a better compression ratio than many smaller files (also less http overhead, roundtrip etc...). I guess this saves on the initial page load, then browser caching kicks in as mentioned above. ASP.NET aside, this screencast explains the benefits in more detail: <http://www.asp.net/learn/3.5-SP1/video-296.aspx>
138,884
I would like to know when I should include external scripts or write them inline with the html code, in terms of performance and ease of maintenance. What is the general practice for this? Real-world scenario - I have several html pages that need client-side form validation. For this I use a jQuery plugin that I include on all these pages. But the question is, do I: * write the bits of code that configure this script inline? * include all bits in one file that's shared among all these html pages? * include each bit in a separate external file, one for each html page? Thanks.
2008/09/26
[ "https://Stackoverflow.com/questions/138884", "https://Stackoverflow.com", "https://Stackoverflow.com/users/21613/" ]
I think the [specific to one page, short script case](http://www.thefutureoftheweb.com/blog/when-to-use-inline-javascript) is the only defensible case for inline script.
Actually, there's a pretty solid case to use inline javascript. **If the js is small enough** (one-liner), I tend to prefer the javascript inline because of two factors: * **Locality**. There's no need to navigate an external file to validate the behaviour of some javascript * **AJAX**. If you're refreshing some section of the page via AJAX, you **may** lose all of your DOM handlers (onclick, etc) for that section, depending on how you bound them. For example, using `jQuery` you can either use the [`live`](http://api.jquery.com/live/) or [`delegate`](http://api.jquery.com/delegate/) methods to circumvent this, but I find that if the js is small enough it is preferable to just put it inline.
138,884
I would like to know when I should include external scripts or write them inline with the html code, in terms of performance and ease of maintenance. What is the general practice for this? Real-world scenario - I have several html pages that need client-side form validation. For this I use a jQuery plugin that I include on all these pages. But the question is, do I: * write the bits of code that configure this script inline? * include all bits in one file that's shared among all these html pages? * include each bit in a separate external file, one for each html page? Thanks.
2008/09/26
[ "https://Stackoverflow.com/questions/138884", "https://Stackoverflow.com", "https://Stackoverflow.com/users/21613/" ]
Another hidden benefit of external scripts is that you can easily run them through a syntax checker like [jslint](http://www.jslint.com/). That can save you from a lot of heartbreaking, hard-to-find, IE6 bugs.
Always try to use external JS, as inline JS is always difficult to maintain. Moreover, using external JS is widely considered good professional practice, and the majority of developers recommend it. I use external JS myself.
138,884
I would like to know when I should include external scripts or write them inline with the html code, in terms of performance and ease of maintenance. What is the general practice for this? Real-world scenario - I have several html pages that need client-side form validation. For this I use a jQuery plugin that I include on all these pages. But the question is, do I: * write the bits of code that configure this script inline? * include all bits in one file that's shared among all these html pages? * include each bit in a separate external file, one for each html page? Thanks.
2008/09/26
[ "https://Stackoverflow.com/questions/138884", "https://Stackoverflow.com", "https://Stackoverflow.com/users/21613/" ]
Actually, there's a pretty solid case to use inline javascript. **If the js is small enough** (one-liner), I tend to prefer the javascript inline because of two factors: * **Locality**. There's no need to navigate an external file to validate the behaviour of some javascript * **AJAX**. If you're refreshing some section of the page via AJAX, you **may** lose all of your DOM handlers (onclick, etc) for that section, depending on how you bound them. For example, using `jQuery` you can either use the [`live`](http://api.jquery.com/live/) or [`delegate`](http://api.jquery.com/delegate/) methods to circumvent this, but I find that if the js is small enough it is preferable to just put it inline.
I would take a look at the required code and divide it into as many separate files as needed. Every js file would only hold one "logical set" of functions, e.g. one file for all login-related functions. Then during site development, on each html page you only include those that are needed. When you go live with your site, you can optimize by combining every js file a page needs into one file.
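The "combine into one file" step can be very simple; here is a toy sketch in Python (the file names are made up for the example):

```python
# Two hypothetical per-feature files: one for login-related functions,
# one for navigation-related functions.
with open("login.js", "w") as f:
    f.write("function login(){}\n")
with open("nav.js", "w") as f:
    f.write("function nav(){}\n")

# Concatenate them, in dependency order, into the single optimized file
# that the live pages will include.
with open("site.js", "w") as out:
    for name in ("login.js", "nav.js"):
        with open(name) as src:
            out.write(src.read())
```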
1,936,837
After finding the fastest string replace algorithm in [this thread](https://stackoverflow.com/questions/1919096/mass-string-replace-in-python), I've been trying to modify one of them to suit my needs, particularly [this one](https://stackoverflow.com/questions/1919096/mass-string-replace-in-python/1919221#1919221) by gnibbler. I will explain the problem again here, and what issue I am having. Say I have a string that looks like this: ``` str = "The &yquick &cbrown &bfox &Yjumps over the &ulazy dog" ``` You'll notice a lot of locations in the string where there is an ampersand, followed by a character (such as "&y" and "&c"). I need to replace these characters with an appropriate value that I have in a dictionary, like so: ``` dict = {"y":"\033[0;30m", "c":"\033[0;31m", "b":"\033[0;32m", "Y":"\033[0;33m", "u":"\033[0;34m"} ``` Using gnibbler's solution provided in my previous thread, I have this as my current solution: ``` myparts = tmp.split('&') myparts[1:]=[dict.get(x[0],"&"+x[0])+x[1:] for x in myparts[1:]] result = "".join(myparts) ``` This works for replacing the characters properly, and does not fail on characters that are not found. The only problem with this is that there is no simple way to **actually** keep an ampersand in the output. The easiest way I could think of would be to change my dictionary to contain: ``` dict = {"y":"\033[0;30m", "c":"\033[0;31m", "b":"\033[0;32m", "Y":"\033[0;33m", "u":"\033[0;34m", "&":"&"} ``` And change my "split" call to do a regex split on ampersands that are NOT followed by other ampersands. ``` >>> import re >>> tmp = "&yI &creally &blove A && W &uRootbeer." >>> tmp.split('&') ['', 'yI ', 'creally ', 'blove A ', '', ' W ', 'uRootbeer.'] >>> re.split('MyRegex', tmp) ['', 'yI ', 'creally ', 'blove A ', '&W ', 'uRootbeer.'] ``` Basically, I need a Regex that will split on the **first ampersand of a pair**, and every **single** ampersand, to allow me to escape it via my dictionary.
If anyone has any better solutions please feel free to let me know.
2009/12/20
[ "https://Stackoverflow.com/questions/1936837", "https://Stackoverflow.com", "https://Stackoverflow.com/users/117603/" ]
You could use a negative lookbehind (assuming the regex engine in question supports it) to only match ampersands that do not follow another ampersand. ``` /(?<!&)&/ ```
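In Python, applied to the sample string from the question, that would look like:

```python
import re

tmp = "&yI &creally &blove A && W &uRootbeer."
# (?<!&)& matches an '&' that is not preceded by another '&': every single
# ampersand, plus the first ampersand of a pair. The second '&' of a pair
# survives into the following segment, ready for the "&": "&" dictionary entry.
parts = re.split(r'(?<!&)&', tmp)
print(parts)   # ['', 'yI ', 'creally ', 'blove A ', '& W ', 'uRootbeer.']
```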
Maybe loop while (q = str.find('&', p)) != -1, then append the left side (p + 2 to q - 1) and the replacement value.
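Fleshing that loop out into runnable Python (a sketch; the function name and the rule that '&&' escapes a literal ampersand are my additions, not part of the suggestion above):

```python
def replace_codes(s, table):
    out, p = [], 0
    while True:
        q = s.find('&', p)
        if q == -1:                       # no more ampersands: keep the tail
            out.append(s[p:])
            break
        out.append(s[p:q])                # plain text to the left of this '&'
        key = s[q + 1:q + 2]              # the character after the '&'
        if key == '&':                    # '&&' escapes a literal '&'
            out.append('&')
        else:                             # unknown codes pass through unchanged
            out.append(table.get(key, '&' + key))
        p = q + 2                         # skip the '&' and its code character
    return ''.join(out)

print(replace_codes("&yI &cme && you", {"y": "Y", "c": "C"}))   # YI Cme & you
```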
1,936,837
After finding the fastest string replace algorithm in [this thread](https://stackoverflow.com/questions/1919096/mass-string-replace-in-python), I've been trying to modify one of them to suit my needs, particularly [this one](https://stackoverflow.com/questions/1919096/mass-string-replace-in-python/1919221#1919221) by gnibbler. I will explain the problem again here, and what issue I am having. Say I have a string that looks like this: ``` str = "The &yquick &cbrown &bfox &Yjumps over the &ulazy dog" ``` You'll notice a lot of locations in the string where there is an ampersand, followed by a character (such as "&y" and "&c"). I need to replace these characters with an appropriate value that I have in a dictionary, like so: ``` dict = {"y":"\033[0;30m", "c":"\033[0;31m", "b":"\033[0;32m", "Y":"\033[0;33m", "u":"\033[0;34m"} ``` Using gnibbler's solution provided in my previous thread, I have this as my current solution: ``` myparts = tmp.split('&') myparts[1:]=[dict.get(x[0],"&"+x[0])+x[1:] for x in myparts[1:]] result = "".join(myparts) ``` This works for replacing the characters properly, and does not fail on characters that are not found. The only problem with this is that there is no simple way to **actually** keep an ampersand in the output. The easiest way I could think of would be to change my dictionary to contain: ``` dict = {"y":"\033[0;30m", "c":"\033[0;31m", "b":"\033[0;32m", "Y":"\033[0;33m", "u":"\033[0;34m", "&":"&"} ``` And change my "split" call to do a regex split on ampersands that are NOT followed by other ampersands. ``` >>> import re >>> tmp = "&yI &creally &blove A && W &uRootbeer." >>> tmp.split('&') ['', 'yI ', 'creally ', 'blove A ', '', ' W ', 'uRootbeer.'] >>> re.split('MyRegex', tmp) ['', 'yI ', 'creally ', 'blove A ', '&W ', 'uRootbeer.'] ``` Basically, I need a Regex that will split on the **first ampersand of a pair**, and every **single** ampersand, to allow me to escape it via my dictionary.
If anyone has any better solutions please feel free to let me know.
2009/12/20
[ "https://Stackoverflow.com/questions/1936837", "https://Stackoverflow.com", "https://Stackoverflow.com/users/117603/" ]
You could use a negative lookbehind (assuming the regex engine in question supports it) to only match ampersands that do not follow another ampersand. ``` /(?<!&)&/ ```
I think this does the trick: ``` import re def fix(text): dict = {"y":"\033[0;30m", "c":"\033[0;31m", "b":"\033[0;32m", "Y":"\033[0;33m", "u":"\033[0;34m", "&":"&"} myparts = re.split('\&(\&*)', text) myparts[1:]=[dict.get(x[0],"&"+x[0])+x[1:] if len(x) > 0 else x for x in myparts[1:]] result = "".join(myparts) return result print fix("The &yquick &cbrown &bfox &Yjumps over the &ulazy dog") print fix("&yI &creally &blove A && W &uRootbeer.") ```
1,936,837
After finding the fastest string replace algorithm in [this thread](https://stackoverflow.com/questions/1919096/mass-string-replace-in-python), I've been trying to modify one of them to suit my needs, particularly [this one](https://stackoverflow.com/questions/1919096/mass-string-replace-in-python/1919221#1919221) by gnibbler. I will explain the problem again here, and what issue I am having. Say I have a string that looks like this: ``` str = "The &yquick &cbrown &bfox &Yjumps over the &ulazy dog" ``` You'll notice a lot of locations in the string where there is an ampersand, followed by a character (such as "&y" and "&c"). I need to replace these characters with an appropriate value that I have in a dictionary, like so: ``` dict = {"y":"\033[0;30m", "c":"\033[0;31m", "b":"\033[0;32m", "Y":"\033[0;33m", "u":"\033[0;34m"} ``` Using gnibbler's solution provided in my previous thread, I have this as my current solution: ``` myparts = tmp.split('&') myparts[1:]=[dict.get(x[0],"&"+x[0])+x[1:] for x in myparts[1:]] result = "".join(myparts) ``` This works for replacing the characters properly, and does not fail on characters that are not found. The only problem with this is that there is no simple way to **actually** keep an ampersand in the output. The easiest way I could think of would be to change my dictionary to contain: ``` dict = {"y":"\033[0;30m", "c":"\033[0;31m", "b":"\033[0;32m", "Y":"\033[0;33m", "u":"\033[0;34m", "&":"&"} ``` And change my "split" call to do a regex split on ampersands that are NOT followed by other ampersands. ``` >>> import re >>> tmp = "&yI &creally &blove A && W &uRootbeer." >>> tmp.split('&') ['', 'yI ', 'creally ', 'blove A ', '', ' W ', 'uRootbeer.'] >>> re.split('MyRegex', tmp) ['', 'yI ', 'creally ', 'blove A ', '&W ', 'uRootbeer.'] ``` Basically, I need a Regex that will split on the **first ampersand of a pair**, and every **single** ampersand, to allow me to escape it via my dictionary.
If anyone has any better solutions please feel free to let me know.
2009/12/20
[ "https://Stackoverflow.com/questions/1936837", "https://Stackoverflow.com", "https://Stackoverflow.com/users/117603/" ]
You could use a negative lookbehind (assuming the regex engine in question supports it) to only match ampersands that do not follow another ampersand. ``` /(?<!&)&/ ```
re.sub will do what you want. It takes a regex pattern and can take a function to process the match and return the replacement. Below, if the character following the & is not in the dictionary, no replacement is made. && is replaced with & to allow escaping an & that is followed by a character in the dictionary. Also 'str' and 'dict' are bad variable names because they shadow the built-in functions of the same name. In 's' below, '& cat' will not be affected and '&&cat' will become "&cat", suppressing &c translation. ``` import re s = "The &yquick &cbrown &bfox & cat &&cat &Yjumps over the &ulazy dog" D = {"y":"\033[0;30m", "c":"\033[0;31m", "b":"\033[0;32m", "Y":"\033[0;33m", "u":"\033[0;34m", "&":"&"} def func(m): return D.get(m.group(1),m.group(0)) print repr(re.sub(r'&(.)',func,s)) ``` OUTPUT: ``` 'The \x1b[0;30mquick \x1b[0;31mbrown \x1b[0;32mfox & cat &cat \x1b[0;33mjumps over the \x1b[0;34mlazy dog' ``` -Mark
2,620,473
I have seen many answers on stackoverflow, but I didn't find an answer that matches mine. Apart from all those differences, does it make sense if we say an abstract class abstracts the implementation of behaviour, while an interface abstracts the type which implements the behaviour?
2010/04/12
[ "https://Stackoverflow.com/questions/2620473", "https://Stackoverflow.com", "https://Stackoverflow.com/users/266037/" ]
The main differences from a design point of view are that:

* you can declare a contract on the constructor of the implementing classes, by creating a protected constructor in the base abstract class.
* you can provide implementations of methods usable by derived classes
* you can make a wrapper around the contract (e.g. validate method arguments)
* you can provide a "calling scheme" when you create non-abstract methods that call abstract methods of the type, implemented by derived classes. This can be useful for implementing abstraction of an algorithm in derived classes, while the base class implements all the handling logic - it prepares and validates data, and leaves the actual processing algorithm to be implemented by derived classes.

So I would say you are correct in the statement that "an abstract class abstracts the implementation of behaviour while an interface abstracts the type which implements the behaviour".

**Abstract class:** provides a requirement to implement some methods (you *override methods of the abstract class*)

**Interface:** defines only a contract. It indicates that a class that implements the interface has the methods of the interface (you *implement an interface*)

For example:

* by implementing an interface on an existing class, you just declare adding the interface methods to the contract of the class. The class may already implement all the methods of the interface, and you do not need to change anything in the existing class.
* by changing the base type to an abstract class, you are required to override all the abstract methods, even if methods with the same names as the abstract methods of the base class already exist on the type.
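The last bullet (a base class owning the handling logic while derived classes supply the algorithm) can be sketched in Python with the `abc` module; the class names here are made up purely for illustration:

```python
from abc import ABC, abstractmethod

class Processor(ABC):
    """Abstract base: owns validation/handling, delegates the algorithm."""

    def run(self, data):
        # Non-abstract "calling scheme": prepare and validate the data,
        # then hand off to whatever algorithm the subclass implements.
        if not data:
            raise ValueError("no data to process")
        return self.process(data)

    @abstractmethod
    def process(self, data):
        """The step each derived class must override."""

class Doubler(Processor):
    def process(self, data):
        return [x * 2 for x in data]

print(Doubler().run([1, 2, 3]))  # expected: [2, 4, 6]
```

An interface-style contract, by contrast, would declare only `process` and say nothing about how `run` wraps it.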
Not really, no, because an abstract class doesn't need to implement any behaviour. It probably should, because otherwise you may argue the usefulness of it, but it doesn't *have to*.
2,620,473
I have seen many answers on Stack Overflow, but I didn't find an answer that matches mine. Apart from all those differences, does it make sense if we say an abstract class abstracts the implementation of behaviour while an interface abstracts the type which implements the behaviour?
2010/04/12
[ "https://Stackoverflow.com/questions/2620473", "https://Stackoverflow.com", "https://Stackoverflow.com/users/266037/" ]
An abstract class can (and normally does) provide some implementation. An interface cannot provide any implementation.
Interface = pure abstract class (abstract class with no implementation)
2,620,473
I have seen many answers on Stack Overflow, but I didn't find an answer that matches mine. Apart from all those differences, does it make sense if we say an abstract class abstracts the implementation of behaviour while an interface abstracts the type which implements the behaviour?
2010/04/12
[ "https://Stackoverflow.com/questions/2620473", "https://Stackoverflow.com", "https://Stackoverflow.com/users/266037/" ]
The main differences from a design point of view are that:

* you can declare a contract on the constructor of the implementing classes, by creating a protected constructor in the base abstract class.
* you can provide implementations of methods usable by derived classes
* you can make a wrapper around the contract (e.g. validate method arguments)
* you can provide a "calling scheme" when you create non-abstract methods that call abstract methods of the type, implemented by derived classes. This can be useful for implementing abstraction of an algorithm in derived classes, while the base class implements all the handling logic - it prepares and validates data, and leaves the actual processing algorithm to be implemented by derived classes.

So I would say you are correct in the statement that "an abstract class abstracts the implementation of behaviour while an interface abstracts the type which implements the behaviour".

**Abstract class:** provides a requirement to implement some methods (you *override methods of the abstract class*)

**Interface:** defines only a contract. It indicates that a class that implements the interface has the methods of the interface (you *implement an interface*)

For example:

* by implementing an interface on an existing class, you just declare adding the interface methods to the contract of the class. The class may already implement all the methods of the interface, and you do not need to change anything in the existing class.
* by changing the base type to an abstract class, you are required to override all the abstract methods, even if methods with the same names as the abstract methods of the base class already exist on the type.
Both have specific uses as per the language design: abstract classes are designed to be a base class and cannot be instantiated, whereas when you need to define just a contract (NO implementation) which each implementing class must follow in its own way, you must use interfaces.

Also:

* Can be a base class for inheritance: abstract class - yes; interface - no
* Can have implementation: abstract class - yes; interface - no
2,620,473
I have seen many answers on Stack Overflow, but I didn't find an answer that matches mine. Apart from all those differences, does it make sense if we say an abstract class abstracts the implementation of behaviour while an interface abstracts the type which implements the behaviour?
2010/04/12
[ "https://Stackoverflow.com/questions/2620473", "https://Stackoverflow.com", "https://Stackoverflow.com/users/266037/" ]
An abstract class can (and normally does) provide some implementation. An interface cannot provide any implementation.
Both have specific uses as per the language design: abstract classes are designed to be a base class and cannot be instantiated, whereas when you need to define just a contract (NO implementation) which each implementing class must follow in its own way, you must use interfaces.

Also:

* Can be a base class for inheritance: abstract class - yes; interface - no
* Can have implementation: abstract class - yes; interface - no
2,620,473
I have seen many answers on Stack Overflow, but I didn't find an answer that matches mine. Apart from all those differences, does it make sense if we say an abstract class abstracts the implementation of behaviour while an interface abstracts the type which implements the behaviour?
2010/04/12
[ "https://Stackoverflow.com/questions/2620473", "https://Stackoverflow.com", "https://Stackoverflow.com/users/266037/" ]
An abstract class can (and normally does) provide some implementation. An interface cannot provide any implementation.
Not really, no, because an abstract class doesn't need to implement any behaviour. It probably should, because otherwise you may argue the usefulness of it, but it doesn't *have to*.