Q: HTML differences between browsers Do you know of any differences in handling HTML tags/properties in different browsers? For example, I once saw a page with an input tag with a maxlength attribute set to "2o". Firefox and Opera ignore the "o" and set the max length to 2, while Internet Explorer ignores the attribute altogether. Do you know of any more?
(Note: seeing as this will probably be a list, it would be great if the general name of the difference was in bold text, like: Different erratic value handling in tag properties)
A: Bug Lists
Web developers have already compiled some pretty comprehensive lists; I think it's better to compile a list of resources than to duplicate those lists.
*
*http://www.positioniseverything.net/
*http://www.richinstyle.com/bugs/table.html
*http://www.quirksmode.org/ (as mentioned by Kristopher Johnson)
Javascript
I agree with Craig - it's best to program Javascript using a library that handles differences between browsers (as well as simplifying things like namespacing, AJAX event handling, and context). See Craig's answer on this page.
CSS Resets
CSS Resets can really simplify web development. They override settings which vary slightly between browsers to give you a more common starting point. I like Yahoo's YUI Reset CSS.
A: Check out http://www.quirksmode.org/
A: If you are programming in javascript the best advice I can give is to use a javascript library instead of trying to roll your own. The libraries are well tested, and the corner cases are more likely to have been encountered.
script.aculo.us - http://script.aculo.us/
jQuery - http://jquery.com/
Microsoft AJAX - http://www.asp.net/ajax/
Dojo - http://dojotoolkit.org/
Prototype - http://www.prototypejs.org/
YUI - http://developer.yahoo.com/yui/
A:
Do you know of any differences in handling HTML tags/properties in different browsers
Is this question asking for information on all differences, including DOM and CSS? Bit of a big topic. I thought the OP was asking about HTML behaviour specifically, not all this other stuff...
A: The one that really annoys me is IE's broken document.getElementById javascript function. In most browsers this gives you the element with the id you specify, but IE is happy to give you an element whose name attribute matches instead, even if there is something later in the document with the id you asked for.
A:
I once saw a page with a input tag
with a maxlength field set to "2o".
In this specific case, you're talking about invalid code. The maxlength attribute can't contain letters, only numbers.
What browsers do with invalid code varies a great deal, as you can see for yourself.
If you're really asking "what do all the different browsers do when faced with HTML code that, for any one of an infinite number of reasons, is broken?", that way lies madness.
We can reduce the problem space a great deal by using valid code.
So, use valid HTML. Then you are left with two main problem areas:
*
*browser bugs -- how the browser follows the HTML standard and what it does wrong
*differences in browser defaults, like the amount of padding/margin it gives to the body
A: Inconsistent parsing of XHTML in HTML mode
HTML parsers are not designed to handle XML.
If an XHTML document is served as "text/html" and the compatibility guidelines are not followed, you can get unexpected results.
Empty elements are one possible source of problems. <tag/> and <tag></tag> are equivalent in XML, but an HTML parser can interpret them in two ways.
For instance, Opera and IE treat <br></br> as two <br>, but Firefox and WebKit treat <br></br> as one <br>.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/13594",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: Convert enums to human readable values Does anyone know how to transform an enum value to a human readable value?
For example:
ThisIsValueA should be "This is Value A".
A: Most examples of this that I've seen involve marking your enum values up with [Description] attributes and using reflection to do the "conversion" between the value and the description. Here's an old blog post about it:
<Link>
A: You can inherit from the "Attribute" class of System.Reflection to create your own "Description" class. Like this (from here):
using System;
using System.Reflection;

namespace FunWithEnum
{
    enum Coolness : byte
    {
        [Description("Not so cool")]
        NotSoCool = 5,
        Cool, // since description is the same as ToString, no attribute is used
        [Description("Very cool")]
        VeryCool = NotSoCool + 7,
        [Description("Super cool")]
        SuperCool
    }

    class Description : Attribute
    {
        public string Text;

        public Description(string text)
        {
            Text = text;
        }
    }

    class Program
    {
        static string GetDescription(Enum en)
        {
            Type type = en.GetType();
            MemberInfo[] memInfo = type.GetMember(en.ToString());
            if (memInfo != null && memInfo.Length > 0)
            {
                object[] attrs = memInfo[0].GetCustomAttributes(typeof(Description), false);
                if (attrs != null && attrs.Length > 0)
                    return ((Description)attrs[0]).Text;
            }
            return en.ToString();
        }

        static void Main(string[] args)
        {
            Coolness coolType1 = Coolness.Cool;
            Coolness coolType2 = Coolness.NotSoCool;
            Console.WriteLine(GetDescription(coolType1));
            Console.WriteLine(GetDescription(coolType2));
        }
    }
}
A: You can also take a look at this article: http://www.codeproject.com/KB/cs/enumdatabinding.aspx
It's specifically about data binding, but shows how to use an attribute to decorate the enum values and provides a "GetDescription" method to retrieve the text of the attribute. The problem with using the built-in description attribute is that there are other uses/users of that attribute so there is a possibility that the description appears where you don't want it to. The custom attribute solves that issue.
A: Converting this from a VB code snippet that a certain Ian Horwill left at a blog post long ago... I've since used this in production successfully.
/// <summary>
/// Add spaces to separate the capitalized words in the string,
/// i.e. insert a space before each uppercase letter that is
/// either preceded by a lowercase letter or followed by a
/// lowercase letter (but not for the first char in string).
/// This keeps groups of uppercase letters - e.g. acronyms - together.
/// </summary>
/// <param name="pascalCaseString">A string in PascalCase</param>
/// <returns></returns>
public static string Wordify(string pascalCaseString)
{
Regex r = new Regex("(?<=[a-z])(?<x>[A-Z])|(?<=.)(?<x>[A-Z])(?=[a-z])");
return r.Replace(pascalCaseString, " ${x}");
}
(requires 'using System.Text.RegularExpressions;')
Thus:
Console.WriteLine(Wordify(ThisIsValueA.ToString()));
Would return,
"This Is Value A".
It's much simpler, and less redundant than providing Description attributes.
Attributes are useful here only if you need to provide a layer of indirection (which the question didn't ask for).
A: The .ToString on Enums is relatively slow in C#, comparable with GetType().Name (it might even use that under the covers).
If your solution needs to be very quick or highly efficient, you may be best off caching your conversions in a static dictionary and looking them up from there (see the sketch at the end of this answer).
A small adaptation of @Leon's code to take advantage of C# 3. This does make sense as an extension method on enums - you could limit this to the specific type if you didn't want to clutter up all of them.
public static string Wordify(this Enum input)
{
Regex r = new Regex("(?<=[a-z])(?<x>[A-Z])|(?<=.)(?<x>[A-Z])(?=[a-z])");
return r.Replace( input.ToString() , " ${x}");
}
//then your calling syntax is down to:
MyEnum.ThisIsA.Wordify();
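Here's a minimal sketch of the caching idea mentioned above (assuming .NET 4's ConcurrentDictionary and the Wordify extension defined in this answer):
using System;
using System.Collections.Concurrent;

public static class EnumDisplayCache
{
    private static readonly ConcurrentDictionary<Enum, string> Cache =
        new ConcurrentDictionary<Enum, string>();

    // The regex runs once per distinct value; repeats are served from the dictionary.
    public static string WordifyCached(this Enum input)
    {
        return Cache.GetOrAdd(input, e => e.Wordify());
    }
}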
A: I found it best to define your enum values with underscores, so ThisIsValueA becomes This_Is_Value_A; then you can just do enumValue.ToString().Replace("_", " "), where enumValue is your variable.
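For example, with a hypothetical enum, the whole approach looks like this:
public enum Sample
{
    This_Is_Value_A,
    This_Is_Value_B
}

// Yields "This Is Value A":
string readable = Sample.This_Is_Value_A.ToString().Replace("_", " ");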
A: An alternative to adding Description attributes to each enumeration is to create an extension method. To re-use Adam's "Coolness" enum:
public enum Coolness
{
    NotSoCool,
    Cool,
    VeryCool,
    SuperCool
}

public static class CoolnessExtensions
{
    public static string ToString(this Coolness coolness)
    {
        switch (coolness)
        {
            case Coolness.NotSoCool:
                return "Not so cool";
            case Coolness.Cool:
                return "Cool";
            case Coolness.VeryCool:
                return "Very cool";
            case Coolness.SuperCool:
                return Properties.Settings.Default["SuperCoolDescription"].ToString();
            default:
                throw new ArgumentException("Unknown amount of coolness", nameof(coolness));
        }
    }
}
Although this means that the descriptions are further away from the actual values, it allows you to use localisation to print different strings for each language, such as in my VeryCool example.
A: Enum.GetName(typeof(EnumFoo), EnumFoo.BarValue)
| {
"language": "en",
"url": "https://stackoverflow.com/questions/13599",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "27"
} |
Q: GUI system development resources? Could someone recommend any good resources for creating Graphics User Interfaces, preferably in C/C++?
Currently my biggest influence is 3DBuzz.com's C++/OpenGL VTMs (Video Training Modules). While they are very good, they cover a large area of graphics programming, so only skim the surface of GUI programming.
This question does relate to "How do I make a GUI?", where there is also a rough outline of my current structure.
Any response would be appreciated.
Edit:
I've just read some of the answers and spotted that I've missed some points. I had to type the question twice as I hit the wrong button and lost it.
I missed two important points, first: This will be used cross platform including homebrew on a Sony PSP. Second: I want to create a GUI system not use an existing one.
Edit 2: I think some of you are missing the point. I don't want to use an existing GUI system; I want to build one.
Qt in its current form is not portable to the PSP, never mind the overkill of such a task.
That said I've decided to create an IM-GUI, and have started to prototype the code.
A: I wouldn't use OpenGL for the GUI unless you are planning for hardware-accelerated and/or 3D effects that you don't think you would get with a more traditional GUI toolkit (e.g. Qt, wxWidgets, GTK, etc). If you just need a quick, simple GUI for hosting your OpenGL graphics then FLTK is a nice choice. Otherwise, for rendering the GUI directly in OpenGL there are libraries like Crazy Eddie's GUI that do just that and provide lots of skinnable widgets that you won't have to reinvent. The window and OpenGL context could then be provided by a portable library like SDL.
EDIT: Now that I've gone back and taken a look at your other post, I think I have a better understanding of what you are asking. For a GUI on an embedded system like the Nintendo DS, I would consider using an "immediate mode" GUI. Jari Komppa has a good tutorial about them, but you could use a more object-oriented approach with C++ than the C code he presents.
A: Have a look at Qt. It is an open source library for making GUI's. Unlike Swing in Java, it assumes a lot of stuff, so it is really easy to make functional GUI's. For example, a textarea assumes that you want a context menu when you right click it with copy, paste, select all, etc. The documentation is also very good.
A: http://www.fox-toolkit.org has an API reference, if you're looking how to work with a specific framework. Or were you more interested in general theory or something more along the lines of how to do the low-level stuff yourself?
A: For more information about "immediate mode" GUI, I can recommend the Molly Rocket forums. There's a good video presentation of the thinking behind IM-GUI, along with lots of discussion.
I recently hacked together a very quick IM-GUI system based on presentation on Jari's page, and in my case, where I really just wanted to be able to get a couple of buttons and boxes on the screen, and more or less just hard code the response to the inputs, it really felt like the right thing to do, instead of going for a more full blown GUI-architecture. (This was in a DirectX-application, so the number of choices I had was pretty limited).
A: One of the fastest ways is to use Python with a GUI binding like PyQt, PyFLTK, tkinter, wxPython, or even pygame, which uses SDL.
It's easy, fast and platform independent.
The package management is also unbeatable.
See:
*
*http://wiki.python.org/moin/PyQt
*http://www.fltk.org/
*(tkinter is default and already packaged with python)
*http://wxpython.org/
*http://www.pygame.org/news.html
A: For a platform like the PSP, I'd worry slightly about the performance of an IM GUI solution. With a traditional retained mode type of solution, when you create a control, you can also create the vertex buffer/display list or what-have-you required to render it. With an immediate mode solution, it seems to me that you'd need to recreate this dynamically each frame.
You might not care about this, if you're only doing a few buttons, or it's not going to be used in-game (assuming you're making a game) but, especially if you have a fair bit of text, the cost of rendering might start to hurt if you can't find a way to cache the display lists somehow.
A: I'll second Qt. It's cross platform, and I found it much easier to work with than the built in Visual Studio GUI stuff. It's dual-licensed, so if you don't want your code to be GPL you could purchase a license instead.
A: I've had a look at the video from Molly Rocket and looked through Jari Komppa's cached tutorials.
An IM-GUI seems the best way to go, I think it will be a lot more streamlined, and lot quicker to build than the system I originally had in mind.
Now a new issue: I can only accept one answer. :(
Thanks again to Monjardin and dooz, cheers.
thing2k
A: I'd have a look at GLAM and GLGooey
| {
"language": "en",
"url": "https://stackoverflow.com/questions/13607",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Validate Enum Values I need to validate an integer to know if is a valid enum value.
What is the best way to do this in C#?
A: Brad Abrams specifically warns against Enum.IsDefined in his post The Danger of Oversimplification.
The best way to get rid of this requirement (that is, the need to validate enums) is to remove the ways users can get it wrong, e.g., free-form input boxes. Use drop-downs for enums, for example, to enforce that only valid values can be entered.
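To illustrate the kind of oversimplification Brad's post warns about, here's a minimal sketch with a hypothetical [Flags] enum:
[Flags]
public enum Options
{
    A = 1,
    B = 2
}

// Enum.IsDefined only knows the literally declared members, so a perfectly
// valid flags combination fails the check:
bool defined = Enum.IsDefined(typeof(Options), Options.A | Options.B); // false

// The check's meaning also changes silently whenever the enum gains a member
// in a later version: a value rejected today may pass after a recompile.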
A: This answer is in response to deegee's answer which raises the performance issues of System.Enum so should not be taken as my preferred generic answer, more addressing enum validation in tight performance scenarios.
If you have a mission critical performance issue where slow but functional code is being run in a tight loop then I personally would look at moving that code out of the loop if possible instead of solving by reducing functionality. Constraining the code to only support contiguous enums could be a nightmare to find a bug if, for example, somebody in the future decides to deprecate some enum values. Simplistically you could just call Enum.GetValues once, right at the start to avoid triggering all the reflection, etc thousands of times. That should give you an immediate performance increase. If you need more performance and you know that a lot of your enums are contiguous (but you still want to support 'gappy' enums) you could go a stage further and do something like:
public abstract class EnumValidator<TEnum> where TEnum : struct, IConvertible
{
    protected static bool IsContiguous
    {
        get
        {
            int[] enumVals = Enum.GetValues(typeof(TEnum)).Cast<int>().ToArray();

            int lowest = enumVals.OrderBy(i => i).First();
            int highest = enumVals.OrderByDescending(i => i).First();

            // Enumerable.Range takes (start, count), so the count is highest - lowest + 1.
            return !Enumerable.Range(lowest, highest - lowest + 1).Except(enumVals).Any();
        }
    }

    public static EnumValidator<TEnum> Create()
    {
        if (!typeof(TEnum).IsEnum)
        {
            throw new ArgumentException("Please use an enum!");
        }

        return IsContiguous
            ? (EnumValidator<TEnum>)new ContiguousEnumValidator<TEnum>()
            : new JumbledEnumValidator<TEnum>();
    }

    public abstract bool IsValid(int value);
}

public class JumbledEnumValidator<TEnum> : EnumValidator<TEnum> where TEnum : struct, IConvertible
{
    private readonly int[] _values;

    public JumbledEnumValidator()
    {
        _values = Enum.GetValues(typeof(TEnum)).Cast<int>().ToArray();
    }

    public override bool IsValid(int value)
    {
        return _values.Contains(value);
    }
}

public class ContiguousEnumValidator<TEnum> : EnumValidator<TEnum> where TEnum : struct, IConvertible
{
    private readonly int _highest;
    private readonly int _lowest;

    public ContiguousEnumValidator()
    {
        List<int> enumVals = Enum.GetValues(typeof(TEnum)).Cast<int>().ToList();
        _lowest = enumVals.OrderBy(i => i).First();
        _highest = enumVals.OrderByDescending(i => i).First();
    }

    public override bool IsValid(int value)
    {
        return value >= _lowest && value <= _highest;
    }
}
Where your loop becomes something like:
//Pre import-loop
EnumValidator<MyEnum> enumValidator = EnumValidator<MyEnum>.Create();

while (import) //Tight RT loop.
{
    bool isValid = enumValidator.IsValid(theValue);
}
I'm sure the EnumValidator classes could be written more efficiently (it's just a quick hack to demonstrate) but quite frankly who cares what happens outside the import loop? The only bit that needs to be super-fast is within the loop. This was the reason for taking the abstract class route, to avoid an unnecessary if-enumContiguous-then-else in the loop (the factory Create essentially does this upfront).
You will note a bit of hypocrisy: for brevity this code constrains functionality to int enums. I should be making use of IConvertible rather than using ints directly, but this answer is already wordy enough!
A: Building upon Timo's answer, here is an even faster, safer and simpler solution, provided as an extension method.
public static class EnumExtensions
{
    /// <summary>Whether the given value is defined on its enum type.</summary>
    public static bool IsDefined<T>(this T enumValue) where T : Enum
    {
        return EnumValueCache<T>.DefinedValues.Contains(enumValue);
    }

    private static class EnumValueCache<T> where T : Enum
    {
        public static readonly HashSet<T> DefinedValues = new HashSet<T>((T[])Enum.GetValues(typeof(T)));
    }
}
Usage:
if (myEnumValue.IsDefined()) { ... }
Update: it's now even cleaner in .NET 5:
public static class EnumExtensions
{
    /// <summary>Whether the given value is defined on its enum type.</summary>
    public static bool IsDefined<T>(this T enumValue) where T : struct, Enum
    {
        return EnumValueCache<T>.DefinedValues.Contains(enumValue);
    }

    private static class EnumValueCache<T> where T : struct, Enum
    {
        public static readonly HashSet<T> DefinedValues = new(Enum.GetValues<T>());
    }
}
A: IMHO the post marked as the answer is incorrect.
Parameter and data validation is one of the things that was drilled into me decades ago.
WHY
Validation is required because essentially any integer value can be assigned to an enum without throwing an error.
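A short sketch with a hypothetical Color enum shows why:
enum Color { Red = 0, Green = 1, Blue = 2 }

Color c = (Color)123;   // compiles and runs without any error
Console.WriteLine(c);   // prints "123" - not a defined value, yet no exception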
I spent many days researching C# enum validation because it is a necessary function in many cases.
WHERE
The main purpose in enum validation for me is in validating data read from a file: you never know if the file has been corrupted, or was modified externally, or was hacked on purpose.
And with enum validation of application data pasted from the clipboard: you never know if the user has edited the clipboard contents.
That said, I spent days researching and testing many methods including profiling the performance of every method I could find or design.
Making calls into anything in System.Enum is so slow that it was a noticeable performance penalty on functions that contained hundreds or thousands of objects that had one or more enums in their properties that had to be validated for bounds.
Bottom line, stay away from everything in the System.Enum class when validating enum values, it is dreadfully slow.
RESULT
The method that I currently use for enum validation will probably draw rolling eyes from many programmers here, but it is imho the least evil for my specific application design.
I define one or two constants that are the upper and (optionally) lower bounds of the enum, and use them in a pair of if() statements for validation.
One downside is that you must be sure to update the constants if you change the enum.
This method also only works if the enum is an "auto" style where each enum element is an incremental integer value such as 0,1,2,3,4,.... It won't work properly with Flags or enums that have values that are not incremental.
Also note that this method is almost as fast as plain < and > comparisons on regular Int32s (which scored 38,000 ticks in my tests).
For example:
public const MyEnum MYENUM_MINIMUM = MyEnum.One;
public const MyEnum MYENUM_MAXIMUM = MyEnum.Four;

public enum MyEnum
{
    One,
    Two,
    Three,
    Four
};

public static MyEnum Validate(MyEnum value)
{
    if (value < MYENUM_MINIMUM) { return MYENUM_MINIMUM; }
    if (value > MYENUM_MAXIMUM) { return MYENUM_MAXIMUM; }
    return value;
}
PERFORMANCE
For those who are interested, I profiled the following variations on an enum validation, and here are the results.
The profiling was performed on a release build, in a loop of one million iterations of each method with a random integer input value. Each test was run more than 10 times and averaged. The tick results include the total time to execute, which includes the random number generation etc., but those are constant across the tests. 1 tick = 10ns.
Note that the code here isn't the complete test code, it is only the basic enum validation method. There were also a lot of additional variations on these that were tested, and all of them with results similar to those shown here that benched 1,800,000 ticks.
Listed slowest to fastest with rounded results, hopefully no typos.
Bounds determined in Method = 13,600,000 ticks
public static T Clamp<T>(T value)
{
    int minimum = Enum.GetValues(typeof(T)).GetLowerBound(0);
    int maximum = Enum.GetValues(typeof(T)).GetUpperBound(0);

    if (Convert.ToInt32(value) < minimum) { return (T)Enum.ToObject(typeof(T), minimum); }
    if (Convert.ToInt32(value) > maximum) { return (T)Enum.ToObject(typeof(T), maximum); }
    return value;
}
Enum.IsDefined = 1,800,000 ticks
Note: this code version doesn't clamp to Min/Max but returns Default if out of bounds.
public static T ValidateItem<T>(T eEnumItem)
{
    if (Enum.IsDefined(typeof(T), eEnumItem) == true)
        return eEnumItem;
    else
        return default(T);
}
System.Enum Convert Int32 with casts = 1,800,000 ticks
public static Enum Clamp(this Enum value, Enum minimum, Enum maximum)
{
    if (Convert.ToInt32(value) < Convert.ToInt32(minimum)) { return minimum; }
    if (Convert.ToInt32(value) > Convert.ToInt32(maximum)) { return maximum; }
    return value;
}
if() Min/Max Constants = 43,000 ticks = the winner, 42x to 316x faster.
public static MyEnum Clamp(MyEnum value)
{
    if (value < MYENUM_MINIMUM) { return MYENUM_MINIMUM; }
    if (value > MYENUM_MAXIMUM) { return MYENUM_MAXIMUM; }
    return value;
}
A: As others have mentioned, Enum.IsDefined is slow, something you have to be aware of if it's in a loop.
When doing multiple comparisons, a speedier method is to first put the values into a HashSet. Then simply use Contains to check whether the value is valid, like so:
int userInput = 4;
// below, Enum.GetValues converts enum to array. We then convert the array to hashset.
HashSet<int> validVals = new HashSet<int>((int[])Enum.GetValues(typeof(MyEnum)));
// the following could be in a loop, or do multiple comparisons, etc.
if (validVals.Contains(userInput))
{
// is valid
}
A: Update 2022-09-27
As of .NET 5, a fast, generic overload is available: Enum.IsDefined<TEnum>(TEnum value).
The generic overload alleviates the performance issues of the non-generic one.
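Usage is direct, for example:
// .NET 5+: the generic overload avoids the boxing and type lookup of IsDefined(Type, object).
bool ok  = Enum.IsDefined(ConsoleColor.Green);   // true
bool bad = Enum.IsDefined((ConsoleColor)999);    // false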
Original Answer
Here is a fast generic solution, using a statically constructed HashSet<T>.
You can define this once in your toolbox, and then use it for all your enum validation.
public static class EnumHelpers
{
    /// <summary>
    /// Returns whether the given enum value is a defined value for its type.
    /// Throws if the type parameter is not an enum type.
    /// </summary>
    public static bool IsDefined<T>(T enumValue)
    {
        if (typeof(T).BaseType != typeof(System.Enum)) throw new ArgumentException($"{nameof(T)} must be an enum type.");

        return EnumValueCache<T>.DefinedValues.Contains(enumValue);
    }

    /// <summary>
    /// Statically caches each defined value for each enum type for which this class is accessed.
    /// Uses the fact that static things exist separately for each distinct type parameter.
    /// </summary>
    internal static class EnumValueCache<T>
    {
        public static HashSet<T> DefinedValues { get; }

        static EnumValueCache()
        {
            if (typeof(T).BaseType != typeof(System.Enum)) throw new Exception($"{nameof(T)} must be an enum type.");

            DefinedValues = new HashSet<T>((T[])System.Enum.GetValues(typeof(T)));
        }
    }
}
Note that this approach is easily extended to enum parsing as well, by using a dictionary with string keys (minding case-insensitivity and numeric string representations).
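A minimal sketch of that parsing extension (assuming the .NET 5+ generic Enum APIs; numeric strings would need an extra int.TryParse step):
// using System; using System.Collections.Generic; using System.Linq;
public static class EnumNameCache<T> where T : struct, Enum
{
    // Case-insensitive name -> value map. Names differing only by case would collide here.
    private static readonly Dictionary<string, T> NamesToValues =
        Enum.GetNames<T>().ToDictionary(n => n, n => Enum.Parse<T>(n), StringComparer.OrdinalIgnoreCase);

    public static bool TryParse(string name, out T value)
    {
        return NamesToValues.TryGetValue(name, out value);
    }
}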
A: You got to love these folk who assume that data not only always comes from a UI, but a UI within your control!
IsDefined is fine for most scenarios, you could start with:
public static bool TryParseEnum<TEnum>(this int enumValue, out TEnum retVal)
{
    retVal = default(TEnum);
    bool success = Enum.IsDefined(typeof(TEnum), enumValue);
    if (success)
    {
        retVal = (TEnum)Enum.ToObject(typeof(TEnum), enumValue);
    }
    return success;
}
(Obviously just drop the ‘this’ if you don’t think it’s a suitable int extension)
A: This is how I do it, based on multiple posts online. The reason for doing this is to make sure enums marked with the Flags attribute can also be successfully validated.
public static TEnum ParseEnum<TEnum>(string valueString, string parameterName = null)
{
    var parsed = (TEnum)Enum.Parse(typeof(TEnum), valueString, true);
    decimal d;
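    // Enum.Parse accepts a bare number even when it matches no member, but such a
    // value stringifies back to digits, whereas a defined value (or a valid [Flags]
    // combination) stringifies to a name or comma-separated names. So "parses back
    // as a number" means the value is not actually defined on the enum.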
    if (!decimal.TryParse(parsed.ToString(), out d))
    {
        return parsed;
    }

    if (!string.IsNullOrEmpty(parameterName))
    {
        throw new ArgumentException(string.Format("Bad parameter value. Name: {0}, value: {1}", parameterName, valueString), parameterName);
    }
    else
    {
        throw new ArgumentException("Bad value. Value: " + valueString);
    }
}
A: You can use FluentValidation for your project. Here is a simple example of enum validation.
Let's create an EnumValidator class using FluentValidation:
public class EnumValidator<TEnum> : AbstractValidator<TEnum> where TEnum : struct, IConvertible, IComparable, IFormattable
{
    public EnumValidator(string message)
    {
        RuleFor(a => a).Must(a => typeof(TEnum).IsEnum).IsInEnum().WithMessage(message);
    }
}
Now that we have our EnumValidator class, let's create some classes that use it:
public class Customer
{
    public string Name { get; set; }
    public Address address { get; set; }
    public AddressType type { get; set; }
}

public class Address
{
    public string Line1 { get; set; }
    public string Line2 { get; set; }
    public string Town { get; set; }
    public string County { get; set; }
    public string Postcode { get; set; }
}

public enum AddressType
{
    HOME,
    WORK
}
It's time to call our EnumValidator for the address type in the Customer class:
public class CustomerValidator : AbstractValidator<Customer>
{
    public CustomerValidator()
    {
        RuleFor(x => x.type).SetValidator(new EnumValidator<AddressType>("errormessage"));
    }
}
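A quick usage sketch, assuming FluentValidation's standard Validate call (the cast to an undefined value is just for illustration):
var customer = new Customer
{
    Name = "Jane",
    address = new Address { Town = "London" },
    type = (AddressType)42 // not a defined AddressType value
};

var result = new CustomerValidator().Validate(customer);
if (!result.IsValid)
{
    foreach (var failure in result.Errors)
        Console.WriteLine(failure.ErrorMessage); // prints "errormessage"
}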
A: To expound on the performance scaling specifically regarding Timo/Matt Jenkins method:
Consider the following code:
//System.Diagnostics - Stopwatch
//System - ConsoleColor
//System.Linq - Enumerable
Stopwatch myTimer = Stopwatch.StartNew();
int myCyclesMin = 0;
int myCyclesCount = 10000000;
long myExt_IsDefinedTicks;
long myEnum_IsDefinedTicks;

foreach (int lCycles in Enumerable.Range(myCyclesMin, myCyclesCount))
{
    Console.WriteLine(string.Format("Cycles: {0}", lCycles));

    myTimer.Restart();
    foreach (int _ in Enumerable.Range(0, lCycles)) { ConsoleColor.Green.IsDefined(); }
    myExt_IsDefinedTicks = myTimer.ElapsedTicks;

    myTimer.Restart();
    foreach (int _ in Enumerable.Range(0, lCycles)) { Enum.IsDefined(typeof(ConsoleColor), ConsoleColor.Green); }
    myEnum_IsDefinedTicks = myTimer.ElapsedTicks;

    Console.WriteLine(string.Format("object.IsDefined() Extension Elapsed: {0}", myExt_IsDefinedTicks.ToString()));
    Console.WriteLine(string.Format("Enum.IsDefined(Type, object): {0}", myEnum_IsDefinedTicks.ToString()));

    if (myExt_IsDefinedTicks == myEnum_IsDefinedTicks) { Console.WriteLine("Same"); }
    else if (myExt_IsDefinedTicks < myEnum_IsDefinedTicks) { Console.WriteLine("Extension"); }
    else if (myExt_IsDefinedTicks > myEnum_IsDefinedTicks) { Console.WriteLine("Enum"); }
}
Output starts out like the following:
Cycles: 0
object.IsDefined() Extension Elapsed: 399
Enum.IsDefined(Type, object): 31
Enum
Cycles: 1
object.IsDefined() Extension Elapsed: 213654
Enum.IsDefined(Type, object): 1077
Enum
Cycles: 2
object.IsDefined() Extension Elapsed: 108
Enum.IsDefined(Type, object): 112
Extension
Cycles: 3
object.IsDefined() Extension Elapsed: 9
Enum.IsDefined(Type, object): 30
Extension
Cycles: 4
object.IsDefined() Extension Elapsed: 9
Enum.IsDefined(Type, object): 35
Extension
This seems to indicate there is a steep setup cost for the static hashset object (in my environment, approximately 15-20 ms).
Reversing which method is called first doesn't change that the first call to the extension method (to set up the static hashset) is quite lengthy. Enum.IsDefined(typeof(T), object) is also longer than normal for the first cycle, but, interestingly, much less so.
Based on this, it appears Enum.IsDefined(typeof(T), object) is actually faster until lCycles = 50000 or so.
I'm unsure why Enum.IsDefined(typeof(T), object) gets faster at both 2 and 3 lookups before it starts rising. Clearly there's some process going on internally as object.IsDefined() also takes markedly longer for the first 2 lookups before settling in to be bleeding fast.
Another way to phrase this is that if you need to do lots of lookups alongside any other remotely long activity (perhaps a file operation like an open) that adds a few milliseconds, the initial setup for object.IsDefined() will be swallowed up (especially if async) and become mostly unnoticeable. At that point, Enum.IsDefined(typeof(T), object) takes roughly 5x longer to execute.
Basically, if you don't have literally thousands of calls to make for the same Enum, I'm not sure how hashing the contents is going to save you time over your program execution. Enum.IsDefined(typeof(T), object) may have conceptual performance problems, but ultimately, it's fast enough until you need it thousands of times for the same enum.
As an interesting side note, implementing the ValueCache as a hybrid dictionary yields a startup time that reaches parity with Enum.IsDefined(typeof(T), object) within ~1500 iterations. Of course, using a HashSet passes both at ~50k.
So, my advice: If your entire program is validating the same enum (validating different enums causes the same level of startup delay, once for each different enum) less than 1500 times, use Enum.IsDefined(typeof(T), object). If you're between 1500 and 50k, use a HybridDictionary for your hashset, the initial cache populate is roughly 10x faster. Anything over 50k iterations, HashSet is a pretty clear winner.
Also keep in mind that we are talking in ticks. In .NET, 10,000 ticks is 1 ms.
For full disclosure I also tested List as a cache: it takes about 1/3 of the time the HashSet needs to populate; however, for any enum with more than about 9 elements, it's way slower than any other method. If all your enums have fewer than 9 or so elements, it may be the fastest approach.
The cache defined as a HybridDictionary (yes, the keys and values are the same. Yes, it's quite a bit harder to read than the simpler answers referenced above):
//System.Collections.Specialized - HybridDictionary
private static class EnumHybridDictionaryValueCache<T> where T : Enum
{
    static T[] enumValues = (T[])Enum.GetValues(typeof(T));

    static HybridDictionary PopulateDefinedValues()
    {
        HybridDictionary myDictionary = new HybridDictionary(enumValues.Length);

        foreach (T lEnumValue in enumValues)
        {
            // Keys have to be unique, and enum values are based on the underlying int value,
            // so enums with multiple aliases for one value would fail an Add without checking.
            // Check implicitly by using assignment instead.
            myDictionary[lEnumValue] = lEnumValue;
        }

        return myDictionary;
    }

    public static readonly HybridDictionary DefinedValues = PopulateDefinedValues();
}
A: I found this link that answers it quite well. It uses:
(ENUMTYPE)Enum.ToObject(typeof(ENUMTYPE), INT)
A: To validate if a value is a valid value in an enumeration, you only need to call the static method Enum.IsDefined.
int value = 99; // your int value
if (Enum.IsDefined(typeof(your_enum_type), value))
{
    // Todo when value is valid
}
else
{
    // Todo when value is not valid
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/13615",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "94"
} |
Q: Speed difference in using inline strings vs concatenation in php5? (assume php5) consider
<?php
$foo = 'some words';
//case 1
print "these are $foo";
//case 2
print "these are {$foo}";
//case 3
print 'these are ' . $foo;
?>
Is there much of a difference between 1 and 2?
If not, what about between 1/2 and 3?
A: Any differences in execution time are completely negligible.
Please see
*
*NikiC's Blog: Disproving the Single Quotes Performance Myth for a technical explanation how interpolation and concatenation works in PHP and why it is absolutely pointless to care about their speed.
Don't waste time on micro-optimizations like this. Use a profiler to measure the performance of your application in a real world scenario and then optimize where it is really needed. Optimising a single sloppy DB query is likely to make a bigger performance improvement than applying micro-optimisations all over your code.
A: Well, as with all "What might be faster in real life" questions, you can't beat a real life test.
function timeFunc($function, $runs)
{
    $times = array();

    for ($i = 0; $i < $runs; $i++)
    {
        $time = microtime();
        call_user_func($function);
        $times[$i] = microtime() - $time;
    }

    return array_sum($times) / $runs;
}

function Method1()
{
    $foo = 'some words';
    for ($i = 0; $i < 10000; $i++)
        $t = "these are $foo";
}

function Method2()
{
    $foo = 'some words';
    for ($i = 0; $i < 10000; $i++)
        $t = "these are {$foo}";
}

function Method3()
{
    $foo = 'some words';
    for ($i = 0; $i < 10000; $i++)
        $t = "these are " . $foo;
}

print timeFunc('Method1', 10) . "\n";
print timeFunc('Method2', 10) . "\n";
print timeFunc('Method3', 10) . "\n";
Give it a few runs to page everything in, then...
0.0035568
0.0035388
0.0025394
So, as expected, the two interpolation forms are virtually identical (noise-level differences, probably due to the extra characters the interpolation engine needs to handle). Straight-up concatenation runs in about two-thirds of the time, which is no great shock. The interpolation parser will look, find nothing to do, then finish with a simple internal string concat. Even if the concat were expensive, the interpolator will still have to do it, after all the work to parse out the variable and trim/copy up the original string.
Updates by Somnath:
I added Method4() to the real-time logic above.
function Method4()
{
    $foo = 'some words';
    for ($i = 0; $i < 10000; $i++)
        $t = 'these are ' . $foo;
}

print timeFunc('Method4', 10) . "\n";
Results were:
0.0014739
0.0015574
0.0011955
0.001169
When you are just declaring a string with nothing in it to parse, why make the PHP parser do the extra work? I hope you get my point.
A: There is a difference when concatenating variables... and what you are doing with the result... and, if you are dumping it to output, whether output buffering is on.
Also, what is the memory situation of the server? Typically memory management on a higher-level platform is worse than on lower-level ones...
$a = 'parse' . $this;
is managing memory at the user-code level...
$a = "parse $this";
is managing memory at the PHP system-code level...
So these benchmarks as related to CPU don't tell the full story.
Running the benchmark 1000 times vs running the benchmark 1000 times on a server that is attempting to run that same simulation 1000 times concurrently... you might get drastically different results depending on the scope of the application.
A: I seem to remember that the developer of the forum software Vanilla replaced all the double quotes in his code with single quotes and noticed a reasonable performance increase.
I can't seem to track down a link to the discussion at the moment though.
A: Live benchmarks:
http://phpbench.com/
There is actually a subtle difference when concatenating variables with single vs double quotes.
A: Just to add something else to the mix, if you are using a variable inside a double quoted string syntax:
$foo = "hello {$bar}";
is faster than
$foo = "hello $bar";
and both of these are faster than
$foo = 'hello' . $bar;
A: @Adam's test used
"these are " . $foo
note that the following is even faster:
'these are ' . $foo;
This is due to the fact that a double-quoted "string" gets evaluated, whereas a single-quoted 'string' is just taken as is...
A: Don't get too caught up on trying to optimize string operations in PHP. Concatenation vs. interpolation is meaningless (in real world performance) if your database queries are poorly written or you aren't using any kind of caching scheme. Write your string operations in such a way that debugging your code later will be easy, the performance differences are negligible.
@uberfuzzy Assuming this is just a question about language minutia, I suppose it's fine. I'm just trying to add to the conversation that comparing performance between single-quote, double-quote and heredoc in real world applications is meaningless when compared to the real performance sinks, such as poor database queries.
A: The performance difference has been irrelevant since at least January 2012, and likely earlier:
Single quotes: 0.061846971511841 seconds
Double quotes: 0.061599016189575 seconds
Earlier versions of PHP may have had a difference - I personally prefer single quotes to double quotes, so it was a convenient difference. The conclusion of the article makes an excellent point:
Never trust a statistic you didn’t forge yourself.
(Although the article quotes the phrase, the original quip was likely falsely attributed to Winston Churchill, invented by Joseph Goebbels' propaganda ministry to portray Churchill as a liar:
Ich traue keiner Statistik, die ich nicht selbst gefälscht habe.
This loosely translates to, "I do not trust a statistic that I did not fake myself.")
A: Double quotes can be much slower. I read in several places that it is better to do this
'parse me '.$i.' times'
than
"parse me $i times"
Although I'd say the second one gave you more readable code.
A: Practically there is no difference at all! See the timings: http://micro-optimization.com/single-vs-double-quotes
A: It should be noted that, when using a modified version of the example by Adam Wright with 3 variables, the results are reversed and the first two functions are actually faster, consistently. This is with PHP 7.1 on CLI:
function timeFunc($function, $runs)
{
    $times = array();

    for ($i = 0; $i < $runs; $i++)
    {
        $time = microtime();
        call_user_func($function);
        @$times[$i] = microtime() - $time;
    }

    return array_sum($times) / $runs;
}

function Method1()
{
    $foo = 'some words';
    $bar = 'other words';
    $bas = 3;
    for ($i = 0; $i < 10000; $i++)
        $t = "these are $foo, $bar and $bas";
}

function Method2()
{
    $foo = 'some words';
    $bar = 'other words';
    $bas = 3;
    for ($i = 0; $i < 10000; $i++)
        $t = "these are {$foo}, {$bar} and {$bas}";
}

function Method3()
{
    $foo = 'some words';
    $bar = 'other words';
    $bas = 3;
    for ($i = 0; $i < 10000; $i++)
        $t = "these are " . $foo . ", " . $bar . " and " . $bas;
}

print timeFunc('Method1', 10) . "\n";
print timeFunc('Method2', 10) . "\n";
print timeFunc('Method3', 10) . "\n";
I've also tried with '3' instead of just the integer 3, but I get the same kind of results.
With $bas = 3:
0.0016254
0.0015719
0.0019806
With $bas = '3':
0.0016495
0.0015608
0.0022755
It should be noted that these results vary highly (I get variations of about 300%), but the averages seem relatively steady and almost (9 out of 10 cases) always show a faster execution for the 2 first methods, with Method 2 always being slightly faster than method 1.
In conclusion: what is true for 1 single operation (be it interpolation or concatenation) is not always true for combined operations.
A: Yes, this was originally about PHP 5; however, PHP 8 arrives in a few months, and today the best option I have tested on my PHP 7.4.5 is to use a PHP nowdoc (tested on Windows 10 + Apache and CentOS 7 + Apache):
function Method6(){
$k1 = 'AAA';
for($i = 0; $i < 10000; $i ++)$t = <<<'EOF'
K1=
EOF
.$k1.
<<<'EOF'
K2=
EOF
.$k1;
}
Here is method #5 (using a heredoc to concatenate):
function Method5(){
$k1 = 'AAA';
for($i = 0; $i < 10000; $i ++)$t = <<<EOF
K1= $k1
EOF
.<<<EOF
K2=$k1
EOF;
}
Methods 1 to 4 are at the beginning of this post.
In all my tests the "winner" is method #6 (nowdoc). It is not very easy to read, but very light on CPU, measured with the timeFunc($function) helper by @Adam Wright.
A: I have tested PHP 7.4 and PHP 5.4 with the following test cases. The results still left me a little confused.
<?php
$start_time = microtime(true);
$result = "";

for ($i = 0; $i < 700000; $i++) {
    $result .= "THE STRING APPENDED IS " . $i;
    // AND $result .= 'THE STRING APPENDED IS ' . $i;
    // AND $result .= "THE STRING APPENDED IS $i";
}

echo $result;
$end_time = microtime(true);
echo "<br><br>";
echo ($end_time - $start_time) . " Seconds";
PHP 7.4 Outputs
1. "THE STRING APPENDED IS " . $i = 0.16744208335876
2. 'THE STRING APPENDED IS ' . $i = 0.16724419593811
3. "THE STRING APPENDED IS $i" = 0.16815495491028
PHP 5.3 Outputs
1. "THE STRING APPENDED IS " . $i = 0.27664494514465
2. 'THE STRING APPENDED IS ' . $i = 0.27818703651428
3. "THE STRING APPENDED IS $i" = 0.28839707374573
I have tested this many times. In PHP 7.4 all 3 test cases come out about the same over many runs, though concatenation still has a slight edge in performance.
A: Based on @adam-wright's answer, I wanted to know if the speed difference occurs with no concatenation and no variables in a string.
== My questions...
*
*Is $array['key'] access or assignment faster than $array["key"]?
*Is $var = "some text"; slower than $var = 'some text';?
== My tests, with new vars every time to avoid reusing the same memory address:
function getArrDblQuote() {
    $start1 = microtime(true);
    $array1 = array("key" => "value");
    for ($i = 0; $i < 10000000; $i++)
        $t1 = $array1["key"];
    echo microtime(true) - $start1;
}

function getArrSplQuote() {
    $start2 = microtime(true);
    $array2 = array('key' => 'value');
    for ($j = 0; $j < 10000000; $j++)
        $t2 = $array2['key'];
    echo microtime(true) - $start2;
}

function setArrDblQuote() {
    $start3 = microtime(true);
    for ($k = 0; $k < 10000000; $k++)
        $array3 = array("key" => "value");
    echo microtime(true) - $start3;
}

function setArrSplQuote() {
    $start4 = microtime(true);
    for ($l = 0; $l < 10000000; $l++)
        $array4 = array('key' => 'value');
    echo microtime(true) - $start4;
}

function setStrDblQuote() {
    $start5 = microtime(true);
    for ($m = 0; $m < 10000000; $m++)
        $var1 = "value";
    echo microtime(true) - $start5;
}

function setStrSplQuote() {
    $start6 = microtime(true);
    for ($n = 0; $n < 10000000; $n++)
        $var2 = 'value';
    echo microtime(true) - $start6;
}

print getArrDblQuote() . "\n<br>";
print getArrSplQuote() . "\n<br>";
print setArrDblQuote() . "\n<br>";
print setArrSplQuote() . "\n<br>";
print setStrDblQuote() . "\n<br>";
print setStrSplQuote() . "\n<br>";
== My Results :
array get double quote 2.1978828907013
array get single quote 2.0163490772247
array set double quote 1.9173440933228
array set single quote 1.4982950687408
var set double quote 1.485809803009
var set single quote 1.3026781082153
== My conclusion !
So the result is that the difference is not very significant. However, on a big project, I think it can make a difference!
| {
"language": "en",
"url": "https://stackoverflow.com/questions/13620",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "57"
} |
Q: Is there an easy way to convert C# classes to PHP? I am used to writing C# Windows applications. However, I have some free hosted PHP webspace that I would like to make use of. I have a basic understanding of PHP but have never used its object-oriented capabilities.
Is there an easy way to convert C# classes to PHP classes or is it just not possible to write a fully object-oriented application in PHP?
Update: There is no reliance on the .NET framework beyond the basics. The main aim would be to restructure the class properties, variable enums, etc. The PHP will be hosted on a Linux server.
A: PHP doesn't support enums, which might be one area of mismatch.
Also, watch out for collection types; PHP, despite its OO features, tends to have no alternative to over-using the array datatype. Check out the sections of the PHP manual on iterators if you would like to see beyond this.
Public, protected, private, and static properties of classes all work roughly as expected.
A: A huge problem would be replicating the .NET Framework in PHP if the C# classes use it.
A: It is entirely possible to write a PHP application almost entirely in an object-oriented methodology. You will have to write some procedural code to create and launch your first object but beyond that there are plenty of MVC frameworks for PHP that are all object-oriented. One that I would look at as an example is Code Igniter because it is a little lighter weight in my opinion.
A: I don't know about a tool to automate the process, but you could use the Reflection API to browse your C# class and generate a corresponding PHP class.
Of course, the difficulty here is to correctly map C# types to PHP but with enough unit testing, you should be able to do what you want.
I advise you to go this way because I have already done C# to VB and C++ conversions. That was a pain, but the result was worth it.
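As a starting point, here is a minimal sketch of that idea: a C# helper that reflects over a type and emits a PHP class skeleton (method bodies still have to be ported by hand):
using System;
using System.Linq;
using System.Reflection;
using System.Text;

static class PhpClassEmitter
{
    public static string Emit(Type type)
    {
        var sb = new StringBuilder();
        sb.AppendLine("class " + type.Name);
        sb.AppendLine("{");

        // Public properties become public PHP members.
        foreach (PropertyInfo p in type.GetProperties(BindingFlags.Public | BindingFlags.Instance))
            sb.AppendLine("    public $" + p.Name + ";");

        // Public methods become empty stubs to fill in manually.
        foreach (MethodInfo m in type.GetMethods(BindingFlags.Public | BindingFlags.Instance | BindingFlags.DeclaredOnly)
                                     .Where(m => !m.IsSpecialName))
        {
            string args = string.Join(", ", m.GetParameters().Select(a => "$" + a.Name));
            sb.AppendLine("    public function " + m.Name + "(" + args + ") { /* TODO: port body */ }");
        }

        sb.AppendLine("}");
        return sb.ToString();
    }
}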
A: If the problem is that you want to transition to PHP and you are happy to continue running on a Windows server with .NET support, you might consider wrapping your code using SWIG.
This can be used to generate stubs to execute from PHP, and you can then go about rewriting the .NET code into PHP in an incremental fashion.
This works for any of the supported languages. ie. you could incrementally rewrite an application in c++ to java if you really wanted to.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/13647",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
} |
Q: C# .NET listing contents of remote files Is it possible in .NET to list files on a remote location like an URL? Much in the same way the System.IO classes work. All I need is the URLs to images that are on a remote server.
A: Short answer: No, unless you have more control over that web-server
Long answer: Here are possible solutions...
*
*You will need a server-side script that will list the files locally and output the list in your preferred format.
*Most of the web-servers implement default file-browsing pages, so you could theoretically parse those but this solution will be very fragile and not very portable even between different versions of the same web-server.
*If you have FTP access...
A:
Is it possible in .NET to list files on a remote location like an URL?
You should specify which protocol we're talking about.
For HTTP, lubos hasko provided the answer: no. HTTP has no concept of files; only of resources. If you have control over the web server, you can ask it to provide a directory listing, or, better yet, you can write code that lists the directory server-side for you. Without such control, you have to rely on the server to provide a listing, which 1) may be disabled for security reasons, 2) is non-standardized in its format, 3) will be, like lubos said, fragile to parse ("scrape").
If you mean / if the server provides a protocol intended for file transfer, such as FTP, SMB/CIFS, etc., it'll be a lot easier. For example, for FTP, you'll want to look into WebRequestMethods.Ftp.ListDirectoryDetails.
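For the FTP case, a minimal sketch (the server URL and credentials are hypothetical):
using System;
using System.IO;
using System.Net;

class FtpListing
{
    static void Main()
    {
        var request = (FtpWebRequest)WebRequest.Create("ftp://example.com/images/");
        request.Method = WebRequestMethods.Ftp.ListDirectoryDetails;
        request.Credentials = new NetworkCredential("user", "password");

        using (var response = (FtpWebResponse)request.GetResponse())
        using (var reader = new StreamReader(response.GetResponseStream()))
        {
            // One entry per line, in whatever listing format the server uses.
            string line;
            while ((line = reader.ReadLine()) != null)
                Console.WriteLine(line);
        }
    }
}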
| {
"language": "en",
"url": "https://stackoverflow.com/questions/13655",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: Textual versus Graphical Programming Languages I am part of a high school robotics team, and there is some debate about which language to use to program our robot. We are choosing between C (or maybe C++) and LabVIEW. There are pros for each language.
C(++):
*
*Widely used
*Good preparation for the future (most programming positions require text-based programmers.)
*We can expand upon our C codebase from last year
*Allows us to better understand what our robot is doing.
LabVIEW
*
*Easier to visualize program flow (blocks and wires, instead of lines of code)
*Easier to teach (Supposedly...)
*"The future of programming is graphical." (Think so?)
*Closer to the Robolab background that some new members may have.
*Don't need to intimately know what's going on. Simply tell the module to find the red ball, don't need to know how.
This is a very difficult decision for us, and we've been debating for a while. Based on those pros for each language, and on the experience you've got, what do you think the better option is? Keep in mind that we aren't necessarily going for pure efficiency. We also hope to prepare our programmers for a future in programming.
Also:
*
*Do you think that graphical languages such as LabVEIW are the future of programming?
*Is a graphical language easier to learn than a textual language? I think that they should be about equally challenging to learn.
*Seeing as we are partially rooted in helping people learn, how much should we rely on prewritten modules, and how much should we try to write on our own? ("Good programmers write good code, great programmers copy great code." But isn't it worth being a good programmer, first?)
Thanks for the advice!
Edit:
I'd like to emphasize this question more:
The team captain thinks that LabVIEW is better for its ease of learning and teaching. Is that true? I think that C could be taught just as easily, and beginner-level tasks would still be around with C. I'd really like to hear your opinions. Is there any reason that typing while{} should be any more difficult than creating a "while box?" Isn't it just as intuitive that program flows line by line, only modified by ifs and loops, as it is intuitive that the program flows through the wire, only modified by ifs and loops!?
Thanks again!
Edit:
I just realized that this falls under the topic of "language debate." I hope it's okay, because it's about what's best for a specific branch of programming, with certain goals. If it's not... I'm sorry...
A: I think the choice of LabVIEW or not comes down to whether you want to learn to program in a commonly used language as a marketable skill, or just want to get stuff done. LabVIEW enables you to Get Stuff Done very quickly and productively. As others have observed, it doesn't magically free you from having to understand what you're doing, and it's quite possible to create an unholy mess if you don't - although anecdotally, the worst examples of bad coding style in LabVIEW are generally perpetrated by people who are experienced in a text language and refuse to adapt to how LabVIEW works because they 'already know how to program, dammit!'
That's not to imply that LabVIEW programming isn't a marketable skill, of course; just that it's not as mass-market as C++.
LabVIEW makes it extremely easy to manage different things going on in parallel, which you may well have in a robot control situation. Race conditions in code that should be sequential shouldn't be a problem either (i.e. if they are, you're doing it wrong): there are simple techniques for making sure that stuff happens in the right order where necessary - chaining subVI's using the error wire or other data, using notifiers or queues, building a state machine structure, even using LabVIEW's sequence structure if necessary. Again, this is simply a case of taking the time to understand the tools available in LabVIEW and how they work. I don't think the gripe about having to make subVI icons is very well directed; you can very quickly create one containing a few words of text, maybe with a background colour, and that will be fine for most purposes.
'Are graphical languages the way of the future' is a red herring based on a false dichotomy. Some things are well suited to graphical languages (parallel code, for instance); other things suit text languages much better. I don't expect LabVIEW and graphical programming to either go away, or take over the world.
Incidentally, I would be very surprised if NASA didn't use LabVIEW in the space program. Someone recently described on the Info-LabVIEW mailing list how they had used LabVIEW to develop and test the closed loop control of flight surfaces actuated by electric motors on the Boeing 787, and gave the impression that LabVIEW was used extensively in the plane's development. It's also used for real-time control in the Large Hadron Collider!
The most active place currently for getting further information and help with LabVIEW, apart from National Instruments' own site and forums, seems to be LAVA.
A: This doesn't answer you question directly, but you may want to consider a third option of mixing in an interpreted language. Lua, for example, is already used in the robotics field. It's fast, light-weight and can be configured to run with fixed-point numbers instead of floating-point since most microcontrollers don't have an FPU. Forth is another alternative with similar usage.
It should be pretty easy to write a thin interface layer in C and then let the students loose with interpreted scripts. You could even set it up to allow code to be loaded dynamically without recompiling and flashing a chip. This should reduce the iteration cycle and allow students to learn better by seeing results more quickly.
I'm biased against using visual tools like LabVIEW. I always seem to hit something that doesn't or won't work quite like I want it to do. So, I prefer the absolute control you get with textual code.
A: LabVIEW's other strength (besides libraries) is concurrency. It's a dataflow language, which means that the runtime can handle concurrency for you. So if you're doing something highly concurrent and don't want to have to do traditional synchronization, LabVIEW can help you there.
The future doesn't belong to graphical languages as they stand today. It belongs to whoever can come up with a representation of dataflow (or another concurrency-friendly type of programming) that's as straightforward as the graphical approach is, but is also parsable by the programmer's own tools.
A: There is a published study of the topic hosted by National Instruments:
A Study of Graphical vs. Textual Programming for Teaching DSP
It specifically looks at LabVIEW versus MATLAB (as opposed to C).
A: I think that graphical languages will always be limited in expressivity compared to textual ones. Compare trying to communicate in visual symbols (e.g., REBUS or sign language) to communicating using words.
For simple tasks, using a graphical language is usually easier but for more intricate logic, I find that graphical languages get in the way.
Another debate implied in this argument, though, is declarative programming vs. imperative. Declarative is usually better for anything where you really don't need the fine-grained control over how something is done. You can use C++ in a declarative way but you would need more work up front to make it so, whereas LabVIEW is designed as a declarative language.
A picture is worth a thousand words but if a picture represents a thousand words that you don't need and you can't change that, then in that case a picture is worthless. Whereas, you can create thousands of pictures using words, specifying every detail and even leading the viewer's focus explicitly.
A: LabVIEW lets you get started quickly, and (as others have already said) has a massive library of code for doing various test, measurement & control related things.
The single biggest downfall of LabVIEW, though, is that you lose all the tools that programmers write for themselves.
Your code is stored as VIs. These are opaque, binary files. This means that your code really isn't yours, it's LabVIEW's. You can't write your own parser, you can't write a code generator, you can't do automated changes via macros or scripts.
This sucks when you have a 5000 VI app that needs some minor tweak applied universally. Your only option is to go through every VI manually, and heaven help you if you miss a change in one VI off in a corner somewhere.
And yes, since it's binary, you can't do diff/merge/patch like you can with textual languages. This does indeed make working with more than one version of the code a horrific nightmare of maintainability.
By all means, use LabVIEW if you're doing something simple, or need to prototype, or don't plan to maintain your code.
If you want to do real, maintainable programming, use a textual language. You might be slower getting started, but you'll be faster in the long run.
(Oh, and if you need DAQ libraries, NI's got C++ and .Net versions of those, too.)
A: My first post here :) be gentle ...
I come from an embedded background in the automotive industry and now I'm in the defense industry. I can tell you from experience that C/C++ and LabVIEW are really different beasts with different purposes in mind. C/C++ was always used for the embedded work on microcontrollers because it was compact and compilers/tools were easy to come by. LabVIEW on the other hand was used to drive the test system (along with TestStand as a sequencer). Most of the test equipment we used were from NI, so LabVIEW provided an environment where we had the tools and the drivers required for the job, along with the support we wanted.
In terms of ease of learning, there are many many resources out there for C/C++ and many websites that lay out design considerations and example algorithms on pretty much anything you're after, freely available. For LabVIEW, the user community is probably not as diverse as C/C++'s, and it takes a little bit more effort to inspect and compare example code (you have to have the right version of LabVIEW, etc.). I found LabVIEW pretty easy to pick up and learn, but there are nuances, as some have mentioned here, to do with parallelism and various other things that require a bit of experience before you become aware of them.
So the conclusion after all that? I'd say that BOTH languages are worthwhile in learning because they really do represent two different styles of programming and it is certainly worthwhile to be aware and proficient at both.
A: Before I arrived, our group (PhD scientists, with little programming background) had been trying to implement a LabVIEW application on-and-off for nearly a year. The code was untidy, too complex (front and back-end) and most importantly, did not work. I am a keen programmer but had never used LabVIEW. With a little help from a LabVIEW guru who could help translate the textual programming paradigms I knew into LabVIEW concepts, it was possible to code the app in a week. The point here is that the basic coding concepts still have to be learnt; the language, even one like LabVIEW, is just a different way of expressing them.
LabVIEW is great to use for what it was originally designed for. i.e. to take data from DAQ cards and display it on-screen perhaps with some minor manipulations in-between. However, programming algorithms is no easier and I would even suggest that it is more difficult. For example, in most procedural languages execution order is generally followed line by line, using pseudo mathematical notation (i.e. y = x*x + x + 1) whereas LabVIEW would implement this using a series of VI's which don't necessarily follow from each other (i.e. left-to-right) on the canvas.
Moreover programming as a career is more than knowing the technicalities of coding. Being able to effectively ask for help/search for answers, write readable code and work with legacy code are all key skills which are undeniably more difficult in a graphical language such as LabVIEW.
I believe some aspects of graphical programming may become mainstream - the use of sub-VIs perfectly embodies the 'black-box' principle of programming and is also used in other language abstractions such as Yahoo Pipes and the Apple Automator - and perhaps some future graphical language will revolutionise the way we program, but LabVIEW itself is not a massive paradigm shift in language design; we still have while, for, if flow control, typecasting, event driven programming, even objects. If the future really will be written in LabVIEW, C++ programmers won't have much trouble crossing over.
As a postscript I'd say that C/C++ is more suited to robotics since the students will no doubt have to deal with embedded systems and FPGAs at some point. Low level programming knowledge (bits, registers etc.) would be invaluable for this kind of thing.
@mendicant Actually LabVIEW is used a lot in industry, especially for control systems. Granted, NASA is unlikely to use it for on-board satellite systems, but then software development for space systems is a whole different ball game...
A: Oh my God, the answer is so simple. Use LabView.
I have programmed embedded systems for 10 years, and I can say that without at least a couple months of infrastructure (very careful infrastructure!), you will not be as productive as you are on day 1 with LabView.
If you are designing a robot to be sold and used for the military, go ahead and start with C - it's a good call.
Otherwise, use the system that allows you to try out the most variety in the shortest amount of time. That's LabView.
A: I love LabVIEW. I would highly recommend it, especially if the other members have used something similar. It takes a while for normal programmers to get used to it, but the results are much better if you already know how to program.
C/C++ equals managing your own memory. You'll be swimming in memory leaks and worrying about them. Go with LabVIEW and make sure you read the documentation that comes with LabVIEW, and watch out for race conditions.
Learning a language is easy. Learning how to program is not. This doesn't change even if it's a graphical language. The advantage of graphical languages is that it is easier to visualize what the code will do rather than sit there and decipher a bunch of text.
The important thing is not the language but the programming concepts. It shouldn't matter what language you learn to program in, because with a little effort you should be able to program well in any language. Languages come and go.
A: Disclaimer: I've not used LabVIEW, but I have used a few other graphical languages including WebMethods Flow and Modeller, dynamic simulation languages at university and, er, MIT's Scratch :).
My experience is that graphical languages can do a good job of the 'plumbing' part of programming, but the ones I've used actively get in the way of algorithmics. If your algorithms are very simple, that might be OK.
On the other hand, I don't think C++ is great for your situation either. You'll spend more time tracking down pointer and memory management issues than you do in useful work.
If your robot can be controlled using a scripting language (Python, Ruby, Perl, whatever), then I think that would be a much better choice.
Then there are hybrid options:
If there's no scripting option for your robot, and you have a C++ geek on your team, then consider having that geek write bindings to map your C++ library to a scripting language. This would allow people with other specialities to program the robot more easily. The bindings would make a good gift to the community.
If LabVIEW allows it, use its graphical language to plumb together modules written in a textual language.
A: I've encountered a somewhat similar situation in the research group I'm currently working in. It's a biophysics group, and we're using LabVIEW all over the place to control our instruments. That works absolutely great: it's easy to assemble a UI to control all aspects of your instruments, to view its status and to save your data.
And now I have to stop myself from writing a 5 page rant, because for me LabVIEW has been a nightmare. Let me instead try to summarize some pros and cons:
Disclaimer I'm not a LabVIEW expert, I might say things that are biased, out-of-date or just plain wrong :)
LabVIEW pros
*
*Yes, it's easy to learn. Many PhDs in our group seem to have acquired enough skills to hack away within a few weeks, or even less.
*Libraries. This is a major point. You'd have to carefully investigate this for your own situation (I don't know what you need, if there are good LabVIEW libraries for it, or if there are alternatives in other languages). In my case, finding, e.g., a good, fast charting library in Python has been a major problem, that has prevented me from rewriting some of our programs in Python.
*Your school may already have it installed and running.
LabVIEW cons
*
*It's perhaps too easy to learn. In any case, it seems no one really bothers to learn best practices, so programs quickly become a complete, irreparable mess. Sure, that's also bound to happen with text-based languages if you're not careful, but IMO it's much more difficult to do things right in LabVIEW.
*There tend to be major issues in LabVIEW with finding sub-VIs (even up to version 8.2, I think). LabVIEW has its own way of knowing where to find libraries and sub-VIs, which makes it very easy to completely break your software. This makes large projects a pain if you don't have someone around who knows how to handle this.
*Getting LabVIEW to work with version control is a pain. Sure, it can be done, but in any case I'd refrain from using the built-in VC. Check out LVDiff for a LabVIEW diff tool, but don't even think about merging.
(The last two points make working in a team on one project difficult. That's probably important in your case)
*
*This is personal, but I find that many algorithms just don't work when programmed visually. It's a mess.
*
*One example is stuff that is strictly sequential; that gets cumbersome pretty quickly.
*It's difficult to have an overview of the code.
*If you use sub-VI's for small tasks (just like it's a good practice to make functions that perform a small task, and that fit on one screen), you can't just give them names, but you have to draw icons for each of them. That gets very annoying and cumbersome within only a few minutes, so you become very tempted not to put stuff in a sub-VI. It's just too much of a hassle. Btw: making a really good icon can take a professional hours. Go try to make a unique, immediately understandable, recognizable icon for every sub-VI you write :)
*You'll have carpal tunnel within a week. Guaranteed.
*@Brendan: hear, hear!
Concluding remarks
As for your "should I write my own modules" question: I'm not sure. Depends on your time constraints. Don't spend time on reinventing the wheel if you don't have to. It's too easy to spend days on writing low-level code and then realize you've run out of time. If that means you choose LabVIEW, go for it.
If there were easy ways to combine LabVIEW and, e.g., C++, I'd love to hear about them: that might give you the best of both worlds, but I doubt there are.
But make sure you and your team spend time on learning best practices. Looking at each other's code. Learning from each other. Writing usable, understandable code. And having fun!
And please forgive me for sounding edgy and perhaps somewhat pedantic. It's just that LabVIEW has been a real nightmare for me :)
A: I think that graphical languages might be the language of the future..... for all those adhoc MS Access developers out there. There will always be a spot for the purely textual coders.
Personally, I've got to ask what is the real fun of building a robot if it's all done for you? If you just drop a 'find the red ball' module in there and watch it go? What sense of pride will you have for your accomplishment? Personally, I wouldn't have much. Plus, what will it teach you of coding, or of the (very important) aspect of the software/hardware interface that is critical in robotics?
I don't claim to be an expert in the field, but ask yourself one thing: Do you think that NASA used LabVIEW to code the Mars Rovers? Do you think that anyone truly prominent in robotics is using LabView?
Really, if you ask me, the only thing that using cookie-cutter things like LabVIEW to build this is going to prepare you for is being some backyard robot builder and nothing more. If you want something that will give you something more like industry experience, build your own 'LabVIEW'-type system. Build your own find-the-ball module, or your own 'follow-the-line' module. It will be far more difficult, but it will also be way more cool too. :D
A: You're in High School. How much time do you have to work on this program? How many people are in your group? Do they know C++ or LabView already?
From your question, I see that you know C++ and most of the group does not. I also suspect that the group leader is perceptive enough to notice that some members of the team may be intimidated by a text based programming language. This is acceptable, you're in high school, and these people are normies. I feel as though normal high schoolers will be able to understand LabView more intuitively than C++. I'm guessing most high school students, like the population in general, are scared of a command line. For you there is much less of a difference, but for them, it is night and day.
You are correct that the same concepts may be applied to LabView as C++. Each has its strengths and weaknesses. The key is selecting the right tool for the job. LabView was designed for this kind of application. C++ is much more generic and can be applied to many other kinds of problems.
I am going to recommend LabView. Given the right hardware, you can be up and running almost out-of-the-box. Your team can spend more time getting the robot to do what you want, which is what the focus of this activity should be.
Graphical Languages are not the future of programming; they have been one of the choices available, created to solve certain types of problems, for many years. The future of programming is layer upon layer of abstraction away from machine code. In the future, we'll be wondering why we wasted all this time programming "semantics" over and over.
how much should we rely on prewritten modules, and how much should we try to write on our own?
You shouldn't waste time reinventing the wheel. If there are device drivers available in Labview, use them. You can learn a lot by copying code that is similar in function and tailoring it to your needs - you get to see how other people solved similar problems, and have to interpret their solution before you can properly apply it to your problem. If you blindly copy code, chances of getting it to work are slim. You have to be good, even if you copy code.
Best of luck!
A: I would suggest you use LabVIEW, as you can get down to making the robot do what you want faster and more easily. LabVIEW has been designed with this in mind. Of course C(++) are great languages, but LabVIEW does what it is supposed to do better than anything else.
People can write really good software in LabVIEW as it provides ample scope and support for that.
A: There is one huge thing I found negative in using LabVIEW for my applications: organizing design complexity. As a physicist I find LabVIEW great for prototyping, instrument control and mathematical analysis. There is no language in which you get a result faster and better than in LabVIEW. I have used LabVIEW since 1997. In 2005 I switched completely to the .NET framework, since it is easier to design and maintain.
In LabVIEW a simple 'if' structure has to be drawn and uses a lot of space on your graphical design. I just found out that many of our commercial applications were hard to maintain. The more complex the application became, the more difficult it was to read.
I now use text languages and I am much better at maintaining everything. If you compare C++ to LabVIEW I would use LabVIEW, but compared to C# it does not win.
A: As always, it depends.
I have been using LabVIEW for about 20 years now and have done quite a large range of jobs, from simple DAQ to very complex visualization, from device controls to test sequencers. If it was not good enough, I for sure would have switched. That said, I started coding Fortran with punchcards and used a whole lot of programming languages on 8-bit 'machines', starting with Z80-based ones. The languages ranged from Assembler to BASIC, from Turbo-Pascal to C.
LabVIEW was a major improvement because of its extensive libraries for data acquisition and analysis. One has, however, to learn a different paradigm. And you definitely need a trackball ;-))
A: I don't know anything about LabView (or much about C/C++), but..
Do you think that graphical languages such as LabVIEW are the future of programming?
No...
Is a graphical language easier to learn than a textual language? I think that they should be about equally challenging to learn.
Easier to learn? No, but they are easier to explain and understand.
To explain a programming language you have to explain what a variable is (which is surprisingly difficult). This isn't a problem with flowgraph/nodal coding tools, like the LEGO Mindstorms programming interface, or Quartz Composer..
For example, this is a fairly complicated LEGO Mindstorms program - it's very easy to understand what is going on... but what if you want the robot to run the INCREASEJITTER block 5 times, then drive forward for 10 seconds, then try the INCREASEJITTER loop again? Things start getting messy very quickly..
Quartz Composer is a great example of this, and why I don't think graphical languages will ever "be the future"
It makes it very easy to do really cool stuff (3D particle effects, with a camera controlled by the average brightness of pixels from a webcam).. but incredibly difficult to do easy things, like iterate over the elements from an XML file, or store that average pixel value into a file.
Seeing as we are partially rooted in helping people learn, how much should we rely on prewritten modules, and how much should we try to write on our own? ("Good programmers write good code, great programmers copy great code." But isn't it worth being a good programmer, first?)
For learning, it's so much easier to both explain and understand a graphical language..
That said, I would recommend a specialised text-based language as a progression. For example, for graphics something like Processing or NodeBox. They are "normal" languages (Processing is Java, NodeBox is Python) with very specialised, easy to use (but absurdly powerful) frameworks ingrained into them..
Importantly, they are very interactive languages, you don't have to write hundreds of lines just to get a circle onscreen.. You just type oval(100, 200, 10, 10) and press the run button, and it appears! This also makes them very easy to demonstrate and explain.
More robotics-related, even something like LOGO would be a good introduction - you type "FORWARD 1" and the turtle drives forward one box.. Type "LEFT 90" and it turns 90 degrees.. This relates to reality very simply. You can visualise what each instruction will do, then try it out and confirm it really works that way.
Show them shiny-looking things; they will pick up the useful stuff they'd learn from C along the way, and if they are interested or progress to the point where they need a "real" language, they'll have all that knowledge, rather than running into the syntax-error and compiling brick-wall..
A: It seems that if you are trying to prepare your team for a future in programming then C(++) may be the better route. The promise of general programming languages that are built with visual building blocks has never seemed to materialize and I am beginning to wonder if they ever will. It seems that while it can be done for specific problem domains, once you get into trying to solve many general problems a text based programming language is hard to beat.
At one time I had sort of bought into the idea of executable UML but it seems that once you get past the object relationships and some of the process flows UML would be a pretty miserable way to build an app. Imagine trying to wire it all up to a GUI. I wouldn't mind being proven wrong but so far it seems unlikely we'll be point and click programming anytime soon.
A: I started with LabVIEW about 2 years ago and now use it all the time, so I may be biased, but I find it ideal for applications where data acquisition and control are involved.
We use LabVIEW mainly for testing where we take continuous measurements and control gas valves and ATE enclosures. This involves both digital and analogue input and outputs with signal analysis routines and process control all running from a GUI. By breaking down each part into subVIs we are able to reconfigure the tests with the click and drag of the mouse.
Not exactly the same as C/C++, but a similar implementation of measurement, control and analysis using Visual BASIC appears complex and hard to maintain by comparison.
I think the process of programming is more important than the actual coding language and you should follow the style guidelines for a graphical programming language. LabVIEW block diagrams show the flow of data (Dataflow programming) so it should be easy to see potential race conditions although I've never had any problems. If you have a C codebase then building it into a dll will allow LabVIEW to call it directly.
A: There are definitely merits to both choices; however, since your domain is an educational experience I think a C/C++ solution would most benefit the students. Graphical programming will always be an option but simply does not provide the functionality in an elegant manner that would make it more efficient to use than textual programming for low-level programming. This is not a bad thing - the whole point of abstraction is to allow a new understanding and view of a problem domain. The reason I believe many may be disappointed with graphical programming though is that, for any particular program, the incremental gain in going from programming in C to graphical is not nearly the same as going from assembly to C.
Knowledge of graphical programming would benefit any future programmer for sure. There will probably be opportunities in the future that only require knowledge of graphical programming and perhaps some of your students could benefit from some early experience with it. On the other hand, a solid foundation in fundamental programming concepts afforded by a textual approach will benefit all of your students and surely must be the better answer.
A:
The team captain thinks that LabVIEW
is better for its ease of learning and
teaching. Is that true?
I doubt that would be true for any single language, or paradigm. LabView could surely be easier for people with electronics engineering background; making programs in it is "simply" drawing wires. Then again, such people might already be exposed to programming, as well.
One essential difference - apart from the graphics - is that LV is demand-based (flow) programming. You start from the outcome and tell what is needed to get to it. Traditional programming tends to be imperative (going the other way round).
Some languages can provide the both. I crafted a multithreading library for Lua recently (Lanes) and it can be used for demand-based programming in an otherwise imperative environment. I know there are successful robots run mostly in Lua out there (Crazy Ivan at Lua Oh Six).
A: Have you had a look at the Microsoft Robotics Studio?
http://msdn.microsoft.com/en-us/robotics/default.aspx
It allows for visual programming (VPL):
http://msdn.microsoft.com/en-us/library/bb483047.aspx
as well as modern languages such as C#.
I encourage you to at least take a look at the tutorials.
A: My gripe against Labview (and Matlab in this respect) is that if you plan on embedding the code in anything other than x86 (and Labview has tools to put Labview VIs on ARMs) then you'll have to throw as much horsepower at the problem as you can because it's inefficient.
Labview is a great prototyping tool: lots of libraries, easy to string together blocks, maybe a little difficult to do advanced algorithms but there's probably a block for what you want to do. You can get functionality done quickly. But if you think you can take that VI and just put it on a device you're wrong. For instance, if you make an adder block in Labview you have two inputs and one output. What is the memory usage for that? Three variables worth of data? Two? In C or C++ you know, because you can either write z=x+y or x+=y and you know exactly what your code is doing and what the memory situation is. Memory usage can spike quickly especially because (as others have pointed out) Labview is highly parallel. So be prepared to throw more RAM than you thought at the problem. And more processing power.
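To make the C comparison concrete, here is a minimal sketch (the function and variable names are purely illustrative, not from any real codebase):
#include <stdint.h>

/* In C the storage cost of an addition is visible in the source. */
void add_examples(void) {
    int32_t x = 2, y = 3;
    int32_t z = x + y;  /* three 4-byte variables live here: x, y and z */
    x += y;             /* only two: the sum is accumulated into x in place */
    (void)z;            /* suppress unused-variable warnings */
}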
In short, Labview is great for rapid prototyping but you lose too much control in other situations. If you're working with large amounts of data or limited memory/processing power then use a text-based programming language so you can control what goes on.
A: People always compare LabVIEW with C++ and say "oh, LabVIEW is high level and it has so much built-in functionality; try acquiring data, doing a DFFT and displaying the data, it's so easy in LabVIEW; try it in C++".
Myth 1: It's hard to get anything done with C++ because it's so low level, while LabVIEW has many things already implemented.
The problem is that if you are developing a robotic system in C++ you MUST use libraries like OpenCV, PCL, etc., and you would be even smarter if you use a software framework designed for building robotic systems like ROS (Robot Operating System). Therefore you need to use a full set of tools. In fact, there are more high-level tools available when you use ROS + Python/C++ with libraries such as OpenCV and PCL. I have used LabVIEW Robotics, and frankly commonly used algorithms like ICP are not there, and it's not like you can easily use other libraries now.
Myth 2: It is easier to understand graphical programming languages.
It depends on the situation. When you are coding a complicated algorithm the graphical elements will take up valuable screen space and it will be difficult to understand the method. To understand LabVIEW code you have to read over an area that is O(n^2) in complexity; in textual code you just read top to bottom.
What if you have parallel systems? ROS implements a graph-based architecture based on subscriber/publisher messages implemented using callbacks, and it's pretty easy to have multiple programs running and communicating. Having individual parallel components separated makes it easier to debug. For instance, stepping through parallel LabVIEW code is a nightmare because control flow jumps from one place to another. In ROS you don't explicitly draw out your architecture like in LabVIEW; however, you can still see it by running the command rosrun rqt_graph rqt_graph (which will show all connected nodes).
"The future of programming is graphical." (Think so?)
I hope not; the current implementation of LabVIEW does not allow coding that mixes text-based and graphical methods. (There is MathScript, however it is incredibly slow.)
It's hard to debug because you can't hide the parallelism easily.
It's hard to read LabVIEW code because you have to look over so much area.
LabVIEW is great for data acquisition and signal processing but not experimental robotics, because most of the high-level components like SLAM (simultaneous localisation and mapping), point cloud registration and point cloud processing are missing. Even if they do add these components and make them as easy to integrate as in ROS, because LabVIEW is proprietary and expensive they will never keep up with the open source community.
In summary, if LabVIEW is the future for mechatronics I am changing my career path to investment banking... If I can't enjoy my work I may as well make some money and retire early...
| {
"language": "en",
"url": "https://stackoverflow.com/questions/13678",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "35"
} |
Q: php Zend / MVC without mod_rewrite I've seen it mentioned in many blogs around the net, but I believe it should be discussed here.
What can we do when we have an MVC framework (I am interested in ZEND) in PHP but our host does not provide mod_rewrite?
Are there any "short-cuts"? Can we transfer control in any way (so that a mapping may occur between pages)? Any ideas?
Thank you :-)
A: The Zend framework should work without mod_rewrite, if you can live with your URLs looking more like "/path/to/app/index.php/controller/action". If you had mod_rewrite you could do away with the "index.php" bit, but it should work without it too.
It's all a matter of setting up the routes to accept the index.php part.
A: OK, my verdict :-): I have successfully used Zend without mod_rewrite and it's, as you've all said, site/index.php/controller/action. I knew that before posting this. I've also found a technique around the net that "pushes" 404 pages to index.php, so anything that is not a resource (e.g. CSS, an image, etc.) gets there, with one exception: POST values.
So I decided that the next time an application has to be made on that specific server, I'll ask politely for mod_rewrite. If the administrator cannot provide it, I'll talk with my boss, or if it is for me, switch providers.
Generally, it is a shame sometimes that the PHP market is so fragmented (php4, php5, php6, mod_rewrite, mod_auth, mod_whatever), but this is another story...
A: mod_rewrite is almost essential in today's hosting environment... but unfortunately not everyone got the message.
Lots of the large php programs (I'm thinking magento, but most can cope) have a pretty-url fall back mode for when mod_rewrite isn't available.
URLs end up looking like www.site.com/index.php?load-this-page
They must be running some magic to grab the variable name from the $_GET variable and use it as the selector for what module/feature to execute.
In a related note, I've seen lots of messed up URLs in the new facebook site where it's using the #. So links look like www.new.facebook.com/home.php#/inbox/ Clearly we're not meant to see that but it suggests that they're probably parsing the $_SERVER['REQUEST_URI'] variable.
A: If you can find a non-mod_rewrite way to redirect all requests to index.php (or wherever your init script is), you can, as mentioned above, use 'REQUEST_URI' to grab the portion of the address after the domain and then parse it as you like and make the request do what you want it to. This is how Wordpress does it (granted, with mod_rewrite). As long as you can redirect requests to your index page while retaining the same URI, you can do what you need to to process the request.
A: Drupal's rewrite rules translate
http://example.com/path/goes/here
into
http://example.com/index.php?q=path/goes/here
...and has logic to decide which flavor of URLs to generate. If you can live with ugly URLs, this would let you keep all the logic of a single front controller in place w/o relying on URL rewriting.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/13698",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Suggestions for Migrating from ASP.NET WebForms to ASP.NET MVC? ASP.NET MVC has been discussed on this forum a few times. I'm about to do a large migration of several websites from classic ASP/ASP.NET WebForms to ASP.NET MVC and was wondering what kind of advice those of you with experience in both technologies have.
What I have: a typical ASP.NET app with heavily coupled presentation/business logic, all sorts of messy ASP.NET-generated Javascript cruft, and so forth.
What I want: clean ASP.NET MVC-generated agnostic markup. 'Nuff said.
Any pointers, tips, tricks, or gotchas to be aware of?
Thanks!
A: Wow, I'm not sure we're talking migration here anymore - the difference is more like re-writing!
As others have also said, MVC is a whole new way to build web apps - most of your presentation code won't carry across.
However, if you are re-writing in MVC, what you already have is a good prototype. Your problem is likely to be that it would be hard to do bit by bit - for instance MVC uses URL routing out-of-the-box, making linking back and forth rather messy.
Another question would be why? Many of us have big sprawling legacy applications that we'd like to be in the latest technologies, but if your application is already working why switch?
If I was looking at a new application right now MVC would be a very strong candidate, but there's no gain large enough to switching to it late in a project.
A:
Any pointers, tips, tricks, or
gotchas to be aware of?
Well, I think you're probably a little ways away from thinking about tricks & gotchas :) As I'm sure you're aware, ASP.NET MVC is not some new version of ASP.NET, but a totally different paradigm from ASP.NET, you won't be migrating, you'll be initiating a brand new development effort to replace an existing system. So maybe you can get a leg up on determining requirements for the app, but the rest will probably re-built from scratch.
Based on the (very common) problems you described in your existing code base you should consider taking this opportunity to learn some of the current best practices in designing loosely coupled systems. This is easy to do because modern "best practices" are easy to understand and easy to practice, and there is enormous community support, and high quality, open source tooling to help in the process.
We are moving an ASP/ASP.NET application to ASP.NET MVC at this time as well, and this is the conclusion my preparatory research has led me to, anyway.
Here is a post to links on using ASP.NET MVC, but I would start by reading this post. The post is about NHibernate (an ORM tool) on its surface but the discussion and the links are about getting the foundations right and is the result of preparing to port an ASP.NET site to MVC. Some of the reference architectures linked to in that post are based on ASP.NET MVC. Here is another post about NHibernate, but in the "Best Practices & Reference Applications" section most if not all of the reference applications listed are ASP.NET MVC applications also. Reference architectures can be extremely useful for quickly getting a feeling for how an optimal, maintainable ASP.NET MVC site might be designed.
A: WebForms can live with MVC controllers in the same app. By default, routing does not route requests for files that exist on disk. So you could start rewriting small parts of your site at a time to use the MVC pattern, and leave the rest of it using WebForms.
A: My opinion is that the two technologies are so different that if you have tightly coupled code in the original WebForms applications, the best approach is to start by picking one of them and converting it by creating a new ASP.NET MVC application and ripping out code into the respective layers, which will put you on the trail of reuse for porting the other applications.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/13704",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: Resources on wordpress theme-development What are the best resources for Wordpress theme-development? I am currently in the phase of starting my own blog, and don't want to use one of the many free themes. I already have a theme for my website, so I want to read about best-practices.
Any advice on how to get started would be very welcome :)
I have now created my theme (wohoo!), and thought I should summarize the best resources I found. Let's see..
Resources:
*
*ThemeTation's three-part guide to create a wordpress-theme from scratch
*Nettuts.com's guide: How to Create a Wordpress Theme from Scratch
Didn't actually use this, it's quite a new article, but anyway - it's great. It will get a follow-up in the next few days too..
*Wordpress.org's own guide on templates
Definitely a must-read for everyone new to wordpress-designing..
*"The loop"
Essential knowledge, also a must-read
*Directory of all the template tags
Used by wordpress to actually output blog-content..
Inspiration:
*
*Smashing Magazine's lists: first, one more, yet another one
*Wordpress.org's theme-directory
A: I think that the best way to learn is to look at how other people construct their themes. The first one to start with is the Default Kubrick theme that is included in the standard WordPress install. It has all of the basics and will show you some advanced techniques like including sidebar widgets. Next, in conjunction with the docs on theme development (previously mentioned by Mark), Blog Design and Layout and Using Themes, go to the Theme Directory on the Wordpress.org site, download a couple of popular themes, and go through them, looking up any template tags or techniques that you don't understand. After you do this, you should be more than well-equipped to write your own theme from scratch, or modify an existing theme to your needs.
A: The Wordpress part is the easy bit. That's basically taking your static HTML pages then converting them to PHP and inserting the Wordpress tags to pull content from the database. In some places these tags will be in a loop, e.g. for a list of pages.
The most difficult part is the design. You should identify the page types you want (e.g. main page, lists of posts, static pages, about) and create the actual templates with mockup text. Only when you're happy should you think about the Wordpress part.
If you search with Google you'll find plenty of pages on creating your own Wordpress theme or converting HTML to a theme.
A: Here's another good article on the topic. And of course, there are many more like it. I have reviewed this particular article in the past though, and found it to be a good, detailed tutorial with some nice external references where required.
A: codex.wordpress.org.
A: Here's the WordPress doc on Theme Development
A: I've found this illustration shared by Yoast to be really helpful for explaining how a Wordpress theme is built:
Full image here
This is a crop of the full image - I'd suggest checking out the full page there, as it's extremely clear, and very well thought out.
A: Found a new one over here. it's a good resource if you want to make a simple theme. :)
http://www.webhostingsearch.com/articles/create-your-own-wordpress-theme-tutorial.php
A: Check css-tricks.com; Chris has a few screencasts up.
A: There is a really good book on WordPress Theme Design, called "WordPress Theme Design" :)
Available from Amazon and other book stores.
A: 'Starkers' is a great blank WordPress theme to start any theme development.
http://elliotjaystocks.com/blog/2008/free-starkers-wordpress-theme/
A: This WordPress Meta Box PHP Helper class might help you when working with WordPress Meta Boxes.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/13708",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13"
} |
Q: How to schedule a batch process in asp.net I want to run a weekly batch process in an asp.net page. How can that be done?
Thanks
A:
Is there any known drawbacks with the
solution?
Here is the blog post in which Jeff Atwood discusses this approach. As with most of Jeff's post, the meat is in the comments where the pros and cons have been discussed in extreme detail by a large number of opinionated folks, so that is an ideal place to have that particular part of your question answered.
A: Consider using the Cache.
A: Develop a Windows Service and schedule it to run once a week.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/13718",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: In Cocoa do you prefer NSInteger or int, and why? NSInteger/NSUInteger are Cocoa-defined replacements for the regular built-in types.
Is there any benefit to using the NS* types over the built-ins? Which do you prefer and why? Are NSInteger and int the same width on 32-bit / 64-bit platforms?
A: The way I understand it is that NSInteger et al. are architecture safe versions of the corresponding C types. Basically their size vary depending on the architecture, but NSInteger, for example, is guaranteed to hold any valid pointer for the current architecture.
Apple recommends that you use these to work with OS X 10.5 and onwards, and Apple's APIs will use them, so it's definitely a good idea to get into the habit of using them. They require a little more typing, but apart from that there doesn't seem to be any reason not to use them.
A: Quantisation issues for 64-bit runtime
In some situations there may be good reason to use standard types instead of NSInteger: "unexpected" memory bloat in a 64-bit system.
Clearly if an integer is 8 instead of 4 bytes, the amount of memory taken by values is doubled. Given that not every value is an integer, though, you should typically not expect the memory footprint of your application to double. However, the way that Mac OS X allocates memory changes depending on the amount of memory requested.
Currently, if you ask for 512 bytes or fewer, malloc rounds up to the next multiple of 16 bytes. If you ask for more than 512 bytes, however, malloc rounds up to the next multiple of 512 (at least 1024 bytes). Suppose then that you define a class that -- amongst others -- declares five NSInteger instance variables, and that on a 32-bit system each instance occupies, say, 272 bytes. On a 64-bit system, instances would in theory require 544 bytes. But, because of the memory allocation strategy, each will actually occupy 1024 bytes (an almost fourfold increase). If you use a large number of these objects, the memory footprint of your application may be considerably greater than you might otherwise expect. If you replaced the NSInteger variables with SInt32 variables, you would only use 512 bytes.
When you're choosing what scalar to use, therefore, make sure you choose something sensible. Is there any reason why you need a value greater than you needed in your 32-bit application? Using a 64-bit integer to count a number of seconds is unlikely to be necessary...
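As a minimal C sketch of the field-size half of that argument (the struct names are hypothetical, and real Objective-C instances carry extra runtime overhead, so the numbers are illustrative only):
#include <stdio.h>
#include <stdint.h>

/* Five counters as 64-bit fields (roughly what NSInteger becomes under
   LP64) versus five explicit 32-bit fields. */
struct wide   { int64_t a, b, c, d, e; };
struct narrow { int32_t a, b, c, d, e; };

int main(void) {
    printf("wide:   %zu bytes\n", sizeof(struct wide));   /* 40 */
    printf("narrow: %zu bytes\n", sizeof(struct narrow)); /* 20 */
    return 0;
}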
A: 64-bit is actually the raison d'être for NSInteger and NSUInteger; before 10.5, those did not exist. The two are simply defined as longs in 64-bit, and as ints in 32-bit:
#if __LP64__ || NS_BUILD_32_LIKE_64
typedef long NSInteger;
typedef unsigned long NSUInteger;
#else
typedef int NSInteger;
typedef unsigned int NSUInteger;
#endif
Thus, using them in place of the more basic C types when you want the 'bit-native' size.
CocoaDev has some more info.
A: I prefer the standard C-style declarations, but only because I switch between several languages and I don't have to think too much about it. But it sounds like I should start looking at NSInteger.
A: For importing and exporting data to files or over the net I use UInt32, SInt64 etc...
These are guaranteed to be of a certain size regardless of the architecture and help in porting code to other platforms and languages which also share those types.
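A small C sketch of that idea using the standard fixed-width types from <stdint.h> (the record fields and file name are made up for illustration):
#include <stdint.h>
#include <stdio.h>

int main(void) {
    uint32_t id = 42u;               /* always exactly 4 bytes */
    int64_t  timestamp = 1217548800; /* always exactly 8 bytes */

    FILE *f = fopen("record.bin", "wb");
    if (f != NULL) {
        /* Writing the fields individually sidesteps struct padding;
           a real exporter would also normalize byte order. */
        fwrite(&id, sizeof id, 1, f);
        fwrite(&timestamp, sizeof timestamp, 1, f);
        fclose(f);
    }
    return 0;
}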
| {
"language": "en",
"url": "https://stackoverflow.com/questions/13725",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "48"
} |
Q: Differences Between C# and VB.net
Possible Duplicate:
What are the most important functional differences between C# and VB.NET?
Other than syntax, what are the major differences between C# and vb.net?
| {
"language": "en",
"url": "https://stackoverflow.com/questions/13731",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Resharper and TortoiseSVN Is there any good way to deal with the class renaming refactor from Resharper when the file is under source control and TortoiseSVN is the client. I am trying VisualSVN right now, but I haven't had the need to rename anything recently. I don't want to change our repository just to try this out.
Also not sure if this feature alone is worth the cost of VisualSVN.
Update: I have uninstalled the trial of VisualSVN and tried AnkhSVN. It seems to provide the same functionality so far.
I know this may sound trivial, but the indicators seem to be lacking some functionality; it seems like they don't trickle up. (If a file in the project is different, I would think the project indicator would indicate this as well.) I tend to keep my projects rolled up as much as possible, so it is hard to tell what files have changed unless the project is expanded.
A: I find VisualSVN to be well worth the money. There are ways to do it with Tortoise, but the integration of VisualSVN is very nice. I had tried over VS-integration tools before like Ankh and was not impressed. V-SVN has really upped the level of interaction with the repository from the IDE.
The quick trick in TortoiseSVN to fix the move sounds pretty nice as well, I need to try that out.
Another bonus: I've yet to "forgot" to add a file to the repository since I got Visual SVN.
A: TortoiseSVN 1.5 has a neat hidden feature on the check in window:
Select a missing file and a new file and right-click. One of the options will be "fix move".
I tend to refactor away, and then use this to fix any files where the name has changed.
A: You should really check the Free as in Beer option of AnkhSVN. They made some major improvements in v2.x and I don't feel penalized anymore when doing ReSharper refactoring-ninja moves inside Visual Studio.
A: Time to branch your repository. That's the nice part about version control, you can create new branches without totaling the old ones.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/13745",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13"
} |
Q: How can I permanently enable line numbers in IntelliJ? How can I permanently enable line numbers in IntelliJ IDEA?
A: In Intellij 13 the layout has changed, the Settings button can only be found in File -> Settings and not in the toolbars, and from there you follow the same steps: Editor -> Appearance -> Show line numbers, or search for Line numbers in the Settings search input.
A: Just an update for Android Studio 1.5.1 on Windows:
Go to File -> Settings -> follow picture
A: Android Studio 1.3.2 and on, IntelliJ 15 and on
Global configuration
File -> Settings -> Editor -> General -> Appearance -> Show line numbers
Current editor configuration
First way: View -> Active Editor -> Show Line Numbers (this option will only be available if you have previously clicked into a file of the active editor)
Second way: Right click on the small area between the project's structure and the active editor (that is, the one that you can set breakpoints) -> Show Line Numbers.
A: On IntelliJ IDEA 2016.1.2
Go to Settings > Editor > General > Appearance
then check the Show Line number option
A: IntelliJ 2019 community edition has line number by default. If you want to show or hide line numbers, go to the following settings to change the appearance.
go to → File → Setting → Editor → General → Appearance → [Check] Show line numbers
A: For InteliJ IDEA 11.0 and above
Goto File --> Settings in the Settings window Editor --> Appearance
and tick Show line numbers check box.
A: For IntelliJ 20.1 or above, on Mac OSX:
IntelliJ IDEA -> Editor -> General -> Appearance -> Show line numbers
Point to be noted: Always look for Editor
For shortcut:
⌘ + ⇧ + A (command + shift + A)
type "show line numbers"
and click on the pop up to turn on Show line numbers and you are good to go.
A: The question is obviously well answered already, but since IJ 13 you can enable line numbers in 2 seconds flat:
*
*Press shift twice
*Type "line number"
*The option shows in the menu; press Enter to enable/disable.
Et voila ;)
A: IntelliJ 14 (Ubuntu):
See: how-do-i-turn-on-line-numbers-permanently-in-intellij-14
Permanently:
File > Settings > Editor > General > Appearance > show line numbers
For current Editor:
View > Active Editor > Show Line Numbers
A: IntelliJ IDEA 15
5 approaches
Global change
*
*File > Settings... > Editor > General > Appearance > Show line numbers
*Hit Shift twice > write "line numbers" > Show Line Numbers (that one that has the toggle) > change the toggle to ON
Change for the Active Editor
*
*Right click on the left side bar > Show Line Numbers
*Hit Shift twice > write "line" > Show Line Numbers (the line doesn't have the toggle)
*Ctrl + Shift + A > write "Show line" > Active Editor: Show Line Numbers > change the toggle to ON
A: Ok in intelliJ 14 Ultimate using the Mac version this is it.
IntelliJ Idea > Preferences > Editor > General > Appearance > Show Line Numbers
A: On IntelliJ 12 on MAC OSX, I had a hard time finding it. The search wouldn't show me the way for some reason. Go to Preferences and under IDE Settings, Editor, Appearance and select 'Show line numbers'
A: Android Studio
Go to Android Studio => Preferences => Editor => General => Appearance => check "Show line numbers"
A: I just hit this with IdeaVim plugin installed, where even if I set Show Line Numbers, it continued to revert to hiding them.
The (forehead-slapping-worthy) solution was:
:set nu
A: IntelliJ 14.X Onwards
From version 14.0 onwards, the path to the setting dialog is slightly different, a General submenu has been added between Editor and Appearance as shown below
IntelliJ 8.1.2 - 13.X
From IntelliJ 8.1.2 onwards, this option is in File | Settings. Within the IDE Settings section of that dialog, you'll find it under Editor | Appearance.
*
*On a Mac, these are named IntelliJ IDEA | Preferences...
A: I add this response for IntelliJ IDEA 2018.2 - Ultimate.
Using menu
IntelliJ Idea > Preferences > Editor > General > Appearance > Show Line Numbers
Using Shortcuts - First way
For Windows : Ctrl+Shift+a
For Mac : Cmd+shift+a
Using Shortcuts - Second way
Touch Shift twice
These three methods exist since the last 4 versions of Intellij and I think they remain valid for a long time.
A: For 9.0.4
File > Settings
In the tree view group
------------IDE Settings ---------
Click the Editor [+]
Select Appearance... It's there.
A: 1st way: Go to View => Active Editor => show line numbers
2nd way: Go to Android Studio => Preferences => Editor => Appearance => check "Show Line Numbers"
A: NOT a solution, rather a TEMPORARY solution which only works only for the current editor and current session:
Simply right click on the place where the line-numbers should be, and there you'll see a small menu.
then, mark the "Show Line Numbers" option.
Please note that this only works on a per-file basis, however. Enjoy.
A: On Mac Intellij 12.1.2 there is no File-Settings:
There is an application-name menu item to the left of "File" with a "preferences" menu item:
and within that is the "Settings" dialog shown by the Windows Intellij.
A: In IntelliJ 14 it has moved again somewhat down the menu.
Now we have it unter Editor -> General -> Appearance
| {
"language": "en",
"url": "https://stackoverflow.com/questions/13751",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1383"
} |
Q: quoting System.DirectoryServices.ResultPropertyCollection I'm missing something here:
$objSearcher = New-Object System.DirectoryServices.DirectorySearcher
$objSearcher.SearchRoot = New-Object System.DirectoryServices.DirectoryEntry
$objSearcher.Filter = ("(objectclass=computer)")
$computers = $objSearcher.findall()
So the question is why do the two following outputs differ?
$computers | %{
"Server name in quotes $_.properties.name"
"Server name not in quotes " + $_.properties.name
}
PS> $computers[0] | %{"$_.properties.name"; $_.properties.name}
System.DirectoryServices.SearchResult.properties.name
GORILLA
A: When you included $_.properties.name in the string, it was returning the type name of the property. When a variable is included in a string and the string is evaluated, it calls the ToString method on that object referenced by the variable (not including the members specified after).
In this case, the ToString method is returning the type name. You can force the evaluation of the variable and members similar to what EBGreen suggested, but by using
"Server name in quotes $($_.properties.name)"
In the other scenario PowerShell is evaluating the variable and members specified first and then adding it to the previous string.
You are right that you are getting back a collection of properties. If you pipe $computer[0].properties to get-member, you can explore the object model right from the command line.
The important part is below.
TypeName: System.DirectoryServices.ResultPropertyCollection
Name MemberType Definition
Values Property System.Collections.ICollection Values {get;}
A: I believe it has to do with the way that PS interpolates information in the "". Try this:
"Server name in quotes $($_.properties).name"
Or you may even need one more set of $(). I'm not somewhere that I can test it right now.
A: Close-- The below works correctly, but I'd be interested if anyone has a deeper explanation.
PS C:\> $computers[0] | %{ "$_.properties.name"; "$($_.properties.name)" }
System.DirectoryServices.SearchResult.properties.name
GORILLA
So it would seem that $_.properties.name doesn't dereference like I expected it to. If I'm visualizing properly, the fact that the name property is multivalued causes it to return an array. Which (I think) would explain why the following works:
$computers[0] | %{ $_.properties.name[0]}
If "name" were a string this should return the first character, but because it's an array it returns the first string.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/13753",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: How can I remove a child node in HTML using JavaScript? Is there a function like document.getElementById("FirstDiv").clear()?
A: If you want to clear the div and remove all child nodes, you could put:
var mydiv = document.getElementById('FirstDiv');
while(mydiv.firstChild) {
mydiv.removeChild(mydiv.firstChild);
}
A: You have to remove any event handlers you've set on the node before you remove it, to avoid memory leaks in IE
A: A jQuery solution
HTML
<select id="foo">
<option value="1">1</option>
<option value="2">2</option>
<option value="3">3</option>
</select>
Javascript
// remove child "option" element with a "value" attribute equal to "2"
$("#foo > option[value='2']").remove();
// remove all child "option" elements
$("#foo > option").remove();
References:
Attribute Equals Selector [name=value]
Selects elements that have the
specified attribute with a value
exactly equal to a certain value.
Child Selector (“parent > child”)
Selects all direct child elements
specified by "child" of elements
specified by "parent"
.remove()
Similar to .empty(), the .remove()
method takes elements out of the DOM.
We use .remove() when we want to
remove the element itself, as well as
everything inside it. In addition to
the elements themselves, all bound
events and jQuery data associated with
the elements are removed.
A: Use the following code:
//for Internet Explorer
document.getElementById("FirstDiv").removeNode(true);
//for other browsers
var fDiv = document.getElementById("FirstDiv");
fDiv.removeChild(fDiv.childNodes[0]); // first check at which index your required node exists; if it is at [0] use this, otherwise use its actual index
A: Modern Solution - child.remove()
For your use case:
document.getElementById("FirstDiv").remove()
This is recommended by W3C since late 2015, and is vanilla JS. All major browsers support it.
Mozilla Docs
Supported Browsers - 96% May 2020
A: To answer the original question - there are various ways to do this, but the following would be the simplest.
If you already have a handle to the child node that you want to remove, i.e. you have a JavaScript variable that holds a reference to it:
myChildNode.parentNode.removeChild(myChildNode);
Obviously, if you are not using one of the numerous libraries that already do this, you would want to create a function to abstract this out:
function removeElement(node) {
node.parentNode.removeChild(node);
}
EDIT: As has been mentioned by others: if you have any event handlers wired up to the node you are removing, you will want to make sure you disconnect those before the last reference to the node being removed goes out of scope, lest poor implementations of the JavaScript interpreter leak memory.
A: var p=document.getElementById('childId').parentNode;
var c=document.getElementById('childId');
p.removeChild(c);
alert('Deleted');
p is parent node and c is child node
parentNode is a DOM property which returns a reference to the parent node
Easy to understand
A: You should be able to use the .RemoveNode method of the node or the .RemoveChild method of the parent node.
A: You should probably use a JavaScript library to do things like this.
For example, MochiKit has a function removeElement, and jQuery has remove.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/13763",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "82"
} |
Q: Open ID - What happens when you decide you don't like your existing provider? So I'm not quite convinced about OpenID yet, and here is why:
I already have an OpenID because I have a Blogger account. But I discovered that Blogger seems to be a poor provider when I tried to identify myself on the altdotnet page and received the following message:
You must use an OpenID persona that specifies a valid email address.
Let's forget the details of this little error and assume that I want to change to a different provider. So I sign up with a different provider and get a new, different OpenID - how would I switch my existing StackOverflow account to be associated with my new OpenID?
I understand this would be easy if I had my own domain set up to delegate to a provider, because I could just change the delegation. Assume I do not have my own domain.
A: So the OpenID protocol doesn't actually offer a solution for this situation? I would have to rely on individual sites to offer some sort of migration function? That's quite unfortunate. The whole design of OpenID seems focused on an "all your eggs in one basket" approach, i.e. you should try to use your OpenID everywhere you can. This would be fine if all providers were identical, but they are not.
Imagine the worst case, where you pick a provider that ends up closing down. Wouldn't you potentially lose your accounts on many sites?
A: Ideally Stack Overflow would allow you to change your OpenID.
OTOH, ideally you would have set up OpenID delegation on your own site, and used that to identify yourself.
With delegation, you would need only change which service you delegate to. You'd still be identified by your own URL that you control. But that doesn't help now unless Stack Overflow lets you change it. Most sites tie OpenIDs to real accounts, and would let you switch or at least add additional OpenIDs. Doesn't seem like you could map OpenIDs to accounts 1:1 unless the result of access is totally trivial; otherwise you're in a situation like this where you lose your existing questions, answers, and reputation for switching IDs.
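For reference, delegation amounts to two link tags in the head of a page you control (the provider URLs here are illustrative; OpenID 2.0 uses openid2.provider and openid2.local_id instead):
<link rel="openid.server" href="https://www.myopenid.com/server" />
<link rel="openid.delegate" href="https://yourname.myopenid.com/" />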
A: This is a problem for me because I changed my email to follow the new fad of firstName.lastName@gmail.com. After much scouring of this Web site, I am confirming that those of you in my situation are out of luck until further notice because of the issue described in the question.
Either hold on to that old ID or give up your points.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/13768",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "17"
} |
Q: SoapException: Root element is missing occurs when .NET web service called from Flex I have a .net web application that has a Flex application embedded within a page. This flex application calls a .net webservice. I can trace the execution process through the debugger and all looks great until I get the response:
soap:ReceiverSystem.Web.Services.Protocols.SoapException: Server was unable to process request
. ---> System.Xml.XmlException: Root element is missing.
at System.Xml.XmlTextReaderImpl.Throw(Exception e)
at System.Xml.XmlTextReaderImpl.ThrowWithoutLineInfo(String res)
at System.Xml.XmlTextReaderImpl.ParseDocumentContent()
at System.Xml.XmlTextReaderImpl.Read()
at System.Xml.XmlTextReader.Read()
at System.Web.Services.Protocols.SoapServerProtocol.SoapEnvelopeReader.Read()
at System.Xml.XmlReader.MoveToContent()
at System.Web.Services.Protocols.SoapServerProtocol.SoapEnvelopeReader.MoveToContent()
at System.Web.Services.Protocols.SoapServerProtocolHelper.GetRequestElement()
at System.Web.Services.Protocols.Soap12ServerProtocolHelper.RouteRequest()
at System.Web.Services.Protocols.SoapServerProtocol.RouteRequest(SoapServerMessage message)
at System.Web.Services.Protocols.SoapServerProtocol.Initialize()
at System.Web.Services.Protocols.ServerProtocolFactory.Create(Type type, HttpContext context, HttpRequest
request, HttpResponse response, Boolean& abortProcessing)
--- End of inner exception stack trace ---
The call from Flex looks good, the execution through the webservice is good, but this is the response I capture via Wireshark. What is going on here?
I have tried several web methods, from "Hello World" to parameterized methods... all come back with the same response...
I thought it may have something to do with encoding with the "--->", but I'm unsure how to control what .net renders as the response.
A: It looks like you might be sending a poorly formed XML document to the service. Can you use Fiddler or something like that to get a copy of the actual call that is going to the web service? That would be a huge help in figuring out what the issue is.
A: I recently used a .NET REST interface which I consumed using a Flex app. I got some strange exceptions as well, and my issue was that I forgot to include the xmlns (the namespace) in the root element when sending requests. This is a wild guess but I hope it helps.
A: Are you using Flex 3? If so, you can set a breakpoint when the webservice is executed and actually step through the Flex framework as it encodes your request. Look in mx.rpc.soap.SoapEncoder and you'll be able to see exactly what is going to be sent over the wire.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/13775",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Should we support IE6 anymore? Are we supposed to find workarounds in our web applications so that they will work in every situation? Is it time to do away with IE6 programming?
A: It depends on your audience, and whether the cost (development, maintenance, opportunity cost of developing to a 7 year old lowest common denominator) is worth it to gain those IE6 viewers.
Also worth asking - is the IE6 demographic likely to care about or use your site? I think a large number of IE6 users don't care about new technology (duh) or are accessing the web from corporate networks which restrict browser installations. Maybe those viewers aren't worth the effort - only you can answer that.
I was happy to see that Apple's Mobile Me site won't support IE6.
A: There's no hard and fast rule on this. Supporting IE6 and IE7 takes an investment of time and knowledge that you may not have available, but on the other hand, if you want your site to look as you intend, it's an investment that has to be made. So the question becomes: which is more important to you?
You say the "if I check the statistics of the pages, i noticed that almost half of the visitors uses this kind of browsers," which says to me that unless you're OK with half your visitors seeing something other than the design/layout you intended, you're going to need to make that investment or get the help of someone who can.
If that's not an option, you could try using some of the CSS "frameworks," like Blueprint or Grid960, and see if that's easier, but that will require a little bit of learning as well.
The other options are either going with a simpler design likely to work across browsers, removing the stylesheet for IE6/7 and letting viewers see the raw HTML document structure, or using table-based layouts if you know how to wield them (and contrary to what some people will tell you, there's nothing at all wrong with this route if it's the one that best fits the requirements of your project combined with the constraints on your abilities and resources).
A: I recommend people check their own user stats for their site before making this decision, but here's a common reference regarding popular browser versions :
http://www.w3schools.com/browsers/browsers_stats.asp
A: It's all about putting in enough effort so that your site degrades gracefully as you go to older and older browsers (or for disabled users). Unfortunately, there are a lot of IE6 and IE7 users out there who more or less can't switch, so it seems unlikely that your site will force many to do so. If your site just looks bad, that's okay. If it's unusable, you have a real problem. In general, the more you adhere to current standards (instead of just chasing the latest browsers), the better you'll end up in old browsers without extra effort.
A: Depends on the situation. On a site like this, where most people are techy, I think it is safe to assume people have the latest browsers.
However, if you are open to a wide public of possibly not-so-techy people, you'll probably have IE6 hitting your site a lot.
A: Someone asked the same question about a week ago and got some good answers. My personal favorite was doekman's suggestion to try IE7-js.
A: Sadly, we still need to support IE6 in most cases as it still represents a significant portion of the internet surfing users. If you are in a corporate environment, this is even more true, as corporations have less incentive to upgrade stuff that works simply for some pie-in-the-sky "web standards."
If not, try the Gmail approach and just toss up an error for IE6 viewers and/or display a disclaimer that if they upgrade, the site will work/look better.
A: This depends so much on the context of the application, and of its users. There are two key aspects: what browsers are your users using; and how important is it that they can access/interact with your site.
The first part is generally easily established, if you have an existing version with stats (Google Analytics or similar is simple and great) or you have access to such data from a similar app / product.
The latter is a little harder to decide. If you're developing a publicly available, ad-sponsored site for example, it's just a numbers game - work out how much of your audience you lose and factor what that's worth against the additional development time. If, however, you're doing something specifically at the request of a group of users - like an enterprise web app for example - you may be stuck with what those users are browsing with.
In my experience those two things can change significantly for different apps. We've got web apps still (stats from last week) with close to 70% IE6 usage (20% IE7, the rest split between IE5.5 and FF2) and others with close to 0% IE6. For relatively obvious reasons, the latter are the kind of apps where losing a few users isn't so important.
Having said all that, we generally find it easy to support IE6 (and IE5.5 as others point out) simply because we've been doing so for a while. Yes, it's a pain and yes, it takes more time, but often not too much. There are very few situations where having to support IE6 drastically changes what kind of development you do - it just means a little more work. The other nice benefit of supporting it (and testing for it) is that you generally end up doing better all-round browser and quirks testing as a result of the polarity of IE6's behaviours.
You need to decide whether or not you're supposed to find workarounds based on the requirements of your app/product. The fact that it's IE6 isn't really that relevant - this kind of problem happens all the time in other situations; it just so happens that IE6 is a great example of the costs and implications of mixed standards, versioning and legacy support.
A: Unfortunately not - I'd rate myself as a fairly techy person and at home I use Firefox 3 and IE7, but at work (a large American pharma) I have to use IE6, and I don't think that's going to change any time soon. The company has a significant investment in an internal line of web-based apps - the business case for testing and upgrading them all against another browser (or even an upgrade) isn't compelling.
A: Ask your customer this: are they willing to upgrade to Vista? If they say yes, then don't support IE6. Your target customers are the people who go "whoa! Vista. drool". They're also the kind of people who want the fastest and most powerful computer.
If your customer goes, "huh? what's vista? I want my screensaver of cats back please", then you need to support IE6.
In short: if they have Vista, then they don't have IE6.
The irony is: for web developers to finally get rid of IE6 and its legacy, they have to promote Vista or hope that Vista will be successful.
A: I'm a coder for a group that creates free templates for gaming clans. Our view is that we will drop IE6 support when IE8 is fully released. But at the end of the day, as many people have stated, it depends on your user audience. Our target audience is relatively wide (people download and use our templates in places where we can't predict) - however it is primarily gamers who are generally smart enough to keep their software up-to-date.
I find my natural coding style works in IE6 on my first try usually, and the bugs are easy enough to root out, so maybe I don't find it as much of a pain as other people do. Personally I'll drop support for IE6 when it reaches its end of life or IE8's full release - whichever comes first.
A:
Is it time to do away with IE6 programming?
Yes.
A: Simply because IE6 still represents 27.21% of the web's population (or 15.21% depending on your numbers) as of July 2009.
Now I know some of you will probably tell me that if more and more sites stop supporting IE6, the browser will eventually disappear. That's a lie.
Picture this:
Corporation ACME has over 150 000 computers all running Windows 2000/XP. They also have a nice intranet site developed 7 years ago which works in IE6 quite well, but not so much in other browsers.
Do you really think they are going to invest money into fixing their intranet application when they control their complete IT infrastructure and who gets what updates? It's less costly to just postpone the update until they migrate to a new system.
A lot of corporations are in that situation.
Here is another example:
Business FooBar sells its products on the Internet. A little more than a quarter of their traffic is coming from IE6, which also means a quarter of their sales.
Do you think FooBar will simply block off those customers or annoy them with a huge notice telling them they are using a buggy browser? That would cost them nearly a quarter of their sales! As long as there is monetary value to supporting IE6 (and it does and will till its market share drops below about 8%), IE6 will prevail, which is also why Google won't be phasing out support for IE6 anytime soon.
Campaigns such as Browse Sad do not understand the mentality of the corporate culture (change is costly) and do not understand that in the end, consumers have a negligible impact on the worldwide IT ecosystem. The big corporations control it.
Consumers do have a growing impact but it is still insignificant compared to the impact corporations have.
And let's be truthful here: everyone who has the technical expertise and who could upgrade to a better browser already did. The rest are people still running outdated OSes, who don't know how to upgrade, or who don't have admin rights on their machine.
A: My guess is that the majority of IE6 usage these days is due to the large number of companies/organizations that are stuck with illogical browser-upgrade fear.
I work as a contractor for the US Government and, as of the time of this post, the entire Health and Human Services department of the US government is still standardized on IE6 (and doesn't appear to be planning on upgrading anytime soon). When I ask the IT people about it, they claim it's too expensive for the government to test new browsers for compliance with security standards, but I get the sense the real reason is they are afraid of having to deal with things rendering differently across browsers.
A: Yes (emphatically) and No (doubtfully).
Unless you are creating some manner of internal tool for a group where you know IE6 penetration (no pun intended) is high, ignore IE6. With vigor.
As for IE7, it's a bit of a toss-up. Generally speaking, if you are aiming for the private sector, you can get away with ignoring it (for the most part) and assuming that your IE8 support will take care of the most heinous problems; but if it's a site for selling stuff (specifically a web-shop; sales pitch site, etc.), you might want to at least check that it looks somewhat sane, and add a few small fixes as appropriate.
As an aside, and a real-world example: at my site of employment (we do web sites) we are currently undergoing (or rather, considering) a shift vis-a-vis IE support in general: prices are stated with basic IE8 support; full IE8 support would cost ~10% more, IE7 ~30% more and IE6 support ~100% more.
Edit: Think of it as charging extra to make a camper wagon designed for a VW work with, respectively, a Pinto, a Yugo and a horse-drawn carriage.
A: Under IE6, make it at least show something. A page built for FF3 that simply dies on IE6 looks bad, like you didn't plan well. If you don't support IE6 at all, make sure the user knows it is deliberate by showing a special page advising them where to go.
If you are expecting corporate visitors, it has to work under IE6 even if only as a simplified version. If not, you can drop IE6 entirely, provided you handle it well as described above.
It is nowhere near time to consider dropping IE7, though. I'd expect it is the default browser on XP, which is the most prevalent OS.
A: If you don't want to spend effort supporting your site on IE6, you could use one of the approaches at the URL below.
These approaches suggest that the user download one of the more advanced browsers like IE7+, Firefox 3+, Safari 3+, Opera 9.5+ or Google Chrome.
http://garmahis.com/tools/ie6-update-warning/
But, that's about IE6. I believe you should still support IE7.
A: Always keep in mind your target audience, client needs/requirements, and project objectives, and of course keep it real (according to your budget/time).
Coding/designing a site that fits most browsers is not an easy task; you will need to use those so-called "hacks" to work around common problems (yes, mostly on IE browsers). This is something I personally discourage, but I've been there when the target is mostly IE.
Nowadays you have several options: you can detect which browser is being used to view your site and trigger a script that recommends an alternative browser that follows standards better (with or without showing legible content), you can code an alternate entry page for those IE fellas, or, what I prefer most of the time, you can gracefully degrade the page, make the user aware of his/her outdated browser, and recommend an option.
I have read you're using a CMS to create these sites. Most CMSes work "fine" in most browsers out of the box; still, as you pointed out, some CSS and JavaScript elements don't work as you use more "edgy" techniques.
If you intend to develop more sites, allow me to recommend the following sites:
To see how your site looks in several browsers (versions, OSes, JavaScript, Java, etc.) you can use
http://browsershots.org/
To compare your favorite CMS options, try
http://www.cmsmatrix.org/
To start learning (x)html, css, php and more you can go to
http://www.w3schools.com/
A good CSS reset style sheet is the Meyer's
http://meyerweb.com/eric/thoughts/2007/05/01/reset-reloaded/
I have to say that this is a starting point to achieve consistency across browsers :)
You may have heard of or already know these sites; they are just tools I use from time to time when looking for reference, new knowledge, or alternatives. I can also recommend several FF extensions like the Web Developer Toolbar and Firebug.
I guess that's all for now. Hope it helps, and I wish you happy coding/webdev.
A: The problem is that if you're not willing to add support for IE6/7, there are plenty of competitors out there that will gladly "steal" your customers in exchange for a little browser hacking. As long as there is money involved, support for these browsers will phase out very slowly.
A: You might want to take a look at IE7.js.
IE7.js is a JavaScript library to make Microsoft Internet Explorer behave like a standards-compliant browser. It fixes many HTML and CSS issues and makes transparent PNG work correctly under IE5 and IE6.
Their IE9.js claims to:
Upgrade MSIE5.5-8 to be compatible with modern browsers.
I have not tested this myself with Acid or other standards tests, but this might be promising.
A: I'm all for pushing users to upgrade to the newest available version of IE (since problems improve with every release), however I'm also against telling people to upgrade or change their browsers.
I still support IE6 on my website. I even support as far back as IE5.5 pretty well I think.
Generally it is a good practice to never force your users to upgrade their system just to view your website. Unless, of course, you're developing an internal application, then I'd say everyone should upgrade to the newest available version.
A: Dean Edwards' ie7.js makes IE6 behave (mostly) like a respectable web browser. It requires the client to have Javascript turned on, but that's a reasonable concession to make. I use that script and the script from Save the Developers on sites I create, and it makes supporting IE6 a breeze.
A: It would be nice if we could deny support for terribly non-compliant browsers. The problem is, denying IE support hurts your site, hurts your prospective users, but doesn't hurt IE. That's not exactly what we're going for. I propose a different technique. What if all anti-IE developers put a "Please stop using your crappy browser" splash screen in front of all IE(6) users accessing their web site? They could provide a few good, simple reasons to switch that the user can't ignore, but then allow the user to access the (IE compliant) site. That way they could get the point across without hurting themselves (much), or the user (except a little).
A: It depends on your target audience and if you think you can afford to alienate users. If you are making a geeky web app and you think most users will use firefox, then don't worry about IE6. I would launch with it working in Firefox, IE7, and Safari and look at who goes to your site. If you see the need to make it work in IE6 then start working on it then.
A: Notice that some users in the Enterprise have no choice.
So if you target Enterprise customers, notice they are still on IE6. In general, Enterprise moves slower than consumer.
A: Vista's failure to gain mass acceptance is largely responsible for why we still have to support IE6. Most of the people still using IE6 are the ones who never upgrade their browser or update their OS. If most of them just moved to Vista, IE7 would automatically replace IE6.
A: Depends on your target audience... I mean, some universities have Firefox on them, right? Only (I think) third-world countries have IE6 as the default. (I know, I see them.) I don't know about other countries, though. But I'm pretty sure a large chunk of the population still uses IE6 by default.
If you think it's really necessary (I think so), go ahead. I don't see any problem in it. ('cuz I'm inexperienced in software development and such.. XD)
A: Support IE6 by not blocking it and letting it fend for itself for the most part. Only work around IE6 bugs that break major functionality.
As for JS bugs and horrible DOM support, you still have that in IE7 and IE8. In that case, you might as well use a JS toolkit and get IE6 support for almost free.
Bugs are bugs and they should be fixed (in any browser) instead of being worked around. But, you gotta do what you gotta do to please visitors.
One day, working around IE6 bugs will be asking too much.
A: I am certainly opposed to excluding browsers from a public facing site. There is nothing more irritating than going to a website and discovering they ONLY support IE because some dev somewhere couldn't make things "work".
As many of the other authors above have noted there is a considerable number of users out there who use a company imposed desktop build or install of IE6. Your best bet is always to identify and communicate with your users, not impose your draconian concepts upon them.
Ryan Farley had an entry about this recently which describes what I think is the best first step to transitioning users over to a different browser. It encourages people to upgrade and explains, in one graphic, why things may not render correctly. Many years ago, BinaryBonsai.com was the first blog I encountered with a badge suggesting Firefox, and I downloaded it just so I wouldn't be bothered by the additional graphic.
There really is nothing like nerd peer-pressure.
A: If you're writing an application that's free or open to the public, maybe give reduced support to IE6 in order to have time to build more things for the majority of your users.
If you're writing an application that's not free, base it on your users. Odds are you'll want to give IE6 full support for another year or two.
A: I wouldn't really bother supporting IE6. It is being phased out (and should be updated by anyone who is still using it).
I would still try to support IE7, as I think it still is somewhat popular. You could always have a thing on the site that says "This site performs much better in: Firefox/Chrome/Safari/IE8"
A: This is not a yes or no question. This is a matter of negotiation between you and your client (those who pay you to create the site). The negotiation usually goes like this: your website will cost you $x and support browsers a, b, c. If you want IE6 support it will be $x+$y, etc. It's then your client's call to decide if $y is worth spending to be able to serve those of their customers who insist on using IE6.
If you are your own customer you can cut out the middle bit and make the call yourself ;-)
(same for IE7)
A: Hell yea. At least with IE6. IE7 is not that bad to support. I've been in web development for quite some time now, and the thing I do is display a warning: "You are using an outdated browser. Some parts of this webpage may not work properly. Please upgrade or choose Firefox", because you can't simply ignore these users; you have to give them an option.
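A minimal version of that warning using IE's conditional comments, which only IE parses (the wording and the suggested browser link are yours to choose):
<!--[if lt IE 7]>
<div class="browser-warning">
    You are using an outdated browser. Some parts of this page may not work properly.
    Please upgrade, or try an alternative such as Firefox.
</div>
<![endif]-->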
A: I have to agree with those that say "it depends".
The most important part is what the audience of the site uses; if they use IE6, develop your page to support IE6.
In my experience you can expect audiences like this:
private: IE7+ or FF
private & technical || gamer: IE8, FF, Chrome
Many still use XP with IE6/7 unless they are in any way technical people or use the PC very often.
art & design: Safari, FF (often Mac-Based)
Macs are still widely used in graphical environments and nearly every artist and designer owns one, even if it is just to fit in with the crowd.
elderly: IE6, rarely IE7
Windows 95/98 has no support for IE7. You should also avoid javascript for this audience as they usually have crappy PCs.
corporate: IE6/IE7
Many companies update very slowly since it involves a lot of paperwork, organization and a budget. Depending on who in the corporation might use your page and what type of company it is you can expect worse (Secretaries, lowly wage-slaves, etc.) or better (CEO, CTO, ...).
Note that this is not based on any statistic, only my personal experience.
Dropping IE6 and IE7 would be nice but especially commercial sites just cannot afford to lose users. You as creator of a web-site have no power at all to make the users switch browsers. In some cases (like corporate environments) not even the users have the power, they can just use what the admins allow. In other cases they have no idea how to install anything (elderly, kids, ...).
A: I'm on IE6 until next year in a large global corporate environment.
I'm looking to spend a few million on some new whizzy software. Must support IE6...
| {
"language": "en",
"url": "https://stackoverflow.com/questions/13786",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "15"
} |
Q: Python Regular Expressions to implement string unescaping I am trying to implement string unescaping with Python regex and backreferences, and it doesn't seem to want to work very well. I'm sure it's something I'm doing wrong but I can't figure out what...
>>> import re
>>> mystring = r"This is \n a test \r"
>>> p = re.compile( "\\\\(\\S)" )
>>> p.sub( "\\1", mystring )
'This is n a test r'
>>> p.sub( "\\\\\\1", mystring )
'This is \\n a test \\r'
>>> p.sub( "\\\\1", mystring )
'This is \\1 a test \\1'
I'd like to replace \\[char] with \[char], but backreferences in Python don't appear to follow the same rules they do in every other implementation I've ever used. Could someone shed some light?
A: Well, I think you might have missed the r or miscounted the backslashes...
"\\n" == r"\n"
>>> import re
>>> mystring = r"This is \\n a test \\r"
>>> p = re.compile( r"[\\][\\](.)" )
>>> print p.sub( r"\\\1", mystring )
This is \n a test \r
>>>
Which, if I understood is what was requested.
I suspect the more common request is this:
>>> d = {'n':'\n', 'r':'\r', 'f':'\f'}
>>> p = re.compile(r"[\\]([nrfv])")
>>> print p.sub(lambda mo: d[mo.group(1)], mystring)
This is \
a test \
>>>
The interested student should also read Ken Thompson's "Reflections on Trusting Trust", wherein our hero uses a similar example to explain the perils of trusting compilers you haven't bootstrapped from machine code yourself.
A: Isn't that what Anders' second example does?
In 2.5 there's also a string-escape encoding you can apply:
>>> mystring = r"This is \n a test \r"
>>> mystring.decode('string-escape')
'This is \n a test \r'
>>> print mystring.decode('string-escape')
This is
a test
>>>
A: The idea is that I'll read in an escaped string, and unescape it (a feature notably lacking from Python, which you shouldn't need to resort to regular expressions for in the first place). Unfortunately I'm not being tricked by the backslashes...
Another illustrative example:
>>> mystring = r"This is \n ridiculous"
>>> print mystring
This is \n ridiculous
>>> p = re.compile( r"\\(\S)" )
>>> print p.sub( 'bloody', mystring )
This is bloody ridiculous
>>> print p.sub( r'\1', mystring )
This is n ridiculous
>>> print p.sub( r'\\1', mystring )
This is \1 ridiculous
>>> print p.sub( r'\\\1', mystring )
This is \n ridiculous
What I'd like it to print is
This is
ridiculous
A: You are being tricked by Python's representation of the result string. The Python expression:
'This is \\n a test \\r'
represents the string
This is \n a test \r
which is I think what you wanted. Try adding 'print' in front of each of your p.sub() calls to print the actual string returned instead of a Python representation of the string.
>>> mystring = r"This is \n a test \r"
>>> mystring
'This is \\n a test \\r'
>>> print mystring
This is \n a test \r
A: Mark: his second example requires every escaped character to be placed into a dictionary up front, which generates a KeyError if the escape sequence happens not to be in the dictionary. It will die on anything but the three characters provided (give \v a try), and enumerating every possible escape sequence every time you want to unescape a string (or keeping a global dictionary) is a really bad solution. Analogous to PHP, that's using preg_replace_callback() with a lambda instead of preg_replace(), which is utterly unnecessary in this situation.
I'm sorry if I'm coming off as a dick about it, I'm just utterly frustrated with Python. This is supported by every other regular expression engine I've ever used, and I can't understand why this wouldn't work.
Thank you for responding; the string.decode('string-escape') function is precisely what I was looking for initially. If someone has a general solution to the regex backreference problem, feel free to post it and I'll accept that as an answer as well.
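For the record, a general regex version of the dictionary approach only needs a fallback for unmapped escapes (a sketch, in Python 2 syntax to match the thread):
import re

escape_map = {'n': '\n', 'r': '\r', 't': '\t', 'f': '\f', 'v': '\v', '\\': '\\'}

def unescape(s):
    # Unknown escapes pass through unchanged instead of raising KeyError.
    return re.sub(r'\\(.)', lambda m: escape_map.get(m.group(1), '\\' + m.group(1)), s)

print unescape(r"This is \n a test \r")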
| {
"language": "en",
"url": "https://stackoverflow.com/questions/13791",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "17"
} |
Q: Any Windows APIs to get file handles besides createfile and openfile? I am trying to snoop on a log file that an application is writing to.
I have successfully hooked createfile with the detours library from MSR, but createfile never seems to be called with the file I am interested in snooping on. I have also tried hooking openfile with the same results.
I am not an experienced Windows/C++ programmer, so my initial two thoughts were either that the application calls createfile before I hook the apis, or that there is some other API for creating files/obtaining handles for them.
A: You can use Sysinternals' FileMon.
It is an excellent monitor that can tell you exactly which file-related system calls are being made and what the parameters are.
I think that this approach is much easier than hooking API calls and much less intrusive.
A: Here's a link which might be of use:
Guerilla-Style File Monitoring with C# and C++
It is possible to create a file without touching CreateFile API but can I ask what DLL injection method you're using? If you're using something like Windows Hooks your DLL won't be installed until sometime after the target application initializes and you'll miss early calls to CreateFile. Whereas if you're using something like DetourCreateProcessWithDll your CreateFile hook can be installed prior to any of the application startup code running.
In my experience 99.9% of created/opened files result in a call to CreateFile, including files opened through C and C++ libs, third-party libs, etc. Maybe there are some undocumented DDK functions which don't route through CreateFile, but for a typical log file, I doubt it.
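To illustrate the early-hook point, here is a minimal sketch of a Detours hook DLL for CreateFileW; injected with DetourCreateProcessWithDll, the hook is in place before the target runs any startup code. This is a sketch only (error handling, detach logic and the wide/ANSI variants are omitted), assuming the Detours headers and library are available:
#include <windows.h>
#include <detours.h>

static HANDLE (WINAPI *TrueCreateFileW)(LPCWSTR, DWORD, DWORD,
    LPSECURITY_ATTRIBUTES, DWORD, DWORD, HANDLE) = CreateFileW;

HANDLE WINAPI HookedCreateFileW(LPCWSTR name, DWORD access, DWORD share,
    LPSECURITY_ATTRIBUTES sa, DWORD disposition, DWORD flags, HANDLE tmpl)
{
    OutputDebugStringW(name); // log every file the process opens
    return TrueCreateFileW(name, access, share, sa, disposition, flags, tmpl);
}

BOOL WINAPI DllMain(HINSTANCE hinst, DWORD reason, LPVOID reserved)
{
    if (reason == DLL_PROCESS_ATTACH) {
        DetourTransactionBegin();
        DetourUpdateThread(GetCurrentThread());
        DetourAttach(&(PVOID&)TrueCreateFileW, HookedCreateFileW);
        DetourTransactionCommit();
    }
    return TRUE;
}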
A: Process Monitor from sysinternals could help too.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/13806",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: How do I create an automated build file for VB.Net in NAnt? I have taken over the development of a web application that is targeted at the .net 1.0 framework and is written in C# and Visual Basic.
I decided that the first thing we need to do is refine the build process, I wrote build files for the C# projects, but am having tons of problems creating a build file for Visual Basic.
Admittedly, I do not personally know VB, but it seems like I have to hardcode all the imports and references in my build file to get anything to work...certainly not the best way to be doing things...
For example: if I do not include the System namespace in the build file, I get several errors about common unknown types, e.g. Guid.
Does NAnt typically require this for VB code, or does the VB code need an NAnt-friendly refactoring?
Does anybody have VB NAnt tips?
A: I have had a similar experience with NAnt and the vbc compiler for VB.NET projects that are developed with Visual Studio. My solution has been to avoid importing namespaces at the project level in Visual Studio (which occurs by default), and use explicit Imports statements at the class/file level. C# projects work this way by default (no project level namespace imports), and I like the extra information provided by explicit namespace directives when looking at a file.
Interesting that VB.NET and C# VS projects are so different in that respect.
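For comparison, if you do keep project-level imports, NAnt's native <vbc> task can declare them in the build file itself. A sketch only, assuming NAnt 0.85 or later where the task accepts nested <imports> and <references> elements (names and paths are illustrative):
<target name="build-vb">
    <vbc target="library" output="build/MyVbProject.dll">
        <sources>
            <include name="src/**/*.vb" />
        </sources>
        <imports>
            <import namespace="System" />
            <import namespace="Microsoft.VisualBasic" />
        </imports>
        <references>
            <include name="System.dll" />
            <include name="System.Data.dll" />
        </references>
    </vbc>
</target>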
A: I'm not sure if you're talking about VB or VB.Net.
Either way, have a look at Nant Contrib. Maybe they have a solution.
A: Are you calling msbuild to build? Or are you calling the VS.NET IDE exe to build? We've had no problems with our C#/VB.NET mix using CC.NET and NAnt, and we do not have to specify referenced assemblies inside the build files.
What we do is use the IDE exe to build solutions that contain the projects we want to build.
A: I would recommend that you take the language specific compilers out of the equation for this one. And you can still use NAnt to do this:
First start off with a target that uses MSBuild because that will compile your project regardless of language used and take care of the dependencies for you. That means you don't need to hard code them in.
Example:
<target name="WinBuild">
<exec program="msbuild.exe"
basedir="${DotNetPath}"
workingdir="${SolutionPath}"
commandline="MySolution.sln
/nologo /verbosity:normal /noconsolelogger
/p:Configuration=Debug /target:Rebuild" />
</target>
I think once you've got that nailed - you can spend plenty of time trying to get NAnt to compile natively, but in my opinion, this is what I would use for this project since it seems to be a one-off.
Hope that helps,
Cheers,
Rob G
| {
"language": "en",
"url": "https://stackoverflow.com/questions/13832",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: VisualSVN undelete with TortoiseSVN Using TortoiseSVN against VisualSVN, I deleted a source file that I should not have deleted. Now this isn't a train smash because I can get the file back from the daily backup. However, I would like to undelete it from SVN (VisualSVN) so that I can get the history back, but I can't work out how to do that.
Anybody know how to undelete a file from VisualSVN either using the VisualSVN interface or the latest version of TortoiseSVN?
A: What you have to do is the following:
*
*Right click on the folder where you think it is.
*Choose Show Log under TortoiseSVN
*Find the checkin that the file was deleted in
*Go down the list and find the file
*Select Revert changes for this version to undelete.
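The command-line equivalent resurrects the file, history included, with a revision-pegged copy (the URL and revision numbers here are illustrative):
svn copy -m "Resurrect Foo.cs (deleted in r1235)" \
    "http://server/svn/repo/trunk/Foo.cs@1234" \
    "http://server/svn/repo/trunk/Foo.cs"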
A: Did you only delete it locally, or did you commit the deletion? In the latter case, try checking out the previous revision.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/13848",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
} |
Q: IE7: header above menu I am having trouble with IE7. I have a header, which is an IMG. Under it I have a div that represents a menu; they have to be attached to each other without space in between. Both are 1000px wide. In Opera and FireFox the header and the menu are neatly attached to each other. However, in IE7, there is a small space between the menu DIV and the IMG. I have tried explicitly defining padding and margin on the IMG, however it does not work. I have had this problem before, so it seems to be an IE7 quirk.
My HTML Code:
<div id="middle">
<img id="ctl00_headerHolder_headerImage" src="pictures/headers/header_home.jpg" style="border-width:0px;" />
<div id="ctl00_menuPanel" class="menu">
<a id="ctl00_home" href="Default.aspx" style="color:#FFCC33;">Home</a> |
<a id="ctl00_leden" href="Leden.aspx">Leden</a> |
<a id="ctl00_agenda" href="Agenda.aspx">Agenda</a> |
<a id="ctl00_fotos" href="Fotos.aspx">Foto's</a> |
<a id="ctl00_geschiedenis" href="Geschiedenis.aspx">Geschiedenis</a> |
<a id="ctl00_gastenboek" href="Gastenboek.aspx">Gastenboek</a>
</div>
</div>
A: Try the IE Developer Toolbar, which will let you inspect what is going on with the elements and give you outlines of the areas covered. It might give you a better understanding of the problem.
A: The solution:
img {
padding: 0px;
margin: 0px;
display: block;
}
display: block
A: I run into this a lot. Rather than hunting down the specific behavior, try sanity checking by explicitly setting padding and margin properties for img/div/etc. selectors to 0, setting border-style: none, border-width: 0px, border="0", etc.
IE Dev Toolbar is a must-have, but it's unlikely to help you figure out single-pixel issues.
A: Instead of resorting to display block, note that IE7 does some seriously odd things with whitespace; try removing the whitespace between the image and the div, and see what happens.
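In markup terms (illustrative, using the question's own IDs), that means leaving no text node between the two elements:
<div id="middle"><img id="ctl00_headerHolder_headerImage"
    src="pictures/headers/header_home.jpg" /><div id="ctl00_menuPanel" class="menu">
...
</div></div>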
A: CSS Resets (like the YUI Reset CSS) are great for this kind of thing. They reset paddings, margins, and other display properties on a lot of HTML elements to minimize the display differences.
A:
The solution...display: block
That question couldn't be answered properly without knowing the rendering mode that the browser was in; you need to tell people what doctype you have if you have CSS rendering issues. The image behaviour you refer to is different in quirks mode as opposed to standards mode. A minimal test case must include a full HTML document and the CSS to reproduce the problem. Please don't ask people for help without giving them the information they need to answer easily without wasting their time...
A: The real solution:
#middle { font-size: 0; line-height: 0; }
| {
"language": "en",
"url": "https://stackoverflow.com/questions/13851",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Can you explain closures (as they relate to Python)? I've been reading a lot about closures and I think I understand them, but without clouding the picture for myself and others, I am hoping someone can explain closures as succinctly and clearly as possible. I'm looking for a simple explanation that might help me understand where and why I would want to use them.
A: I've never heard of transactions being used in the same context as explaining what a closure is and there really aren't any transaction semantics here.
It's called a closure because it "closes over" the outside variable (constant)--i.e., it's not just a function but an enclosure of the environment where the function was created.
In the following example, calling the closure g after changing x will also change the value of x within g, since g closes over x:
x = 0
def f():
def g():
return x * 2
return g
closure = f()
print(closure()) # 0
x = 2
print(closure()) # 4
A: # A Closure is a function object that remembers values in enclosing scopes even if they are not present in memory.
# Defining a closure
# This is an outer function.
def outer_function(message):
# This is an inner nested function.
def inner_function():
print(message)
return inner_function
# Now let's call the outer function; the return value is bound to the name 'temp'
temp = outer_function("Hello")
# On calling temp, 'message' will be still be remembered although we had finished executing outer_function()
temp()
# This technique, by which some data ('message') is remembered from enclosing scopes
# even when those scopes are no longer present in memory, is called a closure
# Output: Hello
Criteria to be met by closures:
*
*We must have a nested function.
*The nested function must refer to a value defined in the enclosing function.
*The enclosing function must return the nested function.
# Example 2
def make_multiplier_of(n): # Outer function
def multiplier(x): # Inner nested function
return x * n
return multiplier
# Multiplier of 3
times3 = make_multiplier_of(3)
# Multiplier of 5
times5 = make_multiplier_of(5)
print(times5(3)) # 15
print(times3(2)) # 6
A: It's simple: A function that references variables from a containing scope, potentially after flow-of-control has left that scope. That last bit is very useful:
>>> def makeConstantAdder(x):
... constant = x
... def adder(y):
... return y + constant
... return adder
...
>>> f = makeConstantAdder(12)
>>> f(3)
15
>>> g = makeConstantAdder(4)
>>> g(3)
7
Note that 12 and 4 have "disappeared" inside f and g, respectively; this feature is what makes f and g proper closures.
A: Here's a typical use case for closures - callbacks for GUI elements (this would be an alternative to subclassing the button class). For example, you can construct a function that will be called in response to a button press, and "close" over the relevant variables in the parent scope that are necessary for processing the click. This way you can wire up pretty complicated interfaces from the same initialization function, building all the dependencies into the closure.
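A sketch of that pattern with Tkinter (purely illustrative; the module is spelled tkinter on Python 3):
import Tkinter as tk  # 'import tkinter as tk' on Python 3

def make_click_handler(label, name):
    # The handler closes over 'label' and 'name'; no globals,
    # no Button subclass needed.
    def on_click():
        label.config(text="Hello, %s!" % name)
    return on_click

root = tk.Tk()
label = tk.Label(root, text="(click a button)")
label.pack()
for name in ("Alice", "Bob"):
    tk.Button(root, text=name, command=make_click_handler(label, name)).pack()
root.mainloop()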
A: To be honest, I understand closures perfectly well except I've never been clear about what exactly is the thing which is the "closure" and what's so "closure" about it. I recommend you give up looking for any logic behind the choice of term.
Anyway, here's my explanation:
def foo():
x = 3
def bar():
print x
x = 5
return bar
bar = foo()
bar() # print 5
A key idea here is that the function object returned from foo retains a hook to the local var 'x' even though 'x' has gone out of scope and should be defunct. This hook is to the var itself, not just the value that var had at the time, so when bar is called, it prints 5, not 3.
Also be clear that Python 2.x has limited closure support: there's no way I can modify 'x' inside 'bar' because writing 'x = bla' would declare a local 'x' in bar, not assign to 'x' of foo. This is a side-effect of Python's assignment=declaration. To get around this, Python 3.0 introduces the nonlocal keyword:
def foo():
x = 3
def bar():
        print(x)
def ack():
nonlocal x
x = 7
x = 5
return (bar, ack)
bar, ack = foo()
ack() # modify x of the call to foo
bar() # print 7
A: In Python, a closure is an instance of a function that has variables bound to it immutably.
In fact, the data model explains this in its description of functions' __closure__ attribute:
None or a tuple of cells that contain bindings for the function’s free variables. Read-only
To demonstrate this:
def enclosure(foo):
def closure(bar):
print(foo, bar)
return closure
closure_instance = enclosure('foo')
Clearly, we know that we now have a function pointed at from the variable name closure_instance. Ostensibly, if we call it with an object, bar, it should print the string, 'foo' and whatever the string representation of bar is.
In fact, the string 'foo' is bound to the instance of the function, and we can directly read it here, by accessing the cell_contents attribute of the first (and only) cell in the tuple of the __closure__ attribute:
>>> closure_instance.__closure__[0].cell_contents
'foo'
As an aside, cell objects are described in the C API documentation:
"Cell" objects are used to implement variables referenced by multiple
scopes
And we can demonstrate our closure's usage, noting that 'foo' is stuck in the function and doesn't change:
>>> closure_instance('bar')
foo bar
>>> closure_instance('baz')
foo baz
>>> closure_instance('quux')
foo quux
And nothing can change it:
>>> closure_instance.__closure__ = None
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: readonly attribute
Partial Functions
The example given uses the closure as a partial function, but if this is our only goal, the same goal can be accomplished with functools.partial
>>> from __future__ import print_function # use this if you're in Python 2.
>>> import functools
>>> partial_function = functools.partial(print, 'foo')
>>> partial_function('bar')
foo bar
>>> partial_function('baz')
foo baz
>>> partial_function('quux')
foo quux
There are more complicated closures as well that would not fit the partial function example, and I'll demonstrate them further as time allows.
A: I like this rough, succinct definition:
A function that can refer to environments that are no longer active.
I'd add
A closure allows you to bind variables into a function without passing them as parameters.
Decorators which accept parameters are a common use for closures. Closures are a common implementation mechanism for that sort of "function factory". I frequently choose to use closures in the Strategy Pattern when the strategy is modified by data at run-time.
In a language that allows anonymous block definition -- e.g., Ruby, C# -- closures can be used to implement (what amount to) novel new control structures. The lack of anonymous blocks is among the limitations of closures in Python.
A: Closure on closures
Objects are data with methods
attached, closures are functions with
data attached.
def make_counter():
i = 0
def counter(): # counter() is a closure
nonlocal i
i += 1
return i
return counter
c1 = make_counter()
c2 = make_counter()
print (c1(), c1(), c2(), c2())
# -> 1 2 1 2
A: Here is an example of Python3 closures
def closure(x):
def counter():
nonlocal x
x += 1
return x
    return counter
counter1 = closure(100)
counter2 = closure(200)
print("i from closure 1 " + str(counter1()))
print("i from closure 1 " + str(counter1()))
print("i from closure 2 " + str(counter2()))
print("i from closure 1 " + str(counter1()))
print("i from closure 1 " + str(counter1()))
print("i from closure 1 " + str(counter1()))
print("i from closure 2 " + str(counter2()))
# result
i from closure 1 101
i from closure 1 102
i from closure 2 201
i from closure 1 103
i from closure 1 104
i from closure 1 105
i from closure 2 202
A: we all have used Decorators in python. They are nice examples to show what are closure functions in python.
class Test():
def decorator(func):
def wrapper(*args):
b = args[1] + 5
return func(b)
return wrapper
@decorator
def foo(val):
print val + 2
obj = Test()
obj.foo(5)
Here the final value is 12.
Here, the wrapper function is able to access the func object because wrapper is a lexical closure; it can access the names in its enclosing scope.
That is why it is able to access the func object.
A: I would like to share my example and an explanation about closures. I made a python example, and two figures to demonstrate stack states.
def maker(a, b, n):
margin_top = 2
padding = 4
def message(msg):
        print('\n' * margin_top, a * n,
              ' ' * padding, msg, ' ' * padding, b * n)
return message
f = maker('*', '#', 5)
g = maker('', '♥', 3)
…
f('hello')
g('good bye!')
The output of this code would be as follows:
***** hello #####
good bye! ♥♥♥
Here are two figures to show stacks and the closure attached to the function object.
[Figure: the stack state when the function is returned from maker]
[Figure: the stack state when the function is called later]
When the function is called through a parameter or a nonlocal variable, the code needs local variable bindings such as margin_top and padding, as well as a, b, and n. In order for the function code to work, the stack frame of the maker function, which went away long ago, must still be accessible; it is backed up in the closure, which we can find attached to the message function object.
A: For me, "closures" are functions which are capable to remember the environment they were created. This functionality, allows you to use variables or methods within the closure wich, in other way,you wouldn't be able to use either because they don't exist anymore or they are out of reach due to scope. Let's look at this code in ruby:
def makefunction (x)
def multiply (a,b)
puts a*b
end
return lambda {|n| multiply(n,x)} # => returning a closure
end
func = makefunction(2) # => we capture the closure
func.call(6) # => Result equal "12"
It works even when both the multiply method and the x variable no longer exist, all because of the closure's capability to remember.
A: The best explanation I ever saw of a closure was to explain the mechanism. It went something like this:
Imagine your program stack as a degenerate tree where each node has only one child and the single leaf node is the context of your currently executing procedure.
Now relax the constraint that each node can have only one child.
If you do this, you can have a construct ('yield') that can return from a procedure without discarding the local context (i.e. it doesn't pop it off the stack when you return). The next time the procedure is invoked, the invocation picks up the old stack (tree) frame and continues executing where it left off.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/13857",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "98"
} |
Q: Parsing search queries in Java I have been trying to find an easy way to parse a search query and convert it to an SQL query for my DB.
I have found two solutions:
*
*Lucene: Powerful Java-based search engine, contains a query parser, but it isn't very configurable and I couldn't find a way to easily hack/adapt it to create SQL queries.
*ANTLR: A veteran text lexer-parser. Used for building anything from compilers to skyscrapers. ANTLR is highly configurable, but everyone touching the code from now on will have to learn a new language...
Any other ideas?
A: SQL-ORM is a very lightweight Java library which includes the ability to construct a (dynamic) SQL query in Java as a graph of objects
IMHO, this is a far better technique for building dynamic SQL queries than the usual string concatenation method.
Disclaimer: I have made some very minor contributions to this project
A: What exactly do you have in mind? I've used Lucene for text-searching, but where it excels is building an index and searching that instead of hitting the database at all.
I recently set up a system where I index a table in Lucene by concatenating all the columns (separated by spaces) into one field, and popping that into Lucene, and then also adding the primary key in a separate column. Lucene does all the searching and returns a list of primary keys, which I use to pull up a populated set of results and display to the user.
Converting a search query into a SQL statement would seem to me to be a little messy.
Also, here's a great beginning tutorial explaining the basic structure of Lucene.
A: You could try using something like javacc (Java Compiler Compiler) to implement a parser or else just manually parse the string by brute force. Every time you come across an expression you represent it as an object. Then you just have to translate your expression tree into a where clause.
For example: "Harry Potter" becomes
new AndExp(new FieldContainsExp("NAME", "Harry"), new FieldContainsExp("NAME", "Potter")
And "publisher:Nature* pages > 100" becomes
new AndExp(new FieldContainsExp("PUBLISHER", "Nature"), FieldGreaterThan("PAGES", 100))
Then, once you have these, it's pretty easy to turn them into SQL:
FieldContainsExp.toSQL(StringBuffer sql, Collection<Object> args) {
    sql.append(fieldName);
    sql.append(" LIKE ?");        // keep the placeholder outside any quotes
    args.add("%" + value + "%");  // the wildcards travel with the bound value
}
AndExp.toSQL(StringBuffer sql, Collection<Object> args) {
exp1.toSQL(sql, args);
sql.append(" AND ");
exp2.toSQL(sql, args);
}
You can imagine the rest. You can nest And expressions as deeply as you want.
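Tying it together, a sketch of how such a tree could turn into a PreparedStatement (this assumes a common Exp interface declaring toSQL, and an open JDBC connection; imports omitted):
Exp where = new AndExp(
    new FieldContainsExp("PUBLISHER", "Nature"),
    new FieldGreaterThan("PAGES", 100));

StringBuffer sql = new StringBuffer("SELECT * FROM books WHERE ");
List<Object> args = new ArrayList<Object>();
where.toSQL(sql, args);

PreparedStatement stmt = connection.prepareStatement(sql.toString());
int i = 1;
for (Object arg : args) {
    stmt.setObject(i++, arg); // bind in the order toSQL appended them
}
ResultSet rs = stmt.executeQuery();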
A: Depends a lot on the kind of queries you've got to parse and somewhat on the structure of the data in your database. I'm going to assume that you're not trying to do full text search in a DB (i.e. a search engine across your entire DB) because, as most Information Retrieval people will tell you, the performance for that is terrible. Inverted indexes are most certainly the best way of doing that.
Tell us a bit more about the actual problem: what are the users going to input, what are they expecting as output, and what's the data model like. Design a search solution without those pieces of information, and you'll get a far from optimal result.
A: You are correct to assume that I am not looking for full text search.
The information looks something like this schema for book info:
Name: string, publisher:string, num_pages int, publish_date:date...
The search queries are of the sort:
*
*Harry Potter (search any books whos name has both Harry and Potter)
*publisher:Nature* pages>100 (books from a publisher starting with Nature with more than 100 pages)
*("New years" or Christmas) and present (you get the picture...)
*physics and publish>1/1/2008 (new physics books)
A: Try to combine an ORM tool (like openJPA) and Compass (framework for OSEM).
It automatically indexes the updates done through the ORM tools and gives you the Lucene power for search. After that you can, of course, retrieve the object from the DB.
It out-performs any SQL-based searching solution.
A: String [] array;
int checkWord(String searchWord)
{
for(int i = 0; i < array.length; i++)
{
if(searchWord.equals(array[i]))
return i;
}
return 0;
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/13863",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
} |
Q: Getting started with a Picasa Plugin Does anyone here know any resources on how to get started writing a plugin for Google's Picasa? I love it for photo management, but I have some ideas for how it could be better.
*
*Riya-esque facial search: given a large enough corpus of faces and pictures (people tend to be repeated often in individuals' albums (family, friends)), I would think some semi-workable version of this could be done. And with 13+ gigs/7 years of photos, it would be very nice for search.
*Upload to Facebook EDIT: Someone already made a very nice version
*Upload to any non-Google property, actually.
I know there are certain APIs and a Picasa2Flickr plugin out there, and I was wondering if anyone had seen any resources on this topic or had any experience.
A: There is an open-source project which created an "Upload to Flickr" plugin. Maybe you could use it as a starting point...
A: I thought about facial recognition many years ago but my search only found a web API - no plugin api. My idea was to use an external facial recognition program to slowly index my entire catalogue of pictures and reliably tag them according to who was in them. It wouldn't need to be 100% accurate, but anything over 85% would be acceptable.
A: I would start with the Picasa API:
Picasa API
| {
"language": "en",
"url": "https://stackoverflow.com/questions/13881",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: How do I make a custom .net client profile installer? For .net 3.5 SP1, Microsoft have the new client profile which installs only a subset of .net 3.5 SP1 on to Windows XP user's machines.
I'm aware of how to make my assemblies client-profile ready. And I've read the articles on how to implement an installer for ClickOnce or MSI.
But I've been using Inno Setup for my project so far and I'd like to continue to use it (as an Express user, I can't easily make MSIs), so I need to know how to use the client-profile installer in a custom environment.
There is an article on a Deployment.xml schema, but no indication of how to write one, package it or anything else. Can someone explain this process? Finding the articles I linked to alone was a painful search experience.
A: Microsoft has now shipped the Client Profile Configuration Designer (Beta).
This designer lets you edit the XML files with some limitations, this isn't a 'Google beta' by any means.
Information and download
A: Can you clarify: Are you trying to write an installer for your app, which depends on the Client-Profile, or are you trying to write a custom installer for the client-profile?
I haven't used it personally, but if it's anything like the dotnetfx 1 and 2 MSIs, you basically have to invoke its executable yourself from your own .exe file, or from an MSI BEFORE the InstallExecuteSequence starts up - you can't "embed" those in your own app; MS go out of their way to tell you not to do that due to the suckage of MSI.
A: The Client Profile works only on a clean XP. If your user has .NET 1 or 2 installed, the Client Profile won't install...
There is an offline version (integrating the full .NET 3.5 install in case the Client Profile won't install) of roughly 200 to 300 MB.
The online version will download the required files.
You can call a silent install from the first steps of your install.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/13893",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: In Cocoa do I need to remove an Object from receiving KVO notifications when deallocating it? When I've registered an object foo to receive KVO notifications from another object bar (using addObserver:...), if I then deallocate foo do I need to send a removeObserver:forKeyPath: message to bar in -dealloc?
A: A bit of extra info that I've gained by painful experience: although NSNotificationCenter uses zeroing weak references when running under garbage collection, KVO does not. Thus, you can get away with not removing an NSNotificationCenter observer when using GC (when using retain/release, you still need to remove your observer), but you must still remove your KVO observers, as Chris describes.
A: You need to use -removeObserver:forKeyPath: to remove the observer before -[NSObject dealloc] runs, so yes, doing it in the -dealloc method of your class would work.
Better than that though would be to have a deterministic point where whatever owns the object that's doing the observing could tell it it's done and will (eventually) be deallocated. That way, you can stop observing immediately when the thing doing the observing is no longer needed, regardless of when it's actually deallocated.
This is important to keep in mind because the lifetime of objects in Cocoa isn't as deterministic as some people seem to think it is. The various Mac OS X frameworks themselves will send your objects -retain and -autorelease, extending their lifetime beyond what you might otherwise think it would be.
Furthermore, when you make the transition to Objective-C garbage collection, you'll find that -finalize will run at very different times — and in very different contexts — than -dealloc did. For one thing, finalization takes place on a different thread, so you really can't safely send -removeObserver:forKeyPath: to another object in a -finalize method.
Stick to memory (and other scarce resource) management in -dealloc and -finalize, and use a separate -invalidate method to have an owner tell an object you're done with it at a deterministic point; do things like removing KVO observations there. The intent of your code will be clearer and you will have fewer subtle bugs to take care of.
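In code, the -invalidate pattern might look something like this minimal sketch (retain/release era; observedObject and the key path are placeholder names, not from the question):
- (void)invalidate
{
    if (observedObject != nil)
    {
        // deterministic teardown: stop observing as soon as the owner is done with us
        [observedObject removeObserver:self forKeyPath:@"someKeyPath"];
        [observedObject release];
        observedObject = nil;
    }
}
- (void)dealloc
{
    [self invalidate]; // safety net in case the owner never called -invalidate
    [super dealloc];
}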
A: Definitely agree with Chris on the "Stick to memory (and other scarce resource) management in -dealloc and -finalize..." comment. A lot of times I'll see people try to invalidate NSTimer objects in their dealloc functions. The problem is, NSTimer retains its targets. So, if the target of that NSTimer is self, dealloc will never get called, resulting in some potentially nasty memory leaks.
Invalidate in -invalidate and do other memory cleanup in your dealloc and finalize.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/13927",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "23"
} |
Q: How do I run (unit) tests in different folders/projects separately in Visual Studio?
I need some advice as to how I can easily separate test runs for unit tests and integration tests in Visual Studio. Often, or always, I structure the solution as presented in the above picture: separate projects for unit tests and integration tests. The unit tests are run very frequently, while the integration tests naturally are run when the context is correctly aligned.
My goal is to somehow be able to configure which tests (or test folders) to run when I use a keyboard shortcut. The tests should preferably be run by a graphical test runner (ReSharper's). So for example
*
*Alt+1 runs the tests in project BLL.Test,
*Alt+2 runs the tests in project DAL.Tests,
*Alt+3 runs them both (i.e. all the tests in the [Tests] folder), and
*Alt+4 runs the tests in folder [Tests.Integration].
TestDriven.net has an option of running just the tests in the selected folder or project by right-clicking it and selecting Run Test(s). Being able to do this, but via a keyboard command and with a graphical test runner, would be awesome.
Currently I use VS2008, ReSharper 4 and nUnit. But advice for a setup in the general is of course also appreciated.
A: I actually found kind of a solution for this on my own by using a keyboard command bound to a macro. The macro was recorded from the menu Tools>Macros>Record TemporaryMacro. While recording I selected my [Tests] folder and ran ReSharper's UnitTest.ContextRun. This resulted in the following macro,
Sub TemporaryMacro()
DTE.Windows.Item(Constants.vsWindowKindSolutionExplorer).Activate
DTE.ActiveWindow.Object.GetItem("TestUnitTest\Tests").Select(vsUISelectionType.vsUISelectionTypeSelect)
DTE.ExecuteCommand("ReSharper.UnitTest_ContextRun")
End Sub
which was then bound to its own keyboard command in Tools>Options>Environment>Keyboard.
However, what would be even more awesome is a more general solution where I can configure exactly which projects/folders/classes to run and when, for example by means of an XML file. This could then easily be checked in to version control and distributed to everyone who works with the project.
A: This is a bit of fiddly solution, but you could configure some external tools for each of group of tests you want to run. I'm not sure if you'll be able to launch the ReSharper test runner this way, but you can run the console version of nunit. Once you have of those tools setup, you can assigned keyboard shortcuts to the commands "Tools.ExternalCommand1", "Tools.ExternalCommand2", etc.
This wont really scale very well, and it's awkward to change - but it will give you keyboard shortcuts for running your tests. It does feel like there should be a much simpler way of doing this.
A: You can use a VS macro to parse the XML file and then call nunit.exe with the /fixture command line argument to specify which classes to run or generate a selection save file and run nunit using that.
A: I have never used this but maybe it could help....
http://www.codeplex.com/VS2008UnitTestGUI
"Project Description
This project is about running all unit test inside multiple .NET Unit tests assembly coded with Visual Studio 2008."
| {
"language": "en",
"url": "https://stackoverflow.com/questions/13938",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: Python Sound ("Bell") I'd like to have a python program alert me when it has completed its task by making a beep noise. Currently, I use import os and then use a command line speech program to say "Process complete". I much rather it be a simple "bell."
I know that there's a function that can be used in Cocoa apps, NSBeep, but I don't think that has much anything to do with this.
I've also tried
print(\a)
but that didn't work.
I'm using a Mac, if you couldn't tell by my Cocoa comment, so that may help.
A: Have you tried:
import sys
sys.stdout.write('\a')
sys.stdout.flush()
That works for me here on Mac OS 10.5
Actually, I think your original attempt works also with a little modification:
print('\a')
(You just need the single quotes around the character sequence).
A: I tried the mixer from the pygame module, and it works fine. First install the module:
$ sudo apt-get install python-pygame
Then in the program, write this:
from pygame import mixer
mixer.init() #you must initialize the mixer
alert=mixer.Sound('bell.wav')
alert.play()
With pygame you have a lot of customization options, which you may additionally experiment with.
A: I had to turn off the "Silence terminal bell" option in my active Terminal Profile in iTerm for print('\a') to work. It seemed to work fine by default in Terminal.
You can also use the Mac module Carbon.Snd to play the system beep:
>>> import Carbon.Snd
>>> Carbon.Snd.SysBeep(1)
>>>
The Carbon modules don't have any documentation, so I had to use help(Carbon.Snd) to see what functions were available. It seems to be a direct interface onto Carbon, so the docs on Apple Developer Connection probably help.
A: Building on Barry Wark's answer...
NSBeep() from AppKit works fine, but also makes the terminal/app icon in the taskbar jump.
A few extra lines with NSSound() avoids that and gives the opportunity to use another sound:
from AppKit import NSSound
#prepare sound:
sound = NSSound.alloc()
sound.initWithContentsOfFile_byReference_('/System/Library/Sounds/Ping.aiff', True)
#rewind and play whenever you need it:
sound.stop() #rewind
sound.play()
Standard sound files can be found via commandline locate /System/Library/Sounds/*.aiff
The file used by NSBeep() seems to be '/System/Library/Sounds/Funk.aiff'
A: If you have PyObjC (the Python - Objective-C bridge) installed or are running on OS X 10.5's system python (which ships with PyObjC), you can do
from AppKit import NSBeep
NSBeep()
to play the system alert.
A: By the way: there is a module for that. ;-)
Just install via pip:
pip3 install mac_alerts
run your sound:
from mac_alerts import alerts
alerts.play_error() # plays an error sound
A: playsound worked for me. Install it using pip:
pip3 install playsound
To play sound
from playsound import playsound
playsound('beep.wav')
References:
Found the examples here
downloaded beep.wav from here
| {
"language": "en",
"url": "https://stackoverflow.com/questions/13941",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "72"
} |
Q: Best method of Textfile Parsing in C#? I want to parse a config file sorta thing, like so:
[KEY:Value]
[SUBKEY:SubValue]
Now I started with a StreamReader, converting lines into character arrays, when I figured there's gotta be a better way. So I ask you, humble reader, to help me.
One restriction is that it has to work in a Linux/Mono environment (1.2.6 to be exact). I don't have the latest 2.0 release (of Mono), so try to restrict language features to C# 2.0 or C# 1.0.
A: I was looking at almost this exact problem the other day: this article on string tokenizing is exactly what you need. You'll want to define your tokens as something like:
@"(?<level>\s) | " +
@"(?<term>[^:\s]) | " +
@"(?<separator>:)"
The article does a pretty good job of explaining it. From there you just start eating up tokens as you see fit.
Protip: For an LL(1) parser (read: easy), tokens cannot share a prefix. If you have abc as a token, you cannot have ace as a token
Note: The article's missing the | characters in its examples, just throw them in.
A:
I considered it, but I'm not going to use XML. I am going to be writing this stuff by hand, and hand editing XML makes my brain hurt. :')
Have you looked at YAML?
You get the benefits of XML without all the pain and suffering. It's used extensively in the ruby community for things like config files, pre-prepared database data, etc
here's an example
customer:
  name: Orion
  age: 26
  addresses:
    - type: Work
      number: 12
      street: Bob Street
    - type: Home
      number: 15
      street: Secret Road
There appears to be a C# library here, which I haven't used personally, but yaml is pretty simple, so "how hard can it be?" :-)
I'd say it's preferable to inventing your own ad-hoc format (and dealing with parser bugs)
A: Using a library is almost always preferable to rolling your own. Here's a quick list of "Oh I'll never need that/I didn't think about that" points which will end up coming back to bite you later down the line:
*
*Escaping characters. What if you want a : in the key or ] in the value?
*Escaping the escape character.
*Unicode
*Mix of tabs and spaces (see the problems with Python's white space sensitive syntax)
*Handling different return character formats
*Handling syntax error reporting
Like others have suggested, YAML looks like your best bet.
A: There is another YAML library for .NET which is under development. Right now it supports reading YAML streams and has been tested on Windows and Mono. Write support is currently being implemented.
A: It looks to me like you would be better off using an XML-based config file, as there are already .NET classes which can read and store the information for you relatively easily. Is there a reason that this is not possible?
@Bernard: It is true that hand editing XML is tedious, but the structure that you are presenting already looks very similar to XML.
Then yes, has a good method there.
A: You can also use a stack and a push/pop algorithm. This one matches opening/closing tags.
public string check()
{
    ArrayList tags = getTags(); // assumed helper returning every tag found in the input
    Stack stack = new Stack(tags.Count);
    foreach (string tag in tags)
    {
        if (!tag.Contains("/")) // opening tag, e.g. "<tag>"
        {
            stack.Push(tag);
        }
        else // closing tag, e.g. "</tag>"
        {
            if (stack.Count > 0)
            {
                string startTag = (string)stack.Pop();
                startTag = startTag.Substring(1, startTag.Length - 1);
                string endTag = tag.Substring(2, tag.Length - 2);
                if (!startTag.Equals(endTag))
                {
                    return "Error: no matching end tag";
                }
            }
            else
            {
                return "Error: no matching opening tag";
            }
        }
    }
    if (stack.Count > 0)
    {
        return "Error: no matching end tag";
    }
    return "Xml is valid";
}
You can probably adapt this so you can read the contents of your file. Regular expressions are also a good idea.
A: @Gishu
Actually, once I'd accounted for escaped characters, my regex ran slightly slower than my hand-written top-down recursive parser - and that's without the nesting (linking sub-items to their parents) and error reporting the hand-written parser had.
The regex was slightly faster to write (though I do have a bit of experience with hand parsers), but that's without good error reporting. Once you add that, it becomes slightly harder and longer to do.
I also find the hand-written parser easier to understand the intention of. For instance, here is a snippet of the code:
private static Node ParseNode(TextReader reader)
{
    Node node = new Node();
    int indentation = ParseWhitespace(reader);
    Expect(reader, '[');
    node.Key = ParseTerminatedString(reader, ':');
    node.Value = ParseTerminatedString(reader, ']');
    return node; // the original snippet was missing this return
}
A: Regardless of the persisted format, using a Regex would be the fastest way of parsing.
In ruby it'd probably be a few lines of code.
\[KEY:(.*)\]
\[SUBKEY:(.*)\]
These two would get you the Value and SubValue in the first group. Check out MSDN on how to match a regex against a string.
This is something everyone should have in their kitty. Pre-Regex days would seem like the Ice Age.
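To make that concrete in C#, here is a minimal sketch (the named groups are my own invention, not part of any standard):
using System;
using System.Text.RegularExpressions;
class ConfigLineMatcher
{
    static void Main()
    {
        // one capture for the key, one for the value, per bracketed line
        Regex line = new Regex(@"\[(?<key>[^:\]]+):(?<value>[^\]]*)\]");
        Match m = line.Match("[SUBKEY:SubValue]");
        if (m.Success)
        {
            Console.WriteLine(m.Groups["key"].Value);   // SUBKEY
            Console.WriteLine(m.Groups["value"].Value); // SubValue
        }
    }
}
Note that this matches one line at a time and knows nothing about nesting; the indentation-based parent/child structure still has to be tracked separately, which is where the hand-written parsers above earn their keep.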
| {
"language": "en",
"url": "https://stackoverflow.com/questions/13963",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9"
} |
Q: How do I access the Ruby AST from C level code? I understand that the Ruby 1.8 AST is traversed at runtime using a big switch statement, and many things like calling a method in a class or parent module involve the interpreter looking up and down the tree as it goes. Is there a straightforward way of accessing this AST in a Ruby C extension? Does it involve the Ruby extension API, or necessitate hacking the internal data structures directly?
A: A good starting point is probably to read the source of the ParseTree library, which lets you get at and mess with the AST from ruby.
A: Thanks for the tip. You're right - ParseTree seems to be the only code out there with any manipulation of the AST going on, except that it's actually written in RubyInline.
So, it's a strange mixture between Ruby and C code. Very interesting reading, though.
The other reference of course is eval.c from Ruby itself.
It's going to take a fair bit of reading of both, to get my head around it.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/13981",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Genetic Programming in C# I've been looking for some good genetic programming examples for C#. Anyone knows of good online/book resources? Wonder if there is a C# library out there for Evolutionary/Genetic programming?
A: I saw a good high-level discussion of it on channel9 by Mike Swanson at http://channel9.msdn.com/posts/Charles/Algorithms-and-Data-Structures-Mike-Swanson-Genetic-Session-Scheduler/
A: If you're interested in genetic algorithms or heuristic optimization in general you might want to take a look at HeuristicLab. It has been in development for several years; the new version was released 1.5 years ago. It is programmed in C# 4 and has a nice GUI. There are many algorithms already available like Genetic Algorithm, Genetic Programming, Evolution Strategy, Local Search, Tabu Search, Particle Swarm Optimization, Simulated Annealing and more. There are also several problems implemented like a vehicle routing problem, traveling salesman, real function optimization, knapsack, quadratic assignment problem, classification, regression, and many more. There are tutorials also, and we have protocol buffers integrated so you can communicate with external programs for solution evaluation. It is licensed under the GPL. In 2009 the software received the Microsoft innovation award of Microsoft Austria.
We've also written a book on the subject: Genetic Algorithms and Genetic Programming.
A: Do you mean actual genetic programming, as opposed to genetic algorithms in general?
If so, C#/.net isn't the best language for it. LISP, for example, has always been a mainstay of GP.
However, if you must, you're probably going to want to dynamically generate CIL / MSIL. You could do this using System.Reflection.Emit, however I'd recommend Mono.Cecil. It lacks good docs (as if reflection emit has them).. But it offers much better assembly emission and reflection.
Another issue is that it is less than trivial to load code, and later dispose of it, in the .NET framework. At least, you cannot unload assemblies. You can unload appdomains, but the whole business of loading code into a separate appdomain, and calling it externally, can get pretty messy. .NET 3.5's System.Addin stuff should make this easier.
A: I am reading A Field Guide to Genetic Programming right now (free PDF download). It is also available as a paperback. It discusses the use of a library written in Java called TinyGP. You might get some mileage out of that. I have not started doing any actual programming but am hoping to apply some of the concepts in C#.
A: I've forked ECJ to C# .NET 4.0 if you are interested in a full-featured Evolutionary Computation framework. The package includes everything from the original ECJ Java project, including all of the working samples.
I also wrote 500 unit tests to verify many aspects of the conversion. But many more tests are needed. In particular, the distributed computation aspects are not fully tested. That's because I plan on converting from ECJ's simple use of sockets to a more robust strategy using WCF and WF. I'll also be reworking the framework to utilize TPL (Task Parallel Library).
Anyway, you can download the initial conversion here:
http://branecloud.codeplex.com
I am also in the process of converting several other frameworks from Java to .NET that relate to "synthetic intelligence" research (when I can find the time).
Ben
A: The Manning book "Metaprogramming in .NET" dedicates a large section to GP via expression trees.
A: You can try GeneticSharp.
It has all classic GA operations, like selection, crossover, mutation, reinsertion and termination.
It's very extensible, you can define your own chromosomes, fitness function, population generation strategy and all cited operations above too.
It can be used in many kinds of apps, like C# libraries and Unity 3D games; there are samples running it in a GTK# app and a Unity 3D checkers game.
It also works in Win and OSX.
Here is a basic sample how to use the library:
var selection = new EliteSelection();
var crossover = new OrderedCrossover();
var mutation = new ReverseSequenceMutation();
var fitness = new YourFitnessFunction();
var chromosome = new YourChromosome();
var population = new Population (50, 70, chromosome);
var ga = new GeneticAlgorithm(population, fitness, selection, crossover, mutation);
ga.Start();
A: After developing my own Genetic Programming didactic application, I found a complete Genetic Programming Framework called AForge.NET Genetics. It's a part of the Aforge.NET library. It's licensed under LGPL.
A: MSDN had an article last year about genetic programming: Genetic Algorithms: Survival of the Fittest with Windows Forms
A: I would recommend against actually generating assemblies unless you absolutely need to, particularly if you are just getting started with implementing the genetic algorithm.
The genetic algorithm is easiest to implement when the target language is functional and dynamically typed. That is generally why most genetic algorithm research is written in LISP. As a result, if you are going to implement it in C#, you are probably better off defining your own mini "tree language", having the algorithm generate trees, and just interpreting the trees when it comes time to run each iteration of the algorithm.
I did a project like this when I was in college (an implementation of the genetic algorithm in C#), and that was the approach I took.
Doing it that way will give you the advantage of only having 1 representation to work with (the AST representation) that is optimally suited for both execution and the genetic algorithm "reproduction" steps.
Alternatively, if you try to generate assemblies you are probably going to end up adding a large amount of unneeded complexity to the app. Currently, the CLR does not allow an assembly to be unloaded from an App domain unless the entire app domain is destroyed. This would mean that you would need to spin up a separate app domain for each generated program in each iteration of the algorithm to avoid introducing a giant memory leak into your app. In general, the whole thing would just add a bunch of extra irritation.
Interpreted ASTs, on the other hand, are garbage collectible just like any other object, and so you wouldn't need to monkey around with multiple app domains. If, for performance reasons, you want to code-gen the final result you can add support for that later. However, I would recommend that you do that using the DynamicMethod class. It will allow you to convert an AST into a compiled delegate dynamically at runtime. That will enable you to deploy a single DLL while keeping the code generation stuff as simple as possible. Also, DynamicMethod instances are garbage collectible so you could end up employing them as part of the genetic algorithm to speed things up there as well.
A: You might be able to implement genetic programming using LINQ expression trees -- it's more likely to generate something usable than random IL generation.
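As a rough illustration of the compile step only (not the evolutionary part), an expression tree can be assembled node by node and compiled into a delegate - here for the fixed candidate x * 2 + 3:
using System;
using System.Linq.Expressions;
class ExpressionTreeDemo
{
    static void Main()
    {
        // a GP system would generate, mutate and cross over trees like this one
        ParameterExpression x = Expression.Parameter(typeof(double), "x");
        Expression body = Expression.Add(
            Expression.Multiply(x, Expression.Constant(2.0)),
            Expression.Constant(3.0));
        Func<double, double> candidate =
            Expression.Lambda<Func<double, double>>(body, x).Compile();
        Console.WriteLine(candidate(5.0)); // prints 13
    }
}
Compiled expression trees, like DynamicMethod instances, are garbage collectible, so they avoid the assembly-unloading problem described above.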
A: I maintain a port of ECJ in C#. It's great.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/14008",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "60"
} |
Q: Disabling a ListView in C#, but still showing the current selection I have a ListView control, and I'm trying to figure out the easiest/best way to disallow changing the selected row(s), without hiding the selected row(s).
I know there's a HideSelection property, but that only works when the ListView is still enabled (but not focused). I need the selection to be viewable even when the ListView is disabled.
How can I implement this?
A: You could also make the ListView ownerdraw. You then have complete control over how the items look whether they are selected or not or whether the ListView itself is enabled or not. The DrawListViewItemEventArgs provides a way to ask the ListView to draw individual parts of the item so you only have to draw the bits you're interested in. For example, you can draw the background of the item but leave it up to the ListView to draw the text.
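As a starting point, something along these lines might work for a simple (non-Details) view - a minimal sketch that assumes OwnerDraw is set to true and the DrawItem event is wired up:
private void listView1_DrawItem(object sender, DrawListViewItemEventArgs e)
{
    if (e.Item.Selected)
    {
        // paint the highlight ourselves so the selection stays visible
        // even while the ListView is disabled
        e.Graphics.FillRectangle(SystemBrushes.Highlight, e.Bounds);
        e.DrawText();
    }
    else
    {
        e.DrawDefault = true; // unselected items keep the standard look
    }
}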
A: There are two options: change the colors the selected rows are drawn with when the control is disabled, or change all the other rows to simulate being disabled, except for the selected one. The first option is obviously the easiest, and the second option is obviously going to need some extra protections.
I have actually done the first option before and it works quite well. You just have to remember to change the colors back to the defaults in case another row is selected later on in the process.
A: Implement SelectedIndexChanged and do this
private void listViewABC_SelectedIndexChanged(object sender, EventArgs e)
{
listViewABC.SelectedItems.Clear();
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/14029",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
} |
Q: Database triggers In the past I've never been a fan of using triggers on database tables. To me they always represented some "magic" that was going to happen on the database side, far far away from the control of my application code. I also wanted to limit the amount of work the DB had to do, as it's generally a shared resource and I always assumed triggers could get to be expensive in high load scenarios.
That said, I have found a couple of instances where triggers have made sense to use (at least in my opinion they made sense). Recently though, I found myself in a situation where I sometimes might need to "bypass" the trigger. I felt really guilty about having to look for ways to do this, and I still think that a better database design would alleviate the need for this bypassing. Unfortunately this DB is used by multiple applications, some of which are maintained by a very uncooperative development team who would scream about schema changes, so I was stuck.
What's the general consensus out there about triggers? Love em? Hate em? Think they serve a purpose in some scenarios?
Do think that having a need to bypass a trigger means that you're "doing it wrong"?
A: Think of a database as a great big object - after each call to it, it ought to be in a logically consistent state.
Databases expose themselves via tables, and keeping tables and rows consistent can be done with triggers. Another way to keep them consistent is to disallow direct access to the tables, and only allowing it through stored procedures and views.
The downside of triggers is that any action can invoke them; this is also a strength - no-one is going to screw up the integrity of the system through incompetence.
As a counterpoint, allowing access to a database only through stored procedures and views still allows the backdoor access of permissions. Users with sufficient permissions are trusted not to break database integrity, all others use stored procedures.
As to reducing the amount of work: databases are stunningly efficient when they don't have to deal with the outside world; you'd be really surprised how much even process switching hurts performance. That's another upside of stored procedures: rather than a dozen calls to the database (and all the associated round trips), there's one.
Business logic has to go somewhere, and there's a lot of implied domain rules embedded in the design of a database - relations, constraints and so on are an attempt to codify business rules by saying, for example, a user can only have one password. Given you've started shoving business rules onto the database server by having these relations and so on, where do you draw the line? When does the database give up responsibility for the integrity of the data, and start trusting the calling apps and database users to get it right? Stored procedures with these rules embedded in them can push a lot of political power into the hands of the DBAs. It comes down to how many tiers are going to exist in your n-tier architecture; if there's a presentation, business and data layer, where does the separation between business and data lie? What value-add does the business layer add? Will you run the business layer on the database server as stored procedures?
Yes, I think that having to bypass a trigger means that you're "doing it wrong"; in this case a trigger isn't for you.
A: I work with web and winforms apps in c# and I HATE triggers with a passion. I have never come across a situation where I could justify using a trigger over moving that logic into the business layer of the application and replicating the trigger logic there.
I don't do any DTS type work or anything like that, so there might be some use cases for using trigger there, but if anyone in any of my teams says that they might want to use a trigger they better have prepared their arguments well because I refuse to stand by and let triggers be added to any database I'm working on.
Some reasons why I don't like triggers:
*
*They move logic into the database. Once you start doing that, you're asking for a world of pain because you lose your debugging, your compile time safety, your logic flow. It's all downhill.
*The logic they implement is not easily visible to anyone.
*Not all database engines support triggers, so your solution creates dependencies on the database engine.
I'm sure I could think of more reasons off the top of my head but those alone are enough for me not to use triggers.
A: Triggers can be very helpful. They can also be very dangerous. I think they're fine for house cleaning tasks like populating audit data (created by, modified date, etc) and in some databases can be used for referential integrity.
But I'm not a big fan of putting lots of business logic into them. This can make support problematic because:
*
*it's an extra layer of code to research
*sometimes, as the OP learned, when you need to do a data fix the trigger might be doing things with the assumption that the data change is always via an application directive and not from a developer or DBA fixing a problem, or even from a different app
As for having to bypass a trigger to do something, it could mean you are doing something wrong, or it could mean that the trigger is doing something wrong.
The general rule I like to use with triggers is to keep them light, fast, simple, and as non-invasive as possible.
A: "Never design a trigger to do integrity constraint checking that crosses rows in a table" -- I can't agree. The question is tagged 'SQL Server' and CHECK constraints' clauses in SQL Server cannot contain a subquery; worse, the implementation seems to have a 'hard coded' assumption that a CHECK will involve only a single row so using a function is not reliable. So if I need a constraint which does legitimately involve more than one row -- and a good example here is the sequenced primary key in a classic 'valid time' temporal table where I need to prevent overlapping periods for the same entity -- how can I do that without a trigger? Remember this is a primary key, something to ensure I have data integrity, so enforcing it anywhere other than the DBMS is out of the question. Until CHECK constraints get subqueries, I don't see an alternative to using triggers for certain kinds of integrity constraints.
A: Triggers are generally used incorrectly, introduce bugs, and therefore should be avoided. Never design a trigger to do integrity constraint checking that crosses rows in a table (e.g. "the average salary by dept cannot exceed X").
Tom Kyte, VP of Oracle, has indicated that he would prefer to remove triggers as a feature of the Oracle database because of their frequent role in bugs. He knows it is just a dream, and triggers are here to stay, but if he could, he would remove triggers from Oracle (along with the WHEN OTHERS clause and autonomous transactions).
"Can triggers be used correctly? Absolutely. The problem is - they are not used correctly in so many cases that I'd be willing to give up any perceived benefit just to get rid of the abuses (and bugs) caused by them." - Tom Kyte
A: I find myself bypassing triggers when doing bulk data imports. I think it's justified in such circumstances.
If you end up bypassing the triggers very often though, you probably need to take another look at what you put them there for in the first place.
In general, I'd vote for "they serve a purpose in some scenarios". I'm always nervous about performance implications.
A: I'm not a fan, personally. I'll use them, but only when I uncover a bottleneck in the code that can be cleared by moving actions into a trigger. Generally, I prefer simplicity and one way to keep things simple is to keep logic in one place - the application. I've also worked on jobs where access is very compartmentalized. In those environments, the more code I pack into triggers the more people I have to engage for even the simplest fixes.
A: I first used triggers a couple of weeks ago. We changed over a production server from SQL 2000 to SQL 2005 and we found that the drivers were behaving differently with NText fields (storing a large XML document), dropping off the last byte. I used a trigger as a temporary fix to add an extra dummy byte (a space) to the end of the data, solving our problem until a proper solution could be rolled out.
Other than this special, temporary case, I would say that I would avoid them since they do hide what is going on, and the function they provide should be handled explicitly by the developer rather than as some hidden magic.
A: Honestly, the only time I use triggers is to simulate a unique index that is allowed to have NULLs that don't count for the uniqueness.
A:
As to reducing the amount of work: databases are stunningly efficient when they don't have to deal with the outside world; you'd be really surprised how much even process switching hurts performance. That's another upside of stored procedures: rather than a dozen calls to the database (and all the associated round trips), there's one.
this is a little off topic, but you should also be aware that you're only looking at this from one potential positive.
Bunching stuff up in a single stored proc is fine, but what happens when something goes wrong? Say you have 5 steps and the first step fails, what happens to the other steps? You need to add a whole bunch of logic in there to cater for that situation. Once you start doing that you lose the benefits of the stored procedure in that scenario.
A: Total fan,
but really have to use it sparingly when,
*
*Need to maintain consistency (especially when dimension tables are used in a warehouse and we need to relate the data in the fact table with their proper dimension). Sometimes the proper row in the dimension table can be very expensive to compute, so you want the key to be written straight to the fact table; one good way to maintain that "relation" is with a trigger.
*Need to log changes (in an audit table, for instance, it's useful to know which @@user made the change and when it occurred)
Some RDBMSs like SQL Server 2005 also provide you with triggers on CREATE/ALTER/DROP statements (so you can know who created what table, when, dropped what column, when, etc.)
Honestly, using triggers in those 3 scenarios, I don't see why you would ever need to "disable" them.
A: The general rule of thumb is: do not use triggers. As mentioned before, they add overhead and complexity that can easily be avoided by moving logic out of the DB layer.
Also, in MS SQL Server, triggers are fired once per SQL statement, not once per row. For example, the following SQL statement will execute the trigger only once.
UPDATE tblUsers
SET Age = 11
WHERE State = 'NY'
Many people, including myself, were under the impression that triggers are fired on every row, but this isn't the case. If you have a SQL statement like the one above that may change data in more than one row, you might want to include a cursor to update all records affected by the trigger. You can see how this can get convoluted very quickly.
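For what it's worth, the usual set-based alternative to a cursor is to join against the inserted and deleted pseudo-tables, so one statement handles however many rows the UPDATE touched. A minimal sketch (the audit table and column names are made up for illustration):
CREATE TRIGGER trgUsersAuditAge ON tblUsers
AFTER UPDATE
AS
BEGIN
    -- one set-based INSERT covers every row affected by the UPDATE
    INSERT INTO tblUsersAudit (UserId, OldAge, NewAge, ChangedAt)
    SELECT d.UserId, d.Age, i.Age, GETDATE()
    FROM inserted i
    INNER JOIN deleted d ON d.UserId = i.UserId;
END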
| {
"language": "en",
"url": "https://stackoverflow.com/questions/14031",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "18"
} |
Q: What's a Good Database ERD Tool for Linux? I've recently switched to Linux on my work machine and, new to the Linux desktop environment, I'd like to find a decent ERD tool for database design. Booting back into my Windows partition every time I need to create a diagram is going to get unpleasant quickly. I looked at Dia, but didn't see any DB tools - only UML, networking, etc.
Anyone have any recommendations? For what it's worth, I'm using Ubuntu (Hardy Heron).
Thanks.
A: I'd definitely recommend Gliffy.com for simple ER diagrams; it's an online flash-based tool. I wrote a small review of it a week ago.
A: For a generic (vendor independent) tool, you can try dia (I prefer the dia-gnome package).
There are also some plugins for generating the SQL files.
A: MySQL Workbench is available on MacOS, Fedora, Ubuntu, Windows.
WB 5.1 is focused on Data Modeling (replacing Mike Zinner’s popular DBDesigner product).
WB 5.2 (coming April 2009) will include a ground up rewrite of the MySQL Query Browser.
http://forums.mysql.com/index.php?151
A: Check out SQL Developer: http://sqldeveloper.solyp.com/download/index.html
A: Mmm I think the Linux version of MySQL Workbench is out for download at:
http://forums.mysql.com/read.php?3,56274,56274#msg-56274
You can see the pre-release announcement here:
http://dev.mysql.com/workbench/?p=138
They are still in alpha, but judging from the windows version this is gonna be "THE" ERD tool.
PS: For the Ubuntu part, you are in luck - they say that Ubuntu is "our Linux distro of choice".
A: Look at Oracle JDeveloper (freeware). It is pure Java, so it will run on any platform. It will work against any database that you can connect to via JDBC. It builds database diagrams (and lots of other diagrams - it happens to be a complete Java IDE).
It works with a concept of "offline database objects" stored in XML files. So if you have existing database objects, you start by capturing them into JDeveloper and then build your diagram. If you make changes to your offline objects, you can "reconcile" them back into your database, either as new objects (DROP-REPLACE) or as modifications (ALTER).
Download at http://www.oracle.com/technology/software/products/jdev/index.html
A: MySQL just officially released the alpha of MySQL Workbench for Linux:
See the announcement here:
MySQL Workbench 5.1 Alpha for Linux available.
A: No recommendations as such, but,
You might want to broaden your search to Eclipse plugins such as http://eclipse-erd.sourceforge.net/.
Apart from that there are various ERD tools you have to pay for like Data Architect.
A: I had a bad experience with Workbench on Linux in the past and hope it has gotten better by now.
I am quite happy with SchemaBank these days 'cause they are purely web-based. You drop them a few bucks every month and they host your diagram for private / public sharing. The usual stuff like forward / reverse engineering, alter scripts, etc. is all supported.
A: As a stop gap, I've installed DBDesigner via Wine (I should have just done that first) since that's what my Windows developers are using, but will look at both of these as well. The Eclipse plugin would be ideal if it's decent.
Thanks.
A: You can try ORM Designer http://www.orm-designer.com
The tool is similar to DBDesigner, but has many more functions and is under active, everyday development.
A: You can try Base from LibreOffice. It can connect to any database and you can easily create, design and write queries using visual wizards and tools.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/14032",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "28"
} |
Q: Preventing Memory Leaks with Attached Behaviours I've created an "attached behaviour" in my WPF application which lets me handle the Enter keypress and move to the next control. I call it EnterKeyTraversal.IsEnabled, and you can see the code on my blog here.
My main concern now is that I may have a memory leak, since I'm handling the PreviewKeyDown event on UIElements and never explicitly "unhook" the event.
What's the best approach to prevent this leak (if indeed there is one)? Should I keep a list of the elements I'm managing, and unhook the PreviewKeyDown event in the Application.Exit event? Has anyone had success with attached behaviours in their own WPF applications and come up with an elegant memory-management solution?
A: I do not agree, DannySmurf.
Some WPF layout objects can clog up your memory and make your application really slow when they are not garbage collected. So I find the choice of words to be correct: you are leaking memory to objects you no longer use. You expect the items to be garbage collected, but they aren't, because there is a reference somewhere (in this case, from an event handler).
Now for a real answer :)
I advise you to read this WPF Performance article on MSDN
Not Removing Event Handlers on Objects may Keep Objects Alive
The delegate that an object passes to its event is effectively a reference to that object. Therefore, event handlers can keep objects alive longer than expected. When performing clean up of an object that has registered to listen to an object's event, it is essential to remove that delegate before releasing the object. Keeping unneeded objects alive increases the application's memory usage. This is especially true when the object is the root of a logical tree or a visual tree.
They advise you to look into the Weak Event pattern
Another solution would be to remove the event handlers when you are done with an object. But I know that with Attached Properties that point might not always be clear..
Hope this helps!
A: Philosophical debate aside, in looking at the OP's blog post, I don't see any leak here:
ue.PreviewKeyDown += ue_PreviewKeyDown;
A hard reference to ue_PreviewKeyDown is stored in ue.PreviewKeyDown.
ue_PreviewKeyDown is a STATIC method and can't be GCed.
No hard reference to ue is being stored, so nothing is preventing it from being GCed.
So... Where is the leak?
A: Yes, I know that in the old days memory leaks were an entirely different subject. But with managed code, a new meaning of the term "memory leak" might be more appropriate...
Microsoft even acknowledges it to be a memory leak:
Why Implement the WeakEvent Pattern?
Listening for events can lead to memory leaks. The typical technique for listening to an event is to use the language-specific syntax that attaches a handler to an event on a source. For instance, in C#, that syntax is: source.SomeEvent += new SomeEventHandler(MyEventHandler).
This technique creates a strong reference from the event source to the event listener. Ordinarily, attaching an event handler for a listener causes the listener to have an object lifetime that is influenced by the object lifetime of the source (unless the event handler is explicitly removed). But in certain circumstances you might want the object lifetime of the listener to be controlled only by other factors, such as whether it currently belongs to the visual tree of the application, and not by the lifetime of the source. Whenever the source object lifetime extends beyond the object lifetime of the listener, the normal event pattern leads to a memory leak: the listener is kept alive longer than intended.
We use WPF for a client app with large ToolWindows that can be dragged and dropped, all the nifty stuff, and all compatible within an XBAP.. But we had the same problem with some ToolWindows that weren't garbage collected.. This was due to the fact that they were still dependent on event listeners.. Now this might not be a problem when you close your window and shut down your app. But if you are creating very large ToolWindows with a lot of commands, and all these commands get re-evaluated over and over again, and people must use your application all day long.. I can tell you.. it really clogs up your memory and the response time of your app..
Also, I find it much easier to explain to my manager that we have a memory leak than to explain to him that some objects are not garbage collected due to some events that need cleaning ;)
A: @Nick Yeah, the thing with attached behaviours is that by definition they're not in the same object as the elements whose events you're handling.
I think the answer lies in using WeakReference somehow, but I've not seen any simple code samples to explain it to me. :)
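In the meantime, the attached property's change callback gives a natural place to unhook without WeakReference at all - a minimal sketch (the callback name is hypothetical, and ue_PreviewKeyDown is the static handler discussed above):
private static void OnIsEnabledChanged(DependencyObject d, DependencyPropertyChangedEventArgs e)
{
    UIElement ue = d as UIElement;
    if (ue == null) return;
    if ((bool)e.NewValue)
        ue.PreviewKeyDown += ue_PreviewKeyDown; // hook while the behaviour is on
    else
        ue.PreviewKeyDown -= ue_PreviewKeyDown; // unhook when it is switched off
}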
A: To explain my comment on John Fenton's post, here is my answer. Let's look at the following example:
class Program
{
    static void Main(string[] args)
    {
        var a = new A();
        var b = new B();
        a.Clicked += b.HandleClicked;
        //a.Clicked += B.StaticHandleClicked;
        //A.StaticClicked += b.HandleClicked;
        var weakA = new WeakReference(a);
        var weakB = new WeakReference(b);
        a = null;
        //b = null;
        GC.Collect();
        GC.WaitForPendingFinalizers();
        GC.Collect();
        Console.WriteLine("a is alive: " + weakA.IsAlive);
        Console.WriteLine("b is alive: " + weakB.IsAlive);
        Console.ReadKey();
    }
}
class A
{
    public event EventHandler Clicked;
    public static event EventHandler StaticClicked;
}
class B
{
    public void HandleClicked(object sender, EventArgs e)
    {
    }
    public static void StaticHandleClicked(object sender, EventArgs e)
    {
    }
}
If you have
a.Clicked += b.HandleClicked;
and set only b to null, both references weakA and weakB stay alive! If you set only a to null, b stays alive but not a (which proves John Fenton wrong in stating that a hard reference is stored in the event provider - in this case a).
This led me to the WRONG conclusion that
a.Clicked += B.StaticHandleClicked;
would lead to a leak, because I thought the instance of a would be kept alive by the static handler. This is not the case (test my program). In the case of static event handlers or static events, it is the other way around. If you write
A.StaticClicked += b.HandleClicked;
a reference will be kept to b.
A: Have you thought of implementing the "Weak Event Pattern" instead of regular events?
*
*Weak Event Pattern in WPF
*Weak Event Patterns (MSDN)
A: Make sure event-referencing elements are within the object they are referencing, like text boxes in the form control. Or, if that can't be prevented, create a static event on a global helper class and then monitor the global helper class for events. If these two steps cannot be done, try using a WeakReference; they are usually perfect for these situations, but they come with overhead.
A: I just read your blog post and I think you got a bit of misleading advice, Matt. If there is an actual memory leak here, then that is a bug in the .NET Framework, and not something you can necessarily fix in your code.
What I think you (and the poster on your blog) are actually talking about here is not actually a leak, but rather an ongoing consumption of memory. That's not the same thing. To be clear, leaked memory is memory that is reserved by a program, then abandoned (ie, a pointer is left dangling), and which subsequently cannot be freed. Since memory is managed in .NET, this is theoretically impossible. It is possible, however, for a program to reserve an ever-increasing amount of memory without allowing references to it to go out of scope (and become eligible for garbage collection); however that memory is not leaked. The GC will return it to the system once your program exits.
So. To answer your question, I don't think you actually have a problem here. You certainly don't have a memory leak, and from your code, I don't think you need to worry, as far as memory consumption goes either. As long as you make sure that you are not repeatedly assigning that event handler without ever de-assigning it (ie, that you either only ever set it once, or that you remove it exactly once for each time that you assign it), which you seem to be doing, your code should be fine.
It seems like that's the advice that the poster on your blog was trying to give you, but he used that alarming word "leak," which is a scary word, but one whose real meaning many programmers have forgotten in the managed world; it doesn't apply here.
A: @Arcturus:
... clog up your memory and make your application really slow when they are not garbage collected.
That's blindingly obvious, and I don't disagree. However:
...you are leaking memory to objects that you no longer use... because there is a reference to them.
"memory is allocated to a program, and that program subsequently loses the ability to access it due to program logic flaws" (Wikipedia, "Memory leak")
If there is an active reference to an object, which your program can access, then by definition it is not leaking memory. A leak means that the object is no longer accessible (to you or to the OS/Framework), and will not be freed for the lifetime of the operating system's current session. This is not the case here.
(Sorry to be a semantic Nazi... maybe I'm a bit old school, but leak has a very specific meaning. People tend to use "memory leak" these days to mean anything that consumes 2KB of memory more than they want...)
But of course, if you do not release an event handler, the object it's attached to will not be freed until your process' memory is reclaimed by the garbage collector at shutdown. But this behaviour is entirely expected, contrary to what you seem to imply. If you expect an object to be reclaimed, then you need to remove anything that may keep the reference alive, including event handlers.
A: True, true,
You are right of course.. But there is a whole new generation of programmers being born into this world who will never touch unmanaged code, and I do believe language definitions will reinvent themselves over and over again. Memory leaks in WPF are in this way different from, say, C/C++.
Of course, to my managers I referred to it as a memory leak.. to my fellow colleagues I referred to it as a performance issue!
Referring to Matt's problem, it might be a performance issue that you need to tackle. If you just use a few screens and you make those screen controls singletons, you might not see this problem at all ;).
A: Well that (the manager bit) I can certainly understand, and sympathize with.
But whatever Microsoft calls it, I don't think a "new" definition is appropriate. It's complicated, because we don't live in a 100% managed world (even though Microsoft likes to pretend that we do, Microsoft itself does not live in such a world). When you say memory leak, you could mean that a program is consuming too much memory (that's a user's definition), or that a managed reference will not be freed until exit (as here), or that an unmanaged reference is not being properly cleaned up (that would be a real memory leak), or that unmanaged code called from managed code is leaking memory (another real leak).
In this case, it's obvious what "memory leak" means, even though we're being imprecise. But it gets awfully tedious talking to some people, who call every over-consumption, or failure-to-collect a memory leak; and it's frustrating when these people are programmers, who supposedly know better. It's kind of important for technical terms to have unambiguous meanings, I think. Debugging is so much easier when they do.
Anyway. Don't mean to turn this into an airy-fairy discussion about language. Just saying...
| {
"language": "en",
"url": "https://stackoverflow.com/questions/14061",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12"
} |
Q: Automated testing a game Question
How would you go adding automated testing to a game?
I believe you can unit test a lot of the game engine's functionality (networking, object creation, memory management, etc), but is it possible to automate test the actual game itself?
I'm not talking about gameplay elements (like Protoss would beat Zerg in map X), but I'm talking about the interaction between the game and the engine.
Introduction
In game development, the engine is just a platform for the game. You could think of the game engine as an OS and the game as software the OS would run. The game could be a collection of scripts or an actual subroutine inside the game engine.
Possible Answers
My idea is this:
You would need an engine that is deterministic. This means that given one set of input, the output would be exactly the same. This would include the random generator being seeded with the same input.
Then, create a bare-bone level which contains a couple of objects the avatar/user can interact with. Start small and then add objects into the level as more interactions are developed.
Create a script which follows a path (testing pathfinding) and interacts with the different objects (storing the result or expected behavior). This script would be your automated test. After a certain amount of time (say, one week), run the script along with your engine's unit tests.
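To make the idea concrete, the harness could look something like this sketch - every type and call below is a hypothetical engine API, purely for illustration:
#include <cassert>
int main()
{
    Engine engine;                          // hypothetical deterministic engine facade
    engine.SeedRandom(12345);               // same seed -> same simulation
    engine.LoadLevel("barebones.lvl");
    Script script("walk_and_interact.scr"); // recorded input, one entry per tick
    while (!script.Done())
        engine.Tick(script.NextInput());    // fixed timestep, no wall clock
    // assert on the expected end state rather than on pixels
    assert(engine.Avatar().Position() == Vec3(10, 0, 5));
    return 0;
}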
A: This post at Games From Within might be relevant/interesting.
A: Riot Games has an article on using automated testing for League of Legends (LoL), a multiplayer online RTS game.
According to the developers, there are many changes to both the game code and game balance every day. They built a Python test framework that is basically a simpler game client that sends commands to the Continuous Integration server that is running an instance of LoL's game server. The server then sends the test framework the effect of the command, allowing the response to be tested.
The framework provides an event queue that records the events, data, and effect from a particular point in time. The article calls this a "snapshot".
The article describes an example of a unit test for a spell:
Setup
1. Give a character the ability.
2. Spawn an enemy character in the midlane (a location on the map).
3. Spawn a creep in the midlane. (In the context of LoL, creeps are weak non-controllable characters that are part of each team's army. They are basically cannon fodder and are a source of experience and gold for the enemy team. But if left unchecked, they can overwhelm the opposing team.)
4. Teleport the character to the midlane.
Execute
1. Take a snapshot of all the variables (e.g. the current life from the player, enemy and normal characters).
2. Cast the spell.
3. Activate the spell's effects (for example, there are some spells that will proc on hit) on an enemy character.
4. Reset the spell's cooldown so it can be cast again immediately.
5. Cast the spell.
6. Activate the spell's effects on a creep (in the context of LoL, most spells have different calculations when used on creeps).
7. Take another snapshot.
Verify
Starting from the first snapshot, replay the events, and assert that the expected results (from a game designer's point of view) are correct. Examples of events that can be verified are: the damage is within the range of the spell's damage (LoL uses random numbers to give variance to attacks), damage is properly resisted when comparing a player character and a creep, and spells are cast within their effective range.
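In rough pseudo-Python the whole flow might look like this (every name below is invented for illustration; the article does not publish its actual API):
SPELL_MIN_DAMAGE, SPELL_MAX_DAMAGE = 80, 120  # hypothetical design values
def test_spell_damage(game):
    # Setup
    champ = game.spawn_champion("midlane", ability="TestSpell")
    enemy = game.spawn_champion("midlane", team="enemy")
    # Execute
    before = game.snapshot()
    champ.cast("TestSpell", target=enemy)
    after = game.snapshot()
    # Verify: the damage dealt must fall inside the designed variance range
    dealt = before.health_of(enemy) - after.health_of(enemy)
    assert SPELL_MIN_DAMAGE <= dealt <= SPELL_MAX_DAMAGE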
The article shows that a video of the test can be extracted when the test server is viewed from a normal game client.
A:
Values are so random within the gameplay aspects of development that it would be a far-fetched idea to test for absolute values
But we can test deterministic values. For example, a unit test might have Guybrush Threepwood move toward a door (pathfinding), open the door (use command), fail because he doesn't have a key in his inventory (feedback), pick up the door key (pathfinding + inventory management) and then finally open the door.
All of these paths are deterministic. With this unit test, I can refactor the memory manager and if it somehow broke the inventory management routine, the unit test would fail.
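To make the idea concrete, here is a rough C# sketch of such a test. The engine API used here (World, Avatar, PathTo, Use, PickUp) is entirely made up for illustration, and NUnit-style assertions are assumed:
[Test]
public void OpeningLockedDoorRequiresKey()
{
    // Seed the engine so pathfinding and any random rolls are reproducible.
    var world = World.Load("barebones_room", 42);
    var guybrush = world.Avatar;
    // Walk to the door (exercises pathfinding).
    guybrush.PathTo(world.Find("door"));
    // Using the door without the key must fail and produce feedback.
    Assert.IsFalse(guybrush.Use("door"));
    // Pick up the key (pathfinding + inventory management), then retry.
    guybrush.PathTo(world.Find("door key"));
    guybrush.PickUp("door key");
    Assert.IsTrue(guybrush.Inventory.Contains("door key"));
    Assert.IsTrue(guybrush.Use("door"));
}
If a memory manager refactoring broke the inventory routine, the Inventory.Contains assertion above is where this test would start failing.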
This is just one idea for unit testing in games. I would love to know other ideas; hence the motivation for this post.
A: I did something similar to your idea once and it was very successful, though I suspect it is really more of a system test than a unit test. As you suggest your random number generator must be seeded with the same value, and must produce an identical sequence each time.
The game ran at 50 Hz, so timing was not an issue. I had a system that would record mouse clicks and locations, and used this to manually generate a 'script' which could be replayed to produce the same results. By removing the timing delays and turning off the graphics generation, an hour of gameplay could be replicated in a few seconds.
The biggest problem was that changes to the game design would invalidate the script.
If your barebones room contained logic that was independent of the general gameplay then it could work very well. The engine could start up without any UI and start the script as soon as initialisation is complete. Testing for crashes along the way would be simple, but richer tests, such as verifying the characters end up in the correct positions, would take more work. If the recording of the scripts is simple enough, which it was in my system, then they can be updated very easily, and special scripts to test specialised behavior can be set up very quickly. My system had the added advantage that it could be used during game testing, and the exact sequence of events recorded to make bug fixing easier.
A: An article from Power of Two Games was mentioned in another answer already, but I suggest reading everything (or nearly everything) there, as they are all really well-written and apply directly to games development. The article on Assert is particularly good. You can also visit their previous website at Games From Within, which has a lot written about Test Driven Development, which is unit testing taken to the extreme.
The Power of Two guys are the ones who implemented UnitCpp, a pretty well-regarded unit testing framework. Personally, I prefer WinUnit.
A: If you are testing the rendering engine I guess you could render specific test scenes, take screen captures, and compare them to reference renderings. That way you can detect whether changes in the engine break anything visually (a sketch of the comparison step follows below). You can write similar tests for the sound engine, or even animation (by comparing a series of frames).
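A minimal sketch of the image-comparison step in C#, assuming the engine can already render a frame into a System.Drawing.Bitmap (how that frame is produced is up to your engine):
using System;
using System.Drawing;
static bool FramesMatch(Bitmap actual, Bitmap expected, int channelTolerance)
{
    if (actual.Size != expected.Size) return false;
    for (int y = 0; y < actual.Height; y++)
    {
        for (int x = 0; x < actual.Width; x++)
        {
            Color a = actual.GetPixel(x, y);
            Color e = expected.GetPixel(x, y);
            // Allow tiny per-channel differences so driver-level rounding
            // doesn't produce false failures.
            if (Math.Abs(a.R - e.R) > channelTolerance ||
                Math.Abs(a.G - e.G) > channelTolerance ||
                Math.Abs(a.B - e.B) > channelTolerance)
                return false;
        }
    }
    return true;
}
(GetPixel is slow; for large frames you would lock the bitmap bits instead, but it keeps the sketch short.)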
If you want to test game logic or scene progress you can do this by testing various conditions on the scripting variables (assuming you are using scripting to implement most of the scene and story aspects).
A: If you're using XNA (the idea could be extrapolated to other frameworks of course), you could use an in-game unit test framework that lets you access the game's state in the unit test. One such framework is Scurvy.Test :-)
A: http://flea.sourceforge.net/gameTestServer.pdf
This is an interesting discussion on implementing a full-blown functional tester in a game.
The term "unit testing" implies that a "unit" is being tested. This is one thing. If you're doing higher-level testing (e.g. several systems at once), usually this is called functional testing. It is possible to unit test much of a game, however you can't really test for fun.
Determinism isn't necessary, as long as your tests can be fuzzy. E.g. "did the character get hurt?" as opposed to "did the character lose exactly 14.7 hitpoints?"
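In code, the difference between the fuzzy and the brittle assertion might look like this (C#, with a hypothetical character object and NUnit-style assertions):
// Fuzzy: survives rebalancing of damage formulas.
Assert.IsTrue(character.HitPoints < hitPointsBeforeAttack);
// Brittle: breaks whenever a designer tweaks the numbers.
Assert.AreEqual(hitPointsBeforeAttack - 14.7, character.HitPoints, 0.001);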
A: I have written a paper on that topic -
http://download.springer.com/static/pdf/722/art%253A10.7603%252Fs40601-013-0010-4.pdf?auth66=1407852969_87bc2e71ad5228b36738f0237084ebe5&ext=.pdf
A: This doesn't really answer your question, but I was listening to a podcast on Pex from Microsoft, which does a similar thing to the solution you're proposing, and when I was listening to it I remember thinking that it would be really interesting to see if it would be able to test games. I don't know if it would help you specifically, but perhaps you could take a look at some of the ideas they use and apply them to your unit testing.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/14087",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "31"
} |
Q: How would you go about evaluating a programmer? A few weeks ago, I was assigned to evaluate all our programmers. I'm very uncomfortable with this since I was the one who taught everyone the shop's programming language (they all got out of college not knowing the language and, as luck would have it, I'm very proficient with it). On the evaluation, I was very biased about their performance (perfect scores).
I'm glad that our programming shop doesn't require an average performance level but I heard horror stories of shops which do require an average level.
My questions are as follows:
*
*As a programmer, what evaluation questions would you like to see?
*As a manager, what evaluation questions would you like to see?
*As the evaluator, how can you prevent bias in your evaluation?
*I would love to remove the evaluation test. Are there any advantages to having an evaluation test? Any disadvantages?
A: I would first consider not the number of lines of code, but the value of the code the person adds, relative of course to what they are assigned to do. Someone told to maintain code versus someone building a new app is a very different case. Also consider: how does the person use new techniques to keep the code relevant and up to date? How maintainable is the code the person creates? Do they do things in a manner that is logical and understandable to the rest of the team? Does their coding improve the app or just wreck it? Last but not least, does their coding improve over time?
A: "Gets things done" is really all you need to evaluate a developer. After that you look at the quality the developer produces. Do they write unit tests and believe in testing and being responsible for the code they generate? Do they take the initiative to fix bugs without being assigned them? Are they passionate about coding? Are they constantly learning, trying to find better ways to accomplish a task or improve a process? These questions are pretty much how I judge developers directly under me. If they are not directly under you and they don't report to you, then you really shouldn't be evaluating them. If you are assigned to evaluate programmers who aren't under you, then you need to be proactive to answer the above questions about them, which can be hard.
You can't remove the evaluation test. I know it can become tedious sometimes, but I actually enjoy doing it and it's invaluable for the developer you are evaluating. You need to be a manager that cares about how your developers do. You are a direct reflection of them, as they are of you. One question I always leave up to the developer is for them to evaluate me. The evaluation needs to be a two-lane road.
I also have to evaluate off a cookie-cutter list of questions, which I do, but I always add the above and try to make the evaluation fun and a learning exercise during the one-on-one time I have with the developer; it is all about the developer you are reviewing.
A: What about getting everyone's input? Everyone that a person is working with will have a unique insight into that person. One person might think someone is a slacker, while another person sees that they are spending a lot of time planning before they start coding, etc.
A:
What about getting everyone's input? Everyone that a person is working with will have a unique insight into that person.
That would work if (1) evaluation is conducted with open doors and (2) you've worked with that person on one project or even on the same module. As the person evaluating them, I couldn't judge the programmers who I didn't directly work with.
One person might think someone is a slacker, while another person sees that they are spending a lot of time planning before they start coding
Unfortunately, this is debatable. Someone who looks like a slacker might be in deep thought, or maybe not. And is someone who spends a long time planning necessarily a bad programmer?
I believe a good evaluation question would be able to answer this.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/14106",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10"
} |
Q: How can I test STDIN without blocking in Perl? I'm writing my first Perl app -- an AOL Instant Messenger bot that talks to an Arduino microcontroller, which in turn controls a servo that will push the power button on our sysadmin's server, which freezes randomly every 28 hours or so.
I've gotten all the hard stuff done, I'm just trying to add one last bit of code to break the main loop and log out of AIM when the user types 'quit'.
The problem is, if I try to read from STDIN in the main program loop, it blocks the process until input is entered, essentially rendering the bot inactive. I've tried testing for EOF before reading, but no dice... EOF just always returns false.
Below is some sample code I'm working with:
while(1) {
$oscar->do_one_loop();
# Poll to see if any arduino data is coming in over serial port
my $char = $port->lookfor();
# If we get data from arduino, then print it
if ($char) {
print "" . $char ;
}
# reading STDIN blocks until input is received... AAARG!
my $a = <STDIN>;
print $a;
if($a eq "exit" || $a eq "quit" || $a eq 'c' || $a eq 'q') {last;}
}
print "Signing off... ";
$oscar->signoff();
print "Done\n";
print "Closing serial port... ";
$port->close() || warn "close failed";
print "Done\n";
A: I found that IO::Select works fine as long as the input stream eventually gets closed, such as when the upstream process in the pipeline exits, or when input is from a file. However, if input keeps coming (such as from "tail -f") then any partial data buffered by <STDIN> will not be displayed. Instead, use the unbuffered sysread:
#!/usr/bin/perl
use IO::Select;
$s = IO::Select->new(\*STDIN);
while (++$i) {
if ($s->can_read(2)) {
last unless defined($foo=get_unbuf_line());
print "Got '$foo'\n";
}
}
sub get_unbuf_line {
my $line="";
while (sysread(STDIN, my $nextbyte, 1)) {
return $line if $nextbyte eq "\n";
$line .= $nextbyte;
}
return(undef);
}
A: The Perl built-in is select(), which is a pass-through to the select() system call, but for sane people I recommend IO::Select.
Code sample:
#!/usr/bin/perl
use IO::Select;
$s = IO::Select->new();
$s->add(\*STDIN);
while (++$i) {
print "Hiya $i!\n";
sleep(5);
if ($s->can_read(.5)) {
chomp($foo = <STDIN>);
print "Got '$foo' from STDIN\n";
}
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/14118",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14"
} |
Q: How to create a C++ Boost undirected graph and traverse it in depth first search (DFS) order? How to create a C++ Boost undirected graph and traverse it in depth first search (DFS) order?
A: // Boost DFS example on an undirected graph.
// Create a sample graph, traverse its nodes
// in DFS order and print out their values.
#include <boost/graph/adjacency_list.hpp>
#include <boost/graph/depth_first_search.hpp>
#include <iostream>
using namespace std;
typedef boost::adjacency_list<boost::listS, boost::vecS, boost::undirectedS> MyGraph;
typedef boost::graph_traits<MyGraph>::vertex_descriptor MyVertex;
class MyVisitor : public boost::default_dfs_visitor
{
public:
void discover_vertex(MyVertex v, const MyGraph& g) const
{
cerr << v << endl;
return;
}
};
int main()
{
MyGraph g;
boost::add_edge(0, 1, g);
boost::add_edge(0, 2, g);
boost::add_edge(1, 2, g);
boost::add_edge(1, 3, g);
MyVisitor vis;
boost::depth_first_search(g, boost::visitor(vis));
return 0;
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/14126",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "31"
} |
Q: Subversion and web development I'm introducing Subversion into our web shop. I want the checked-in files to be uploaded to the server via FTP (and, as they get used to Subversion, via SFTP). The files are sent to a release candidate page for testing purposes. A script can be called to move the files into production.
My question is this: How do you transfer the files to the server via Subversion? Is there a script I can add that will do the transfer when the files are checked in?
A: You want to build a script that uses the post-commit hook in Subversion. You can either have the script export from your repository and then FTP to the server, or you can just check out from your repository into a working directory on your server and call "svn update" on the server's working directory in your post-commit hook script.
There's more information in the Subversion FAQ
A: If you have shell access to your sever, and SVN installed on it (or the ability to install SVN), then your best bet may be just to bypass FTP entirely.
How we deploy our apps is (simplified)
*
*Developers write code and check it into trunk
*Periodically, when trunk is stable, we will take a snapshot of it as a tag
*On the server, svn checkout the tag
If any changes need to be made to the server (or directly on the live server itself) it is trivial to use subversion to sync the code
A: I think you should probably use svn export rather than svn checkout for deployments, so you don't have those .svn directories muddying up your production backup jobs. svn export is a "clean" checkout.
I'd also create a script that handles it all for you. Depending on how your code is structured, you can often version your directories and just update a symlink to the latest version, which makes rollbacks easier.
You could even use something like Capistrano to automate the deployments. I second the recommendation for CruiseControl, though.
A: I think what you're looking for is something like integration with an automatic build script. I have used CruiseControl to do a similar thing with an ASP.Net application. I don't know your exact requirements but I'd bet you could get it to do what you want.
A: Post commit scripts are useful for this. Essentially on every commit a script is called after the event, which you can use to perform an svn export to where-ever.
An interesting article shows how this might be done, and this shows how hook scripts can be used with subversion
A: You can probably use the SVN "hooks" to do this. Basically, you can configure your server to run scripts before or after every checkin. Here's the direct link to the relevant section of the online book.
A: I second Orion's idea. If you've got shell access to the server, it's actually extremely easy to use Subversion itself as the deployment tool. Just make sure you have some web server rules set up so that you don't accidentally expose the .svn directories.
A: svn2web will ftp or scp files from a subversion repository to a web server on every commit. See the SourceForge project for details.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/14135",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9"
} |
Q: Enabling OpenGL in wxWidgets I installed the wxWidgets source code, compiled it and am linking the libraries thus obtained with my application code. Now I need to use OpenGL in my wxWidgets application. How do I enable this?
A: For building on Windows with project files:
Assume $(WXWIDGETSROOT) is the root directory of your wxWidgets installation.
*
*Open the file $(WXWIDGETSROOT)\include\wx\msw\setup.h
*Search for the #define for wxUSE_GLCANVAS.
*Change its value from 0 to 1.
*Recompile the library.
For building on Linux and other ./configure based platforms:
Just use ./configure --with-opengl
(A mashup answer from two partial answers given by others)
A: If you're using configure to build wxWidgets you just need to add --with-opengl to your command line.
A: Just to add a little bit... If you're on Linux you need to watch the logs when running configure. If it can't find the OpenGL dev packages then it will turn OpenGL off with one line of warning, which is easy to miss.
Run it like this to make it more obvious which development libraries you're actually missing (it looks like --with-opengl is on by default in 3.0.0 and possibly earlier versions of wxWidgets, but it can't hurt to include it, I suspect).
./configure --with-opengl > configure.log
Once configure can find all the dev libs you think you're going to use you need to rebuild wxwidgets:
make
sudo make install
I had to install these on Linux Mint to get the dev libs I needed and make wxWidgets' configure happy as far as OpenGL was concerned (they should also work for Ubuntu):
sudo apt-get install mesa-common-dev
sudo apt-get install freeglut3-dev
A: (Assume $(WX_WIDGETS_ROOT) is the root directory of your wxWidgets installation.)
*
*Open the file $(WX_WIDGETS_ROOT)\include\wx\msw\setup.h
*Search and find the option wxUSE_GLCANVAS. Change its value from 0 to 1.
*Recompile the library.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/14138",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Windows-based Text Editors Other than Notepad++, what text editor do you use to program in Windows?
A: Thej already recommended it, but to elaborate:
SciTE - Free, has preset colouring for many languages, and it's multi-platform (Windows & Linux), and lightweight.
http://scitedebug.luaforge.net/scite-debug.png
A: gvim. I also use Dreamweaver for web stuff.
A: Notepad2
*
*Syntax highlighting for html,c#,javascript,css,xml,sql,python,bat
*Rectangular selection, regular expressions
*Indentation, back/foreground customization
Downside: No tabbed windows.
A: I'll echo the others who have endorsed Emacs. I program every day on, at a bare minimum, OS X, Windows, and Linux. Having the same IDE on all three systems gives me an enormous productivity boost. That said, the vanilla version of GNU Emacs...well, it sucks. I'd strongly encourage you to try EmacsW32 instead. In much the way that Aquamacs makes an OS X-friendly version of Emacs, the EmacsW32 project makes Emacs out-of-the-box work just like a Windows text editor. Mind you, all of Emacs' power (and complexity) is there, but if you don't already have muscle memory built up, there's no reason not to use Ctrl-C/X/V as copy/cut/paste instead of M-w/C-k/C-y just to be cool. EmacsW32 also brings Windows-compliant open/save dialogs, sane CRLF file handling, and quite a bit more. If you've ever had an itch to try Emacs, give it a shot. You won't regret it.
A: Not everybody uses Notepad++, it's not that good.
Crimson Editor
http://www.crimsoneditor.com/images/overview.gif
A: EditPlus is my editor of choice. All the features you'd need, and no more.
A: I know this is my own question, but I came across this text editor Sublime Text and thought it was pretty sweet. There are a few features in it that I have never seen before. It has multiple-line select (lines that are not contiguous) and a bird's-eye-view navigation. It's a little pricey but I am having fun playing with the free version.
A: I use EDIT.COM for a lot of things, believe it or not. Old habits die hard.
A: Another vote for gvim (about, download). I think once you learn the keystrokes to control it, you won't want to use anything else.
Plus, there is the added benefit of being able to use it on just about any platform, including the nice Windows port.
A: Commercial product (Windows): UltraEdit.
Freeware (Windows): Notepad++, PSPad.
Cross-Platform: JEdit. It's written in Java and runs on almost anything.
If you don't mind taking a performance hit under Windows, JEdit has some amazing capabilities. For native performance on that platform, I would go with one of the others. I tend to switch back and forth between Notepad++ and PSPad. Notepad++ probably edges it out for most tasks. It has section folding, which is very handy. However, you did ask about products other than that one.
A: I personally like ConTEXT.
A lot of people gave their suggestions for favourite text editor here:
https://stackoverflow.com/questions/10238/text-editor-or-ide#10391
A: I have used UltraEdit for years... If I'm working on a project I prefer to use a real IDE, but nothing beats it for quickly making changes to source files, or especially for those small PHP projects where you're just hacking away anyway. The killer feature for me is the compare functionality.
A: I strictly use jEdit.
A: My personal favorite is EditPad Pro. Not because it is superior in any way, but because it was the one I started to use.
A: UltraEdit it my favorite text editor. Too bad I have to pay for it. You can't beat the ability to highlight vertically vs. horizontally.
A: Textpad replaces notepad for me. I couldn't live without it. Some key features that I use with Textpad are:
*
*Find in files (along with open all, replace all, save all, close all).
*Block Select (along with copy/paste of a column).
*Clip Library
*Syntax highlighting
*Ability to attach externals tools (compilers, etc.) and capture the output to a window.
I use Eclipse for Java, Visual Studio for C++, C#, and VB.NET, JellyFish Pro for PowerBasic, I still use Visual Studio 6 for Classic VB, and I use TextPad for perl, python, Powershell, vbscript, SQL, HTML, and batch files.
A: I hate to sound like a broken record, but Vim is my choice. It works the same way everywhere and you'd be hard pressed to find a more powerful editor.
A: I don't code much on Windows, but e text editor is my choice. As far as free editors go nothing beats Emacs.
A: Sublime Text is amazing.
A: Notepad2, apart from Notepad++
A: Visual Studio, notepad2, notepad++.
A: Visual Studio for .Net development. Currently working with VS2008, but seems to be not quite finished yet. 2005 is probably the most stable and complete. Anything else for that would seem quite futile for .Net development
I use e-TextEditor for most other things. It covers most of the topics above including syntax highlighting, multi-select/edit, column select, TextMate bundles for auto-complete.
A: As you can see, asking about a preferred editor will get you a lot of responses. For me: UltraEdit - robust:
Notepad++ - lightweight
Also tend to use the IDE that comes with various tools (e.g. VB, C#, etc.)
But, the best advice is to pick a decent editor and learn it thoroughly. You will be spending a whole lot of time using it. So, the better you know it, the more time it will save you in the long run.
A: VIM on CYGWIN, Textpad, Notepad, and various IDEs ( Eclipse, MS VS C++, MS VS VB6, etc)
A: Vim is the default for me and when I'm in Visual Studio, I use ViEmu and Resharper.
Except for a few hick-ups it really ends up with the best of three worlds. I can use Vim commands, Visual Studio short cuts works as well, and Resharper just adds a bunch of useful features for Visual Studio.
A: Certainly Sublime Text. It is the best text editor on Windows I've ever seen.
A: edit.com
http://cloud.anyhub.net/0-edit-com.png
A: GNU Emacs is my preferred text editor and it works well on Windows (copy/paste actually works as expected) It's also available on all major platforms so you can reuse your knowledge if you jump around OSes like I tend to do.
I really like JEdit as well. It's a good text editor for code and random text. It's a nice middle ground between Notepad and Eclipse.
If you want something just a step above Notepad for quick, efficient editing I would recommend Notepad2. It's really useful when you replace the standard Notepad with this version. You continue to have a fast startup but the syntax highlighting is a real boon. I replace Notepad with Notepad2 on every one of my Windows machines.
A: I use SciTE
A: Another vote for Textpad here. I tried Notepad++, but was annoyed that it didn't notify me when an open file had been updated (which is a pain when looking at active log files).
A: For free, for quick edits: Notepad2
But the shareware program Textpad is still my favourite. Some key features:
*
*You can download syntax files for just about every language, or make your own.
*You can load hundreds of files into it and apply regular expression search and replace across all of them.
*It has a fast and effective built in file searcher.
*It is very hard to crash it. And it can remember as many undo states as you like.
*You can create keystroke macros
A: I'm another vim user, but what I actually do is I use Visual Studio with viEmu (basically lets you use vim commands in Visual Studio) and it's the best! Visual Studio is a great IDE, and vim is a great text editor, and this allows me to use both.
A: GVIM (www.vim.org) because it's free (donation-ware), cross platform, widely available, efficient, extensible, network enabled, and open source.
VIM Features (not an extensive list)
*
*Ability to apply actions across all buffers
*Autocommands
*Block modification
*Code Completion
*Code Highlighting (and methods for adding your own syntax with REGEX)
*Colorschemes
*DIFF
*Folding
*Indenting/Formatting
*Key Mapping
*Macros
*Marks
*Modal Editing
*Project Management (Project Plugin)
*Registers (local, global, etc.)
*Regular Expressions
*Scriptable Snippets (Templating)
*Spell Checker
*Tagging
*Text-Objects
I suggest learning the basics from Derek Wyatt's VI/Vim Page (http://www.derekwyatt.org/), tutorial pages, and adding to your skill set on as you go.
Suggested PLUGINS
*
*Project
*XPT Templates
*Minibufexpl
*MRU
*Calendar
*NerdTree
*Taglist
A: Geany is an excellent text editor: lightweight and feature-rich
A: UltraEdit is my second home. It is a great general purpose text editor.
A: I'm a massive fan of Notepad2 - it is so quick!
For quick simple editing of text for me it's close to perfect. It has syntax colouring for Xml and code and can be extended easily.
We use Dreamweaver and Visual Studio for larger coding efforts.
A: Textpad is what I would use for random text editing (checking out HTML source, quick hackery, scripts and the like).
For actual Java development it's Eclipse all the way, although people tell me the IDEA is the cat's pyjamas.
A: E-TextEditor
It's a bit buggy, but it beats the pants off any other editor I've used, due to its use of the TextMate bundle format (and the bundles). It also gets updated very regularly. I use it every day and would gladly purchase it again.
A: Note that I primarily work in C/C++. For C/C++ code, I use Visual C++ Express Edition or Visual Studio Professional. For the little bit of Python I'm learning, I use the editor in the PythonWin IDE. (Mostly because it does a bit of code completion.) For everything else, I use GViM.
Tip:
After you install ViM on Windows, if you right-click on any file in Explorer, you see the Edit with Vim option in the right-click menu. This is very useful for peeking into and editing every kind of text file without having to bother about specific editors. GViM can understand most formats and thus displays them with syntax coloring. Get used to doing this and soon GViM becomes your defacto generic text editor on Windows. (Even replacing Notepad.)
A: I've always found Visual Studio to be outstanding for code editing. I still think it's pretty much the gold standard for code editing (but I'd love to be proven wrong).
Beyond that, I've used JCreator for Java editing. Of course, I've used notepad for basic stuff. I've used a lot of other text editors as well, but none that I can really recommend.
A: Going for the easy answer: Emacs.
A: I'm attempting to switch to the Code::Blocks IDE for all of my C/C++ editing, but have used Visual Studio 2003, and Programmer's Notepad 2 for C/C++ projects. For Python, I currently use IDLE, but have been looking for something else that has a horizontal scroll bar.
A: I'm a big fan of EditPlus, mainly for its smooth built in ftp open/save functionality. Crimson Editor has this too but that feature seems to be unstable from time to time.
A: @MrBrutal I love Notepad2 as well. The only problem is it's lame with large files. :(
A: UltraEdit for me.
There might be better editors out there, but it would take me so long to learn one as well as I know UltraEdit that I'd lose any potential ROI while learning. That's probably the key... as someone a few posts above says, pick one and learn to be proficient with it. The payoffs will be huge. If you're fickle and switch, you won't learn it well enough to get benefits from it.
-don
A: Notepad++
and RJ TextEd
http://www.rj-texted.se/bilder/mainsync-100.png
A: *
*TextMate on Mac OSX for everything besides ObjC/Cocoa (use XCode for that). The bundles are great and support pretty much every language I came across so far.
*GVim on Windows and Linux, and maybe sometimes OSX if I feel like it :). For C/Python thats all I need.
*For Flash/AS there is pretty much only FlexBuilder I guess. Even though I don't really care for Eclipse otherwise.
A: As long as Notepad++ exists I don't really want to use anything else. On linux I just use vi.
A: I certainly recommend PowerPad if for no other reason than that I wrote it.
Here are some of the wonderful features you will find in it: (Use the latest beta to get all of these)
*
*Multi-tab interface
*Powerful scripting language based on Python
*Unlimited undo
*Syntax highlighting & auto-indent
*Support for opening and editing files over FTP
*Ability to open UTF-8 and UTF-16 encoded files
The scripts currently available enable you to...
*
*Perform RegEx searches
*Lookup the current selection on Google, Wikipedia, etc.
*Encode/decode base64 data
I realise this question is specifically for Windows, but I should point out that PowerPad is available for Linux too.
A: I mostly just use Notepad++, but I like BabelPad when I need to open a file in a unicode path or when I need to have more control over unicode stuff.
I like EditPlus too. You can save a file as a template and create a new instance of it under the file menu. It's also pretty fast at loading moderately large files.
JEDIT would be my favorite, but it's just too slow when editing even slightly big files.
I can't say I'm 100% happy with Notepad++, but it bugs me the least, so...
A: @Derek Park
I also use VS for most of my coding needs, but use Notepad++ for all other plain text files. I was disappointed by VS one time when it failed to open a 500 meg text file that I was hoping to change a few characters in. Seeing as it has support for viewing files in hex (ie. binary data) I was hoping that it would do a better job with large files. It seemed to want to load the whole file rather than the relevant data. Maybe I was just expecting too much from it. (Note: I wasn't able to open the file in NP++, either.)
Edit - My mistake. I didn't mean to imply that Notepad++ successfully opened the file. I don't remember what I used to fix that, actually.
A: @_l0ser
I also use VS for most of my coding needs, but use Notepad++ for all other plain text files. I was disappointed by VS one time when it failed to open a 500 meg text file that I was hoping to change a few characters in. Seeing as it has support for viewing files in hex (ie. binary data) I was hoping that it would do a better job with large files. It seemed to want to load the whole file rather than the relevant data. Maybe I was just expecting too much from it.
If Notepad++ will open a 500meg file usably, that's a definite plus for Notepad++. Every editor I've tried to open a file that large in just thrashed and/or froze until I killed it.
A: Notepad++ is probably the one I use the most, though I use GVIM whenever I need to do repetitive changes.
We got a company license for UltraEdit recently, and it seems to work quite well as well. I've been using that for doing quick edits to java or C++ code when I didn't have the full IDE running and didn't want to wait for it to open up.
A: How about developing your own text editor?
You would own your own editor and gain priceless experience.
A: gvim with lots of useful plugins, e.g. taglist, c-syntax, matchit, vcscommand, bufexplorer and many more. gvim is also nice in conjunction with the file manager Total Commander, where F4 invokes gvim to edit the file under the cursor.
A: *
*The Delphi 7 IDE for Delphi projects
*VS2005 for .net projects
*Notepad for any quick stuff (I know it sucks, but it's quick)
A: The Zeus editor/IDE is full of programming features, yet it still feels snappy. It also does a good impersonation of the old Brief editor.
http://www.zeusedit.com/images/lookmain.png
A: Column mode in UltraEdit is fantastic.
A: Another vote for EditPlus. It's a great tool for manually massaging data with column select, macros, and very powerful regex search/replace. Works well with large files. Nice for coding as well with community supplied syntax and autocomplete files.
A: XEmacs -- works with any language on a lot of platforms including Windows. It has good support for Windows conventions.
It lets you access sqlplus and other command-line SQL environments for PostgreSQL and MySQL.
A: I use Netbeans for my Ruby development and SciTE for quick edits.
A: I'm a fan of PS Pad
Although there are really no text editors on windows that have everything that I want.
A: IntelliJ is the best JavaScript editor I've found.
Most of the editors listed on Wikipedia do what the others do. UltraEdit and Notepad++ are the best of that bunch in my view.
For a no-frills Notepad improvement, metapad is good.
A: *
*jEdit
*notepad++
*Netbeans for Ruby development
A: Code is not text. It's Code
If you're using a text editor to edit your source code, you're doing yourself an incredible disservice. I mean yeah, it's nice that Notepad++ can do some rudimentary color-coding for you, but really why are you wasting your time like this?
A good IDE like VS.NET + ReSharper will background-compile your code on the fly, allowing you to do things you would never expect to be possible if you hadn't seen it happen before your eyes. Navigate to actual usages. Import dependencies automatically. Refactor your code at a keyclick. It's just that good. And it's not expensive.
I mean look. This is your job. This is the one piece of software you'll be interacting with all day long every day of your working life. Why are you playing around with freeware garbage? Get the best IDE that money can buy for your niche. It will make you better at what you do.
A: Editpad is useful in Windows
A: gVim is by far my favorite. Notepad++ is ok, but I'm half as productive without my vim keybindings.
A: I use Scite as it is highly customizable, however, I really like DrScheme for working with Scheme. It would be nice to have something similar for Python and Ruby.
A: Chiming in a long time later, I use TotalEdit.
There was zero learning curve. Nice clean interface with the Project Explorer on the left and tabbed, syntax-highlighted code files on the right. It was simple to adjust the toolbars a little to include my most common commands. I also upgraded for $10 to the version that lets you search within directories and not just files. Very pleased.
A: Someone suggested SciTE above. I'm surprised that -- so far as I can see -- no one ever mentioned an editor which (I believe) uses the SciTE "engine": Programmer's Notepad (a.k.a. PNotepad) -- or if they did, I missed it!* It's well worth having a good look at (click on thumb for full-size):
PNotepad 2.2 screenshot http://i54.tinypic.com/hv0zyu_th.png
I use it beside Notepad++, but I also use AkelPad for very quick things: it's the fastest editor I've got, faster even than Notepad2, I think, and has some very useful plugins. (But they are slightly different tools.)
* Update: I did miss it: it was mentioned before in 2008. It's a much improved coding editor since then!
A: No Eclipse in the list !!!
| {
"language": "en",
"url": "https://stackoverflow.com/questions/14155",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "32"
} |
Q: Strange C++ errors with code that has min()/max() calls I'm seeing strange errors when my C++ code has min() or max() calls. I'm using Visual C++ compilers.
A: Another possibility could be from side effects. Most min/max macros will include the parameters multiple times and may not do what you expect. Errors and warnings could also be generated.
max(a,i++) expands as ((a) > (i++) ? (a) : (i++))
Afterwards, i has been incremented either once or twice.
The () in the expansion are to avoid problems if you call it with formulae. Try expanding max(a,b+c)
A: Since Windows defines this as a function-style macro, the following workaround is available:
int i = std::min<int>(3,5);
This works because the macro min() is expanded only when min is followed by (, and not when it's followed by <.
A: Check if your code is including the windows.h header file and either your code or other third-party headers have their own min()/max() definitions. If yes, then prepend your windows.h inclusion with a definition of NOMINMAX like this:
#define NOMINMAX
#include <windows.h>
A: Ugh... scope it, dude: std::min(), std::max().
A: I haven't used it in years, but from memory Boost defines min and max too, possibly?
A: Honestly, when it comes to min/max, I find it best to just define my own:
#define min(a,b) ((a) < (b) ? (a) : (b))
#define max(a,b) ((a) >= (b) ? (a) : (b))
| {
"language": "en",
"url": "https://stackoverflow.com/questions/14165",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
} |
Q: System.Data.SqlClient.SqlException: Failed to generate a user instance of SQL Server System.Data.SqlClient.SqlException: Failed to generate a user instance of SQL Server due to a failure in starting the process for the user instance. The connection will be closed.
Anybody ever get this error and/or have any idea on it's cause and/or solution?
This link may have relevant information.
Update
The connection string is Data Source=.\SQLEXPRESS;AttachDbFilename=C:\temp\HelloWorldTest.mdf;Integrated Security=True
The suggested User Instance=false worked.
A: You should add an explicit User Instance=true/false to your connection string
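For example, taking the connection string from the question and adding the setting explicitly would look something like this:
Data Source=.\SQLEXPRESS;AttachDbFilename=C:\temp\HelloWorldTest.mdf;Integrated Security=True;User Instance=False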
A: Here is the answer to your problem:
Very often an old user instance creates some temp files that prevent a new SQL Express user instance from being created. When those files are deleted, everything starts working properly.
First of all confirm that user instances are enabled by running the following SQL in SQL Server Management Studio:
exec sp_configure 'user instances enabled', 1
GO
Reconfigure
After running the query restart your SQL Server instance. Now delete the following folder:
C:\Documents and Settings\{YOUR_USERNAME}\Local Settings\Application Data\Microsoft\Microsoft SQL Server Data\{SQL_INSTANCE_NAME}
Make sure that you replace {YOUR_USERNAME} and {SQL_INSTANCE_NAME} with the appropriate names.
Source: Fix error "Failed to generate a user instance of SQL Server due to a failure in starting the process for the user instance."
A: I started getting this error this morning in a test deployment environment. I was using SQL Server Express 2008 and the error I was getting was
"Failed to generate a user instance of SQL Server due to a failure in starting the process for the user instance. The connection will be closed."
Unsure about what caused it, I followed the instructions in this post and in other posts about deleting the "C:\Users\UserName\AppData\Local\Microsoft\Microsoft SQL Server Data\SQLEXPRESS" directory, but to no avail.
What did the trick for me was to change the connection string from
"Data Source=.\SQLExpress;Initial Catalog=DBFilePath;Integrated Security=SSPI;MultipleActiveResultSets=true"
to
"Data Source=.\SQLExpress;Initial Catalog=DBName;Integrated Security=SSPI;MultipleActiveResultSets=true"
A: I followed all these steps but also had to go into
*
*Control Panel > Administrative Tools > Services
*Right-click on SQL Server (SQLEXPRESS)
*Select the Log On tab
*Select the Local System account and then click OK
Problem solved... thank you
A: I have Windows 8 and I tested this solution:
*
*Enable user instances:
exec sp_configure 'user instances enabled', 1
GO
Reconfigure
*Restart your SQL Server instance.
*Delete the folder:
C:\Users\{YOUR_USERNAME}\AppData\Local\Microsoft\Microsoft SQL Server Data
Replace {YOUR_USERNAME} with the appropriate name.
(Source: Roboblob)
A: Please note that I found Jon Limjap's answer helpful except that after I did more research I found that it only applies to database connection strings that contain AttachDBFilename, so I had to change my connection string in web.config from:
connectionString="data source=.\SQLEXPRESS;Integrated Security=SSPI;AttachDBFilename=|DataDirectory|\aspnetdb.mdf"
To:
connectionString="data source=.\SQLEXPRESS;Integrated Security=SSPI;AttachDBFilename=|DataDirectory|\aspnetdb.mdf;User Instance=true"
For details please see If add [user instances=true] to connection string, an exception is thrown
| {
"language": "en",
"url": "https://stackoverflow.com/questions/14209",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: What's the deal with |Pipe-delimited| variables in connection strings? I know that |DataDirectory| will resolve to App_Data in an ASP.NET application but is that hard-coded or is there a generalized mechanism at work along the lines of %environment variables%?
A: From the MSDN Smart Client Data Blog:
In this version, the .NET runtime added support for what we call the DataDirectory macro. This allows Visual Studio to put a special variable in the connection string that will be expanded at run-time...
By default, the |DataDirectory| variable will be expanded as follows:
*
*For applications placed in a directory on the user machine, this will be the app's (.exe) folder.
*For apps running under ClickOnce, this will be a special data folder created by ClickOnce.
*For Web apps, this will be the App_Data folder.
Under the hood, the value for |DataDirectory| simply comes from a property on the app domain. It is possible to change that value and override the default behavior by doing this:
AppDomain.CurrentDomain.SetData("DataDirectory", newpath)
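As a quick illustration, here is a small C# sketch of both sides of that mechanism; the database file name and override path are made up for the example:
using System;
using System.Data.SqlClient;
class DataDirectoryDemo
{
    static void Main()
    {
        // |DataDirectory| is expanded by the runtime when the connection opens.
        string connectionString =
            @"Data Source=.\SQLEXPRESS;" +
            @"AttachDbFilename=|DataDirectory|\MyApp.mdf;" +
            @"Integrated Security=True;User Instance=True";
        // Point |DataDirectory| somewhere else before opening any connections.
        AppDomain.CurrentDomain.SetData("DataDirectory", @"C:\SomeOtherFolder");
        using (SqlConnection connection = new SqlConnection(connectionString))
        {
            connection.Open();
        }
    }
}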
| {
"language": "en",
"url": "https://stackoverflow.com/questions/14241",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Flex: does painless programmatic data binding exist? I've only done a bit of Flex development thus far, but I've preferred the approach of creating controls programmatically over mxml files, because (and please, correct me if I'm wrong!) I've gathered that you can't have it both ways -- that is to say, have the class functionality in a separate ActionScript class file but have the contained elements declared in mxml.
There doesn't seem to be much of a difference productivity-wise, but doing data binding programmatically seems somewhat less than trivial. I took a look at how the mxml compiler transforms the data binding expressions. The result is a bunch of generated callbacks and a lot more lines than in the mxml representation. So here's the question: is there a way to do data binding programmatically that doesn't involve a world of hurt?
A: It exists as of today. :)
I just released my ActionScript data binding project as open source: http://code.google.com/p/bindage-tools
BindageTools is an alternative to BindingUtils (see the play on words there?) that uses a fluent API where you declare your data bindings in a pipeline style:
Bind.fromProperty(person, "firstName")
.toProperty(firstNameInput, "text");
Two-way bindings:
Bind.twoWay(
Bind.fromProperty(person, "firstName"),
Bind.fromProperty(firstNameInput, "text"));
Explicit data conversion and validation:
Bind.twoWay(
Bind.fromProperty(person, "age")
.convert(valueToString()),
Bind.fromProperty(ageInput, "text")
.validate(isNumeric()) // (Hamcrest-as3 matcher)
.convert(toNumber()));
Etc. There are lots more examples on the site. There are lots of other features too - come have a look. --Matthew
Edit: updated APIs
A: Don't be afraid of MXML. It's great for laying out views. If you write your own reusable components then writing them in ActionScript may sometimes give you a little more control, but for non-reusable views MXML is much better. It's more terse, bindings are extremely easy to set up, etc.
However, bindings in pure ActionScript need not be that much of a pain. It will never be as simple as in MXML where a lot of things are done for you, but it can be done with not too much effort.
What you have is BindingUtils and its methods bindSetter and bindProperty. I almost always use the former, since I usually want to do some work or call invalidateProperties when values change; I almost never just want to set a property.
What you need to know is that these two return an object of the type ChangeWatcher; if you want to remove the binding for some reason, you have to hold on to this object. This is what makes manual bindings in ActionScript a little less convenient than those in MXML.
Let's start with a simple example:
BindingUtils.bindSetter(nameChanged, selectedEmployee, "name");
This sets up a binding that will call the method nameChanged when the name property on the object in the variable selectedEmployee changes. The nameChanged method will recieve the new value of the name property as an argument, so it should look like this:
private function nameChanged( newName : String ) : void
The problem with this simple example is that once you have set up this binding it will fire each time the property of the specified object changes. The value of the variable selectedEmployee may change, but the binding is still set up for the object that the variable pointed to before.
There are two ways to solve this: either to keep the ChangeWatcher returned by BindingUtils.bindSetter around and call unwatch on it when you want to remove the binding (and then setting up a new binding instead), or bind to yourself. I'll show you the first option first, and then explain what I mean by binding to yourself.
The currentEmployee could be made into a getter/setter pair and implemented like this (only showing the setter):
public function set currentEmployee( employee : Employee ) : void {
if ( _currentEmployee != employee ) {
if ( _currentEmployee != null ) {
currentEmployeeNameCW.unwatch();
}
_currentEmployee = employee;
if ( _currentEmployee != null ) {
currentEmployeeNameCW = BindingUtils.bindSetter(currentEmployeeNameChanged, _currentEmployee, "name");
}
}
}
What happens is that when the currentEmployee property is set it looks to see if there was a previous value, and if so removes the binding for that object (currentEmployeeNameCW.unwatch()), then it sets the private variable, and unless the new value was null sets up a new binding for the name property. Most importantly it saves the ChangeWatcher returned by the binding call.
This is a basic binding pattern and I think it works fine. There is, however, a trick that can be used to make it a bit simpler. You can bind to yourself instead. Instead of setting up and removing bindings each time the currentEmployee property changes you can have the binding system do it for you. In your creationComplete handler (or constructor or at least some time early) you can set up a binding like so:
BindingUtils.bindSetter(currentEmployeeNameChanged, this, ["currentEmployee", "name"]);
This sets up a binding not only to the currentEmployee property on this, but also to the name property on this object. So anytime either changes the method currentEmployeeNameChanged will be called. There's no need to save the ChangeWatcher because the binding will never have to be removed.
The second solution works in many cases, but I've found that the first one is sometimes necessary, especially when working with bindings in non-view classes (since this has to be an event dispatcher and the currentEmployee has to be bindable for it to work).
A: One way to separate the MXML and ActionScript for a component into separate files is by doing something similar to the ASP.Net 1.x code behind model. In this model the declarative part (the MXML in this case) is a subclass of the imperative part (the ActionScript). So I might declare the code behind for a class like this:
package CustomComponents
{
import mx.containers.*;
import mx.controls.*;
import flash.events.Event;
public class MyCanvasCode extends Canvas
{
public var myLabel : Label;
protected function onInitialize(event : Event):void
{
myLabel.text = "Lorem ipsum dolor sit amet, consectetuer adipiscing elit.";
}
}
}
...and the markup like this:
<?xml version="1.0" encoding="utf-8"?>
<MyCanvasCode xmlns="CustomComponents.*"
xmlns:mx="http://www.adobe.com/2006/mxml"
initialize="onInitialize(event)">
<mx:Label id="myLabel"/>
</MyCanvasCode>
As you can see from this example, a disadvantage of this approach is that you have to declare controls like myLabel in both files.
A: There is an approach I usually use to combine MXML and ActionScript: all my MXML components inherit from an ActionScript class where I add the more complex code. Then you can refer to event listeners implemented in this class in the MXML file.
Regards,
Ruth
| {
"language": "en",
"url": "https://stackoverflow.com/questions/14247",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13"
} |
Q: Best way to let users download a file from my website: http or ftp We have some files on our website that users of our software can download. Some of the files are in virtual folders on the website while others are on our FTP server. The files on the FTP server are generally accessed by clicking on an ftp:// link in a browser - most of our customers do not have an FTP client. The other files are accessed by clicking an http:// link in a browser.
Should I move all the files to the FTP server, or does it not matter? What's the difference?
A: HTTP has many advantages over FTP:
*
*it is available in more places (think workplaces which block anything other than HTTP/S)
*it works nicely with proxies (FTP requires extra settings for the proxy - like making sure that it allows the CONNECT method)
*it provides built-in compression (with GZIP) which almost all browsers can handle (as opposed to FTP which has a non-official "MODE Z" extension)
*NAT gateways must be configured in a special mode to support active FTP connections, while passive FTP connections require them to allow access to all ports (if the gateway doesn't have connection tracking)
*some FTP clients insist on opening a new data connection for each data transfer, which can leave you with a lot of "TIME_WAIT" sockets
A: If speed matters to your users, and they are technically inclined, http allows multiple connections for one file (if the client supports it. I use DownThemAll). Most browsers should handle ftp links just fine, though.
A: I think most users, even today, are more familiar with http than ftp and for that reason you should stick with http by default unless there's a compelling reason to use ftp. It's nit-picking, though.
A: I think it doesn't matter really, because the ftp is also transparent nowdays. You don't have to know anything special, the browser handles all.
I suggest that if they are downloading one file at one time, you can go to http.
However if they have to download several files with one go, I prefer ftp, because it's much more easy to manage.
There are some nice broswer extensions as _l0ser mentioned, but I prefer ftp for mass file-transfer.
A: Both FTP and HTTP seem sufficient for your needs, so I would definitely recommend choosing the simplest approach, which is either to leave things as they currently are or consolidate on HTTP.
Personally, I would put everything on HTTP. If nothing else, it eliminates an extra server. There is no compelling reason to choose FTP over HTTP anymore, and there are a few small advantages to HTTP (as others have pointed out).
| {
"language": "en",
"url": "https://stackoverflow.com/questions/14263",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: Using GLUT with Visual C++ Express Edition What are the basic steps to compile an OpenGL application using GLUT (OpenGL Utility Toolkit) under Visual C++ Express Edition?
A: *
*If you don't have Visual C++ Express Edition (VCEE), download and install VCEE.
*The default install of Visual C++ Express Edition builds for the .Net platform. We'll need to build for the Windows platform since OpenGL and GLUT are not yet fully supported under .Net. For this we need the Microsoft Platform SDK. (If you're using an older version of VCEE, download and install the Microsoft Platform SDK. Visual C++ Express Edition will need to be configured to build for Windows platform. All these instructions are available here.)
*If you don't have GLUT, download and unzip Nate Robin's Windows port of GLUT.
*Add glut.h to your Platform SDK/include/GL/ directory
*Link the project with glut.lib. (Go to VCEE Project Properties -> Additional Linker Directories and add the directory which has glut.lib.
*Add glut.dll to the Windows/System32 directory, so that all programs using GLUT
can find it at runtime.
Your program which uses GLUT or OpenGL should compile under Visual C++ Express Edition now.
A: The GLUT port on Nate Robin's site is from 2001 and has some incompatibilities with versions of Visual Studio more recent than that (.NET 2003 and up). The incompatibility manifests itself as errors about redefinition of exit(). If you see this error, there are two possible solutions:
*
*Replace the exit() prototype in glut.h with the one in your stdlib.h so that they match. This is probably the best solution.
*An easier solution is to #define GLUT_DISABLE_ATEXIT_HACK before you #include <gl/glut.h> in your program.
(Due credit: I originally saw this advice on the TAMU help desk website.)
I've been using approach #1 myself since .NET 2003 came out, and have used the same modified glut.h with VC++ 2003, VC++ 2005 and VC++ 2008.
Here's the diff for the glut.h I use which does #1 (but in appropriate #ifdef blocks so that it still works with older versions of Visual Studio):
--- c:\naterobbins\glut.h 2000-12-13 00:22:52.000000000 +0900
+++ c:\updated\glut.h 2006-05-23 11:06:10.000000000 +0900
@@ -143,7 +143,12 @@
#if defined(_WIN32)
# ifndef GLUT_BUILDING_LIB
-extern _CRTIMP void __cdecl exit(int);
+/* extern _CRTIMP void __cdecl exit(int); /* Changed for .NET */
+# if _MSC_VER >= 1200
+extern _CRTIMP __declspec(noreturn) void __cdecl exit(int);
+# else
+extern _CRTIMP void __cdecl exit(int);
+# endif
# endif
#else
/* non-Win32 case. */
| {
"language": "en",
"url": "https://stackoverflow.com/questions/14264",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11"
} |
Q: Boost warnings with VC++ 9 When the Boost library/headers is used with VC++ 9 compilers (Visual C++ 2008 Express Edition or Visual Studio 2008), a lot of benign warnings are generated. They are of 2 kinds:
*
*Warning about the Wp64 setting.
*Warning about the compiler version.
How can I turn off these warnings?
A: *
*Warning about the Wp64 setting.
Turn off the /Wp64 setting which is set by default. You can find it in Project Properties -> C/C++ -> General.
*Warning about the compiler version.
Go to the Boost trunk (online) and get the latest boost\boost\config\compiler\visualc.hpp header file. Diff it with the current file and merge the sections where _MSC_VER is equal to 1500. (1500 is the VC9 version number used in Boost configuration.)
| {
"language": "en",
"url": "https://stackoverflow.com/questions/14271",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: How to load plugins in .NET? I'd like to provide some way of creating dynamically loadable plugins in my software.
The typical way to do this is to use the LoadLibrary WinAPI function to load a DLL and call GetProcAddress to get a pointer to a function inside that DLL.
My question is how do I dynamically load a plugin in C#/.Net application?
A: Dynamically Loading Plug-ins
For information on how to dynamically load .NET assemblies see this question (and my answer). Here is some code for creating an AppDomain and loading an assembly into it.
var domain = AppDomain.CreateDomain("NewDomainName");
var pathToDll = @"C:\myDll.dll";
var t = typeof(TypeIWantToLoad);
var runnable = domain.CreateInstanceFromAndUnwrap(pathToDll, t.FullName)
as IRunnable;
if (runnable == null) throw new Exception("broke");
runnable.Run();
Unloading Plug-ins
A typical requirement of a plugin framework is to unload the plugins. To unload dynamically loaded assemblies (e.g. plug-ins and add-ins) you have to unload the containing AppDomain. For more information see this article on MSDN on Unloading AppDomains.
Using WCF
There is a stack overflow question and answer that describe how to use the Windows Communication Framework (WCF) to create a plug-in framework.
Existing Plug-in Frameworks
I know of two plug-in frameworks:
*
*Mono.Add-ins - As mentioned in this answer to another question.
*Managed Add-in Framework (MAF) - This is the System.AddIn namespace as mentioned by Matt in his answer.
Some people talk about the Managed Extensibility Framework (MEF) as a plug-in or add-in framework, which it isn't. For more information see this StackOverflow.com question and this StackOverflow.com question.
A: One tip is to load all plugins and such into their own AppDomain, since the code being run can be potentially malicious. A separate AppDomain can also be used to "filter" assemblies and types that you don't want to load.
AppDomain domain = AppDomain.CreateDomain("tempDomain");
And to load an assembly into the application domain:
AssemblyName assemblyName = AssemblyName.GetAssemblyName(assemblyPath);
Assembly assembly = domain.Load(assemblyName);
To unload the application domain:
AppDomain.Unload(domain);
A: Yes, ++ to Matt and System.AddIn (a two-part MSDN magazine article about System.AddIn is available here and here). Another technology you might want to look at, to get an idea where the .NET Framework might be going in the future, is the Managed Extensibility Framework, currently available in CTP form on CodePlex.
A: Basically you can do it in two ways.
The first is to import kernel32.dll and use LoadLibrary and GetProcAddress as you used it before:
[DllImport("kernel32.dll")]
internal static extern IntPtr LoadLibrary(String dllname);
[DllImport("kernel32.dll")]
internal static extern IntPtr GetProcAddress(IntPtr hModule, String procname);
The second is to do it in the .NET-way: by using reflection. Check System.Reflection namespace and the following methods:
*
*Assembly.LoadFile
*Assembly.GetType
*Assembly.GetTypes
*Type.GetMethod
*MethodInfo.Invoke
First you load the assembly by its path, then get the type (class) from it by its name, then get the method of the class by its name again, and finally call the method with the relevant parameters.
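A minimal sketch of those steps (the path, type name, and method name below are made up for illustration):
Assembly assembly = Assembly.LoadFile(@"C:\plugins\MyPlugin.dll"); // hypothetical path
Type type = assembly.GetType("MyPlugin.EntryPoint");               // hypothetical type name
object instance = Activator.CreateInstance(type);                  // create an instance of the type
MethodInfo method = type.GetMethod("Run");                         // hypothetical method name
method.Invoke(instance, null);                                     // call the method with no arguments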
A: As of .NET 3.5 there's a formalized, baked-in way to create and load plugins from a .NET application. It's all in the System.AddIn namespace. For more information you can check out this article on MSDN: Add-ins and Extensibility
A: The following code snippet (C#) constructs an instance of any concrete classes derived from Base found in class libraries (*.dll) in the application path and stores them in a list.
using System.IO;
using System.Reflection;
List<Base> objects = new List<Base>();
DirectoryInfo dir = new DirectoryInfo(Application.StartupPath);
foreach (FileInfo file in dir.GetFiles("*.dll"))
{
Assembly assembly = Assembly.LoadFrom(file.FullName);
foreach (Type type in assembly.GetTypes())
{
if (type.IsSubclassOf(typeof(Base)) && type.IsAbstract == false)
{
Base b = type.InvokeMember(null,
BindingFlags.CreateInstance,
null, null, null) as Base;
objects.Add(b);
}
}
}
Edit: The classes referred to by Matt are probably a better option in .NET 3.5.
A: The article is a bit older, but still applicable for creating an extensibility layer within your application:
Let Users Add Functionality to Your .NET Applications with Macros and Plug-Ins
A: This is my implementation, inspired by this code, which avoids iterating over all assemblies and all types (or at least filters them with LINQ). I just load the library and try to load the class which implements a common shared interface. Simple and fast :)
Just declare an interface in a separated library and reference it in both, your system and your plugin:
public interface IYourInterface
{
Task YourMethod();
}
In your plugin library, declare a class which implements IYourInterface
public class YourClass: IYourInterface
{
async Task IYourInterface.YourMethod()
{
//.....
}
}
In your system, declare this method
using System;
using System.Collections;
using System.Collections.Generic;
using System.Linq.Expressions;
using System.Reflection;
using System.Linq;
public abstract class ReflectionTool<TSource> where TSource : class
{
public static TSource LoadInstanceFromLibrary(string libraryPath)
{
TSource pluginclass = null;
if (!System.IO.File.Exists(libraryPath))
throw new Exception($"Library '{libraryPath}' not found");
else
{
Assembly.LoadFrom(libraryPath);
var fileName = System.IO.Path.GetFileName(libraryPath).Replace(".dll", "");
var assembly = AppDomain.CurrentDomain.GetAssemblies().FirstOrDefault(c => c.FullName.StartsWith(fileName));
var type = assembly.GetTypes().FirstOrDefault(c => c.GetInterface(typeof(TSource).FullName) != null);
try
{
pluginclass = Activator.CreateInstance(type) as TSource;
}
catch (Exception ex)
{
LogError("", ex);
throw;
}
}
return pluginclass;
}
}
And call it like this way:
IYourInterface instance = ReflectionTool<IYourInterface>.LoadInstanceFromLibrary(@"c:\pathToYourLibrary.dll");
| {
"language": "en",
"url": "https://stackoverflow.com/questions/14278",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "27"
} |
Q: Is there a python module for regex matching in zip files I have over a million text files compressed into 40 zip files. I also have a list of about 500 model names of phones. I want to find out the number of times a particular model was mentioned in the text files.
Is there any python module which can do a regex match on the files without unzipping them? Is there a simple way to solve this problem without unzipping?
A: There's nothing that will automatically do what you want.
However, there is a python zipfile module that will make this easy to do. Here's how to iterate over the lines in the file.
#!/usr/bin/python
import zipfile
f = zipfile.ZipFile('myfile.zip')
for subfile in f.namelist():
print subfile
data = f.read(subfile)
for line in data.split('\n'):
print line
A: You could loop through the zip files, reading individual files using the zipfile module and running your regex on those, eliminating the need to unzip them all at once.
I'm fairly certain that you can't run a regex over the zipped data, at least not meaningfully.
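A rough sketch of the loop described above (the archive names and model list are made up):
import re
import zipfile
models = ["modelA", "modelB"]  # your ~500 phone model names
counts = dict.fromkeys(models, 0)
for archive in ["part1.zip", "part2.zip"]:
    zf = zipfile.ZipFile(archive)
    for name in zf.namelist():
        text = zf.read(name)
        for model in models:
            counts[model] += len(re.findall(re.escape(model), text))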
A: To access the contents of a zip file you have to unzip it, although the zipfile package makes this fairly easy, as you can unzip each file within an archive individually.
Python zipfile module
A: Isn't it (at least theoretically) possible to read in the ZIP's Huffman coding and then translate the regexp into the Huffman code? Might this be more efficient than first de-compressing the data and then running the regexp?
(Note: I know it wouldn't be quite that simple: you'd also have to deal with other aspects of the ZIP coding—file layout, block structures, back-references—but one imagines this could be fairly lightweight.)
EDIT: Also note that it's probably much more sensible to just use the zipfile solution.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/14281",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
} |
Q: Increasing camera capture resolution in OpenCV In my C/C++ program, I'm using OpenCV to capture images from my webcam. The camera (Logitech QuickCam IM) can capture at resolutions 320x240, 640x480 and 1280x960. But, for some strange reason, OpenCV gives me images of resolution 320x240 only. Calls to change the resolution using cvSetCaptureProperty() with other resolution values just don't work. How do I capture images with the other resolutions possible with my webcam?
A: I've done image processing in linux before and skipped OpenCV's built in camera functionality because it's (as you've discovered) incomplete.
Depending on your OS you may have more luck going straight to the hardware through normal channels as opposed to through openCV. If you are using Linux, video4linux or video4linux2 should give you relatively trivial access to USB webcams and you can use libavc1394 for firewire. Depending on the device and the quality of the example code you follow, you should be able to get the device running with the parameters you want in an hour or two.
Edited to add: You are on your own if it's Windows. I imagine it's not much more difficult, but I've never done it.
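For the video4linux2 route mentioned above, requesting a capture resolution takes only a handful of lines. Here is a sketch (error handling omitted; the device path is an assumption):
#include <fcntl.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

int fd = open("/dev/video0", O_RDWR); /* typical device node */
struct v4l2_format fmt = {0};
fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
fmt.fmt.pix.width = 640;
fmt.fmt.pix.height = 480;
fmt.fmt.pix.pixelformat = V4L2_PIX_FMT_YUYV;
ioctl(fd, VIDIOC_S_FMT, &fmt); /* the driver may adjust fmt to the nearest supported size */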
A: I strongly suggest using the videoInput lib; it supports any DirectShow device (even multiple devices at the same time) and is more configurable. You'll spend five minutes making it play with OpenCV.
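A sketch of what that looks like (API names quoted from memory; treat them as approximate):
videoInput VI;
int device = 0;
VI.setupDevice(device, 640, 480); // request the resolution up front
unsigned char * buffer = new unsigned char[VI.getSize(device)];
VI.getPixels(device, buffer, false, true); // grab a frame into the buffer
You can then wrap the buffer in an IplImage and hand it to OpenCV.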
A: Check this ticket out:
https://code.ros.org/trac/opencv/ticket/376
"The solution is to use the newer libv4l-based wrapper.
*
*install libv4l-dev (this is how it's called in Ubuntu)
*rerun cmake, you will see "V4L/V4L2: Using libv4l"
*rerun make. now the resolution can be changed. tested with built-in isight on MBP."
This fixed it for me using Ubuntu and might aswell work for you.
A: Code I finally got working in Python once Aaron Haun pointed out I needed to define the arguments of the set function before using them.
#Camera_Get_Set.py
#By Forrest L. Erickson of VRX Company Inc. 8-31-12.
#Opens the camera and reads and reports the settings.
#Then tries to set for higher resolution.
#Workes with Logitech C525 for resolutions 960 by 720 and 1600 by 896
import cv2.cv as cv
import numpy
CV_CAP_PROP_POS_MSEC = 0
CV_CAP_PROP_POS_FRAMES = 1
CV_CAP_PROP_POS_AVI_RATIO = 2
CV_CAP_PROP_FRAME_WIDTH = 3
CV_CAP_PROP_FRAME_HEIGHT = 4
CV_CAP_PROP_FPS = 5
CV_CAP_PROP_POS_FOURCC = 6
CV_CAP_PROP_POS_FRAME_COUNT = 7
CV_CAP_PROP_BRIGHTNESS = 8
CV_CAP_PROP_CONTRAST = 9
CV_CAP_PROP_SATURATION = 10
CV_CAP_PROP_HUE = 11
CV_CAPTURE_PROPERTIES = ( # plain tuple: a set literal would not preserve the index order used below
CV_CAP_PROP_POS_MSEC,
CV_CAP_PROP_POS_FRAMES,
CV_CAP_PROP_POS_AVI_RATIO,
CV_CAP_PROP_FRAME_WIDTH,
CV_CAP_PROP_FRAME_HEIGHT,
CV_CAP_PROP_FPS,
CV_CAP_PROP_POS_FOURCC,
CV_CAP_PROP_POS_FRAME_COUNT,
CV_CAP_PROP_BRIGHTNESS,
CV_CAP_PROP_CONTRAST,
CV_CAP_PROP_SATURATION,
CV_CAP_PROP_HUE)
CV_CAPTURE_PROPERTIES_NAMES = [
"CV_CAP_PROP_POS_MSEC",
"CV_CAP_PROP_POS_FRAMES",
"CV_CAP_PROP_POS_AVI_RATIO",
"CV_CAP_PROP_FRAME_WIDTH",
"CV_CAP_PROP_FRAME_HEIGHT",
"CV_CAP_PROP_FPS",
"CV_CAP_PROP_POS_FOURCC",
"CV_CAP_PROP_POS_FRAME_COUNT",
"CV_CAP_PROP_BRIGHTNESS",
"CV_CAP_PROP_CONTRAST",
"CV_CAP_PROP_SATURATION",
"CV_CAP_PROP_HUE"]
capture = cv.CaptureFromCAM(0)
print ("\nCamera properties before query of frame.")
for i in range(len(CV_CAPTURE_PROPERTIES_NAMES)):
# camera_valeus =[CV_CAPTURE_PROPERTIES_NAMES, foo]
foo = cv.GetCaptureProperty(capture, CV_CAPTURE_PROPERTIES[i])
camera_values =[CV_CAPTURE_PROPERTIES_NAMES[i], foo]
# print str(camera_values)
print str(CV_CAPTURE_PROPERTIES_NAMES[i]) + ": " + str(foo)
print ("\nOpen a window for display of image")
cv.NamedWindow("Camera", 1)
while True:
img = cv.QueryFrame(capture)
cv.ShowImage("Camera", img)
if cv.WaitKey(10) == 27:
break
cv.DestroyWindow("Camera")
#cv.SetCaptureProperty(capture, CV_CAP_PROP_FRAME_WIDTH, 1024)
#cv.SetCaptureProperty(capture, CV_CAP_PROP_FRAME_HEIGHT, 768)
cv.SetCaptureProperty(capture, CV_CAP_PROP_FRAME_WIDTH, 1600)
cv.SetCaptureProperty(capture, CV_CAP_PROP_FRAME_HEIGHT, 896)
print ("\nCamera properties after query and display of frame.")
for i in range(len(CV_CAPTURE_PROPERTIES_NAMES)):
# camera_valeus =[CV_CAPTURE_PROPERTIES_NAMES, foo]
foo = cv.GetCaptureProperty(capture, CV_CAPTURE_PROPERTIES[i])
camera_values =[CV_CAPTURE_PROPERTIES_NAMES[i], foo]
# print str(camera_values)
print str(CV_CAPTURE_PROPERTIES_NAMES[i]) + ": " + str(foo)
print ("/nOpen a window for display of image")
cv.NamedWindow("Camera", 1)
while True:
img = cv.QueryFrame(capture)
cv.ShowImage("Camera", img)
if cv.WaitKey(10) == 27:
break
cv.DestroyWindow("Camera")
A: I'm using openCV 1.1pre1 under Windows (videoinput library is used by default by this version of openCv under windows).
With these instructions I can set camera resolution. Note that I call the old cvCreateCameraCapture instead of cvCaptureFromCam.
capture = cvCreateCameraCapture(cameraIndex);
cvSetCaptureProperty( capture, CV_CAP_PROP_FRAME_WIDTH, 640 );
cvSetCaptureProperty( capture, CV_CAP_PROP_FRAME_HEIGHT, 480 );
videoFrame = cvQueryFrame(capture);
I've tested it with Logitech, Trust and Philips webcams
A: I am using Debian and Ubuntu. I had the same problem: I couldn't change the resolution of the video input using CV_CAP_PROP_FRAME_WIDTH and CV_CAP_PROP_FRAME_HEIGHT.
It turned out that the reason was a missing library.
I installed libv4l-dev through Synaptic, rebuilt OpenCV, and the problem is SOLVED!
A: I am posting this to ensure that no one else wastes time on this setProperty function. I spent 2 days on this only to find that nothing seems to work. So I dug out the code (I had installed the library the first time around). This is what actually happens: cvSetCaptureProperty calls setProperty inside the CvCapture class and, lo and behold, setProperty does nothing. It just returns false.
Instead, I'll switch to another library to feed OpenCV captured video/images. I am using OpenCV 2.2.
A: There doesn't seem to be a clean solution, but the resolution can be increased to 640x480 using this hack shared by lifebelt77. Here are the details reproduced:
Add to highgui.h:
#define CV_CAP_PROP_DIALOG_DISPLAY 8
#define CV_CAP_PROP_DIALOG_FORMAT 9
#define CV_CAP_PROP_DIALOG_SOURCE 10
#define CV_CAP_PROP_DIALOG_COMPRESSION 11
#define CV_CAP_PROP_FRAME_WIDTH_HEIGHT 12
Add the function icvSetPropertyCAM_VFW to cvcap.cpp:
static int icvSetPropertyCAM_VFW( CvCaptureCAM_VFW* capture, int property_id, double value )
{
int result = -1;
CAPSTATUS capstat;
CAPTUREPARMS capparam;
BITMAPINFO btmp;
switch( property_id )
{
case CV_CAP_PROP_DIALOG_DISPLAY:
result = capDlgVideoDisplay(capture->capWnd);
//SendMessage(capture->capWnd,WM_CAP_DLG_VIDEODISPLAY,0,0);
break;
case CV_CAP_PROP_DIALOG_FORMAT:
result = capDlgVideoFormat(capture->capWnd);
//SendMessage(capture->capWnd,WM_CAP_DLG_VIDEOFORMAT,0,0);
break;
case CV_CAP_PROP_DIALOG_SOURCE:
result = capDlgVideoSource(capture->capWnd);
//SendMessage(capture->capWnd,WM_CAP_DLG_VIDEOSOURCE,0,0);
break;
case CV_CAP_PROP_DIALOG_COMPRESSION:
result = capDlgVideoCompression(capture->capWnd);
break;
case CV_CAP_PROP_FRAME_WIDTH_HEIGHT:
capGetVideoFormat(capture->capWnd, &btmp, sizeof(BITMAPINFO));
btmp.bmiHeader.biWidth = floor(value/1000);
btmp.bmiHeader.biHeight = value-floor(value/1000)*1000;
btmp.bmiHeader.biSizeImage = btmp.bmiHeader.biHeight *
btmp.bmiHeader.biWidth * btmp.bmiHeader.biPlanes *
btmp.bmiHeader.biBitCount / 8;
capSetVideoFormat(capture->capWnd, &btmp, sizeof(BITMAPINFO));
break;
default:
break;
}
return result;
}
and edit captureCAM_VFW_vtable as following:
static CvCaptureVTable captureCAM_VFW_vtable =
{
6,
(CvCaptureCloseFunc)icvCloseCAM_VFW,
(CvCaptureGrabFrameFunc)icvGrabFrameCAM_VFW,
(CvCaptureRetrieveFrameFunc)icvRetrieveFrameCAM_VFW,
(CvCaptureGetPropertyFunc)icvGetPropertyCAM_VFW,
(CvCaptureSetPropertyFunc)icvSetPropertyCAM_VFW, // was NULL
(CvCaptureGetDescriptionFunc)0
};
Now rebuilt highgui.dll.
A: I find that in Windows (from Win98 to WinXP SP3), OpenCV will often use Microsoft's VFW library for camera access. The problem with this is that it is often very slow (say a max of 15 FPS frame capture) and buggy (hence why cvSetCaptureProperty often doesn't work). Luckily, you can usually change the resolution in other software (particularly "AMCAP", which is a demo program that is easily available) and it will affect the resolution that OpenCV will use. For example, you can run AMCAP to set the resolution to 640x480, and then OpenCV will use that by default from that point onwards!
But if you can use a different Windows camera access library such as the "videoInput" library http://muonics.net/school/spring05/videoInput/ that accesses the camera using very efficient DirectShow (part of DirectX). Or if you have a professional quality camera, then often it will come with a custom API that lets you access the camera, and you could use that for fast access with the ability to change resolution and many other things.
A: Under Windows try to use VideoInput library:
http://robocraft.ru/blog/computervision/420.html
A:
cvSetCaptureProperty( capture, CV_CAP_PROP_FRAME_WIDTH, WIDTH );
cvSetCaptureProperty( capture, CV_CAP_PROP_FRAME_HEIGHT, HEIGHT);
cvQueryFrame(capture);
That will not work with OpenCV 2.2, but if you use OpenCV 2.1 it will work fine!
A: If you are on windows platform, try DirectShow (IAMStreamConfig).
http://msdn.microsoft.com/en-us/library/dd319784%28v=vs.85%29.aspx
A: Here is some information that could be valuable for people having difficulty changing the default capture resolution (640 x 480)! I ran into such a problem myself with OpenCV 2.4.x and a Logitech camera ... and found a workaround!
The behaviour I detected is that the default format is set up as the initial parameters when camera capture is started (cvCreateCameraCapture), and all requests to change height or width:
cvSetCaptureProperty( capture, CV_CAP_PROP_FRAME_WIDTH, ...
or
cvSetCaptureProperty( capture, CV_CAP_PROP_FRAME_HEIGHT, ...
are not possible afterwards! By adding checks on the return codes of the ioctl functions, I discovered that the V4L2 driver returns EBUSY for those requests!
Therefore, one workaround is to change the default values directly in highgui/cap_v4l.cpp:
#define DEFAULT_V4L_WIDTH 1280 // Originally 640
#define DEFAULT_V4L_HEIGHT 720 // Originally 480
After that, I just recompiled OpenCV ... and managed to get 1280 x 720 without any problem! Of course, a better fix would be to stop the acquisition, change the parameters, and restart the stream afterwards, but I'm not familiar enough with OpenCV to do that!
Hope it will help.
Michel BEGEY
A: cvQueryFrame(capture);
cvSetCaptureProperty( capture, CV_CAP_PROP_FRAME_WIDTH, any_supported_size );
cvSetCaptureProperty( capture, CV_CAP_PROP_FRAME_HEIGHT, any_supported_size);
cvQueryFrame(capture);
should be just enough!
A: Try this:
capture = cvCreateCameraCapture(-1);
//set resolution
cvSetCaptureProperty(capture, CV_CAP_PROP_FRAME_WIDTH, frameWidth);
cvSetCaptureProperty(capture, CV_CAP_PROP_FRAME_HEIGHT, frameHeight);
| {
"language": "en",
"url": "https://stackoverflow.com/questions/14287",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "52"
} |
Q: GL_FRAMEBUFFER_INCOMPLETE_DUPLICATE_ATTACHMENT_EXT errors I'm using FBOs in my OpenGL code and I'm seeing compilation errors on GL_FRAMEBUFFER_INCOMPLETE_DUPLICATE_ATTACHMENT_EXT. What's the cause of this and how do I fix it?
A: The cause of this error is an older version of NVIDIA's glext.h, which still has this definition. Whereas the most recent versions of GLEW don't. This leads to compilation errors in code that you had written previously or got from the web.
The GL_FRAMEBUFFER_INCOMPLETE_DUPLICATE_ATTACHMENT_EXT definition for FBO used to be present in the specification (and hence in header files). But, it was later removed. The reason for this can be found in the FBO extension specification (look for Issue 87):
(87) What happens if a single image is attached more than once to a
framebuffer object?
RESOLVED: The value written to the pixel is undefined.
There used to be a rule in section 4.4.4.2 that resulted in
FRAMEBUFFER_INCOMPLETE_DUPLICATE_ATTACHMENT_EXT if a single
image was attached more than once to a framebuffer object.
FRAMEBUFFER_INCOMPLETE_DUPLICATE_ATTACHMENT_EXT 0x8CD8
* A single image is not attached more than once to the
framebuffer object.
{ FRAMEBUFFER_INCOMPLETE_DUPLICATE_ATTACHMENT_EXT }
This rule was removed in version #117 of the
EXT_framebuffer_object specification after discussion at the
September 2005 ARB meeting. The rule essentially required an
O(n*lg(n)) search. Some implementations would not need to do that
search if the completeness rules did not require it. Instead,
language was added to section 4.10 which says the values
written to the framebuffer are undefined when this rule is
violated.
To fix this error, remove all usage of GL_FRAMEBUFFER_INCOMPLETE_DUPLICATE_ATTACHMENT_EXT from your code.
If this isn't possible in your setup, then add a dummy definition to your glext.h or glew.h file like this:
#define GL_FRAMEBUFFER_INCOMPLETE_DUPLICATE_ATTACHMENT_EXT 0x8CD8
| {
"language": "en",
"url": "https://stackoverflow.com/questions/14297",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: How do I make Powershell run a batch file and then stay open? For example; with the old command prompt it would be:
cmd.exe /k mybatchfile.bat
A: Drop into a cmd instance (or indeed PowerShell itself) and type this:
powershell -?
You'll see that powershell.exe has a "-noexit" parameter which tells it not to exit after executing a "startup command".
A: When running PowerShell.exe just provide the -NoExit switch like so:
PowerShell -NoExit -File "C:\SomeFolder\SomePowerShellScript.ps1"
PowerShell -NoExit -Command "Write-Host 'This window will stay open.'"
Or if you want to run a file and then run a command and have the window stay open, you can do something like this:
PowerShell -NoExit "& 'C:\SomeFolder\SomePowerShellScript.ps1'; Write-Host 'This window will stay open.'"
The -Command parameter is implied if not provided, and here we use the & to call the PowerShell script, and the ; separates the PowerShell commands.
Also, at the bottom of my blog post I show a quick registry change you can make in order to always have PowerShell remain open after executing a script/command, so that you don't need to always explicitly provide the -NoExit switch all the time.
A: I am sure that you have already figured this out, but I'll just post it anyway:
$CreateDate = (Get-Date -format 'yyyy-MM-dd hh-mm-ss')
$RemoteServerName ="server name"
$process = [WMICLASS]"\\$RemoteServerName\ROOT\CIMV2:win32_process"
$result = $process.Create("C:\path to a script\test.bat")
$result | out-file -file "C:\some path\Log-$CreateDate.txt"
| {
"language": "en",
"url": "https://stackoverflow.com/questions/14300",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "24"
} |
Q: What is the best way to handle files for a small office? I'm currently working at a small web development company, we mostly do campaign sites and other promotional stuff. For our first year we've been using a "server" for sharing project files, a plain windows machine with a network share. But this isn't exactly future proof.
SVN is great for code (it's what we use now), but I want to have the comfort of versioning (or at least some form of syncing) for all or most of our files.
What I essentially want is something that does what subversion does for code, but for our documents/psd/pdf files.
I realize subversion handles binary files too, but I feel it might be a bit overkill for our purposes.
It doesn't necessarily need all the bells and whistles of a full version control system, but something that removes the need for incremental naming (Notes_1.23.doc) and lessens the chance of overwriting something by mistake.
It also needs to be multiplatform, handle large files (100 mb+) and be usable by somewhat non technical people.
A: SVN is great for binaries, too. If you're afraid you can't compare revisions, I can tell you that it is possible for Word docs, using Tortoise.
But I do not know, what you mean with "expanding the versioning". SVN is no document management system.
Edit:
but I feel it might be a bit overkill for our purposes
If you are already using SVN and it fulfils your purposes, why bother with a second system?
A: If you have a windows 2003 server, you can have a look at Sharepoint Services 3.0 (http://technet.microsoft.com/en-us/windowsserver/sharepoint/bb684453.aspx).
It can do version control for documents, and has a nice integration with Office, starting with Office xp, but office 2003 and 2007 are better. Office and PDF files can be indexed (via Adobe IFilter), and searched. You can also add IFilters to search metadata in your documents.
Regarding large files, by default the max filesize is 50MB, but it can be configured.
A: We've just moved over to Perforce and have been really happy with it. It's a commercial product, but it's so powerful and easy to use that it's worth the price per seat IMHO.
A: A decent folder structure and naming scheme?
VCS don't really handle images and such very well - would it be possible to have the code in a VCS (SVN/Git/Mercurial etc), along-side a sensible folder structure for the binary-assets (source photos, Photoshop PSD files, Illustrator files and so on)?
It wouldn't handle syncing, but a central file-server would achieve the same thing.
It would require some enforcing and kitten-herding to get people to name things properly, but I think having a version folder for each asset (like someproject/asset/header_logo/v01/header_logo_v01.psd) will basically be like a VCS, but easier to move between different revisions (no vcs checkout blah -r 234 when a client decides they prefered v02 more than v03)
A: Your question is interesting because you're specifying that it be suitable for a small office. At the enterprise level, I would recommend something along the lines of EMC Documentum's eRoom, but obviously that's going to be way more than you need, and more than you want cost-wise as well. I'm not sure of the licensing details on this, but I've heard that if your office has MS Office, you have access to SharePoint, which might work well for you. I'm also sure there are a lot of SaaS implementations of this kind of stuff, so you may want to look at that, keeping in mind that the servers will not be hosted by you, so if the material is extremely sensitive, that's obviously not the proper route.
A: You might want to consider using a Mac as your server and using Time Machine to backup your shared folders. Doing this gives you automatic backups and allows you to share through Samba so everyone can have a network drive on their computer. A Mac server is probably overkill. A Mac Mini would do for a small office or a repurposed desktop machine.
You might also consider Amazon's S3 service to do offline backups. Since it's a pay-as-you-go service this can scale with use, and if you feel you want to move to something else you can always download your data and take it somewhere else.
A: Windows Vista features local file versioning in its file system, which can be useful, but is limited in terms of teamwork. However, if somebody overwrites somebody else's file, a new version is stored as it should be.
A: Also consider KnowledgeTree. Have a look at it, some demos/screenshots are available at
http://www.knowledgetree.com/
It has a free open source Community Edition - so it's cost effective. We haven't tried it, but we chose this one over other systems for a small business looking for document versioning solution.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/14308",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: Using OpenGL textures larger than window/display size I'm having problems using textures that are larger than the OpenGL window or the display size as non-display render targets.
What's the solution for this problem?
A: There's a simple solution.
Assuming your (non-display) textures are 1024x1024 and you are restricted to a 256x256 window/display.
unsigned int WIN_WIDTH = 256;
unsigned int WIN_HEIGHT = WIN_WIDTH;
unsigned int TEX_WIDTH = 1024;
unsigned int TEX_HEIGHT = TEX_WIDTH;
Use the window size to create your OpenGL window:
glutInitWindowSize(WIN_WIDTH, WIN_HEIGHT);
But, use the texture size for everything else:
glViewport(0, 0, TEX_WIDTH, TEX_HEIGHT);
gluOrtho2D(0.0, TEX_WIDTH, 0.0, TEX_HEIGHT);
glTexCoord2i(TEX_WIDTH, TEX_HEIGHT);
| {
"language": "en",
"url": "https://stackoverflow.com/questions/14310",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10"
} |
Q: Using GLUT bitmap fonts I'm writing a simple OpenGL application that uses GLUT. I don't want to roll my own font rendering code, instead I want to use the simple bitmap fonts that ship with GLUT. What are the steps to get them working?
A: Simple text display is easy to do in OpenGL using GLUT bitmap fonts. These are simple 2D fonts and are not suitable for display inside your 3D environment. However, they're perfect for text that needs to be overlaid on the display window.
Here are the sample steps to display Eric Cartman's favorite quote colored in green on a GLUT window:
We'll be setting the raster position in screen coordinates. So, setup the projection and modelview matrices for 2D rendering:
glMatrixMode(GL_PROJECTION);
glPushMatrix();
glLoadIdentity();
gluOrtho2D(0.0, WIN_WIDTH, 0.0, WIN_HEIGHT);
glMatrixMode(GL_MODELVIEW);
glPushMatrix();
glLoadIdentity();
Set the font color. (Set this now, not later.)
glColor3f(0.0, 1.0, 0.0); // Green
Set the window location where the text should be displayed. This is done by setting the raster position in screen coordinates. Lower left corner of the window is (0, 0).
glRasterPos2i(10, 10);
Set the font and display the string characters using glutBitmapCharacter.
string s = "Respect mah authoritah!";
void * font = GLUT_BITMAP_9_BY_15;
for (string::iterator i = s.begin(); i != s.end(); ++i)
{
char c = *i;
glutBitmapCharacter(font, c);
}
Restore back the matrices.
glMatrixMode(GL_MODELVIEW);
glPopMatrix();
glMatrixMode(GL_PROJECTION);
glPopMatrix();
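For convenience, the steps above can be folded into a single helper. This is just a sketch; it assumes WIN_WIDTH and WIN_HEIGHT are defined as before and hard-codes the green colour:
void drawText(const std::string & text, int x, int y)
{
    // Switch to 2D screen-space rendering
    glMatrixMode(GL_PROJECTION);
    glPushMatrix();
    glLoadIdentity();
    gluOrtho2D(0.0, WIN_WIDTH, 0.0, WIN_HEIGHT);
    glMatrixMode(GL_MODELVIEW);
    glPushMatrix();
    glLoadIdentity();
    // Colour and position, then emit the characters
    glColor3f(0.0, 1.0, 0.0);
    glRasterPos2i(x, y);
    for (std::string::const_iterator i = text.begin(); i != text.end(); ++i)
        glutBitmapCharacter(GLUT_BITMAP_9_BY_15, *i);
    // Restore the matrices
    glMatrixMode(GL_MODELVIEW);
    glPopMatrix();
    glMatrixMode(GL_PROJECTION);
    glPopMatrix();
}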
| {
"language": "en",
"url": "https://stackoverflow.com/questions/14318",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "19"
} |
Q: RGB to monochrome conversion How do I convert the RGB values of a pixel to a single monochrome value?
A: This MSDN article uses (0.299 * color.R + 0.587 * color.G + 0.114 * color.B);
This Wikipedia article uses (0.3* color.R + 0.59 * color.G + 0.11 * color.B);
A: This depends on what your motivations are. If you just want to turn an arbitrary image to grayscale and have it look pretty good, the conversions in other answers to this question will do.
If you are converting color photographs to black and white, the process can be both very complicated and subjective, requiring specific tweaking for each image. For an idea what might be involved, take a look at this tutorial from Adobe for Photoshop.
Replicating this in code would be fairly involved, and would still require user intervention to get the resulting image aesthetically "perfect" (whatever that means!).
A: As mentioned also, a grayscale translation (note that monochromatic images need not be in grayscale) from an RGB triplet is subject to taste.
For example, you could cheat, extract only the blue component, by simply throwing the red and green components away, and copying the blue value in their stead. Another simple and generally ok solution would be to take the average of the pixel's RGB-triplet and use that value in all three components.
The fact that there's a considerable market for professional and not-very-cheap-at-all-no-sirree grayscale/monochrome converter plugins for Photoshop alone, tells that the conversion is just as simple or complex as you wish.
A: I found one possible solution in the Color FAQ. The luminance component Y (from the CIE XYZ system) captures, in a single channel, most of what humans perceive as brightness in a color. So, use those coefficients:
mono = (0.2125 * color.r) + (0.7154 * color.g) + (0.0721 * color.b);
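A direct per-pixel implementation of that formula could look like this (a sketch assuming 8-bit channels, without the gamma linearization discussed in a later answer):
// Weighted sum of the channels using the CIE Y coefficients above.
unsigned char toMono(unsigned char r, unsigned char g, unsigned char b)
{
    return (unsigned char)(0.2125 * r + 0.7154 * g + 0.0721 * b + 0.5); // +0.5 rounds to nearest
}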
A: The logic behind converting any RGB-based picture to monochrome is not a trivial linear transformation. In my opinion such a problem is better addressed by "color segmentation" techniques, which you can achieve with k-means clustering.
See reference example from MathWorks site.
https://www.mathworks.com/examples/image/mw/images-ex71219044-color-based-segmentation-using-k-means-clustering
[Image: the original picture, in colour]
[Image: the same picture after converting to monochrome using k-means clustering]
How does this work?
Collect all pixel values from the entire image. From an image which is W pixels wide and H pixels high, you will get W*H color values. Now, using the k-means algorithm, create 2 clusters (or bins) and throw the colours into the appropriate "bins". The 2 clusters represent your black and white shades.
YouTube video demonstrating image segmentation using k-means:
https://www.youtube.com/watch?v=yR7k19YBqiw
Challenges with this method
The k-means clustering algorithm is susceptible to outliers. A few random pixels with a color whose RGB distance is far away from the rest of the crowd could easily skew the centroids to produce unexpected results.
A: Just to point out in the self-selected answer, you have to LINEARIZE the sRGB values before you can apply the coefficients. This means removing the transfer curve.
To remove the power curve, divide the 8-bit R, G and B channels by 255.0, then either use the sRGB piecewise transform, which is recommended for image processing, OR you can cheat and raise each channel to the power of 2.2.
Only after linearizing can you apply the coefficients shown, (which also are not exactly correct in the selected answer).
The standard is 0.2126 0.7152 and 0.0722. Multiply each channel by its coefficient and sum them together for Y, the luminance. Then re-apply the gamma to Y and multiply by 255, then copy to all three channels, and boom you have a greyscale (monochrome) image.
Here it is all at once in one simple line:
// Andy's Easy Greyscale in one line.
// Send it sR sG sB channels as 8 bit ints, and
// it returns three channels sRgrey sGgrey sBgrey
// as 8 bit ints that display glorious grey.
sRgrey = sGgrey = sBgrey = Math.min(Math.pow(Math.pow(sR/255.0, 2.2)*0.2126 + Math.pow(sG/255.0, 2.2)*0.7152 + Math.pow(sB/255.0, 2.2)*0.0722, 0.454545)*255, 255);
And that's it. Unless you have to parse hex strings....
| {
"language": "en",
"url": "https://stackoverflow.com/questions/14330",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "21"
} |
Q: Starting off with OpenGL under Cygwin Is it possible to compile and run OpenGL programs from under Cygwin? If yes, how?
A: If the above doesn't work (and it didn't for me), try the following (which did!)
gcc ogl.c -lglut -lglu -lgl
A: I do not normally post answers this long, but this one is worth it.
I will present a Windows 10 64-bit setup for Cygwin that uses the same libraries as Visual Studio. You will be able to use both development environments with the same code (same includes and libraries), so you can switch between the two as you see fit.
You need three libraries: OpenGL, GLEW, and GLFW.
*
*OpenGL
Visual Studio: The following default locations are valid for current versions of Windows 10 and Visual Studio 2019.
OpenGL static library:
C:\Program Files(x86)\Microsoft Visual Studio\2019\Community\SDK\ScopeCPPSDK\vc15\lib\SDK\lib\opengl32.lib
OpenGL DLL:
C:\Windows\SysWOW64\opengl32.dll
The opengl32.lib library will need to be specified under the VS project Properties -> Configuration Properties -> Linker -> Input -> Additional Dependencies. The same applies for all other dynamic libraries under Visual Studio. I will not mention it again.
Cygwin:
OpenGL static library default location:
/lib/w32api/libopengl32.a
OpenGL dynamic library (uses the Windows DLL):
C:\Windows\SysWOW64\opengl32.dll
*GLEW
Visual Studio: Download 32-bit/64-bit binaries from http://glew.sourceforge.net/ and install in a custom folder, say C:\OpenGL\glew-2.1.0. The same download works for both Visual Studio and Cygwin.
GLEW headers (to #include GL/glew.h):
C:\OpenGL\glew-2.1.0\include
GLEW static library:
C:\OpenGL\glew-2.1.0\lib\Release\x64\glew32.lib
GLEW DLL:
C:\OpenGL\glew-2.1.0\bin\Release\x64\glew32.dll
These can be specified in your VS project's Properties menu.
Cygwin: You can link against this library from Cygwin as-is, meaning you can specify its download directory for the INCS, LIBS, and LDLIBS variables in your Makefile as follows (consistent with the download directory specified above):
GLEW headers directory:
/cygdrive/c/OpenGL/glew-2.1.0/include
GLEW static library directory:
/cygdrive/c/OpenGL/glew-2.1.0/lib/Release/x64
GLEW dynamic library directory:
/cygdrive/c/OpenGL/glew-2.1.0/bin/Release/x64
With these values for INCS, LIBS, and LDLIBS respectively, you can then link using the UNIX naming conventions as shown in the complete Makefile, at the bottom of the post.
*GLFW
This can be downloaded at https://www.glfw.org/download. For our 64-bit setup, you need the Windows 64-bit precompiled binaries. You can place it also in a custom folder, say C:\OpenGL\glfw-3.3.4.bin.WIN64. The same download works for both VS and Cygwin.
Visual Studio:
You can specify directly the download locations into your project Properties for headers (to #include GLFW/glfw3.h in your source code) and DLLs (to have VS link against these libraries), respectively.
Cygwin:
For Cygwin, GLFW is trickier, because you can no longer link against it directly from the download location. You need to:
(a) copy the headers, static, and dynamic libraries from the download locations:
C:\OpenGL\glfw-3.3.4.bin.WIN64\include\GLFW\*.h
C:\OpenGL\glfw-3.3.4.bin.WIN64\lib-mingw-w64\*.a
C:\OpenGL\glfw-3.3.4.bin.WIN64\lib-mingw-w64\*.dll
...into your toolchain's (MinGW's) respective locations:
GLFW headers (create the include directory):
/usr/x86_64-w64-mingw32/include/GLFW/*.h
GLFW static libraries:
/usr/x86_64-w64-mingw32/lib/*.a
GLFW dynamic libraries:
/usr/x86_64-w64-mingw32/bin/*.dll
(b) place the dynamic library location into your PATH environment variable, editable in your .bash_profile file in your home directory.
The Makefile for Cygwin is:
CC=/usr/bin/x86_64-w64-mingw32-c++.exe
OPTS=-std=c++11
DEBUG=-g
CFLAGS=-Wall -c ${DEBUG}
INCS= -I.\
-I/cygdrive/c/OpenGL/glew-2.1.0/include\
-I/cygdrive/c/cygwin64/usr/x86_64-w64-mingw32
LIBS= -L/usr/lib\
-L/cygdrive/c/OpenGL/glew-2.1.0/lib/Release/x64\
-L/cygdrive/c/cygwin64/usr/x86_64-w64-mingw32/lib
LDLIBS= -L/bin\
-L/cygdrive/c/OpenGL/glew-2.1.0/bin/Release/x64\
-L/cygdrive/c/cygwin64/usr/x86_64-w64-mingw32/bin
Program.o: Program.cpp
${CC} ${OPTS} ${INCS} -c $<
Program: Program.o
${CC} ${OPTS} ${LIBS} ${LDLIBS} Program.o -lopengl32 -lglew32 -lglew32.dll -lglfw3 -lgdi32 -luser32 -o Program
With this setup, you can use the same exact source code files in both VS and Cygwin. You can compile, link, and run Program.exe from its directory in Cygwin with:
$ make Program
$ ./Program.exe
You can run from VS a Cygwin-compiled program by opening the existing *.exe as an SLN project and running it using the IDE interface. Conversely, you can run the VS executable (created by VS in Project/Debug or Project/Release) directly from the Cygwin command line with the command above.
The includes are:
#include <GL/glew.h>
#include <GLFW/glfw3.h>
No changes whatsoever will have to be made in the source code to switch back and forth b/w VS and Cygwin. Happy coding :-)
A: It is possible to compile and run OpenGL programs under Cygwin. I illustrate the basic steps here:
*
*I assume you know OpenGL programming. If not, get the Red Book (The OpenGL Programming Guide). It is mandatory reading for OpenGL anyway.
*I assume you have Cygwin installed. If not, visit cygwin.com and install it.
*To compile and run OpenGL programs, you need the Cygwin package named opengl. In the Cygwin installer, it can be found under the Graphics section. Please install this package.
*Write a simple OpenGL program, say ogl.c. (A minimal example is sketched after these steps.)
*Compile the program using the flags -lglut32 -lglu32 -lopengl32. (This links your program with the GLUT, GLU and OpenGL libraries. An OpenGL program might typically use functions from all the 3 of them.) For example:
$ gcc ogl.c -lglut32 -lglu32 -lopengl32
*Run the program. It's as simple as that!
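For step 4, a bare-bones ogl.c could look like the following sketch, which just opens a window and clears it:
#include <GL/glut.h>

void display(void)
{
    glClear(GL_COLOR_BUFFER_BIT); /* clear to the background colour */
    glFlush();
}

int main(int argc, char ** argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_SINGLE | GLUT_RGB);
    glutInitWindowSize(256, 256);
    glutCreateWindow("Hello Cygwin OpenGL!");
    glutDisplayFunc(display);
    glutMainLoop();
    return 0;
}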
A: I remember doing this once with some success, a few years ago, basically trying to cross compile a small Linux OpenGL C++ program. I do recall problems with Windows OpenGL drivers being behind the times (due to MS's focus on DirectX). I had NVidia OpenGL and DirectX drivers installed on my Windows system, but cygwin/g++ seemed to want to only use the Microsoft OpenGL DLLs, many years old, which do not have the latest support for all the ARB extensions, like shader programs, etc. YMMV.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/14344",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14"
} |
Q: How do I call a Flex SWF from a remote domain using Flash (AS3)? I have a Flex swf hosted at http://www.a.com/a.swf.
I have a flash code on another doamin that tries loading the SWF:
_loader = new Loader();
var req:URLRequest = new URLRequest("http://services.nuconomy.com/n.swf");
_loader.contentLoaderInfo.addEventListener(Event.COMPLETE,onLoaderFinish);
_loader.load(req);
On the onLoaderFinish event I try to load classes from the remote SWF and create them:
_loader.contentLoaderInfo.applicationDomain.getDefinition("someClassName") as Class
When this code runs I get the following exception
SecurityError: Error #2119: Security sandbox violation: caller http://localhost.service:1234/flashTest/Main.swf cannot access LoaderInfo.applicationDomain owned by http://www.b.com/b.swf.
at flash.display::LoaderInfo/get applicationDomain()
at NuconomyLoader/onLoaderFinish()
Is there any way to get this code working?
A: This is all described in The Adobe Flex 3 Programming ActionScript 3 PDF on page 550 (Chapter 27: Flash Player Security / Cross-scripting):
If two SWF files written with ActionScript 3.0 are served from different domains—for example, http://siteA.com/swfA.swf and http://siteB.com/swfB.swf—then, by default, Flash Player does not allow swfA.swf to script swfB.swf, nor swfB.swf to script swfA.swf. A SWF file gives permission to SWF files from other domains by calling Security.allowDomain(). By calling Security.allowDomain("siteA.com"), swfB.swf gives SWF files from siteA.com permission to script it.
It goes on in some more detail, with diagrams and all.
A: You'll need a crossdomain.xml policy file on the server that has the file you load; it should look something like this:
<?xml version="1.0"?>
<!-- http://www.foo.com/crossdomain.xml -->
<cross-domain-policy>
<allow-access-from domain="www.friendOfFoo.com" />
<allow-access-from domain="*.foo.com" />
<allow-access-from domain="105.216.0.40" />
</cross-domain-policy>
Put it as crossdomain.xml in the root of the domain you're loading from.
Also you need to set the loader to read this file as such:
var loaderContext:LoaderContext = new LoaderContext();
loaderContext.checkPolicyFile = true;
var loader:Loader = new Loader();
loader.contentLoaderInfo.addEventListener( Event.COMPLETE, onComplete );
loader.load( new URLRequest( "http://my.domain.com/image.png" ), loaderContext );
code sample yoinked from http://blog.log2e.com/2008/08/15/when-a-cross-domain-policy-file-is-not-enough/
A: Mayhaps System.Security.allowDomain is what you need?
| {
"language": "en",
"url": "https://stackoverflow.com/questions/14350",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Accessing OpenGL state variables in Cg I need to access the OpenGL state variables (such as the MVP matrices) in my Cg shader program. I'm passing these values to my Cg shader program manually using calls such as cgGLSetStateMatrixParameter() in my C/C++ code. Is there an easier way to do this?
A: If you are on any fairly recent Cg profile (arbvp1 and later), your Cg shader programs can in fact access the OpenGL state (MVP matrices, material and light settings) directly. This makes writing those programs less painful.
Here are some of the state variables which can be accessed:
MVP matrices of all types:
state.matrix.mvp
state.matrix.inverse.mvp
state.matrix.modelview
state.matrix.inverse.modelview
state.matrix.modelview.invtrans
state.matrix.projection
state.matrix.inverse.projection
Light and material properties:
state.material.ambient
state.material.diffuse
state.material.specular
state.light[0].ambient
For the full list of state variables, refer to the section Accessing OpenGL State, OpenGL ARB Vertex Program Profile (arbvp1) in the Cg Users Manual.
Note:
*
*All the OpenGL state variables are of uniform type when accessed in Cg.
*For light variables, the index is mandatory. (Eg: 1 in state.light[1].ambient)
*Lighting or light(s) need not be enabled to use those corresponding light values inside Cg. But, they need to be set using glLight() functions.
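As a quick illustration, here is a sketch (not taken from the manual) of a vertex program that picks up the MVP matrix straight from the GL state:
// Sketch: an arbvp1-profile vertex program binding the MVP matrix via a semantic.
void main(float4 position : POSITION,
          out float4 oPosition : POSITION,
          uniform float4x4 modelViewProj : state.matrix.mvp)
{
    // No cgGLSetStateMatrixParameter() call is needed on the C/C++ side.
    oPosition = mul(modelViewProj, position);
}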
| {
"language": "en",
"url": "https://stackoverflow.com/questions/14358",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: .Net Dynamic Plugin Loading with Authority What recommendations can you give for a system which must do the following:
Load Plugins (and eventually execute them) but have 2 methods of loading these plugins:
*
*Load only authorized plugins
(developed by the owner of the
software)
*Load all plugins
And we need to be reasonably secure that the authorized plugins are the real deal (unmodified). However all plugins must be in seperate assemblies. I've been looking at using strong named assemblies for the plugins, with the public key stored in the loader application, but to me this seems too easy to modify the public key within the loader application (if the user was so inclined) regardless of any obfuscation of the loader application. Any more secure ideas?
A: Basically, if you're putting your code on someone else's machine, there's no absolute guarantee of security.
You can look at all kinds of security tricks, but in the end, the code is on their machine so it's out of your control.
How much do you stand to lose if the end user loads an unauthorised plugin?
A:
How much do you stand to lose if the end user loads an unauthorised plugin?
Admittedly this won't happen often, but when/if it does happen we lose a lot, and although I understand we will produce nothing 100% secure, I want to make it enough of a hindrance to put people off doing it.
The annoying thing about going with a simple dynamic loading with full strong name, is that all it takes is a simple string literal change within the loader app to load any other assembly even though the plugins are signed.
A: You can broaden your question: "how can I protect my .NET assemblies from reverse engineering?"
The answer is: you cannot. For those who haven't seen it yet, just look up "Reflector" and run it on some naive exe.
(By the way, this is always the answer for code that is out of your hands, as long as you do not ship en/decryption hardware with it.)
Obfuscation tries to make reverse engineering harder (more costly) than redevelopment, and for some types of algorithms it succeeds.
A: Sign the assemblies.
Strong-name signing, or strong-naming,
gives a software component a globally
unique identity that cannot be spoofed
by someone else. Strong names are used
to guarantee that component
dependencies and configuration
statements map to exactly the right
component and component version.
http://msdn.microsoft.com/en-us/library/h4fa028b(VS.80).aspx
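A minimal sketch of verifying a plugin's strong-name identity at load time by comparing public key tokens (the expected token bytes and the path are placeholders):
using System.Linq;
using System.Reflection;

byte[] expectedToken = { 0xB7, 0x7A, 0x5C, 0x56, 0x19, 0x34, 0xE0, 0x89 }; // placeholder token
AssemblyName name = AssemblyName.GetAssemblyName(@"C:\plugins\candidate.dll"); // placeholder path
byte[] token = name.GetPublicKeyToken();
bool authorized = token != null && token.SequenceEqual(expectedToken);
As the question itself notes, the expected token lives inside your loader, so this only raises the bar; it does not make tampering impossible.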
| {
"language": "en",
"url": "https://stackoverflow.com/questions/14359",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Valid OpenGL context How and at what stage is a valid OpenGL context created in my code? I'm getting errors on even simple OpenGL code.
A: From the posts on comp.graphics.api.opengl, it seems like most newbies burn their hands on their first OpenGL program. In most cases, the error is caused due to OpenGL functions being called even before a valid OpenGL context is created. OpenGL is a state machine. Only after the machine has been started and humming in the ready state, can it be put to work.
Here is some simple code to create a valid OpenGL context:
#include <stdlib.h>
#include <GL/glut.h>
// Window attributes
static const unsigned int WIN_POS_X = 30;
static const unsigned int WIN_POS_Y = WIN_POS_X;
static const unsigned int WIN_WIDTH = 512;
static const unsigned int WIN_HEIGHT = WIN_WIDTH;
void glInit(int, char **);
int main(int argc, char * argv[])
{
// Initialize OpenGL
glInit(argc, argv);
// A valid OpenGL context has been created.
// You can call OpenGL functions from here on.
glutMainLoop();
return 0;
}
void glInit(int argc, char ** argv)
{
// Initialize GLUT
glutInit(&argc, argv);
glutInitDisplayMode(GLUT_DOUBLE);
glutInitWindowPosition(WIN_POS_X, WIN_POS_Y);
glutInitWindowSize(WIN_WIDTH, WIN_HEIGHT);
glutCreateWindow("Hello OpenGL!");
return;
}
Note:
*
*The call of interest here is glutCreateWindow(). It not only creates a window, but also creates an OpenGL context.
*The window created with glutCreateWindow() is not visible until glutMainLoop() is called.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/14364",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: GLUT pop-up menus Is it easy to create GLUT pop-up menus for my OpenGL application? If yes, how?
A: Creating and using pop-up menus with GLUT is very simple. Here is a code sample that creates a pop-up menu with 4 options:
// Menu items
enum MENU_TYPE
{
MENU_FRONT,
MENU_SPOT,
MENU_BACK,
MENU_BACK_FRONT,
};
// Assign a default value
MENU_TYPE show = MENU_BACK_FRONT;
// Menu handling function declaration
void menu(int);
int main()
{
// ...
// Create a menu
glutCreateMenu(menu);
// Add menu items
glutAddMenuEntry("Show Front", MENU_FRONT);
glutAddMenuEntry("Show Back", MENU_BACK);
glutAddMenuEntry("Spotlight", MENU_SPOT);
glutAddMenuEntry("Blend 'em all", MENU_BACK_FRONT);
// Associate a mouse button with menu
glutAttachMenu(GLUT_RIGHT_BUTTON);
// ...
return;
}
// Menu handling function definition
void menu(int item)
{
switch (item)
{
case MENU_FRONT:
case MENU_SPOT:
case MENU_BACK:
case MENU_BACK_FRONT:
{
show = (MENU_TYPE) item;
}
break;
default:
{ /* Nothing */ }
break;
}
glutPostRedisplay();
return;
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/14370",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
} |
Q: CVS to SVN conversion and reorganizing branches I am converting from existing CVS repository to SVN repository. CVS repository has few brances and I'd like to rename branches while converting.
Wanted conversion is like this:
CVS branch SVN branch
HEAD -> branches/branchX
branchA -> trunk
branchB -> branches/branchB
branchC -> branches/branchC
That is, CVS HEAD becomes a normal branch and CVS branchA becomes SVN trunk.
Both CVS and SVN repositories will be on same linux machine.
How could this be done?
Also conversion where CVS branchA becomes SVN trunk and all other CVS branches are ignored might be enough.
A: I am especially interested in preserving commit history. If I rename and move branches around in SVN after the conversion, will the history be preserved?
Yes. Subversion also keeps track of changes to the directory structure, and all version history is preserved even if a file is moved in the tree.
I recommend converting the repository with cvs2svn, including branches and tags. Once the repository is in Subversion you can move the branches and tags around as you wish. This also keeps the history of the actual tags and branches being renamed, which may be interesting in a historical context later.
A: It's been a while since I've done a CVS -> SVN conversion, and probably even longer since I did one with a nontrivial branch structure. Since SVN can move around directory trees fairly easily, you could do the whole conversion first, then sort out the trunk/branches structure entirely within SVN later.
If you do get to that point and are moving around whole directory trees within SVN, it's probably best if you commit after every tree rename/move step. Just something to keep in mind.
A: Subversion branches are directories, so you could just move the branches after the import has finished and no history will be lost.
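For example, the post-import moves for the mapping in the question could be done server-side like this (the repository URL is made up):
svn move -m "CVS HEAD becomes branchX" \
    http://svn.example.com/repo/trunk \
    http://svn.example.com/repo/branches/branchX
svn move -m "CVS branchA becomes trunk" \
    http://svn.example.com/repo/branches/branchA \
    http://svn.example.com/repo/trunk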
A: Some additional information to support the accepted answer:
cvs2svn does not allow conversion of from trunk to a branch or the branch to trunk
so moving things once you're converted to svn is the best way to go.
A: It is possible to move the trunk and branch directories after the conversion, but this would require an explicit post-conversion SVN commit that will remain in your SVN history, making history exploration a bit more complicated.
But you can indeed tell cvs2svn to store the trunk and branches to the SVN paths that you want by using the --symbol-hints=symbol-hints.txt command-line option or (if you are using an options file for your conversion) the SymbolHintsFileRule('symbol-hints.txt') symbol strategy rule, where symbol-hints.txt is a file containing lines like the following:
. .trunk. trunk branches/branchX .
. branchX branch trunk .
Please note that some commit messages that are autogenerated by cvs2svn (for example, for the creation of the branch) will mention the original branch name.
A: Although moving around branches after the conversion is done is possible, it may be better to setup the cvs2svn configuration file to specify exactly the name you want for each of your existing branches. One of the benefits of this is that FishEye will understand the output a lot better.
A: I am especially interested in preserving commit history. If I rename and move branches around in SVN after the conversion, will the history be preserved?
| {
"language": "en",
"url": "https://stackoverflow.com/questions/14373",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
} |
Q: LINQ, entity that implements Interface and exception in mapping I'm using repository pattern with LINQ, have IRepository.DeleteOnSubmit(T Entity). It works fine, but when my entity class has interface, like this:
public interface IEntity { int ID {get;set;} }
public partial class MyEntity: IEntity {
public int ID {
get { return this.IDfield; }
set { this.IDfield=value; }
}
}
and then trying to delete some entity like this:
IEntity ie=repository.GetByID(1);
repoitory.DeleteOnSubmit(ie);
throws
The member 'IEntity.ID' has no supported translation to SQL.
fetching data from DB works, but delete and insert doesn't. How to use interface against DataContext?
Here it is:
Exception message:
The member 'MMRI.DAL.ITag.idContent' has no supported translation to SQL.
Code:
var d = repContent.GetAll().Where(x => x.idContent.Equals(idContent));
foreach (var tagConnect in d) <- error line
{
repContent.DeleteOnSubmit(tagConnect);
(it gets all tags from DB, and deletes them)
And stack trace:
[NotSupportedException: The member 'MMRI.DAL.ITag.idContent' has no supported translation to SQL.]
System.Data.Linq.SqlClient.Visitor.VisitMember(SqlMember m) +621763
System.Data.Linq.SqlClient.SqlVisitor.Visit(SqlNode node) +541
System.Data.Linq.SqlClient.SqlVisitor.VisitExpression(SqlExpression exp) +8
System.Data.Linq.SqlClient.SqlVisitor.VisitBinaryOperator(SqlBinary bo) +18
System.Data.Linq.SqlClient.Visitor.VisitBinaryOperator(SqlBinary bo) +18
System.Data.Linq.SqlClient.SqlVisitor.Visit(SqlNode node) +196
System.Data.Linq.SqlClient.SqlVisitor.VisitExpression(SqlExpression exp) +8
System.Data.Linq.SqlClient.SqlVisitor.VisitSelectCore(SqlSelect select) +46
System.Data.Linq.SqlClient.Visitor.VisitSelect(SqlSelect select) +20
System.Data.Linq.SqlClient.SqlVisitor.Visit(SqlNode node) +1024
System.Data.Linq.SqlClient.SqlProvider.BuildQuery( ...
When I try do decorate partial class:
[Column(Storage = "_idEvent", DbType = "Int NOT NULL", IsPrimaryKey = true)]
public int idContent
{ get { return this.idEvent; } set { this.idEvent=value; } }
it throws error "Invalid column name 'idContent'."
A: It appears Microsoft dropped support for == operator in interfaces when using linq-to-sql in MVC4 (or maybe it was never supported). You can however use i.ID.Equals(someId) in place of the == operator.
Casting IQueryable to IEnumerable works but should not be used! The reason is that IQueryable has a funky implementation of IEnumerable. Whatever LINQ method you use on an IQueryable through the IEnumerable interface will cause the query to be executed first, all the results to be fetched into memory from the DB, and the method then to be run locally on the data (normally those methods would be translated to SQL and executed in the DB). Imagine trying to get a single row from a table containing a billion rows, fetching all of them only to pick one (and it gets much worse with careless casting of IQueryable to IEnumerable and lazy loading of related data).
Apparently LINQ has no problem using the == operator with interfaces on local data (so only IQueryable is affected), and the same reportedly goes for Entity Framework.
A: This works for me -
public partial class MyEntity: IEntity
{ [Column(Name = "IDfield", Storage = "_IDfield", IsDbGenerated = true)]
public int ID
{
get { return this.IDfield; }
set { this.IDfield=value; }
}
}
A: Try this:
using System.Data.Linq.Mapping;
public partial class MyEntity: IEntity
{ [Column(Storage="IDfield", DbType="int not null", IsPrimaryKey=true)]
public int ID
{
get { return this.IDfield; }
set { this.IDfield=value; }
}
}
A: For translating your LINQ query to actual SQL, Linq2SQL inspects the expression you give it. The problem is that you have not supplied enough information for L2S to be able to translate the "ID" property to the actual DB column name. You can achieve what you want by making sure that L2S can map "ID" to "IDField".
This should be possible using the approaches provided in the other answers.
If you use the designer, you can also simply rename the class property "IDField" to "ID", with the added benefit that you won't have to explicitly implement the "ID" property in your partial class anymore, i.e. the partial class definition for MyEntity simply becomes:
public partial class MyEntity: IEntity
{
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/14375",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10"
} |
Q: Using the mouse scrollwheel in GLUT I want to use the mouse scrollwheel in my OpenGL GLUT program to zoom in and out of a scene? How do I do that?
A: Freeglut's glutMouseWheelFunc callback is version-dependent and not reliable on X. Use the standard mouse function and test for buttons 3 and 4.
The OpenGlut notes on glutMouseWheelFunc state:
Due to lack of information about the mouse, it is impossible to
implement this correctly on X at this time. Use of this function
limits the portability of your application. (This feature does work on
X, just not reliably.) You are encouraged to use the standard,
reliable mouse-button reporting, rather than wheel events.
Using standard GLUT mouse reporting:
#include <GL/glut.h>
<snip...>
void mouse(int button, int state, int x, int y)
{
// Wheel reports as button 3(scroll up) and button 4(scroll down)
if ((button == 3) || (button == 4)) // It's a wheel event
{
// Each wheel event reports like a button click, GLUT_DOWN then GLUT_UP
if (state == GLUT_UP) return; // Disregard redundant GLUT_UP events
printf("Scroll %s At %d %d\n", (button == 3) ? "Up" : "Down", x, y);
}else{ // normal button event
printf("Button %s At %d %d\n", (state == GLUT_DOWN) ? "Down" : "Up", x, y);
}
}
<snip...>
glutMouseFunc(mouse);
As the other answer states, it is "dead simple". It's just not reliable.
A: Observe cases 3 and 4 in the switch statement below, in the mouseClick callback:
glutMouseFunc(mouseClick);
...
void mouseClick(int btn, int state, int x, int y) {
if (state == GLUT_DOWN) {
switch(btn) {
case GLUT_LEFT_BUTTON:
std::cout << "left click at: (" << x << ", " << y << ")\n";
break;
case GLUT_RIGHT_BUTTON:
std::cout << "right click at: (" << x << ", " << y << ")\n";
break;
case GLUT_MIDDLE_BUTTON:
std::cout << "middle click at: (" << x << ", " << y << ")\n";
break;
case 3: //mouse wheel scrolls
std::cout << "mouse wheel scroll up\n";
break;
case 4:
std::cout << "mouse wheel scroll down\n";
break;
default:
break;
}
}
glutPostRedisplay();
}
A: Note that the venerable Nate Robins' GLUT library doesn't support the scroll wheel. But later implementations of GLUT, like FreeGLUT, do.
Using the scroll wheel in FreeGLUT is dead simple. Here is how:
Declare a callback function that shall be called whenever the scroll wheel is scrolled. This is the prototype:
void mouseWheel(int, int, int, int);
Register the callback with the (Free)GLUT function glutMouseWheelFunc().
glutMouseWheelFunc(mouseWheel);
Define the callback function. The second parameter gives the direction of the scroll: +1 is forward, -1 is backward.
void mouseWheel(int button, int dir, int x, int y)
{
if (dir > 0)
{
// Zoom in
}
else
{
// Zoom out
}
return;
}
That's it!
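To tie it back to the zoom question, here is a minimal sketch of such a callback driving a zoom variable; the cameraDistance name and step size are made up for illustration:
static float cameraDistance = 10.0f; /* hypothetical zoom state */

void mouseWheel(int button, int dir, int x, int y)
{
    if (dir > 0)
        cameraDistance -= 0.5f; /* wheel forward: zoom in */
    else
        cameraDistance += 0.5f; /* wheel backward: zoom out */
    glutPostRedisplay();        /* redraw the scene with the new distance */
}

/* after glutCreateWindow(...): */
glutMouseWheelFunc(mouseWheel);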
| {
"language": "en",
"url": "https://stackoverflow.com/questions/14378",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "34"
} |
Q: fopen deprecated warning With the Visual Studio 2005 C++ compiler, I get the following warning when my code uses the fopen() and such calls:
1>foo.cpp(5) : warning C4996: 'fopen' was declared deprecated
1> c:\program files\microsoft visual studio 8\vc\include\stdio.h(234) : see declaration of 'fopen'
1> Message: 'This function or variable may be unsafe. Consider using fopen_s instead. To disable deprecation, use _CRT_SECURE_NO_DEPRECATE. See online help for details.'
How do I prevent this?
A: I'm using Visual Studio 2008.
In this case I often set a preprocessor definition:
Menu \ Project \ [ProjectName] Properties... (Alt+F7)
Clicking this menu item (or pressing Alt+F7 in the project window) opens the "Property Pages" window.
Then, in the menu on the left of the window, go to
Configuration Properties \ C/C++ \ Preprocessor
and add _CRT_SECURE_NO_WARNINGS to Preprocessor Definitions.
A: Consider using a portability library like glib or the apache portable runtime. These usually provide safe, portable alternatives to calls like these. It's a good thing too, because these insecure calls are deprecated in most modern environments.
A: Well you could add a:
#pragma warning (disable : 4996)
before you use fopen, but have you considered using fopen_s as the warning suggests? It returns an error code allowing you to check the result of the function call.
The problem with just disabling deprecated function warnings is that Microsoft may remove the function in question in a later version of the CRT, breaking your code (as stated below in the comments, this won't happen in this instance with fopen because it's part of the C & C++ ISO standards).
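For example, a minimal sketch of the fopen_s variant with its error check (the file name is made up):
#include <stdio.h>

int main(void)
{
    FILE *stream = NULL;
    /* fopen_s returns an error code instead of just NULL on failure. */
    errno_t err = fopen_s(&stream, "foo.txt", "r"); /* hypothetical file */
    if (err != 0 || stream == NULL)
    {
        printf("fopen_s failed with error %d\n", (int)err);
        return 1;
    }
    /* ... use the file ... */
    fclose(stream);
    return 0;
}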
A: It looks like Microsoft has deprecated lots of calls which use buffers to improve code security. However, the solutions they're providing aren't portable. Anyway, if you aren't interested in using the secure version of their calls (like fopen_s), you need to place a definition of _CRT_SECURE_NO_DEPRECATE before your included header files. For example:
#define _CRT_SECURE_NO_DEPRECATE
#include <stdio.h>
The preprocessor directive can also be added to your project settings to apply it to all the files in the project. To do this add _CRT_SECURE_NO_DEPRECATE to Project Properties -> Configuration Properties -> C/C++ -> Preprocessor -> Preprocessor Definitions.
A: This is just Microsoft being cheeky. "Deprecated" implies a language feature that may not be provided in future versions of the standard language / standard libraries, as decreed by the standards committee. It does not, or should not mean, "we, unilaterally, don't think you should use it", no matter how well-founded that advice is.
A: If your code is intended for a different OS (like Mac OS X or Linux) as well, you may use the following:
#ifdef _WIN32
#define _CRT_SECURE_NO_DEPRECATE
#endif
A: If you want it to be used on many platforms, you could, as commented, use defines like:
#if defined(_MSC_VER) || defined(WIN32) || defined(_WIN32) || defined(__WIN32__) \
|| defined(WIN64) || defined(_WIN64) || defined(__WIN64__)
errno_t err = fopen_s(&stream,name, "w");
#endif
#if defined(unix) || defined(__unix) || defined(__unix__) \
|| defined(linux) || defined(__linux) || defined(__linux__) \
|| defined(sun) || defined(__sun) \
|| defined(BSD) || defined(__OpenBSD__) || defined(__NetBSD__) \
|| defined(__FreeBSD__) || defined __DragonFly__ \
|| defined(sgi) || defined(__sgi) \
|| defined(__MACOSX__) || defined(__APPLE__) \
|| defined(__CYGWIN__)
stream = fopen(name, "w");
#endif
A: Many of Microsoft's secure functions, including fopen_s(), are part of C11, so they should be portable now. You should realize that the secure functions differ in exception behaviors and sometimes in return values. Additionally you need to be aware that while these functions are standardized, it's an optional part of the standard (Annex K) that at least glibc (default on Linux) and FreeBSD's libc don't implement.
However, I fought this problem for a few years. I posted a larger set of conversion macros here. For your immediate problem, put the following code in an include file, and include it in your source code:
#pragma once
#if !defined(FCN_S_MACROS_H)
#define FCN_S_MACROS_H
#include <cstdio>
#include <cstring> // Needed for _stricmp
using namespace std;
// _MSC_VER = 1400 is MSVC 2005. _MSC_VER = 1600 (MSVC 2010) was the current
// value when I wrote (some of) these macros.
#if (defined(_MSC_VER) && (_MSC_VER >= 1400) )
inline FILE* fcnSMacro_fopen_s(const char *fname, const char *mode)
{ FILE *fptr;
fopen_s(&fptr, fname, mode);
return fptr;
}
#define fopen(fname, mode) fcnSMacro_fopen_s((fname), (mode))
#else
#define fopen_s(fp, fmt, mode) *(fp)=fopen( (fmt), (mode))
#endif //_MSC_VER
#endif // FCN_S_MACROS_H
Of course this approach does not implement the expected exception behavior.
A: For those who are using Visual Studio 2017 version, it seems like the preprocessor definition required to run unsafe operations has changed. Use instead:
#define _CRT_SECURE_NO_WARNINGS
It will compile then.
A: I also got the same problem. When I try to add the opencv library
#include <opencv\cv.h>
I got not a warning but an error.
error C4996: 'fopen': This function or variable may be unsafe. Consider using fopen_s instead. To disable deprecation, use _CRT_SECURE_NO_WARNINGS. See online help for details. c:\program files (x86)\opencv\build\include\opencv2\flann\logger.h
I also used the preprocessor directives as mentioned. But that didn't solve the problem.
I solved it by doing as follows:
*
*Go to Properties -> C/C++ -> Precompiled Headers -> Choose Not Using Precompiled Headers in Precompiled Header.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/14386",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "74"
} |
Q: Regex and unicode I have a script that parses the filenames of TV episodes (show.name.s01e02.avi for example), grabs the episode name (from the www.thetvdb.com API) and automatically renames them into something nicer (Show Name - [01x02].avi)
The script works fine, that is until you try to use it on files that have Unicode show names (something I never really thought about, since all the files I have are English, so pretty much all of them fall within [a-zA-Z0-9'\-])
How can I allow the regular expressions to match accented characters and the likes? Currently the regex's config section looks like..
config['valid_filename_chars'] = """0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!@£$%^&*()_+=-[]{}"'.,<>`~? """
config['valid_filename_chars_regex'] = re.escape(config['valid_filename_chars'])
config['name_parse'] = [
# foo_[s01]_[e01]
re.compile('''^([%s]+?)[ \._\-]\[[Ss]([0-9]+?)\]_\[[Ee]([0-9]+?)\]?[^\\/]*$'''% (config['valid_filename_chars_regex'])),
# foo.1x09*
re.compile('''^([%s]+?)[ \._\-]\[?([0-9]+)x([0-9]+)[^\\/]*$''' % (config['valid_filename_chars_regex'])),
# foo.s01.e01, foo.s01_e01
re.compile('''^([%s]+?)[ \._\-][Ss]([0-9]+)[\.\- ]?[Ee]([0-9]+)[^\\/]*$''' % (config['valid_filename_chars_regex'])),
# foo.103*
re.compile('''^([%s]+)[ \._\-]([0-9]{1})([0-9]{2})[\._ -][^\\/]*$''' % (config['valid_filename_chars_regex'])),
# foo.0103*
re.compile('''^([%s]+)[ \._\-]([0-9]{2})([0-9]{2,3})[\._ -][^\\/]*$''' % (config['valid_filename_chars_regex'])),
]
A: Python's re module doesn't support \p{Letter} or \X. However, the new regex implementation on PyPI does.
A: In Mastering Regular Expressions by Jeffrey Friedl (a great book) it is mentioned that you could use \p{Letter}, which will match anything Unicode considers a letter.
A: Use a subrange of [\u0000-\uFFFF] for what you want.
You can also use the re.UNICODE compile flag. The docs say that if UNICODE is set, \w will match the characters [0-9_] plus whatever is classified as alphanumeric in the Unicode character properties database.
See also http://coding.derkeiler.com/Archive/Python/comp.lang.python/2004-05/2560.html.
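A minimal sketch of the flag in action (the show name is made up):
import re

# With re.UNICODE, \w matches [0-9_] plus anything classified as
# alphanumeric in the Unicode database, not just [a-zA-Z0-9_].
pattern = re.compile(r'^(\w+?)[ ._-][Ss](\d+)[ ._-]?[Ee](\d+)', re.UNICODE)

match = pattern.match(u'N\u00e4me.s01e02.avi')  # "Näme.s01e02.avi"
if match:
    print(match.groups())  # name, season, episode: 'Näme', '01', '02'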
A: \X seems to be available as a generic word-character in some languages; it allows you to match a single character regardless of how many bytes it takes up. Might be useful.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/14389",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "31"
} |
Q: Managed Source Control Hosting and Continuous Integration with CVSDude and CruiseControl.net For my own project at home, I'm using the rather excellent managed subversion hosting from CVSDude. As it's only me working on the code right now, I'm not using CruiseControl.net, however I expect this will change in the next couple of months and will want a full build process to kick off upon check-in.
Has anyone managed to get CruiseControl.net working with CVSDude? My colleague Mike has this blog post where someone from CVSDude said:
"Your can use our post-commit call back facility to call a URL on your
server, which passes variables relating to the last checkin (variables
detailed in our specification). Your CGI script will these variables and
perform whatever tasks are required i.e. updating Cruise Control, etc."
Sounds lovely. But has anyone actually done it with cruisecontrol?
A: I had this email back from CVSDude:
We are currently working on a new version of our service which will eventually include CruiseControl integration.
:-/
A: Dunno if you are still interested, but we have CruiseControl (the original Java-based one, not .NET, but this shouldn't matter much) working with CVSDude - it just does svn log every minute to check if anything changed. We plan to switch to using their API though, as unfortunately svn log lags somewhat behind the real-time update.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/14395",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: When should a multi-module project to split into separate repository trees? Currently we have a project with a standard subversion repository layout of:
./trunk
./branches
./tags
However, as we're moving down the road of OSGi and a modular project, we've ended up with:
./trunk/bundle/main
./trunk/bundle/modulea
./trunk/bundle/moduleb
./tags/bundle/main-1.0.0
./tags/bundle/main-1.0.1
./tags/bundle/modulea-1.0.0
The 'build' is still quite monolithic in that it builds all modules in sequence, though I'm starting to wonder if we should refactor the build/repository to something more like:
./bundle/main/trunk
./bundle/main/tags/main-1.0.0
./bundle/main/tags/main-1.0.1
./bundle/modulea/trunk
./bundle/modulea/tags/modulea-1.0.0
In this pattern I would imagine each module building itself, and storing its binary in a repository (maven, ivy, or another path of the subversion repository itself).
Are there guidelines or 'best-practices' over project layouts once one goes modular?
A: The Subversion book contains two sections on this:
*
*Repository Layout
*Planning Your Repository Organization
A blog entry on the subject: "Subversion Repository Layout"
The short answer, though: while your mileage will vary (every situation is individual), your /bundle/<project>/(trunk|tags|branches) scheme is rather common and will likely work well for you.
A: This is very much up to personal preference, but I find the following structure suitable for large projects consisting of many modules:
branches
project-name
module1
branch-name
module2
possibly-another-branch-name
branch-name-on-a-higher-level-including-both-modules
module1
module2
tags
... (same as branches)
trunk
project-name
module1
module2
I have also often used the structure in large repositories containing many projects, because keeping all projects in the same repository makes cross-referencing projects and sharing code between them—with history—easier.
I like to use the structure with root trunk, tags and branches folders from the start because in my experience (with large repositories containing many projects), many sub-projects and modules will never have separate tags or branches, so there is no need to create the folder structure for them. It also makes it easier for the developers to check out the entire trunk of the repository and not get all the tags and branches (which they don't need most of the time).
I guess this is a matter of project or company policy though. If you have one repository for each project or a given developer is only likely to work on a single project in the repository at a time the rooted trunk may not make as much sense.
A: Just my two cents...
I just want to emphasize the comment in the SVN documentation (already quoted in another answer, same thread) http://svnbook.red-bean.com/en/1.4/svn.reposadmin.planning.html#svn.reposadmin.projects.chooselayout
The excerpt references the following structure :
/
trunk/
calc/
calendar/
spreadsheet/
…
tags/
calc/
calendar/
spreadsheet/
…
branches/
calc/
calendar/
spreadsheet/
"There's nothing particularly incorrect about such a layout, but it may or may not seem as intuitive for your users. Especially in large, multi-project situations with many users, those users may tend to be familiar with only one or two of the projects in the repository. But the projects-as-branch-siblings tends to de-emphasize project individuality and focus on the entire set of projects as a single entity. That's a social issue though. We like our originally suggested arrangement for purely practical reasons—it's easier to ask about (or modify, or migrate elsewhere) the entire history of a single project when there's a single repository path that holds the entire history—past, present, tagged, and branched—for that project and that project alone."
For my own, I tend to agree quite strongly with this and prefer the following layout:
/
utils/
calc/
trunk/
tags/
branches/
calendar/
trunk/
tags/
branches/
…
office/
spreadsheet/
trunk/
tags/
branches/
The reason is simply that it's impractical to tag a complete project set when one wants to tag only a specific subset.
Let's use an example: If project-1 depends on moduleA v1.1 and moduleB v2.3, I don't want newer moduleA v2.x to appear in the tags. In fact, when coming back some days/weeks/months later to this tagged release, I would be forced to open the bundle descriptor in the tagged version of project-1 to read the version of moduleA actually required.
Moreover, if I have to make a specific backup of this release's sources onto a CD, I just want to export this tag without downloading hundreds of megabytes of unrelated stuff.
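For example, tagging just the calc project under this layout is a single cheap server-side copy (the repository URL is made up):
svn copy -m "Tag calc 1.0.0" \
    http://svn.example.com/repos/utils/calc/trunk \
    http://svn.example.com/repos/utils/calc/tags/calc-1.0.0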
It was just my two cents.
A: I've answered a similar question in a StackOverflow Version Control Structure question. It actually fits even better here since we do heavy OSGi development and have lots of bundles. I must echo Anders Sandvig's comments: keep trunk/tags/branches at the root level since you will only branch a limited set of modules. It also does not interfere with modules building individually.
I won't copy the answer I made before but it is entirely relevant to this question.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/14397",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
} |
Q: Class Designer in Visual Studio - is it worth it? Does anybody use the Class Designer much in Visual Studio?
I have downloaded the Modeling Power Toys for 2005 and have been impressed with what I've seen so far. The MSDN Class Designer Blog doesn't seem to have been updated for a while but it still looks quite useful.
Is the Class Designer a quick way to build the basic application or should I just work out on paper and then start coding?
Thanks
A: Short answer: No.
Longer answer: No, not at all. There's a reason it hasn't been updated.
[EDIT] @ MrBrutal - Sorry - do you mean to generate code or just represent a design? Because I took your question as to generate code for you.
A: I guess this is old, but I use it a lot. It could definitely be improved, but I find it extremely useful to be able to visualize my class structure, and to be able to jump to a specific class or method by clicking on it visually.
It's also slightly easier to add classes/interfaces with than the solution explorer, although the new files always end up in the root folder, instead of the same folder as the CD.
The main benefit I find is to be able to see a group of closely related classes at once. I think the best approach might be to have a single CD for each code folder/namespace.
A: As a visualization tool, or for exploratory purposes (drawing up multiple options to see what they look like) it's not bad, but generally I find the object browser does fine for most stuff I care about.
As a code generation tool, it's a terrible idea.
The whole idea that we will design all our code structure first, then fill in the blanks with small bits of implementation is fundamentally broken.
The only time you actually know what the code structure should look like, is if you've done the exact same thing before - however then you can just use your previous code, and you don't need to draw up any new code in any kind of designer.
If you decide ahead of time to use a particular class structure before you've actually tried to solve the problem, there is a 100% chance that you will pick the wrong design, and shoot yourself in the foot.
A: I've used it a couple of times to get some decent looking class diagrams to put in presentations/blogposts etc. But thats about it...
Any suggestions on other simple UML/class diagram tools that are easy to use and create nice-looking diagrams? They must be able to generate diagrams from .NET code.
A: I have tried it out a couple of times, mainly for viewing existing classes.
If it showed all the relationships, it would be more useful. Now it only shows inheritance.
A: I find it useful sometimes, more often for documentation afterwards.
It's a new little utility, but I don't think you get the full functionality in VS Pro - I think you need Architect's Edition.
A: The comments here suggest that few people find the class designer useful.
Amusing to note that Microsoft designed the class designer to be a useful replacement for useless UML (UML diagrams being untrustworthy once they lose synchronisation with source code).
The trouble with class diagrams is that they tell us what we already know.
A: I only use the class designer to display my existing classes, but I don't use it the other way, e.g., design your classes there then let it generate the code.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/14398",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14"
} |
Q: GLUT exit redefinition error In my simple OpenGL program I get the following error about exit redefinition:
1>c:\program files\microsoft visual studio 8\vc\include\stdlib.h(406) : error C2381: 'exit' : redefinition; __declspec(noreturn) differs
1> c:\program files\microsoft visual studio 8\vc\platformsdk\include\gl\glut.h(146) : see declaration of 'exit'
I'm using Nate Robins' GLUT for Win32 and get this error with Visual Studio 2005 or Visual C++ 2005 (Express Edition). What is the cause of this error and how do I fix it?
A: Cause:
The stdlib.h that ships with recent versions of Visual Studio has a different (and conflicting) declaration of the exit() function. It clashes with the one in glut.h.
Solution:
Override the definition in glut.h with that in stdlib.h. Place the stdlib.h line above the glut.h line in your code.
#include <stdlib.h>
#include <GL/glut.h>
A: or this...
To fix the error, right click on the project name in the Solution Explorer tab and select Properties -> C/C++ -> Preprocessor -> Preprocessor Definitions and append GLUT_BUILDING_LIB to the existing definitions, separated by semicolons.
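If you prefer to keep the fix in the source file rather than in project settings, defining the macro before the include has the same effect for that file, since glut.h only checks it at preprocessing time:
#define GLUT_BUILDING_LIB /* tells glut.h not to declare exit() itself */
#include <stdlib.h>
#include <GL/glut.h>
Note that in some versions of glut.h this macro also suppresses the automatic #pragma comment(lib, ...) lines, so you may then need to add glut32.lib to the linker inputs yourself.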
| {
"language": "en",
"url": "https://stackoverflow.com/questions/14402",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "34"
} |
Q: Why are there so few modal-editors that aren't vi*? Pretty much every other editor that isn't a vi descendant (vim, cream, vi-emu) seems to use the emacs shortcuts (ctrl+w to delete back a word and so on)
A: Um... maybe there isn't much of a need for one, given that Vi/Vim is pretty much available everywhere and got the whole modal thing right? :)
A: I think that it's because vi (and its ilk) already occupies the ecological niche of modal editors.
The number of people who prefer modal editing and haven't yet been attracted to vi is probably 0, so a hypothetical vi competitor would have to be so great as to make a significant number of vi users switch. This isn't likely. The cost of switching editors is huge, and the vi family is probably already as good as modal editors get. Well, maybe a significant breakthrough could improve upon them, but I find this unlikely.
A: Early software was often modal, but usability took a turn at some point, away from this style.
VI-based editors are total enigmas -- they're the only real surviving members of that order of software.
Modes are a no-no in usability and interaction design because we humans are fickle mammals who cannot be trusted to remember what mode the application is in.
If you think you are in one "mode" when you are actually in another, then all sorts of badness can ensue. What you believe to be a series of harmless keystrokes can (in the wrong mode) cause unlimited catastrophe. This is known as a "mode error".
To learn more, search for the term "modeless" (and "usability")
As mentioned in the comments below, a Modal interface in the hands of an experienced and non-fickle person can be extremely efficient.
A: @Leon: Great answer.
@dbr: Modal editing is something that takes a while to get used to. If you were to build a new editor that fits this paradigm, how would you improve on VI/VIM/Emacs? I think that is, in part, an answer to the question. Getting it "right" is hard enough; competing against the likes of VI/VIM/Emacs would be extremely tough -- most people who use these editors are "die hard" fans, and you'd have to give them a compelling reason to move to another editor. Those people who don't use them already are most likely going to stay in a non-modal editor. IMHO of course ;)
A: Modal editors have the huge advantage to touch typists that you can navigate around the screen without taking your hands off the home row. My wrists only hurt when I'm doing stuff that requires me to move my hand off the keyboard and onto the mouse or arrow keys and back constantly.
A: Remember that Notepad is a modal editor!
To see this, try typing E, D, I, T; now try typing Alt, E, D, I, T. In the second case the Alt key activates the "menu mode" so the results are different. :oP People seem to cope with that.
(Yes, this is a feature of Windows rather than specifically of Notepad. I think it's a bad feature because it is easy to hit Alt by mistake and I don't think you can turn it off.)
A: VIM and emacs make about as much user interface design sense as qwerty. We now have available modern computer optimized key layouts (see the colemak layout and the carpalx project); it's only a matter of time before someone does the same for text editors.
A: I believe Eclipse has Vi bindings and there is a Visual Studio plugin/extension, too (which is called Vi-Emu, or something).
A: It's worth noting that the vi input model's survival is partly due to its adoption in the POSIX standard, so investing time in learning it means you're guaranteed to be able to work on any system complying with these standards. So, like English, there's power in ubiquity.
As far as alternatives go, I doubt an alternative modal editor would survive a 30-day free trial period, so it's the same reason more people drive automatics than fly jets.
A: Since this is a question already at odds with the "no subjective issues" mantra, allow me to face that head on in kind.
Non-Modal editing seeks to solve the problem caused by non-modal editing in the first place.
Simply put, with Modal editing I can do nearly everything without my hands leaving the keyboard, and without even tormenting my pinky with reaching for the control, or interrupting my finger placement by hunting for the arrow keys.
*
*Reaching for mouse completely interrupts the train of thought. I have hated the intense reliance upon this with Intellij IDEA and Netbeans for many years. Even with vim-style addons.
*Most of what you do has to do with fine-tuning with very small increments and changes within the same paragraph of code. Move up, move over, change character, etc., etc. These things are interrupted with control keys and arrows and mouse.
A: Though not really answering your question, there used to be a "modal-like" way to write Japanese on cell phones:
The first key you hit was a consonant, say K, and the next key you hit then played the role of a vowel. (Having two consonants in a row is impossible in Japanese.)
Though it was mainstream a few years ago, today it's only used by people who really want to type fast.
A: I recently came across divascheme - an alternative set of key bindings for DrScheme. This is modal, and part of the justification is to do with RSI - specifically avoiding lots of wrist twisting to hit Ctrl-Alt-Shift-something. The coder has done an informal survey of fellow coders and found that emacs users suffered from more wrist pain than vi coders.
You can see him doing a short talk at LugRadio Live USA. (The video is a series of 5 minute talks and I can't remember how far through it is, sorry - if someone watches it and posts that here I'll edit this post to say when in the video it is).
Note I have not used divascheme.
A: I think the answer to the question is that there are actually quite a few modal text editors that aren't forks of vi/vim. However, they all use the vi key bindings. Vi users get the key bindings into their muscle memory, so relearning a different set of key bindings would be really hard; hence no-one creates a different set of key bindings.
But lots of different editors have re-implemented the vi key bindings from scratch. Just look at this question about IDEs with vi key bindings. At least half of the answers are editors built from scratch that implement vi key bindings, not versions of vi embedded.
A: The invention of the mouse took one mode and moved it to an input device, and context menus took another mode and moved it to a button. Ironically, the advent of touch devices has had the reverse effect, producing multi-modal interfaces:
*
*aware multi-modal - touch and speech are aware of each other and intersect
*unaware multi-modal - touch and speech are unaware of each other and conflict
The traditional WIMP interfaces have the basic premise that the information can flow in and out of the system through a single channel or an event stream. This event stream can be in the form of input (mouse, keyboard etc) where the user enters data to the system and expects feedback in the form of output (voice, vibration, visual, etc) when the system responds. But the channel maintains its singularity and can process information one source at a time. For example, in today’s interaction, the computer ignores typed information (through a keyboard) when a mouse button is depressed.
This is very much different from a multimodal interaction where the system has multiple event streams and channels and can process information coming through various input modes acting in parallel, such as those described above. For example, in an IVR system a user can either type or speak to navigate through the menu.
References
*
*User Agent Accessibility Guidelines working group (UAWG): Keyboard Interface use cases
*W3C Multimodal Standard Brings Web to More People, More Ways
*Next steps for W3C work on Multimodal Standards
*The Future of Interaction is Multimodal
*Beyond Mouse and Keyboard: Expanding Design Considerations for Information Visualization Interactions - naturalinfovis_infovis2012.pdf
*Setting the scope for light-weight Web-based applications
*Jan. 26, 1983: Spreadsheet as Easy as 1-2-3
*Multi-modal design: Gesture, Touch and Mobile devices...next big thing? | Experience Dynamics
| {
"language": "en",
"url": "https://stackoverflow.com/questions/14410",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "32"
} |
Q: Using OpenGL extensions On Windows I want to use the functions exposed under the OpenGL extensions. I'm on Windows, how do I do this?
A: A 'very strong reason' not to use GLEW might be that the library is not supported by your compiler/IDE, e.g. Borland C++ Builder.
In that case, you might want to rebuild the library from source. If that works, great; otherwise, manual extension loading isn't as bad as it is made to sound.
A: Easy solution: Use GLEW. See how here.
Hard solution:
If you have a really strong reason not to use GLEW, here's how to achieve the same without it:
Identify the OpenGL extension and the extension APIs you wish to use. OpenGL extensions are listed in the OpenGL Extension Registry.
Example: I wish to use the capabilities of the EXT_framebuffer_object extension. The APIs I wish to use from this extension are:
glGenFramebuffersEXT()
glBindFramebufferEXT()
glFramebufferTexture2DEXT()
glCheckFramebufferStatusEXT()
glDeleteFramebuffersEXT()
Check if your graphics card supports the extension you wish to use. If it does, then your work is almost done! Download and install the latest drivers and SDKs for your graphics card.
Example: The graphics card in my PC is a NVIDIA 6600 GT. So, I visit the NVIDIA OpenGL Extension Specifications webpage and find that the EXT_framebuffer_object extension is supported. I then download the latest NVIDIA OpenGL SDK and install it.
Your graphic card manufacturer provides a glext.h header file (or a similarly named header file) with all the declarations needed to use the supported OpenGL extensions. (Note that not all extensions might be supported.) Either place this header file somewhere your compiler can pick it up or include its directory in your compiler's include directories list.
Add a #include <glext.h> line in your code to include the header file into your code.
Open glext.h, find the API you wish to use and grab its corresponding ugly-looking declaration.
Example: I search for the above framebuffer APIs and find their corresponding ugly-looking declarations:
typedef void (APIENTRYP PFNGLGENFRAMEBUFFERSEXTPROC) (GLsizei n, GLuint *framebuffers);
and
GLAPI void APIENTRY glGenFramebuffersEXT (GLsizei, GLuint *);
All this means is that your header file has the API declaration in 2 forms. One is a wgl-like ugly function pointer declaration. The other is a sane looking function declaration.
For each extension API you wish to use, add to your code a declaration of the function name typed as the corresponding ugly-looking function-pointer type.
Example:
PFNGLGENFRAMEBUFFERSEXTPROC glGenFramebuffersEXT;
PFNGLBINDFRAMEBUFFEREXTPROC glBindFramebufferEXT;
PFNGLFRAMEBUFFERTEXTURE2DEXTPROC glFramebufferTexture2DEXT;
PFNGLCHECKFRAMEBUFFERSTATUSEXTPROC glCheckFramebufferStatusEXT;
PFNGLDELETEFRAMEBUFFERSEXTPROC glDeleteFramebuffersEXT;
Though it looks ugly, all we're doing is to declare function pointers of the type corresponding to the extension API.
Initialize these function pointers with their rightful functions. These functions are exposed by the library or driver. We need to use wglGetProcAddress() function to do this.
Example:
glGenFramebuffersEXT = (PFNGLGENFRAMEBUFFERSEXTPROC) wglGetProcAddress("glGenFramebuffersEXT");
glBindFramebufferEXT = (PFNGLBINDFRAMEBUFFEREXTPROC) wglGetProcAddress("glBindFramebufferEXT");
glFramebufferTexture2DEXT = (PFNGLFRAMEBUFFERTEXTURE2DEXTPROC) wglGetProcAddress("glFramebufferTexture2DEXT");
glCheckFramebufferStatusEXT = (PFNGLCHECKFRAMEBUFFERSTATUSEXTPROC) wglGetProcAddress("glCheckFramebufferStatusEXT");
glDeleteFramebuffersEXT = (PFNGLDELETEFRAMEBUFFERSEXTPROC) wglGetProcAddress("glDeleteFramebuffersEXT");
Don't forget to check the function pointers for NULL. If by chance wglGetProcAddress() couldn't find the extension function, it would've initialized the pointer with NULL.
Example:
if (NULL == glGenFramebuffersEXT || NULL == glBindFramebufferEXT || NULL == glFramebufferTexture2DEXT
|| NULL == glCheckFramebufferStatusEXT || NULL == glDeleteFramebuffersEXT)
{
// Extension functions not loaded!
exit(1);
}
That's it, we're done! You can now use these function pointers just as if the function calls existed.
Example:
glGenFramebuffersEXT(1, &fbo);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT, GL_TEXTURE_2D, colorTex[0], 0);
Reference: Moving Beyond OpenGL 1.1 for Windows by Dave Astle — The article is a bit dated, but has all the information you need to understand why this pathetic situation exists on Windows and how to get around it.
A: @Kronikarz: From the looks of it, GLEW seems to be the way of the future. NVIDIA already ships it along with its OpenGL SDK. And its latest release was in 2007 compared to GLEE which was in 2006.
But, the usage of both libraries looks almost the same to me. (GLEW has an init() which needs to be called before anything else though.) So, you don't need to switch unless you find some extension not being supported under GLEE.
A: GL3W is a public-domain script that creates a library which loads only core functionality for OpenGL 3/4. It can be found on github at:
https://github.com/skaslev/gl3w
GL3W requires Python 2.6 to generate the libraries and headers for OpenGL; it does not require Python after that.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/14413",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "16"
} |
Q: Why is the PyObjC documentation so bad? For example, http://developer.apple.com/cocoa/pyobjc.html is still for OS X 10.4 Tiger, not 10.5 Leopard.. And that's the official Apple documentation for it..
The official PyObjC page is equally bad, http://pyobjc.sourceforge.net/
It's so bad it's baffling.. I'm considering learning Ruby primarily because the RubyCocoa stuff is so much better documented, and there's lots of decent tutorials (http://www.rubycocoa.com/ for example), and because of the Shoes GUI toolkit..
Even this badly-auto-translated Japanese tutorial is more useful than the rest of the documentation I could find..
All I want to do is create fairly simple Python applications with Cocoa GUI's..
Can anyone shed light on the horrible documentation, or point me at some tutorials that don't just give you huge blocks of code and assume you know what NSThread.detachNewThreadSelector_toTarget_withObject_("queryController", self, None) does..?
A: Tom's and Martin's responses are definitely true (in just about any open source project, you'll find that most contributors are particularly interested in, well, developing; not so much in semi-related matters such as documentation), but I don't think your particular question at the end would fit well inside PyObjC documentation.
NSThread.detachNewThreadSelector_toTarget_withObject_("queryController", self, None)
NSThread is part of the Cocoa API, and as such documented over at Apple, including the particular method + detachNewThreadSelector:toTarget:withObject: (I'd link there, but apparently stackoverflow has bugs with parsing it). The CocoaDev wiki also has an article.
I don't think it would be a good idea for PyObjC to attempt to document Cocoa, other than a few basic examples of how to use it from within Python. Explaining selectors is also likely outside the scope of PyObjC, as those, too, are a feature of Objective-C, not PyObjC specifically.
A: I stumbled across a good tutorial on PyObjC/Cocoa:
http://lethain.com/entry/2008/aug/22/an-epic-introduction-to-pyobjc-and-cocoa/
A:
All I want to do is create fairly simple Python applications with Cocoa GUI's.. Can anyone shed light on the horrible documentation, or point me at some tutorials that don't just give you huge blocks of code and assume you know what NSThread.detachNewThreadSelector_toTarget_withObject_("queryController", self, None) does..?
[...]
basically all I want to do is write Cocoa applications without having to learn ObjC.
Although I basically agree with Soeren's response, I'd take it even further:
It will be a long time, if ever, before you can use Cocoa without some understanding of Objective C. Cocoa isn't an abstraction built independently from Objective C, it is explicitly tied to it. You can see this in the example line of code you quoted above:
NSThread.detachNewThreadSelector_toTarget_withObject_("queryController", self, None)
This is the Python way of writing the Objective C line:
[NSThread detachNewThreadSelector:@selector(queryController:) toTarget:self withObject:nil];
Now, it's important to notice here that this line can be seen in two ways: (1) as a line of Objective C, or (2) as an invocation of the Cocoa frameworks. We see it as (1) by the syntax. We see it as (2) by recognizing that NSThread is a Cocoa framework which provides a set of handy features. In this case, this particular Cocoa framework is making it easy for us to have an object start doing something on a new thread.
But the kicker is this: The Cocoa framework here (NSThread) is providing us this handy service in a way that is explicitly tied to the language the framework has been written in. Namely, NSThread gave us a feature that explicitly refers to "selectors". Selectors are, in point of fact, the name for something fundamental about how Objective C works.
So there's the rub. Cocoa is fundamentally an Objective-C creation, and its creators have built it with Objective C in mind. I'm not claiming that it's impossible to translate the interface to the Cocoa features into a form more natural for other languages. It's just that as soon as you change the Cocoa framework to stop referring to "selectors", it's not really the Cocoa framework any more. It's a translated version. And once you start going down that road, I'm guessing things get really messy. You're trying to keep up with Apple as they update Cocoa, maybe you hit some parts of Cocoa that just don't translate well into the new language, whatever. So instead, things like PyObjC opt to expose Cocoa directly, in a way that has a very clear and simple correlation. As they say in the documentation:
In order to have a lossless and unambiguous translation between Objective-C messages and Python methods, the Python method name equivalent is simply the selector with colons replaced by underscores.
Sure, it's a bit ugly, and it does mean you need to know something about Objective-C, but that's because the alternative, if one truly exists, is not necessarily better.
A: I didn't know anything at all about Objective C or Cocoa (but plenty about Python), but I am now writing a rather complex application in PyObjc. How did I learn? I picked up Cocoa Programming for OSX and went through the whole book (a pretty quick process) using PyObjC. Just ignore anything about memory management and you'll pretty much be fine. The only caveat is that very occasionally you have to use a decorator like endSheetMethod (actually I think that's the only one I've hit):
@PyObjCTools.AppHelper.endSheetMethod
def alertEnded_code_context_(self, alert, choice, context):
pass
A: The main reason for the lack of documentation for PyObjC is that there is one developer (me), and as most developers I don't particularly like writing documentation. Because PyObjC is a side project for me I tend to focus on working on features and bugfixes, because that's more interesting for me.
The best way to improve the documentation is to volunteer to help on the pyobjc-dev mailing list.
As an aside: the pythonmac-sig mailing list (see Google) is an excellent resource for getting help on Python on MacOSX (not just PyObjC).
A: This answer isn't going to be very helpful, but as a developer I hate doing documentation. This being an open source project, it's hard to find people to do documentation.
A: Tom says it all really. Lots of open source projects have dedicated developers and few who are interested in documenting. It isn't helped by the fact that goalposts can shift on a daily basis which means documentation not only has to be created, but maintained.
A: I agree that that tutorial is flawed, throwing random, unexplained code right in front of your eyes. It introduces concepts such as the autorelease pool and user defaults without explaining why you would want them ("Autorelease pool for memory management" is hardly an explanation).
That said…
basically all I want to do is write Cocoa applications without having to learn ObjC.
I'm afraid that for the time being, you will need a basic grasp of ObjC in order to benefit from any language that uses Cocoa. PyObjC, RubyCocoa, Nu and others are niches at best, and all of them were developed by people intimately familiar with the ins and outs of ObjC and Cocoa.
For now, you will benefit the most if you realistically see those bridges as useful where scripting languages truly shine, rather than trying to build a whole application with them. While this has been done (with LimeChat, I'm using a RubyCocoa-written app right now), it is rare and likely will be for a while.
A: To be blunt:
If you want to be an effective Cocoa programmer, you must learn Objective-C. End of story.
Neither Python or Ruby are a substitute for Objective-C via their respective bridges. You still have to understand the Objective-C APIs, the behaviors inherent to NSObject derived classes, and many other details of Cocoa.
PyObjC and RubyCocoa are a great way to access Python or Ruby functionality from a Cocoa application, including building a Cocoa application mostly -- if not entirely -- in Python or Ruby. But success therein is founded upon a thorough understanding of Cocoa and the Objective-C APIs it is composed of.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/14422",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "17"
} |
Q: 64bit .NET Performance tuning I know that .NET is JIT compiled to the architecture you are running on just before the app runs, but does the JIT compiler optimize for 64bit architecture at all?
Is there anything that needs to be done or considered when programming an app that will run on a 64bit system? (i.e. Will using Int64 improve performance and will the JIT compiler automatically make Int64 work on 32bit systems?)
A: This is a good article on the subject, by one of the people who worked on the 64 bit JIT. Basically, unless you absolutely need the address space that 64 bit can offer, or need to do 64 bit math, you will likely lose performance. As pointers are larger, cache is effectively halved, for example.
A: I have noticed 64-bit being a lot slower.
As has been stated the 64-bit JIT compiler behaves differently to the x86 JIT compiler. The x86 compiler will take advantage of some optimizations that the x64 one does not.
For example in .NET 3.5 the 32-bit JIT will inline function calls with structs as arguments, but the 64-bit JIT does not.
In production code I have seen x86 builds running as much as 20% faster than x64 builds (with no other changes)
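A minimal sketch of the kind of call affected, with the type and numbers made up for illustration:
struct Vec2
{
    public int X, Y;
}

static int LengthSquared(Vec2 v) // struct passed by value
{
    // The .NET 3.5-era x86 JIT can inline this call; the x64 JIT
    // of the same era compiles it as a real call instead.
    return v.X * v.X + v.Y * v.Y;
}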
A: To sum up, use 64-bit only if
*
*You need the extra memory and there is no way around it.
*You program e.g. scientific apps and need the increased math precision
In every other respect, as of today, the 64-bit compiler in .NET is a step behind.
Performance optimizations done in the .NET compilers are a big issue.
A: The 64bit JIT is different from the one for 32bit, so I would expect some differences in the output - but I wouldn't switch to 64bit just for that, and I wouldn't expect to gain much speed (if any) in CPU time by switching to 64bit.
You will notice a big performance improvement if your app uses a lot of memory and the PC has enough RAM to keep up with it. I've found that 32bit .NET apps tend to start throwing out of memory exceptions when you get to around 1.6gb in use, but they start to thrash the disk due to paging long before that - so you end up being I/O bound.
Basically, if your bottleneck is CPU then 64bit is unlikely to help. If your bottleneck is memory then you should see a big improvement.
Will using Int64 improve performance and will the JIT compiler automatically make Int64 work on 32bit systems
Int64 already works on both 32bit and 64bit systems, but it'll be faster running on 64bit. So if you're mostly number crunching with Int64, running on a 64bit system should help.
The most important thing is to measure your performance.
A: Performance bottlenecks will be the same regardless of whether the architecture is 32- or 64-bit. Performance problems tend to be the result of sub-optimal algorithms — the choice between 32- and 64-bit types won't significantly affect performance.
Most importantly, don't try to improve the performance of something before you've measured it. In particular you should profile the code to determine where your performance bottlenecks are.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/14432",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10"
} |
Q: Software to use when designing classes What software do you use when designing classes and their relationship, or just pen and paper?
A: I find pen and paper very useful, and I try to get as far away from a computer as possible. If I do it on the compy, I'm always too tempted to start programming the solution. That inevitably leads to me changing things later that I would have spotted in the planning phase had I actually spent a good measure of time on it.
A: I usually start with an empty interface and then start writing tests. I then generate the members using refactoring tools. For me unit testing is part of the design.
A: OmniGraffle (Visio-esque app for Mac OS X), sometimes. Otherwise, just pen and paper will do.
A: It's easy, while in the paper-and-pen (or whatever non-code equivalent you prefer) stage, to overstay, falling prey to the dreaded YAGNI syndrome. How many of us have carefully designed in some "sexy" feature that ended up never being used? (Raises hand. Hands.)
Small iterative test-driven steps and frequent refactoring - let the code tell you what it wants to be.
Most of my projects start out with the only certainty being that we won't end up where we currently think we will. So spending very much time on Big Up-Front Design (or Big Design Up Front if you prefer) is wasteful - better to start with the first thing we want to do and see where we end up.
It kind of depends on where you consider design to end. I read an article a few years back that presented the idea that coding is design - or for the Big Process fans at least it's the back-end of design. It rang true to me and changed forever the way I viewed the stages of the development process. Of course, I've just googled like crazy for the darn thing. Could I find it? Could I heck. Perhaps I dreamed the article and it's all my own idea. Yeah, that'll be it.
A: Pen and paper for the first draft, UMLet to digitize it. It's very minimal but it does what I need.
A: I use pen and paper.
For all planning purposes, it's the fastest way.
I get lost in layout and fine-tuning when I use a UML package.
But that is my burden.. :-)
A: Go for PENCIL and paper, or a whiteboard. Anything permanent-marking like a pen and you'll have a pretty messy design!
A: Whiteboard for the first 35 or 40 drafts. UML is nice after that, but not particularly necessary. The best documentation after you've hashed out the details is clean code.
A: Mostly pen and paper, although I occasionally break out Visio and just do some rough diagrams.
Would be nice to have a fancy tool I guess, but it would just be another thing to learn.
A: When doing an initial design I like a whiteboard and 1 - 3 other developers to bounce ideas off of. That's usually enough to catch any glaring errors/fix any tricky situations that may arise without dropping the s/n ratio by too much.
A: I find pen and paper, a whiteboard and possibly some CRC cards to be very useful. Most of the time I think a whiteboard and some stickers or cards with the class and/or module names written on them work best when doing planning and designing as a group. Pen and paper is fine if you are doing the activity alone. Once you have the basic structure set you can always make a pretty UML diagram.
A: Pen and Paper and/or Whiteboards for drafting, a more comprehensive tool for documentation purposes.
I mainly use Class Diagrams and a few sketches with Sequence Diagrams to get most of the relationships right.
About the tools: At work I use Enterprise Architect but personally I find Visual Paradigm for UML a better choice. The latter is much more flexible and allows quick drafting as well.
At VP they also have a version called Agilian for some time now (which I have not yet used) which seems to be even more flexible, allowing sketches to become documentation in no-time... maybe one day this tool will replace my paper sketches (save the trees :P).
| {
"language": "en",
"url": "https://stackoverflow.com/questions/14435",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: What is the best way to make a Delphi Application completely full screen? What is the best way to make a delphi application (delphi 2007 for win32 here) go completely full screen, removing the application border and covering windows task bar ?
I am looking for something similar to what IE does when you hit F11.
I wish this to be a run time option for the user not a design time decision by my good self.
As Mentioned in the accepted answer
BorderStyle := bsNone;
was part of the way to do it. Strangely, I kept getting an E2010 Incompatible types: 'TFormBorderStyle' and 'TBackGroundSymbol' error when using that line (another type had bsNone defined).
To overcome this I had to use :
BorderStyle := Forms.bsNone;
A: Maximize the form and hide the title bar. The maximize line is done from memory, but I'm pretty sure WindowState is the property you want.
There's also this article, but that seems too complicated to me.
procedure TForm1.FormCreate(Sender: TObject) ;
begin
//maximize the window
WindowState := wsMaximized;
//hide the title bar
SetWindowLong(Handle,GWL_STYLE,GetWindowLong(Handle,GWL_STYLE) and not WS_CAPTION);
ClientHeight := Height;
end;
Edit: Here's a complete example, with "full screen" and "restore" options. I've broken out the different parts into little procedures for maximum clarity, so this could be greatly compressed into just a few lines.
unit Unit1;
interface
uses
Windows, Messages, SysUtils, Variants, Classes, Graphics, Controls, Forms,
Dialogs, StdCtrls;
type
TForm1 = class(TForm)
btnGoFullScreen: TButton;
btnNotFullScreen: TButton;
btnShowTitleBar: TButton;
btnHideTitleBar: TButton;
btnQuit: TButton;
procedure btnGoFullScreenClick(Sender: TObject);
procedure btnShowTitleBarClick(Sender: TObject);
procedure btnHideTitleBarClick(Sender: TObject);
procedure btnNotFullScreenClick(Sender: TObject);
procedure btnQuitClick(Sender: TObject);
private
SavedLeft : integer;
SavedTop : integer;
SavedWidth : integer;
SavedHeight : integer;
SavedWindowState : TWindowState;
procedure FullScreen;
procedure NotFullScreen;
procedure SavePosition;
procedure HideTitleBar;
procedure ShowTitleBar;
procedure RestorePosition;
procedure MaximizeWindow;
public
{ Public declarations }
end;
var
Form1: TForm1;
implementation
{$R *.dfm}
procedure TForm1.btnQuitClick(Sender: TObject);
begin
Application.Terminate;
end;
procedure TForm1.btnGoFullScreenClick(Sender: TObject);
begin
FullScreen;
end;
procedure TForm1.btnNotFullScreenClick(Sender: TObject);
begin
NotFullScreen;
end;
procedure TForm1.btnShowTitleBarClick(Sender: TObject);
begin
ShowTitleBar;
end;
procedure TForm1.btnHideTitleBarClick(Sender: TObject);
begin
HideTitleBar;
end;
procedure TForm1.FullScreen;
begin
SavePosition;
HideTitleBar;
MaximizeWindow;
end;
procedure TForm1.HideTitleBar;
begin
SetWindowLong(Handle,GWL_STYLE,GetWindowLong(Handle,GWL_STYLE) and not WS_CAPTION);
ClientHeight := Height;
end;
procedure TForm1.MaximizeWindow;
begin
WindowState := wsMaximized;
end;
procedure TForm1.NotFullScreen;
begin
RestorePosition;
ShowTitleBar;
end;
procedure TForm1.RestorePosition;
begin
//this proc uses what we saved in "SavePosition"
WindowState := SavedWindowState;
Top := SavedTop;
Left := SavedLeft;
Width := SavedWidth;
Height := SavedHeight;
end;
procedure TForm1.SavePosition;
begin
SavedLeft := Left;
SavedHeight := Height;
SavedTop := Top;
SavedWidth := Width;
SavedWindowState := WindowState;
end;
procedure TForm1.ShowTitleBar;
begin
SetWindowLong(Handle,gwl_Style,GetWindowLong(Handle,gwl_Style) or ws_Caption or ws_border);
Height := Height + GetSystemMetrics(SM_CYCAPTION);
Refresh;
end;
end.
A: Well, this has always worked for me. Seems a bit simpler...
procedure TForm52.Button1Click(Sender: TObject);
begin
BorderStyle := bsNone;
WindowState := wsMaximized;
end;
A: Put code like this in the form's OnShow event:
WindowState:=wsMaximized;
And this in the OnCanResize event:
if (newwidth<width) and (newheight<height) then
Resize:=false;
A: A Google search turned up the following additional methods:
(though I think I'd try Roddy's method first)
Manually fill the screen (from: About Delphi)
procedure TSomeForm.FormShow(Sender: TObject) ;
var
r : TRect;
begin
Borderstyle := bsNone;
SystemParametersInfo(SPI_GETWORKAREA, 0, @r, 0);
SetBounds(r.Left, r.Top, r.Right - r.Left, r.Bottom - r.Top);
end;
Variation on a theme by Roddy
FormStyle := fsStayOnTop;
BorderStyle := bsNone;
Left := 0;
Top := 0;
Width := Screen.Width;
Height := Screen.Height;
The WinAPI way (by Peter Below from TeamB)
private // in form declaration
Procedure WMGetMinMaxInfo(Var msg: TWMGetMinMaxInfo);
message WM_GETMINMAXINFO;
Procedure TForm1.WMGetMinMaxInfo(Var msg: TWMGetMinMaxInfo);
Begin
inherited;
With msg.MinMaxInfo^.ptMaxTrackSize Do Begin
X := GetDeviceCaps( Canvas.handle, HORZRES ) + (Width - ClientWidth);
Y := GetDeviceCaps( Canvas.handle, VERTRES ) + (Height - ClientHeight);
End;
End;
procedure TForm1.Button2Click(Sender: TObject);
Const
Rect: TRect = (Left:0; Top:0; Right:0; Bottom:0);
FullScreen: Boolean = False;
begin
FullScreen := not FullScreen;
If FullScreen Then Begin
Rect := BoundsRect;
SetBounds(
Left - ClientOrigin.X,
Top - ClientOrigin.Y,
GetDeviceCaps( Canvas.handle, HORZRES ) + (Width - ClientWidth),
GetDeviceCaps( Canvas.handle, VERTRES ) + (Height - ClientHeight ));
// Label2.caption := IntToStr(GetDeviceCaps( Canvas.handle, VERTRES ));
End
Else
BoundsRect := Rect;
end;
A: How to constrain a sub-form within the Mainform like it was an MDI app., but without the headaches! (Note: The replies on this page helped me get this working, so that's why I posted my solution here)
private
{ Private declarations }
StickyAt: Word;
procedure WMWINDOWPOSCHANGING(Var Msg: TWMWINDOWPOSCHANGING); message WM_WINDOWPOSCHANGING;
Procedure WMGetMinMaxInfo(Var msg: TWMGetMinMaxInfo); message WM_GETMINMAXINFO;
later...
procedure TForm2.WMWINDOWPOSCHANGING(var Msg: TWMWINDOWPOSCHANGING);
var
A, B: Integer;
iFrameSize: Integer;
iCaptionHeight: Integer;
iMenuHeight: Integer;
begin
iFrameSize := GetSystemMetrics(SM_CYFIXEDFRAME);
iCaptionHeight := GetSystemMetrics(SM_CYCAPTION);
iMenuHeight := GetSystemMetrics(SM_CYMENU);
// inside the Mainform client area
A := Application.MainForm.Left + iFrameSize;
B := Application.MainForm.Top + iFrameSize + iCaptionHeight + iMenuHeight;
with Msg.WindowPos^ do
begin
if x <= A + StickyAt then
x := A;
if x + cx >= A + Application.MainForm.ClientWidth - StickyAt then
x := (A + Application.MainForm.ClientWidth) - cx + 1;
if y <= B + StickyAt then
y := B;
if y + cy >= B + Application.MainForm.ClientHeight - StickyAt then
y := (B + Application.MainForm.ClientHeight) - cy + 1;
end;
end;
and yet more...
Procedure TForm2.WMGetMinMaxInfo(Var msg: TWMGetMinMaxInfo);
var
iFrameSize: Integer;
iCaptionHeight: Integer;
iMenuHeight: Integer;
Begin
inherited;
iFrameSize := GetSystemMetrics(SM_CYFIXEDFRAME);
iCaptionHeight := GetSystemMetrics(SM_CYCAPTION);
iMenuHeight := GetSystemMetrics(SM_CYMENU);
With msg.MinMaxInfo^.ptMaxPosition Do
begin
// position of top when maximised
X := Application.MainForm.Left + iFrameSize + 1;
Y := Application.MainForm.Top + iFrameSize + iCaptionHeight + iMenuHeight + 1;
end;
With msg.MinMaxInfo^.ptMaxSize Do
Begin
// width and height when maximized
X := Application.MainForm.ClientWidth;
Y := Application.MainForm.ClientHeight;
End;
With msg.MinMaxInfo^.ptMaxTrackSize Do
Begin
// maximum size when maximised
X := Application.MainForm.ClientWidth;
Y := Application.MainForm.ClientHeight;
End;
// to do: minimum size (maybe)
End;
A: In my case, the only working solution is:
procedure TFormHelper.FullScreenMode;
begin
BorderStyle := bsNone;
ShowWindowAsync(Handle, SW_MAXIMIZE);
end;
A: You need to make sure Form position is poDefaultPosOnly.
Form1.Position := poDefaultPosOnly;
Form1.FormStyle := fsStayOnTop;
Form1.BorderStyle := bsNone;
Form1.Left := 0;
Form1.Top := 0;
Form1.Width := Screen.Width;
Form1.Height := Screen.Height;
Tested and works on Win7 x64.
A: Try:
Align = alClient
FormStyle = fsStayOnTop
This always aligns to the primary monitor.
A: Hm. Looking at the responses I seem to remember dealing with this about 8 years ago when I coded a game. To make debugging easier, I used the device context of a normal Delphi form as the source for a fullscreen display.
The point being, that DirectX is capable of running any device context fullscreen - including the one allocated by your form.
So to give an app "true" fullscreen capabilities, track down a DirectX library for Delphi and it will probably contain what you need out of the box.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/14451",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "19"
} |
Q: Does anyone have .Net Excel IO component benchmarks? I'm needing to access Excel workbooks from .Net. I know all about the different ways of doing it (I've written them up in a blog post), and I know that using a native .Net component is going to be the fastest. But the question is, which of the components wins? Has anybody benchmarked them? I've been using Syncfusion XlsIO, but that's very slow for some key operations (like deleting rows in a workbook containing thousands of Named ranges).
A: I haven't done any proper benchmarks, but I tried out several other components, and found that SpreadsheetGear was considerably faster than XlsIO, which I was using before. I've written up some of my findings in this post
A: Can't help you with your original question, but are you aware that you can access Excel files using an OleDbConnection, and therefore treat it as a database? You can then read worksheets into a DataTable, perform all the changes you need to the data in your application, and then save it all back to the file using an OleDbConnection.
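For illustration, here's a minimal sketch of that approach; the file path, the sheet name, and the Jet 4.0 provider (for .xls files) are assumptions:
using System.Data;
using System.Data.OleDb;

class ExcelReader
{
    // Reads a worksheet into a DataTable via the Jet OLE DB provider.
    public static DataTable ReadSheet(string path, string sheet)
    {
        string connStr = "Provider=Microsoft.Jet.OLEDB.4.0;Data Source=" + path +
                         ";Extended Properties=\"Excel 8.0;HDR=Yes\"";
        using (OleDbConnection conn = new OleDbConnection(connStr))
        {
            // A worksheet is addressed as [SheetName$]
            OleDbDataAdapter adapter =
                new OleDbDataAdapter("SELECT * FROM [" + sheet + "$]", conn);
            DataTable table = new DataTable();
            adapter.Fill(table); // the adapter opens and closes the connection itself
            return table;
        }
    }
}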
A: Yes, but I'm not going to publish them: partly out of courtesy to Syncfusion (they ask you not to publish benchmarks), partly because I'm not an experienced tester so my tests are probably somewhat flawed, but mostly because what you actually benchmark makes a huge difference to who wins and by how much.
I took one of their "performance" examples and added the same routine in EPPlus to compare them. XLSIO was around 15% faster with just straightforward inserts, depending on the row/column ratio (I tried a few), memory usage seemed very similar. When I added a routine that, after all the rows were added, deleted every 10th row and then inserted a new row 2 rows up from that - XLSIO was significantly slower in that circumstance.
A generic benchmark is pretty-much useless to you. You need to try them against each other in the specific scenarios you use.
I have been using EPPlus for a few years and the performance has been fine, I don't recall shouting at it.
More worthy of your consideration is the functionality, support (Syncfusion have been good, in my experience), Documentation, access to the source code if that is important, and - importantly - how much sense the API makes to you, the syntax can be quite different. eg. Named Styles
XLSIO
headerStyle.BeginUpdate();
workbook.SetPaletteColor(8, System.Drawing.Color.FromArgb(255, 174, 33));
headerStyle.Color = System.Drawing.Color.FromArgb(255, 174, 33);
headerStyle.Font.Bold = true;
headerStyle.Borders[ExcelBordersIndex.EdgeLeft] .LineStyle = ExcelLineStyle.Thin;
headerStyle.Borders[ExcelBordersIndex.EdgeRight] .LineStyle = ExcelLineStyle.Thin;
headerStyle.Borders[ExcelBordersIndex.EdgeTop] .LineStyle = ExcelLineStyle.Thin;
headerStyle.Borders[ExcelBordersIndex.EdgeBottom].LineStyle = ExcelLineStyle.Thin;
headerStyle.EndUpdate();
EPPlus
ExcelNamedStyleXml headerStyle = xlPackage.Workbook.Styles.CreateNamedStyle("HeaderStyle");
headerStyle.Style.Fill.PatternType = ExcelFillStyle.Solid; // <== needed or BackgroundColor throws an exception
headerStyle.Style.Fill.BackgroundColor.SetColor(System.Drawing.Color.FromArgb(255, 174, 33));
headerStyle.Style.Font.Bold = true;
headerStyle.Style.Border.Left.Style = ExcelBorderStyle.Thin;
headerStyle.Style.Border.Right.Style = ExcelBorderStyle.Thin;
headerStyle.Style.Border.Top.Style = ExcelBorderStyle.Thin;
headerStyle.Style.Border.Bottom.Style = ExcelBorderStyle.Thin;
| {
"language": "en",
"url": "https://stackoverflow.com/questions/14453",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: Bit fields in C# I have a structure which I need to populate and write to disk (several actually).
An example is:
byte-6
bit0 - original_or_copy
bit1 - copyright
bit2 - data_alignment_indicator
bit3 - PES_priority
bit4-bit5 - PES_scrambling control.
bit6-bit7 - reserved
In C I might do something like the following:
struct PESHeader {
unsigned reserved:2;
unsigned scrambling_control:2;
unsigned priority:1;
unsigned data_alignment_indicator:1;
unsigned copyright:1;
unsigned original_or_copy:1;
};
Is there any way to do this in C# that would enable me to access the bits using the struct dereferencing dot operator?
For a couple of structures, I can just do bit shifting wrapped in an accessor function.
I have loads of structures to handle in this way, so I'm looking for something that's easier to read and quicker to write.
A: You could also use the BitVector32 and especially the Section struct. The example is very good.
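As a rough sketch of what that might look like for the byte-6 layout in the question (the section names are mine):
using System.Collections.Specialized;

// Sections are allocated from the least significant bit upward.
BitVector32.Section originalOrCopy = BitVector32.CreateSection(1);                 // bit 0
BitVector32.Section copyright      = BitVector32.CreateSection(1, originalOrCopy); // bit 1
BitVector32.Section dataAlignment  = BitVector32.CreateSection(1, copyright);      // bit 2
BitVector32.Section priority       = BitVector32.CreateSection(1, dataAlignment);  // bit 3
BitVector32.Section scrambling     = BitVector32.CreateSection(3, priority);       // bits 4-5
BitVector32.Section reserved       = BitVector32.CreateSection(3, scrambling);     // bits 6-7

BitVector32 header = new BitVector32(0);
header[scrambling] = 2;
header[copyright] = 1;
byte packed = (byte)header.Data; // the low byte holds all six fields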
A: I find myself quite comfortable with these helper functions:
uint SetBits(uint word, uint value, int pos, int size)
{
uint mask = ((((uint)1) << size) - 1) << pos;
word &= ~mask; // clear the target bit positions
word |= (value << pos) & mask;
return word;
}
uint ReadBits(uint word, int pos, int size)
{
uint mask = ((((uint)1) << size) - 1) << pos;
return (word & mask) >> pos;
}
then:
uint the_word;
public uint Itemx
{
get { return ReadBits(the_word, 5, 2); }
set { the_word = SetBits(the_word, value, 5, 2); }
}
A: I'd probably knock together something using attributes, then a conversion class to convert suitably attributed structures to the bitfield primitives. Something like...
using System;
namespace BitfieldTest
{
[global::System.AttributeUsage(AttributeTargets.Field, AllowMultiple = false)]
sealed class BitfieldLengthAttribute : Attribute
{
uint length;
public BitfieldLengthAttribute(uint length)
{
this.length = length;
}
public uint Length { get { return length; } }
}
static class PrimitiveConversion
{
public static long ToLong<T>(T t) where T : struct
{
long r = 0;
int offset = 0;
// For every field suitably attributed with a BitfieldLength
foreach (System.Reflection.FieldInfo f in t.GetType().GetFields())
{
object[] attrs = f.GetCustomAttributes(typeof(BitfieldLengthAttribute), false);
if (attrs.Length == 1)
{
uint fieldLength = ((BitfieldLengthAttribute)attrs[0]).Length;
// Calculate a bitmask of the desired length
long mask = 0;
for (int i = 0; i < fieldLength; i++)
mask |= 1 << i;
r |= ((UInt32)f.GetValue(t) & mask) << offset;
offset += (int)fieldLength;
}
}
return r;
}
}
struct PESHeader
{
[BitfieldLength(2)]
public uint reserved;
[BitfieldLength(2)]
public uint scrambling_control;
[BitfieldLength(1)]
public uint priority;
[BitfieldLength(1)]
public uint data_alignment_indicator;
[BitfieldLength(1)]
public uint copyright;
[BitfieldLength(1)]
public uint original_or_copy;
};
public class MainClass
{
public static void Main(string[] args)
{
PESHeader p = new PESHeader();
p.reserved = 3;
p.scrambling_control = 2;
p.data_alignment_indicator = 1;
long l = PrimitiveConversion.ToLong(p);
for (int i = 63; i >= 0; i--)
{
Console.Write( ((l & (1l << i)) > 0) ? "1" : "0");
}
Console.WriteLine();
return;
}
}
}
Which produces the expected ...000101011. Of course, it needs more error checking and a slightly saner typing, but the concept is (I think) sound, reusable, and lets you knock out easily maintained structures by the dozen.
adamw
A: While it is a class, using BitArray seems like the way to least reinvent the wheel. Unless you're really pressed for performance, this is the simplest option. (Indexes can be referenced with the [] operator.)
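A quick sketch (the bit positions follow the byte-6 layout in the question):
using System.Collections;

BitArray header = new BitArray(8);
header[0] = true;  // original_or_copy
header[1] = false; // copyright
header[4] = true;  // low bit of PES_scrambling_control

// CopyTo packs bit 0 into the least significant bit of the byte.
byte[] packed = new byte[1];
header.CopyTo(packed, 0);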
A: Could an Enum with the Flags Attribute help maybe? See here:
What does the [Flags] Enum Attribute mean in C#?
A: A flags enum can work too, I think, if you make it a byte enum:
[Flags] enum PesHeaders : byte { /* ... */ }
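For example, a quick sketch of that idea (the names and bit positions are my assumptions based on the byte-6 layout in the question):
[Flags]
enum PesHeaders : byte
{
    OriginalOrCopy = 0x01, // bit 0
    Copyright      = 0x02, // bit 1
    DataAlignment  = 0x04, // bit 2
    Priority       = 0x08, // bit 3
    Scrambling     = 0x30, // bits 4-5: a two-bit field, read with a mask rather than as a flag
    Reserved       = 0xC0  // bits 6-7
}

PesHeaders h = PesHeaders.Copyright | PesHeaders.Priority;
bool copyrighted = (h & PesHeaders.Copyright) != 0;
byte onDisk = (byte)h; // a single byte, ready to write to the stream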
A: By using an enum you can do this, but will look awkward.
[Flags]
public enum PESHeaderFlags
{
IsCopy = 1, // implied that if not present, then it is an original
IsCopyrighted = 2,
IsDataAligned = 4,
Priority = 8,
ScramblingControlType1 = 0,
ScramblingControlType2 = 16,
ScramblingControlType3 = 32,
ScramblingControlType4 = 16+32,
ScramblingControlFlags = ScramblingControlType1 | ScramblingControlType2 | ScramblingControlType3 | ScramblingControlType4,
etc.
}
A: You want StructLayoutAttribute
[StructLayout(LayoutKind.Explicit, Size=1, CharSet=CharSet.Ansi)]
public struct Foo
{ [FieldOffset(0)]public byte original_or_copy;
[FieldOffset(0)]public byte copyright;
[FieldOffset(0)]public byte data_alignment_indicator;
[FieldOffset(0)]public byte PES_priority;
[FieldOffset(0)]public byte PES_scrambling_control;
[FieldOffset(0)]public byte reserved;
}
This is really a union but you can use it as a bitfield--you just have to be conscious of where in the byte the bits for each field are supposed to be. Utility functions and/or constants to AND against can help.
const byte _original_or_copy = 1;
const byte _copyright = 2;
//bool ooo = foo.original_or_copy();
static bool original_or_copy(this Foo foo)
{ return (foo.original_or_copy & _original_or_copy) == _original_or_copy;
}
There is also LayoutKind.Sequential which will allow you to do it the C way.
A: As Christophe Lambrechts suggested, BitVector32 provides a solution. Jitted performance should be adequate, but I don't know for sure.
Here's the code illustrating this solution:
public struct rcSpan
{
//C# Spec 10.4.5.1: The static field variable initializers of a class correspond to a sequence of assignments that are executed in the textual order in which they appear in the class declaration.
internal static readonly BitVector32.Section sminSection = BitVector32.CreateSection(0x1FFF);
internal static readonly BitVector32.Section smaxSection = BitVector32.CreateSection(0x1FFF, sminSection);
internal static readonly BitVector32.Section areaSection = BitVector32.CreateSection(0x3F, smaxSection);
internal BitVector32 data;
//public uint smin : 13;
public uint smin
{
get { return (uint)data[sminSection]; }
set { data[sminSection] = (int)value; }
}
//public uint smax : 13;
public uint smax
{
get { return (uint)data[smaxSection]; }
set { data[smaxSection] = (int)value; }
}
//public uint area : 6;
public uint area
{
get { return (uint)data[areaSection]; }
set { data[areaSection] = (int)value; }
}
}
You can do a lot this way. You can do even better without using BitVector32, by providing handmade accessors for every field:
public struct rcSpan2
{
internal uint data;
//public uint smin : 13;
public uint smin
{
get { return data & 0x1FFF; }
set { data = (data & ~0x1FFFu ) | (value & 0x1FFF); }
}
//public uint smax : 13;
public uint smax
{
get { return (data >> 13) & 0x1FFF; }
set { data = (data & ~(0x1FFFu << 13)) | (value & 0x1FFF) << 13; }
}
//public uint area : 6;
public uint area
{
get { return (data >> 26) & 0x3F; }
set { data = (data & ~(0x3F << 26)) | (value & 0x3F) << 26; }
}
}
Surprisingly this last, handmade solution seems to be the most convenient, least convoluted, and the shortest one. That's of course only my personal preference.
A: I wrote one and am sharing it; it may help someone:
[global::System.AttributeUsage(AttributeTargets.Field, AllowMultiple = false)]
public sealed class BitInfoAttribute : Attribute {
byte length;
public BitInfoAttribute(byte length) {
this.length = length;
}
public byte Length { get { return length; } }
}
public abstract class BitField {
public void parse<T>(T[] vals) {
analysis().parse(this, ArrayConverter.convert<T, uint>(vals));
}
public byte[] toArray() {
return ArrayConverter.convert<uint, byte>(analysis().toArray(this));
}
public T[] toArray<T>() {
return ArrayConverter.convert<uint, T>(analysis().toArray(this));
}
static Dictionary<Type, BitTypeInfo> bitInfoMap = new Dictionary<Type, BitTypeInfo>();
private BitTypeInfo analysis() {
Type type = this.GetType();
if (!bitInfoMap.ContainsKey(type)) {
List<BitInfo> infos = new List<BitInfo>();
byte dataIdx = 0, offset = 0;
foreach (System.Reflection.FieldInfo f in type.GetFields()) {
object[] attrs = f.GetCustomAttributes(typeof(BitInfoAttribute), false);
if (attrs.Length == 1) {
byte bitLen = ((BitInfoAttribute)attrs[0]).Length;
if (offset + bitLen > 32) {
dataIdx++;
offset = 0;
}
infos.Add(new BitInfo(f, bitLen, dataIdx, offset));
offset += bitLen;
}
}
bitInfoMap.Add(type, new BitTypeInfo(dataIdx + 1, infos.ToArray()));
}
return bitInfoMap[type];
}
}
class BitTypeInfo {
public int dataLen { get; private set; }
public BitInfo[] bitInfos { get; private set; }
public BitTypeInfo(int _dataLen, BitInfo[] _bitInfos) {
dataLen = _dataLen;
bitInfos = _bitInfos;
}
public uint[] toArray<T>(T obj) {
uint[] datas = new uint[dataLen];
foreach (BitInfo bif in bitInfos) {
bif.encode(obj, datas);
}
return datas;
}
public void parse<T>(T obj, uint[] vals) {
foreach (BitInfo bif in bitInfos) {
bif.decode(obj, vals);
}
}
}
class BitInfo {
private System.Reflection.FieldInfo field;
private uint mask;
private byte idx, offset, shiftA, shiftB;
private bool isUnsigned = false;
public BitInfo(System.Reflection.FieldInfo _field, byte _bitLen, byte _idx, byte _offset) {
field = _field;
mask = (uint)(((1 << _bitLen) - 1) << _offset);
idx = _idx;
offset = _offset;
shiftA = (byte)(32 - _offset - _bitLen);
shiftB = (byte)(32 - _bitLen);
if (_field.FieldType == typeof(bool)
|| _field.FieldType == typeof(byte)
|| _field.FieldType == typeof(char)
|| _field.FieldType == typeof(uint)
|| _field.FieldType == typeof(ulong)
|| _field.FieldType == typeof(ushort)) {
isUnsigned = true;
}
}
public void encode(Object obj, uint[] datas) {
if (isUnsigned) {
uint val = (uint)Convert.ChangeType(field.GetValue(obj), typeof(uint));
datas[idx] |= ((uint)(val << offset) & mask);
} else {
int val = (int)Convert.ChangeType(field.GetValue(obj), typeof(int));
datas[idx] |= ((uint)(val << offset) & mask);
}
}
public void decode(Object obj, uint[] datas) {
if (isUnsigned) {
field.SetValue(obj, Convert.ChangeType((((uint)(datas[idx] & mask)) << shiftA) >> shiftB, field.FieldType));
} else {
field.SetValue(obj, Convert.ChangeType((((int)(datas[idx] & mask)) << shiftA) >> shiftB, field.FieldType));
}
}
}
public class ArrayConverter {
public static T[] convert<T>(uint[] val) {
return convert<uint, T>(val);
}
public static T1[] convert<T0, T1>(T0[] val) {
T1[] rt = null;
// type is same or length is same
// refer to http://stackoverflow.com/questions/25759878/convert-byte-to-sbyte
if (typeof(T0) == typeof(T1)) {
rt = (T1[])(Array)val;
} else {
int len = Buffer.ByteLength(val);
int w = typeWidth<T1>();
if (w == 1) { // bool
rt = new T1[len * 8];
} else if (w == 8) {
rt = new T1[len];
} else { // w > 8
int nn = w / 8;
int len2 = (len / nn) + ((len % nn) > 0 ? 1 : 0);
rt = new T1[len2];
}
Buffer.BlockCopy(val, 0, rt, 0, len);
}
return rt;
}
public static string toBinary<T>(T[] vals) {
StringBuilder sb = new StringBuilder();
int width = typeWidth<T>();
int len = Buffer.ByteLength(vals);
for (int i = len-1; i >=0; i--) {
sb.Append(Convert.ToString(Buffer.GetByte(vals, i), 2).PadLeft(8, '0')).Append(" ");
}
return sb.ToString();
}
private static int typeWidth<T>() {
int rt = 0;
if (typeof(T) == typeof(bool)) { // x
rt = 1;
} else if (typeof(T) == typeof(byte)) { // x
rt = 8;
} else if (typeof(T) == typeof(sbyte)) {
rt = 8;
} else if (typeof(T) == typeof(ushort)) { // x
rt = 16;
} else if (typeof(T) == typeof(short)) {
rt = 16;
} else if (typeof(T) == typeof(char)) {
rt = 16;
} else if (typeof(T) == typeof(uint)) { // x
rt = 32;
} else if (typeof(T) == typeof(int)) {
rt = 32;
} else if (typeof(T) == typeof(float)) {
rt = 32;
} else if (typeof(T) == typeof(ulong)) { // x
rt = 64;
} else if (typeof(T) == typeof(long)) {
rt = 64;
} else if (typeof(T) == typeof(double)) {
rt = 64;
} else {
throw new Exception("Unsupport type : " + typeof(T).Name);
}
return rt;
}
}
and the usage:
class MyTest01 : BitField {
[BitInfo(3)]
public bool d0;
[BitInfo(3)]
public short d1;
[BitInfo(3)]
public int d2;
[BitInfo(3)]
public int d3;
[BitInfo(3)]
public int d4;
[BitInfo(3)]
public int d5;
public MyTest01(bool _d0, short _d1, int _d2, int _d3, int _d4, int _d5) {
d0 = _d0;
d1 = _d1;
d2 = _d2;
d3 = _d3;
d4 = _d4;
d5 = _d5;
}
public MyTest01(byte[] datas) {
parse(datas);
}
public new string ToString() {
return string.Format("d0: {0}, d1: {1}, d2: {2}, d3: {3}, d4: {4}, d5: {5} \r\nbinary => {6}",
d0, d1, d2, d3, d4, d5, ArrayConverter.toBinary(toArray()));
}
};
class MyTest02 : BitField {
[BitInfo(5)]
public bool val0;
[BitInfo(5)]
public byte val1;
[BitInfo(15)]
public uint val2;
[BitInfo(15)]
public float val3;
[BitInfo(15)]
public int val4;
[BitInfo(15)]
public int val5;
[BitInfo(15)]
public int val6;
public MyTest02(bool v0, byte v1, uint v2, float v3, int v4, int v5, int v6) {
val0 = v0;
val1 = v1;
val2 = v2;
val3 = v3;
val4 = v4;
val5 = v5;
val6 = v6;
}
public MyTest02(byte[] datas) {
parse(datas);
}
public new string ToString() {
return string.Format("val0: {0}, val1: {1}, val2: {2}, val3: {3}, val4: {4}, val5: {5}, val6: {6}\r\nbinary => {7}",
val0, val1, val2, val3, val4, val5, val6, ArrayConverter.toBinary(toArray()));
}
}
public class MainClass {
public static void Main(string[] args) {
MyTest01 p = new MyTest01(false, 1, 2, 3, -1, -2);
Debug.Log("P:: " + p.ToString());
MyTest01 p2 = new MyTest01(p.toArray());
Debug.Log("P2:: " + p2.ToString());
MyTest02 t = new MyTest02(true, 1, 12, -1.3f, 4, -5, 100);
Debug.Log("t:: " + t.ToString());
MyTest02 t2 = new MyTest02(t.toArray());
Debug.Log("t:: " + t.ToString());
Console.Read();
return;
}
}
A: One more based off of Zbyl's answer. This one is a little easier to change around for me - I just have to adjust the sz0,sz1... and also make sure mask# and loc# are correct in the Set/Get blocks.
Performance wise, it should be the same as they both resolved to 38 MSIL statements. (constants are resolved at compile time)
public struct MyStruct
{
internal uint raw;
const int sz0 = 4, loc0 = 0, mask0 = ((1 << sz0) - 1) << loc0;
const int sz1 = 4, loc1 = loc0 + sz0, mask1 = ((1 << sz1) - 1) << loc1;
const int sz2 = 4, loc2 = loc1 + sz1, mask2 = ((1 << sz2) - 1) << loc2;
const int sz3 = 4, loc3 = loc2 + sz2, mask3 = ((1 << sz3) - 1) << loc3;
public uint Item0
{
get { return (uint)(raw & mask0) >> loc0; }
set { raw = (uint)(raw & ~mask0 | (value << loc0) & mask0); }
}
public uint Item1
{
get { return (uint)(raw & mask1) >> loc1; }
set { raw = (uint)(raw & ~mask1 | (value << loc1) & mask1); }
}
public uint Item2
{
get { return (uint)(raw & mask2) >> loc2; }
set { raw = (uint)(raw & ~mask2 | (value << loc2) & mask2); }
}
public uint Item3
{
get { return (uint)((raw & mask3) >> loc3); }
set { raw = (uint)(raw & ~mask3 | (value << loc3) & mask3); }
}
}
A: I wrote one this morning with T4. :) Same example as Zbyl, though I threw in a bit of uint sizing fun. This is just a first pass; it could obviously use a little error checking. Also, the bitFields spec array would be nicer in a separate file, maybe a .ttinclude or a JSON/YAML file.
=== BitFields.tt ===
<#@ template language="C#" #>
<#@ assembly name="System.Core" #>
<#@ import namespace="System.Linq" #>
<#
var bitFields = new[]
{
new
{
Name = "rcSpan2", Fields = new[] { ("smin", 13), ("smax", 13), ("area", 6) },
},
};
foreach (var bitField in bitFields)
{
static string getType(int size) =>
size switch
{
> 32 => "ulong",
> 16 => "uint",
> 8 => "ushort",
_ => "byte",
};
var bitFieldType = getType(bitField.Fields.Sum(f => f.Item2));
#>
public struct <#=bitField.Name#>
{
<#=bitFieldType#> _bitfield;
<#
var offset = 0;
foreach (var (fieldName, fieldSize) in bitField.Fields)
{
var fieldType = getType(fieldSize);
var fieldMask = $"0x{((1UL<<fieldSize)-1):X}U";
#>
public <#=fieldType#> <#=fieldName#> // : <#=fieldSize#>
{
get => (<#=fieldType#>)(<#=offset > 0 ? $"(_bitfield >> {offset})" : "_bitfield"#> & <#=fieldMask#>);
set => _bitfield = (<#=bitFieldType#>)((_bitfield & ~((<#=bitFieldType#>)<#=fieldMask#> << <#=offset#>)) | ((<#=bitFieldType#>)(value & <#=fieldMask#>) << <#=offset#>));
}
<#
offset += fieldSize;
}
#>
}
<#}#>
=== BitFields.cs === (generated)
public struct rcSpan2
{
uint _bitfield;
public ushort smin // : 13
{
get => (ushort)(_bitfield & 0x1FFFU);
set => _bitfield = (uint)((_bitfield & ~((uint)0x1FFFU << 0)) | ((uint)(value & 0x1FFFU) << 0));
}
public ushort smax // : 13
{
get => (ushort)((_bitfield >> 13) & 0x1FFFU);
set => _bitfield = (uint)((_bitfield & ~((uint)0x1FFFU << 13)) | ((uint)(value & 0x1FFFU) << 13));
}
public byte area // : 6
{
get => (byte)((_bitfield >> 26) & 0x3FU);
set => _bitfield = (uint)((_bitfield & ~((uint)0x3FU << 26)) | ((uint)(value & 0x3FU) << 26));
}
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/14464",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "85"
} |
Q: How can an MFC application terminate itself? What is the proper way for an MFC application to cleanly close itself?
A: In support of @Mike's answer, the reason to use this method is to trigger the correct shutdown sequence. Especially important for MDI/SDI applications because it gives a chance for documents to prompt for save before exit or to cancel the exit.
@Matt Noguchi, your method will circumvent this sequence (which may be the desired effect, I suppose, but you've probably got problems if you're short-circuiting the normal teardown).
A: PostQuitMessage( [exit code] );
A: AfxGetMainWnd()->PostMessage(WM_CLOSE);
A: Programmatically Terminate an MFC Application
void ExitMFCApp()
{
// same as double-clicking on main window close box
ASSERT(AfxGetMainWnd() != NULL);
AfxGetMainWnd()->SendMessage(WM_CLOSE);
}
http://support.microsoft.com/kb/117320
A: If it is a dialog-based application, you can do it by calling the EndDialog() function.
If it is an SDI/MDI-based application, you can call DestroyWindow. But before that you will need to do the cleanup yourself (closing documents, deallocating memory and resources, destroying any additional windows created, etc.).
| {
"language": "en",
"url": "https://stackoverflow.com/questions/14491",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11"
} |
Q: How can I stop MATLAB from returning until after a command-line script completes? I see in the MATLAB help (matlab -h) that I can use the -r flag to specify an m-file to run. I notice when I do this, MATLAB seems to start the script, but immediately return. The script processes fine, but the main app has already returned.
Is there any way to get MATLAB to only return once the command is finished? If you're calling it from a separate program it seems like it's easier to wait on the process than to use a file or sockets to confirm completion.
To illustrate, here's a sample function waitHello.m:
function waitHello
disp('Waiting...');
pause(3); %pauses 3 seconds
disp('Hello World');
quit;
And I try to run this using:
matlab -nosplash -nodesktop -r waitHello
A: Quick answer:
matlab -wait -nosplash -nodesktop -r waitHello
In Matlab 7.1 (the version I have) there is an undocumented command line option -wait in matlab.bat. If it doesn't work for your version, you could probably add it in. Here's what I found. The command at the bottom that finally launches matlab is (line 153):
start "MATLAB" %START_WAIT% "%MATLAB_BIN_DIR%\%MATLAB_ARCH%\matlab" %MATLAB_ARGS%
The relevant syntax of the start command (see "help start" in cmd.exe) in this case is:
start ["window title"] [/wait] myprogram.exe args ...
A bit higher, among all of the documented command line options, I found (line 60):
) else if (%opt%) == (-wait) (
set START_WAIT=/wait
) else (
So specifying -wait should do what you want, as long as you're also exiting matlab from your script (otherwise it will wait for you to terminate it interactively).
| {
"language": "en",
"url": "https://stackoverflow.com/questions/14495",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "23"
} |
Q: Has anyone used NUnitLite with any success? I've recently started work on the Compact Framework and I was wondering if anyone had some recommendations for unit testing beyond what's in VS 2008. MSTest is ok, but debugging the tests is a nightmare and the test runner is so slow.
I see that NUnitLite on codeplex is an option, but it doesn't look very active; it's also in the roadmap for NUnit 3.0, but who knows when that will come out. Has anyone had any success with it?
A: What we've done that really improves our efficiency and quality is to multi-target our mobile application. That is to say, with a little creativity, a few conditional compile tags, and custom project configurations, it is possible to build a version of your mobile application that also runs on the desktop.
If you put all your business logic you need tested in a separate project/assembly then this layer can be very effectively tested using any of the desktop tools you are already familiar with.
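A sketch of the conditional-compile part; PocketPC is the symbol Visual Studio defines for smart device projects, but treat the exact symbol as an assumption for your setup:
public static class Platform
{
    public static string Describe()
    {
#if PocketPC
        return "Compact Framework build"; // code path used on the device/emulator
#else
        return "Desktop build";           // code path used when unit testing on the desktop
#endif
    }
}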
A: We use NUnitLite, although I think we did have had to add some code to it in order for it to work.
One of the problems we found is that if you are using parts of the platform that only exist in CF, then you can only run those tests in NUnitLite on an emulator or Windows Mobile device, which makes it hard to run the tests as part of an integrated build process. We got round this by adding a new test attribute allowing you to disable the tests that would only run on the CF (typically these would be p/invoking out to some Windows Mobile-only DLL).
| {
"language": "en",
"url": "https://stackoverflow.com/questions/14497",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Proportional font IDE I would really like to see a proportional font IDE, even if I have to build it myself (perhaps as an extension to Visual Studio). What I basically mean is MS Word style editing of code that sort of looks like the typographical style in The C++ Programming Language book.
I want to set tab stops for my indents and lining up function signatures and rows of assignment statements, which could be specified in points instead of fixed character positions. I would also like bold and italics. Various font sizes and even style sheets would be cool.
Has anyone seen anything like this out there or know the best way to start building one?
A: Thinking with Style suggests to use your favorite text-manipulation software like Word or Writer. Create your programme code in rich XML and extract the compiler-relevant sections with XSLT. The "Office" software will provide all advanced text-manipulation and formatting features.
A: I expected you'd get down-modded and picked on for that suggestion, but there's some real sense to the idea.
The main advantage of the traditional 'non-proportional' font requirement in code editors is to ease the burden of performing code formatting.
But with all of the interactive automatic formatting that occurs in modern IDE's, it's really possible that a proportional font could improve the readability of the code (rather than hampering it, as I'm sure many purists would expect).
A character called Roedy Green (famous for his 'how to write unmaintainable code' articles) wrote about a theoretical editor/language, based on Java and called Bali. It didn't include non-proportional fonts exactly, but it did include the idea of having non-uniform font-sizes.
Also, this short Joel Spolsky post points to a solution, elastic tab stops (as mentioned by another commenter), that would help with the support of non-proportional (and variable-sized) fonts.
A: @Thomas Owens
I don't find code formatted like that easier to read.
That's fine, it is just a personal preference and we can disagree. Format it the way you think is best and I'll respect it. I frequently ask myself 'how should I format this or that thing?' My answer is always to format it to improve readability, which I admit can be subjective.
Regarding your sample, I just like having that nicely aligned column on the right hand side, its sort of a quick "index" into the code on the left. Having said that, I would probably avoid commenting every line like that anyway because the code itself shouldn't need that much explanation. And if it does I tend to write a paragraph above the code.
But consider this example from the original poster. It's easier to spot the comments in the second one, in my opinion.
for (size_type i = 0; i<v.size(); i++) { // rehash:
size_type ii = hash(v[i].key)%b.size(); // hash
v[i].next = b[ii]; // link
b[ii] = &v[i];
}
for (size_type i = 0; i<v.size(); i++) {    // rehash:
    size_type ii = hash(v[i].key)%b.size(); // hash
    v[i].next = b[ii];                      // link
    b[ii] = &v[i];
}
A: @Thomas Owens
But do people really line comments up
like that? ... I never try to
line up declarations or comments or
anything, and the only place I've ever
seen that is in textbooks.
Yes people do line up comments and declarations and all sorts of things. Consistently well formatted code is easier to read and code that is easier to read is easier to maintain.
A: I wonder why nobody actually answers your question, and why the accepted answer doesn't really have anything to do with your question. But anyway...
a proportional font IDE
In Eclipse you can choose any font on your system.
set tab stops for my indents
In Eclipse you can configure the automatic indentation, including setting it to "tabs only".
lining up function signatures and rows of assignment statements
In Eclipse, automatic indentation does that.
which could be specified in points instead of fixed character positions.
Sorry, I don't think Eclipse can help you there. But it is open source. ;-)
bold and italics
Eclipse has that.
Various font sizes and even style sheets would be cool
I think Eclipse only uses one font and font-size for each file type (for example Java source file), but you can have different "style sheets" for different file types.
A: I'd still like to see a popular editor or IDE implement elastic tabstops.
A: When I last looked at Eclipse (some time ago now!) it allowed you to choose any installed font to work in. Not so sure whether it supported the notion of indenting using tab stops.
It looked cool, but the code was definitely harder to read...
A: Soeren: That's kind of neat, IMO. But do people really line comments up like that? For my end of line comments, I always use a single space then // or /* or equivalent, depending on language I'm using. I never try to line up declarations or comments or anything, and the only place I've ever seen that is in textbooks.
A: @Brian Ensink: I don't find code formatted like that easier to read.
int var1 = 1 //Comment
int longerVar = 2 //Comment
int anotherVar = 4 //Comment
versus
int var2 = 1 //Comment
int longerVar = 2 //Comment
int anotherVar = 4 //Comment
I find the first lines easier to read than the second lines, personally.
A: The indentation part of your question is being done today in a real product, though possibly to an even greater level of automation than you imagined. The product I mention is an XSLT IDE, but the same formatting principles would work with most (but not all) conventional code syntaxes.
This really has to be seen in video to get the sense of it all (sorry about the music back-track). There's also a light XML editor spin-off product, XMLQuire, that serves as a technology demonstrator.
The screenshot below shows XML formatted with quite complex formatting rules in this XSLT IDE, where all indentation is performed word-processor style, using the left margin - not space or tab characters.
To emphasise this formatting concept, all characters have been highlighted to show where the left-margin extends to keep indentation. I use the term Virtual Formatting to describe this - it's not like elastic tab stops, because there simply are no tabs, just margin information which is part of the 'paragraph' formatting (RTF codes are used here). The parser reformats continuously, in the same pass as syntax coloring.
A proportional font hasn't been used here, but it could have been quite easily - because the indentation is set in TWIPS. The editing experience is quite compelling because, as you refactor the code (XML in this case), perhaps through drag and drop, or by extending the length of an attribute value, the indentation just re-flows itself to fit - there's no tab-key or 'reformat' button to press.
So, the indentation is there, but the font work is a more complex problem. I've experimented with this, but found that if fonts are re-selected as you type, the horizontal shifting of the code is too distracting - there would need to be a user-initiated 'format fonts' command probably. The product also has Ink/Handwriting technology built-in for annotating code, but I've yet to exploit this in the live release.
A: Folks are all complaining about comments not lining up.
Seems to me that there's a very simple solution: define the unit space as the widest character in the font. Now, proportionally space all characters except the space. The space takes up as much room as needed to line up the next character where it would be if all preceding characters on the line were the widest in the font.
i.e.:
iiii_space_Foo
xxxx_space_Foo
would line up the "Foo", with the space after the "i" being much wider than after the "x".
So call it elastic spaces rather than tab stops.
If you're a smart editor, treat comments specially, but that's just gravy
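A rough sketch of that layout rule (a hypothetical helper using GDI+ measurement; a real editor would cache glyph widths and handle glyphs wider than the unit cell):
using System.Drawing;

static class ElasticSpaces
{
    // Returns the x-position of each character: proportional glyphs, but each
    // space stretches so the next character lands on the monospace grid implied
    // by the widest glyph in the font.
    public static float[] LayoutLine(Graphics g, Font font, string line)
    {
        float cell = g.MeasureString("W", font).Width; // unit = widest character
        float[] xs = new float[line.Length];
        float x = 0;
        for (int i = 0; i < line.Length; i++)
        {
            xs[i] = x;
            if (line[i] == ' ')
                x = (i + 1) * cell; // elastic space: snap to the grid
            else
                x += g.MeasureString(line[i].ToString(), font).Width;
        }
        return xs;
    }
}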
A: Let me recall arguments about using the 'var' keyword in C#. People hated it, and thought it would make code less clear. For example, you couldn't know the type in something like:
var x = GetResults("Main");
foreach(var y in x)
{
WriteResult(x);
}
Their argument was that you couldn't see if x was an array, a List, or any other IEnumerable, or what the type of y was. In my opinion the lack of clarity did not arise from using var, but from picking unclear variable names. Why not just type:
var electionResults = GetRegionalElectionResults("Main");
foreach(var result in electionResults)
{
Write(result); // you can see what you're writing!!
}
"But you still cannot see the type of electionResults!" - does it really matter? If you want to change the return type of GetRegionalElectionResults, you can do so. Any IEnumerable will do.
Fast forward to now. People want to align comments and similar code:
int var2 = 1; //The number of days since startup, including the first
int longerVar = 2; //The number of free days per week
int anotherVar = 38; //The number of working hours per week
So without the comments everything is unclear. And if you don't align the values, you cannot separate them from the variables. But do you? What about this:
int daysSinceStartup = 1; // including first
int freeDaysPerWeek = 2;
int workingHoursPerWeek = 38;
If you need a comment on EVERY LINE, you're doing something wrong. "But you still need to align the VALUES" - do you? What does 38 have to do with 2?
In C# most code blocks can easily be aligned using only tabs (or actually, multiples of four spaces):
var regionsWithIncrease =
    from result in GetRegionalElectionResults()
    where result.TotalCount > result.PreviousTotalCount &&
          result.PreviousTotalCount > 0 // just new regions
    select result.Region;
foreach (var region in regionsWithIncrease)
{
    Write(region);
}
You should never use line-to-line comments and you should rarely need to vertically align things. Rarely, not never. So I understand if some of you guys prefer a monospaced font. I prefer the readability of Noto Sans or Source Sans Pro. These fonts are available freely from Google, and resemble Calibri, but are designed for programming and thus have all the necessary characteristics:
*
*Big : ; . , so you can clearly see the difference
*Clearly distinct 0Oo and distinct Il|
A: The major problem with proportional fonts is they destroy the vertical alignment of the code and this is a fairly major loss when it comes to writing code.
The vertical alignment makes it possible to manipulate rectangular blocks of code that span multiple lines by allowing block operations like cut, copy, paste, delete and indent, unindent etc to be easily performed.
As an example consider this snippet of code:
a1 = a111;
B2 = aaaa;
c3 = AAAA;
w4 = wwWW;
W4 = WWWW;
In a mono-spaced font the = and the ; all line up.
Now if this text is loaded into Word and displayed using a proportional font, the text effectively turns into this:
NOTE: Extra white space added to show how the = and ; no longer line up:
a1 = a1 1 1;
B2 = aaaa;
c3 = A A A A;
w4 = w w W W;
W4 = W W W W;
With the vertical alignment gone those nice blocks of code effectively disappear.
Also because the cursor is no longer guaranteed to move vertically (i.e. the column number is not always constant from one line to the next) it makes it more difficult to write throw away macro scripts designed to manipulated similar looking lines.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/14504",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12"
} |
Q: Alpha blending colors in .NET Compact Framework 2.0 In the Full .NET framework you can use the Color.FromArgb() method to create a new color with alpha blending, like this:
Color blended = Color.FromArgb(alpha, color);
or
Color blended = Color.FromArgb(alpha, red, green , blue);
However in the Compact Framework (2.0 specifically), neither of those methods are available, you only get:
Color.FromArgb(int red, int green, int blue);
and
Color.FromArgb(int val);
The first one, obviously, doesn't even let you enter an alpha value, but the documentation for the latter shows that "val" is a 32bit ARGB value (as 0xAARRGGBB as opposed to the standard 24bit 0xRRGGBB), so it would make sense that you could just build the ARGB value and pass it to the function. I tried this with the following:
private Color FromARGB(byte alpha, byte red, byte green, byte blue)
{
int val = (alpha << 24) | (red << 16) | (green << 8) | blue;
return Color.FromArgb(val);
}
But no matter what I do, the alpha blending never works; the resulting color always has full opacity, even when setting the alpha value to 0.
Has anyone gotten this to work on the Compact Framework?
A: Apparently, it's not quite that simple, but still possible, if you have Windows Mobile 5.0 or newer.
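The core of that approach is P/Invoking AlphaBlend from coredll.dll. A minimal, untested sketch (an assumption of the usual signature; WM 5.0+ only):
using System;
using System.Runtime.InteropServices;

[StructLayout(LayoutKind.Sequential)]
struct BlendFunction
{
    public byte BlendOp;             // AC_SRC_OVER = 0
    public byte BlendFlags;          // must be 0
    public byte SourceConstantAlpha; // 0 = transparent .. 255 = opaque
    public byte AlphaFormat;         // 0 when using a constant alpha
}

static class NativeGdi
{
    [DllImport("coredll.dll")]
    public static extern bool AlphaBlend(
        IntPtr hdcDest, int xDest, int yDest, int cxDest, int cyDest,
        IntPtr hdcSrc, int xSrc, int ySrc, int cxSrc, int cySrc,
        BlendFunction blend);
}
You then obtain the two HDCs via Graphics.GetHdc() on the destination and on a Graphics created over a source Bitmap, and release them with ReleaseHdc() afterwards.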
A: There is a codeplex site out there that seems to do the heavy lifting of com interop for you:
A:
Apparently, it's not quite that
simple, but still possible, if you
have Windows Mobile 5.0 or newer.
Wow...definitely not worth it if I have to put all that code in (and do native interop!)
Good to know though, thanks for the link.
A: I took this code and added this:
device.RenderState.AlphaBlendEnable = true;
device.RenderState.AlphaFunction = Compare.Greater;
device.RenderState.AlphaTestEnable = true;
device.RenderState.DestinationBlend = Blend.InvSourceAlpha;
device.RenderState.SourceBlend = Blend.SourceAlpha;
device.RenderState.DiffuseMaterialSource = ColorSource.Material;
in the initialization routine and it works very well, thank you.
A: CE 6.0 does not support alpha blending. WM 5.0 and above do, but not through the .NET CF; you will need to P/Invoke GDI stuff to do so. There are ready-made solutions out there; if you are interested, I can dig the links out tomorrow at the office. I have to work with CE 6.0 currently, so I don't have them to hand.
If you are using CE 6.0 you can use pseudo-transparency by reserving a transparency background color (e.g. ff00ff or something similarly ugly) and using that in your images for transparent areas. Then, your parent controls must implement a simple interface that allows re-drawing the relevant portion on your daughter controls' graphics buffer. Note that this will not give you a true "alpha channel" but rather just a hard on-off binary kind of transparency.
It's not as bad as it sounds. Take a look at the OpenNETCF ImageButton for starters. If you are going to use this method, i have a somewhat extended version of some simple controls with this technique lying around if you are interested.
One additional drawback is that this technique relies on the parent control implementing a special interface, and the daugther controls using it in drawing. So with closed-source components (i.e. also the stock winforms components) you are out of luck.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/14505",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Reverse Find in a string I need to be able to find the last occurrence of a character within an element.
For example:
<mediaurl>http://www.blah.com/path/to/file/media.jpg</mediaurl>
If I try to locate it through using substring-before(mediaurl, '.') and substring-after(mediaurl, '.') then it will, of course, match on the first dot.
How would I get the file extension? Essentially, I need to get the file name and the extension from a path like this, but I am quite stumped as to how to do it using XSLT.
A: If you're using XSLT 2.0, it's easy:
<xsl:variable name="extension" select="tokenize($filename, '\.')[last()]"/>
If you're not, it's a bit harder. There's a good example from the O'Reilly XSLT Cookbook. Search for "Tokenizing a String."
I believe there's also an EXSLT function, if you have that available.
A: The following is an example of a template that would produce the required output in XSLT 1.0:
<xsl:template name="getExtension">
<xsl:param name="filename"/>
<xsl:choose>
<xsl:when test="contains($filename, '.')">
<xsl:call-template name="getExtension">
<xsl:with-param name="filename" select="substring-after($filename, '.')"/>
</xsl:call-template>
</xsl:when>
<xsl:otherwise>
<xsl:value-of select="$filename"/>
</xsl:otherwise>
</xsl:choose>
</xsl:template>
<xsl:template match="/">
<xsl:call-template name="getExtension">
<xsl:with-param name="filename" select="'http://www.blah.com/path/to/file/media.jpg'"/>
</xsl:call-template>
</xsl:template>
A: How about tokenizing with "/" and taking the last element from the array?
Example: tokenize("XPath is fun", "\s+")
Result: ("XPath", "is", "fun")
Was an XSLT fiddler sometime back... lost touch now. But HTH
A: For reference, this problem is usually called "substring-after-last" in XSLT.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/14527",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
} |
Q: LINQ-to-SQL vs stored procedures? I took a look at the "Beginner's Guide to LINQ" post here on StackOverflow (Beginners Guide to LINQ), but had a follow-up question:
We're about to ramp up a new project where nearly all of our database op's will be fairly simple data retrievals (there's another segment of the project which already writes the data). Most of our other projects up to this point make use of stored procedures for such things. However, I'd like to leverage LINQ-to-SQL if it makes more sense.
So, the question is this: For simple data retrievals, which approach is better, LINQ-to-SQL or stored procs? Any specific pro's or con's?
Thanks.
A: I am generally a proponent of putting everything in stored procedures, for all of the reasons DBAs have been harping on for years. In the case of Linq, it is true that there will be no performance difference with simple CRUD queries.
But keep a few things in mind when making this decision: using any ORM couples you tightly to your data model. A DBA has no freedom to make changes to the data model without forcing you to change your compiled code. With stored procedures, you can hide these sorts of changes to an extent, since the parameter list and results set(s) returned from a procedure represent its contract, and the innards can be changed around, just so long as that contract is still met.
And also, if Linq is used for more complex queries, tuning the database becomes a much more difficult task. When a stored procedure is running slow, the DBA can totally focus on the code in isolation, and has lots of options, just so that contract is still satisfied when he/she is done.
I have seen many, many cases where serious problems in an application were addressed by changes to the schema and code in stored procedures without any change to deployed, compiled code.
Perhaps a hybrid approach would be nice with Linq? Linq can, of course, be used to call stored procedures.
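For reference, a stored procedure surfaces in LINQ to SQL as an ordinary method on the DataContext. A sketch of the pattern the designer generates (the procedure name is hypothetical, and Customer is an assumed entity class mapped elsewhere):
using System.Data.Linq;
using System.Data.Linq.Mapping;
using System.Reflection;

public partial class NorthwindDataContext : DataContext
{
    public NorthwindDataContext(string connection) : base(connection) { }

    [Function(Name = "dbo.GetCustomersByCity")]
    public ISingleResult<Customer> GetCustomersByCity(
        [Parameter(DbType = "NVarChar(30)")] string city)
    {
        // DataContext.ExecuteMethodCall dispatches to the mapped sproc.
        IExecuteResult result = this.ExecuteMethodCall(
            this, (MethodInfo)MethodInfo.GetCurrentMethod(), city);
        return (ISingleResult<Customer>)result.ReturnValue;
    }
}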
A:
A DBA has no freedom to make changes
to the data model without forcing you
to change your compiled code. With
stored procedures, you can hide these
sorts of changes to an extent, since
the parameter list and results set(s)
returned from a procedure represent
its contract, and the innards can be
changed around, just so long as that
contract is still met.
I really don't see this as being a benefit. Being able to change something in isolation might sound good in theory, but just because the changes fulfil a contract doesn't mean it's returning the correct results. To be able to determine what the correct results are you need context and you get that context from the calling code.
A: Linq to Sql.
Sql server will cache the query plans, so there's no performance gain for sprocs.
Your linq statements, on the other hand, will be logically part of and tested with your application. Sprocs are always a bit separated and are harder to maintain and test.
If I was working on a new application from scratch right now I would just use Linq, no sprocs.
A: I think you need to go with procs for anything real.
A) Writing all your logic in linq means your database is less useful because only your application can consume it.
B) I'm not convinced that object modelling is better than relational modelling anyway.
C) Testing and developing a stored procedure in SQL is a hell of a lot faster than a compile edit cycle in any Visual Studio environment. You just edit, F5 and hit select and you are off to the races.
D) It's easier to manage and deploy stored procedures than assemblies.. you just put the file on the server, and press F5...
E) Linq to sql still writes crappy code at times when you don't expect it.
Honestly, I think the ultimate thing would be for MS to augment t-sql so that it can do a join projection impliclitly the way linq does. t-sql should know if you wanted to do order.lineitems.part, for example.
A: LINQ doesn't prohibit the use of stored procedures. I've used mixed mode with LINQ-SQL and LINQ-storedproc. Personally, I'm glad I don't have to write the stored procs....pwet-tu.
A: IMHO, RAD = LINQ, RUP = Stored Procs. I worked for a large Fortune 500 company for many years, at many levels including management, and frankly, I would never hire RUP developers to do RAD development. They are so siloed that they very limited knowledge of what to do at other levels of the process. With a siloed environment, it makes sense to give DBAs control over the data through very specific entry points, because others frankly don't know the best ways to accomplish data management.
But large enterprises move painfully slow in the development arena, and this is extremely costly. There are times when you need to move faster to save both time and money, and LINQ provides that and more in spades.
Sometimes I think that DBAs are biased against LINQ because they feel it threatens their job security. But that's the nature of the beast, ladies and gentlemen.
A: For basic data retrieval I would be going for Linq without hesitation.
Since moving to Linq I've found the following advantages:
*
*Debugging my DAL has never been easier.
*Compile time safety when your schema changes is priceless.
*Deployment is easier because everything is compiled into DLL's. No more managing deployment scripts.
*Because Linq can support querying anything that implements the IQueryable interface, you will be able to use the same syntax to query XML, Objects and any other datasource without having to learn a new syntax
A: Also, there is the issue of possible 2.0 rollback. Trust me it has happened to me a couple of times so I am sure it has happened to others.
I also agree that abstraction is the best. Along with the fact, the original purpose of an ORM is to make RDBMS match up nicely to the OO concepts. However, if everything worked fine before LINQ by having to deviate a bit from OO concepts then screw 'em. Concepts and reality don't always fit well together. There is no room for militant zealots in IT.
A: According to gurus, I define LINQ as a motorcycle and SP as a car.
If you want to go for a short trip and only have small passengers (in this case 2), go gracefully with LINQ.
But if you want to go for a journey and have a large band, I think you should choose SP.
As a conclusion, choosing between the motorcycle and the car depends on your route (business), length (time), and passengers (data).
Hope it helps, I may be wrong. :D
A: I'm assuming you mean Linq To Sql
For any CRUD command it's easy to profile the performance of a stored procedure vs. any technology. In this case any difference between the two will be negligible. Try profiling for a 5 (simple types) field object over 100,000 select queries to find out if there's a real difference.
On the other hand, the real deal-breaker will be whether you feel comfortable putting your business logic in your database or not, which is an argument against stored procedures.
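A rough harness for that kind of profiling (the query bodies in the usage comments are placeholders for your own LINQ query and sproc wrapper):
using System;
using System.Diagnostics;

static class Profiler
{
    public static long TimeIt(int iterations, Action query)
    {
        Stopwatch sw = Stopwatch.StartNew();
        for (int i = 0; i < iterations; i++)
            query();
        sw.Stop();
        return sw.ElapsedMilliseconds;
    }

    // Usage sketch:
    // long linqMs = TimeIt(100000, () => ctx.Products.First(p => p.Id == 42));
    // long procMs = TimeIt(100000, () => ctx.GetProductById(42).ToList());
}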
A: All these answers leaning towards LINQ are mainly talking about EASE of DEVELOPMENT, which is more or less connected to poor quality of coding or laziness in coding. I am like that myself.
Some advantages of LINQ I read here, such as being easy to test and easy to debug, are nowhere connected to the final output or the end user. They are always going to cause the end user trouble on performance. What's the point of loading many things into memory and then applying filters on them using LINQ?
Again, type safety is just the caution that "we are careful to avoid wrong typecasting", which again is poor quality we are trying to improve by using LINQ. Even in that case, if anything in the database changes, e.g. the size of a string column, then LINQ needs to be re-compiled and would not be typesafe without that... I tried.
Although we found LINQ good, sweet, and interesting while working with it, it has the sheer disadvantage of making developers lazy :) and it has been proved 1000 times that it is bad (maybe worse) on performance compared to stored procs.
Stop being lazy. I am trying hard. :)
A: LINQ will bloat the procedure cache
If an application is using LINQ to SQL and the queries involve the use of strings that can be highly variable in length, the SQL Server procedure cache will become bloated with one version of the query for every possible string length. For example, consider the following very simple queries created against the Person.AddressTypes table in the AdventureWorks2008 database:
var p =
from n in x.AddressTypes
where n.Name == "Billing"
select n;
var p =
from n in x.AddressTypes
where n.Name == "Main Office"
select n;
If both of these queries are run, we will see two entries in the SQL Server procedure cache: One bound with an NVARCHAR(7), and the other with an NVARCHAR(11). Now imagine if there were hundreds or thousands of different input strings, all with different lengths. The procedure cache would become unnecessarily filled with all sorts of different plans for the exact same query.
More here: https://connect.microsoft.com/VisualStudio/feedback/ViewFeedback.aspx?FeedbackID=363290
A: For simple CRUD operations with a single data access point, I would say go for LINQ if you feel comfortable with the syntax. For more complicated logic I think sprocs are more efficient performance-wise if you are good at T-SQL and its more advanced operations. You also have the help from Tuning Advisor, SQL Server Profiler, debugging your queries from SSMS etc.
A: Some advantages of LINQ over sprocs:
*
*Type safety: I think we all understand this.
*Abstraction: This is especially true with LINQ-to-Entities. This abstraction also allows the framework to add additional improvements that you can easily take advantage of. PLINQ is an example of adding multi-threading support to LINQ. Code changes are minimal to add this support. It would be MUCH harder to do this with data access code that simply calls sprocs.
*Debugging support: I can use any .NET debugger to debug the queries. With sprocs, you cannot easily debug the SQL and that experience is largely tied to your database vendor (MS SQL Server provides a query analyzer, but often that isn't enough).
*Vendor agnostic: LINQ works with lots of databases and the number of supported databases will only increase. Sprocs are not always portable between databases, either because of varying syntax or feature support (if the database supports sprocs at all).
*Deployment: Others have mentioned this already, but it's easier to deploy a single assembly than to deploy a set of sprocs. This also ties in with #4.
*Easier: You don't have to learn T-SQL to do data access, nor do you have to learn the data access API (e.g. ADO.NET) necessary for calling the sprocs. This is related to #3 and #4.
Some disadvantages of LINQ vs sprocs:
*
*Network traffic: sprocs need only serialize sproc-name and argument data over the wire while LINQ sends the entire query. This can get really bad if the queries are very complex. However, LINQ's abstraction allows Microsoft to improve this over time.
*Less flexible: Sprocs can take full advantage of a database's feature set. LINQ tends to be more generic in its support. This is common in any kind of language abstraction (e.g. C# vs assembler).
*Recompiling: If you need to make changes to the way you do data access, you need to recompile, version, and redeploy your assembly. Sprocs can sometimes allow a DBA to tune the data access routine without a need to redeploy anything.
Security and manageability are something that people argue about too.
*
*Security: For example, you can protect your sensitive data by restricting access to the tables directly, and put ACLs on the sprocs. With LINQ, however, you can still restrict direct access to tables and instead put ACLs on updatable table views to achieve a similar end (assuming your database supports updatable views).
*Manageability: Using views also gives you the advantage of shielding your application from breaking schema changes (like table normalization). You can update the view without requiring your data access code to change.
I used to be a big sproc guy, but I'm starting to lean towards LINQ as a better alternative in general. If there are some areas where sprocs are clearly better, then I'll probably still write a sproc but access it using LINQ. :)
A: I think the pro-LINQ arguments seem to come from people who don't have a history with database development (in general).
Especially if using a product like VS DB Pro or Team Suite, many of the arguments made here do not apply, for instance:
Harder to maintain and Test:
VS provides full syntax checking, style checking, referential and constraint checking, and more. It also provides full unit testing capabilities and refactoring tools.
LINQ makes true unit testing impossible as (in my mind) it fails the ACID test.
Debugging is easier in LINQ:
Why? VS allows full step-in from managed code and regular debugging of SPs.
Compiled into a single DLL rather than deployment scripts:
Once again, VS comes to the rescue where it can build and deploy full databases or make data-safe incremental changes.
Don't have to learn TSQL with LINQ:
No you don't, but you have to learn LINQ - where's the benefit?
I really don't see this as being a benefit. Being able to change something in isolation might sound good in theory, but just because the changes fulfil a contract doesn't mean it's returning the correct results. To be able to determine what the correct results are you need context and you get that context from the calling code.
Um, loosely coupled apps are the ultimate goal of all good programmers as they really do increase flexibility. Being able to change things in isolation is fantastic, and it is your unit tests that will ensure it is still returning appropriate results.
Before you all get upset, I think LINQ has its place and has a grand future. But for complex, data-intensive applications I do not think it is ready to take the place of stored procedures. This is a view that was echoed by an MVP at TechEd this year (who will remain nameless).
EDIT: The LINQ to SQL Stored Procedure side of things is something I still need to read more on - depending on what I find I may alter my above diatribe ;)
A: LINQ is new and has its place. LINQ was not invented to replace stored procedures.
Here I will focus on some performance myths and cons, just for "LINQ to SQL"; of course, I might be totally wrong ;-)
(1) People say a LINQ statement can be "cached" in SQL Server, so it doesn't lose performance. Partially true. "LINQ to SQL" is actually the runtime translating LINQ syntax into a TSQL statement. So from the performance perspective, a hard-coded ADO.NET SQL statement is no different from LINQ.
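To make the comparison concrete, here is a minimal side-by-side sketch. The Customers table, its columns, and the connection string are assumptions for illustration; both versions reach the server as a single parameterized SELECT:

using System;
using System.Data.Linq;
using System.Data.Linq.Mapping;
using System.Data.SqlClient;
using System.Linq;

[Table(Name = "Customers")]
class Customer
{
    [Column(IsPrimaryKey = true)] public int CustomerID;
    [Column] public string Country;
}

class Comparison
{
    static void Main()
    {
        const string cs = @"Data Source=.;Initial Catalog=MyDb;Integrated Security=SSPI";

        // LINQ to SQL version: translated at runtime into parameterized TSQL.
        using (var ctx = new DataContext(cs))
        {
            var rows = from c in ctx.GetTable<Customer>()
                       where c.Country == "Wonder Land"
                       select c;
            foreach (var c in rows)
                Console.WriteLine(c.CustomerID);
        }

        // Hand-coded ADO.NET version: essentially the same statement, written by hand.
        using (var conn = new SqlConnection(cs))
        using (var cmd = new SqlCommand(
            "SELECT CustomerID, Country FROM Customers WHERE Country = @country", conn))
        {
            cmd.Parameters.AddWithValue("@country", "Wonder Land");
            conn.Open();
            using (var reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                    Console.WriteLine(reader.GetInt32(0));
            }
        }
    }
}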
(2) An example: a customer service UI has an "account transfer" function. This function alone might update 10 DB tables and return some messages in one shot. With LINQ, you have to build a set of statements and send them as one batch to SQL Server; the performance of this translated LINQ->TSQL batch can hardly match a stored procedure. The reason? In a stored procedure you can tweak the smallest unit of each statement using the built-in SQL Profiler and execution plan tools; you cannot do this in LINQ.
The point is, when talking about a single DB table and small-set CRUD, LINQ is as fast as an SP. But for much more complicated logic, a stored procedure is more tunable for performance.
(3)"LINQ to SQL" easily makes newbies to introduce performance hogs. Any senior TSQL guy can tell you when not to use CURSOR (Basically you should not use CURSOR in TSQL in most cases). With LINQ and the charming "foreach" loop with query, It's so easy for a newbie to write such code:
foreach(Customer c in query)
{
c.Country = "Wonder Land";
}
ctx.SubmitChanges();
You can see how attractive this innocent-looking code is. But under the hood, the .NET runtime translates it into an update batch with one UPDATE statement per row. If there are only 500 rows, it's a 500-statement TSQL batch; if there are a million rows, it's a real hit. Of course, an experienced user won't do the job this way, but the point is that it's so easy to fall into this trap.
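If you really do need to set a column across many rows, one cursor-free escape hatch in LINQ to SQL is DataContext.ExecuteCommand, which sends a single parameterized UPDATE instead of one statement per row. A sketch, reusing the ctx and the hypothetical Customers table from the example above:

// One round trip and one UPDATE statement, regardless of row count.
// ExecuteCommand turns the {0} placeholder into a SQL parameter.
int rowsAffected = ctx.ExecuteCommand(
    "UPDATE Customers SET Country = {0}", "Wonder Land");

The trade-off is that the change tracker never sees those rows, so you lose optimistic concurrency checks; this suits maintenance-style bulk updates rather than normal unit-of-work code.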
A: The best code is no code. With stored procedures you have to write at least some code in the database plus code in the application to call it, whereas with LINQ to SQL or LINQ to Entities you don't have to write any additional code beyond any other LINQ query, aside from instantiating a context object.
A: LINQ definitely has its place in application-specific databases and in small businesses.
But in a large enterprise, where central databases serve as a hub of common data for many applications, we need abstraction. We need to centrally manage security and show access histories. We need to be able to do impact analysis: if I make a small change to the data model to serve a new business need, what queries need to be changed and what applications need to be re-tested? Views and Stored Procedures give me that. If LINQ can do all that, and make our programmers more productive, I'll welcome it -- does anyone have experience using it in this kind of environment?
A: The outcome can be summarized as:
LinqToSql for small sites and prototypes. It really saves time for prototyping.
SPs: universal. I can fine-tune my queries and always check the ActualExecutionPlan / EstimatedExecutionPlan.
A: CREATE PROCEDURE userInfoProcedure
    -- Add the parameters for the stored procedure here.
    -- Give varchar parameters an explicit length: a bare "varchar" defaults to one character.
    @FirstName varchar(50),
    @LastName varchar(50)
AS
BEGIN
    SET NOCOUNT ON;
    -- Return the matching user rows.
    SELECT FirstName, LastName, Age
    FROM UserInfo
    WHERE FirstName = @FirstName
      AND LastName = @LastName
END
GO
http://www.totaldotnet.com/Article/ShowArticle121_StoreProcBasic.aspx
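For reference, calling a proc like this from .NET takes only a few lines of ADO.NET. A sketch; the connection string and parameter values are made up for illustration:

using System;
using System.Data;
using System.Data.SqlClient;

class CallProc
{
    static void Main()
    {
        using (var conn = new SqlConnection(
            @"Data Source=.;Initial Catalog=MyDb;Integrated Security=SSPI"))
        using (var cmd = new SqlCommand("userInfoProcedure", conn))
        {
            // Mark the command as a stored procedure call and bind its parameters.
            cmd.CommandType = CommandType.StoredProcedure;
            cmd.Parameters.AddWithValue("@FirstName", "Ada");
            cmd.Parameters.AddWithValue("@LastName", "Lovelace");

            conn.Open();
            using (var reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                    Console.WriteLine("{0} {1}, {2}",
                        reader["FirstName"], reader["LastName"], reader["Age"]);
            }
        }
    }
}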
A: Stored procedures make testing easier, and you can change the query without touching the application code. Also, with LINQ, getting data back does not mean it's the right data, and testing the correctness of the data means running the application; with a stored procedure it's easy to test without touching the application.
A: Both LINQ and SQL have their places. Both have their disadvantages and advantages.
Sometimes for complex data retrieval you might need stored procs. And sometimes you may want other people to be able to run your stored proc from SQL Server Management Studio.
Linq to Entities is great for fast CRUD development.
Sure, you can build an app using only one or the other, or you can mix it up. It all comes down to your requirements. But SQL stored procs will not go away any time soon.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/14530",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "191"
} |
Q: How should I handle autolinking in wiki page content? What I mean by autolinking is the process by which wiki links inlined in page content are generated into either a hyperlink to the page (if it does exist) or a create link (if the page doesn't exist).
With the parser I am using, this is a two step process - first, the page content is parsed and all of the links to wiki pages from the source markup are extracted. Then, I feed an array of the existing pages back to the parser, before the final HTML markup is generated.
What is the best way to handle this process? It seems as if I need to keep a cached list of every single page on the site, rather than having to extract the index of page titles each time. Or is it better to check each link separately to see if it exists? This might result in a lot of database lookups if the list wasn't cached. Would this still be viable for a larger wiki site with thousands of pages?
A: In my own wiki I check all the links (without caching), but my wiki is only used by a few people internally. You should benchmark stuff like this.
A: In my own wiki system my caching system is pretty simple - when the page is updated it checks links to make sure they are valid and applies the correct formatting/location for those that aren't. The cached page is saved as a HTML page in my cache root.
Pages that are marked as 'not created' during the page update are inserted into a database table that holds the page name along with a CSV list of the pages that link to it.
When someone creates that page it initiates a scan to look through each linking page and re-caches the linking page with the correct link and formatting.
If you weren't interested in highlighting non-created pages however you could just have a checker to see if the page is created when you attempt to access it - and if not redirect to the creation page. Then just link to pages as normal in other articles.
A: I tried to do this once and it was a nightmare! My solution was a nasty loop in a SQL procedure, and I don't recommend it.
One thing that gave me trouble was deciding what link to use on a multi-word phrase. Say you had some text saying "I am using Stack Overflow" and your wiki had 3 pages called "stack", "overflow" and "stack overflow"... which part of your phrase gets linked to where? It will happen!
A: My idea would be to query the titles (e.g. SELECT title FROM articles) and simply check whether each wikilink is in that set of strings. If it is, you link to the page; if not, you link to the create page.
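A minimal sketch of that idea in C#; the GetAllTitles source, the URL scheme, and the [[...]] link syntax are assumptions for illustration:

using System;
using System.Collections.Generic;
using System.Text.RegularExpressions;

class AutoLinker
{
    // Hypothetical: load every page title once, e.g. via "SELECT title FROM articles".
    static HashSet<string> GetAllTitles()
    {
        return new HashSet<string>(StringComparer.OrdinalIgnoreCase)
        {
            "stack", "overflow", "stack overflow"
        };
    }

    static string AutoLink(string markup)
    {
        var titles = GetAllTitles();

        // Replace each [[Page Name]] with a view link if the page exists,
        // or a create link if it doesn't.
        return Regex.Replace(markup, @"\[\[([^\]]+)\]\]", m =>
        {
            string title = m.Groups[1].Value;
            return titles.Contains(title)
                ? string.Format("<a href=\"/wiki/{0}\">{0}</a>", title)
                : string.Format("<a class=\"new\" href=\"/create/{0}\">{0}</a>", title);
        });
    }

    static void Main()
    {
        Console.WriteLine(AutoLink("See [[stack overflow]] and [[missing page]]."));
    }
}

With the titles held in a single in-memory set you pay one database query per render (or per cache refresh) instead of one lookup per link, which should stay viable even with thousands of pages.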
A: In a personal project I made with Sinatra, after I run the content through Markdown I do a gsub to replace wiki words and other things (like [[Here is my link]] and whatnot) with proper links, checking for each whether the page exists and linking to the create or view page accordingly.
It's not the best, but I didn't build this app with caching/speed in mind. It's a low-resource, simple wiki.
If speed were more important, you could wrap the app in something to cache it. For example, Sinatra can be wrapped with Rack caching.
A: Based on my experience developing Juli, which is an offline personal wiki with autolinking, generating static HTML may fix your issue.
As you suspect, it takes a long time to generate an autolinked wiki page. However, when generating static HTML, an autolinked page only has to be regenerated when a wikipage is newly added or deleted (in other words, not on every update), and the regeneration can run in the background, so it usually doesn't matter how long it takes. The user only ever sees the generated static HTML.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/14545",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: SSRS scheduled reports not working My scheduled reports in SQL server won't run. I checked the logs and found the job that was failing. The error message in the log was:
'EXECUTE AS LOGIN' failed for the requested login 'NT AUTHORITY\NETWORK
SERVICE'. The step failed.
I'm using SQL authentication for the report, so it shouldn't be an issue with permissions to the data. No scheduled reports on the server will run.
A: I found the answer here:
http://www.themssforum.com/SVCS/Unable-execute/
Apparently there was something wrong with the login for 'NT AUTHORITY\NETWORK SERVICE' and it wouldn't properly run the jobs it owned. Does anyone understand why this might have happened?
A: Can you check the permissions for your Network Service account? Specifically make sure they have the "Act as part of the OS" permission. If I'm reading the error message correctly, it looks like the NT AUTHORITY\NETWORK SERVICE account failed to execute as a logged on user. It doesn't look like it ever got to the query, looks like it's failing in the Windows authentication portion, so never gets to the SQL authentication piece.
You might also check the Security Event Log in Windows. If it is an authentication problem, there might be additional information logged by the OS.
A: Can you hit the Web Service page (ReportService2005.asmx) for SRS with a web browser? If not, it might point to the SRS service; if you can, it might point to the scheduler service.
A: Just to troubleshoot, can you use a domain account rather than the builtin NetworkService account to run your service?
I also found this post that describes the same error you reported:
Maybe that solution will work for you.
A: I seem to remember having a similar problem a long time ago; it was related to accessing the SMTP server to dispatch the reports, and was resolved by applying a patch to the server.
A: I'd guess you are running a non-English operating system.
Somehow, for whatever reasons, the login name for the network service is sometimes localized when the OS is translated into other languages.
Now a lot of programs expect the login name to be hard coded to "NT AUTHORITY\NETWORK SERVICE" - in a German version of Windows for example the name of the account is "NT-AUTORITÄT\NETZWERKDIENST". So your program is looking for the english name, can't find it, and shows an error.
Even some Microsoft programs run into this trap and show errors like this when installed!
| {
"language": "en",
"url": "https://stackoverflow.com/questions/14552",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: How to organize dataset queries to improve performance I don't know when to add a TableAdapter or a query from the toolbox to a dataset. Does it make any difference?
I also don't know where to create instances of the adapters.
*
*Should I do it in the Page_Load?
*Should I just do it when I'm going to use it?
*Am I opening a new connection when I create a new instance?
This doesn't seem very important, but every time I create a query a little voice in my head asks me these questions.
A:
Should I just do it when I'm going to use it?
I would recommend that you only retrieve the data when you are going to use it. If you are not going to need it, there is no reason to waste resources by retrieving it in Page_Load. If you are going to need it multiple times throughout the page load, consider saving the query results to a private variable or collection so that the same data can be reused multiple times throughout the page load.
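As a sketch of that caching suggestion (the page class, the query, and the connection string below are hypothetical), a lazily-initialized property keeps the page to a single query per load no matter how many times the data is used:

using System.Data;
using System.Data.SqlClient;

public partial class ProductsPage : System.Web.UI.Page
{
    private DataTable _products;

    // First access runs the query; later accesses during the same page load
    // reuse the cached table instead of hitting the database again.
    private DataTable Products
    {
        get
        {
            if (_products == null)
            {
                _products = new DataTable();
                using (var adapter = new SqlDataAdapter(
                    "SELECT ProductID, Name FROM Products",
                    @"Data Source=.;Initial Catalog=MyDb;Integrated Security=SSPI"))
                {
                    adapter.Fill(_products);
                }
            }
            return _products;
        }
    }
}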
Am I opening a new connection when I create a new instance?
ADO.NET handles connection pooling, and opens and closes connections in an efficient way. You shouldn't have to worry about this.
One other thing to consider from a performance perspective is to avoid using Datasets and TableAdapters. In many cases, they add extra overhead into data retrieval that does not exist when using Linq to Sql, Stored Procedures or DataReaders.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/14570",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: How do I avoid using cursors in Sybase (T-SQL)? Imagine the scene, you're updating some legacy Sybase code and come across a cursor. The stored procedure builds up a result set in a #temporary table which is all ready to be returned except that one of columns isn't terribly human readable, it's an alphanumeric code.
What we need to do is figure out the possible distinct values of this code, call another stored procedure to cross-reference these discrete values, and then update the result set with the newly deciphered values:
declare c_lookup_codes cursor for
select distinct lookup_code
from #workinprogress

open c_lookup_codes

while(1=1)
begin
    fetch c_lookup_codes into @lookup_code
    if @@sqlstatus<>0
    begin
        break
    end
    exec proc_code_xref @lookup_code, @xref_code OUTPUT
    update #workinprogress
    set xref = @xref_code
    where lookup_code = @lookup_code
end

close c_lookup_codes
deallocate cursor c_lookup_codes
Now then, whilst this may give some folks palpitations, it does work. My question is, how best would one avoid this kind of thing?
NB: for the purposes of this example you can also imagine that the result set is in the region of 500k rows, that there are 100 distinct values of lookup_code, and finally that it is not possible to have a table with the xref values in it, as the logic in proc_code_xref is too arcane.
A: Unless you are willing to duplicate the code in the xref proc, there is no way to avoid using a cursor.
A: You have to have an XRef table if you want to take out the cursor. Assuming you know the 100 distinct lookup values (and that they're static), it's simple to generate one by calling proc_code_xref 100 times and inserting the results into a table.
A: They say that if you must use a cursor, you must have done something wrong ;-) Here's a solution without a cursor:
declare @lookup_code char(8)

select distinct lookup_code
into #lookup_codes
from #workinprogress

while 1=1
begin
    select @lookup_code = lookup_code from #lookup_codes
    if @@rowcount = 0 break
    exec proc_code_xref @lookup_code, @xref_code OUTPUT
    update #workinprogress
    set xref = @xref_code
    where lookup_code = @lookup_code
    delete #lookup_codes
    where lookup_code = @lookup_code
end
| {
"language": "en",
"url": "https://stackoverflow.com/questions/14577",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Can I make Subversion + TortoiseSVN case-insensitive for Windows? I've been using Subversion for code control with TortoiseSVN to interface with the server for the past few months, and in general it's been going great! However, occasionally my FoxPro IDE will change the case of a file extension without warning ("program.prg" becomes "program.PRG"). TortoiseSVN apparently takes this to mean the first file was removed, becoming flagged as "missing", and the second name comes up as "non-versioned", wreaking havoc on my ability to track changes to the file. I understand that Subversion has its origins in the case-sensitive world of *nix, but is there any way to control this behavior in either Subversion or TortoiseSVN to be file name case-insensitive when used with Windows?
A: Windows does support case sensitivity, but you must send it the correct POSIX flags on CreateFile from the Windows API! A registry key may also need to be changed (SFU/Tools for Unix and Windows 7 Ultimate already set this registry entry so that Windows supports case-sensitive file names).
Windows borrows from Unix here, but programs such as Explorer.exe are designed to disallow case sensitivity for backwards compatibility and security (mostly to avoid DOS-era confusion between notepad.exe and NOTEPAD.EXE, where the all-caps one is a virus or malware).
But Vista+ has security attributes which make this obsolete.
TortoiseSVN just doesn't support passing this POSIX flag when creating and renaming files.
A: Unfortunately, Subversion is case-sensitive. This is due to the fact that files from Subversion can be checked out on both case-sensitive file systems (e.g., *nix) and case-insensitive file systems (e.g., Windows, Mac).
This pre-commit hook script may help you avoid problems when you check in files. If it doesn't solve your problem, my best suggestion is to write a little script to make sure that all extensions are lowercase and run it every time before you check in/check out. It'll be a PITA, but maybe your best bet.
A: I use TortoiseSVN with VFP, and it handles the case flipping mostly seamlessly. The only time it doesn't is if I have the file open in the IDE when I try to do the commit: the file lock VFP holds confuses it. Is this where your problem comes in, or are there other issues?
I did a presentation at FoxForward last year about using VFP with Subversion: most of the presentation dealt with the command line, but there are a couple of slides at the end that have links to tools that help you work with Subversion in VFP. http://docs.google.com/Presentation?id=dfxkh6x4_3ghnqc4
A: Kit, you comment above that VFP's binary-based source files are tough to work with in Subversion. The link I gave above mentions a couple of tools to make it easier, but the one I work with is Christof Wollenhaupt's TwoFox utility -- it converts a VFP project to text-only. You have to run it manually, but I don't have a problem with that.
http://www.foxpert.com/docs/cvs.en.htm
A: I believe the random upper and lower case on the extensions isn't random at all.
I remember testing this. If you modify a program from the Project Manager, by clicking on the Modify button let's say, and then save the changes, the extension is lower case. If you do a MODIFY COMMAND from the Command Window and save the changes, the extension is upper case. Apparently the coders at Microsoft didn't worry about keeping the extension case consistent.
A: TortoiseSVN has a Repairing File Renames feature. It requires manual intervention, and it actually issues a file rename operation to be committed, but it nonetheless addresses the current use case by keeping the file history.
A: Nope, you sure can't. SVN is case-sensitive unless you were to rewrite the code somehow ... it is open-source.
A: We had a similar problem and I found a better solution than the ones exposed here, so I'm sharing it now:
*
*For commits done manually, TortoiseSVN now fixes the case of the file names automatically: it renames the local files to match the case of the versioned files (just by opening the commit window in that path), so there should be no problem with that.
*For automated commits you cannot use TortoiseSVN, as it requires you to manually confirm the commit (it opens the commit window with a specific message, but you still have to click OK). But if you use Subversion (svn) directly to make an automated commit, you will hit the case-sensitivity issue on that commit, as Subversion itself is still case-sensitive...
How to solve this for automated commits? Well, I tried a mixed approach: create a batch file called FixCaseSensitiveFileNames.bat that you call with the path you want to fix before the commit, for example: call FixCaseSensitiveFileNames.bat C:\MyRepo. The batch file opens TortoiseSVN for a manual commit, which automatically fixes the file names, and then closes the commit window after a predefined pause, so you can continue with the automated commit with the case-sensitive file names already fixed. The pause is emulated with a local ping, and you can change its duration by changing the -n argument, which is the number of tries. If the pause isn't long enough, you risk closing the TortoiseSVN window before it works its magic. Here is the code of the batch file:
@echo off
REM *** This BAT uses TortoiseSVN to fix the case-sensitive names of the files in Subversion
REM *** Call it before an automated commit. The Tortoise commit fixes this issue for manual commits,
REM *** so the trick is opening the commit window and close it automatically after a pause (with ping).
REM *** %1 = path to be fixed
start TortoiseProc.exe /command:commit /path:"%1"
ping localhost -n 10 >nul
taskkill /im TortoiseProc.exe
This totally solved the issue for our automated daily build process. The only problem I see is a window will open for a few seconds, which was not a problem for our daily build, but if that is a problem for you there could be workarounds too...
| {
"language": "en",
"url": "https://stackoverflow.com/questions/14582",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "25"
} |
Q: Do you use version control other than for source code? I've found SVN to be extremely useful for documentation, personal files, and other non-source-code uses.
What other practical uses have you found to version control systems in general?
A: I've seen version control being used for other non-source code purposes, like,
*
*Schema files - a set of XML schema files that represent a real world schema
*Content files - content represented in a specific format, this is tied to a designer in VStudio, using source control, allows history, rollbacks everything, with no database interaction
In both cases these are basically verbose files, and the primary reasons to keep them in source control, as opposed to "text records in a database", are:
*
*the ability to compare versions of the files
*history (because multiple users work on them)
*the ability to roll back to an earlier version
*labeling and releases by getting a specific label
*if you use Team Foundation (TFS), the whole scrum template with work items, etc.
*no database involved, and no extra development needed for any of the above
A: At one of my early jobs, we used CVS for DNS revision control. It was mainly a cheap and dirty way to back up the zone files.
I've also heard of people using a version control system for their home directories.
A: During my final semester in school I took two classes that each had a large, time-consuming project due at the end of the semester. They both also required several long-ish papers throughout the semester. I made heavy use of SVN for both classes to track every change I made to every paper and project.
I'm more of a "write it all at once" kind of guy when it comes to writing, and tend to lose my train of thought if I try to spread the process over multiple sessions. Being able to diff the latest revisions of my papers made it much easier for me to get back on track.
A: I edit a lot of documents in LaTeX, so I use SVN to store the tex files and images and so on. Handy for doing Diffs, and hopefully will save me if I have a disaster.
A: Generally, anything the build process needs I put into source control. The only issue that comes up is when you have resources prepared by other departments, e.g. Marketing, that go into your install.
A: I have a folder in the path called bin with useful utilities like those from sysinternals and others. I use svn to keep these up to date on different machines. Also, things like powershell scripts, vimrc files, etc. are great to keep centralized.
A: I never even thought to use it for personal stuff, but on software projects, I check in pretty much everything that can't be regenerated at a later date (examples of this include executables and code-generated docs). Documentation always gets checked in. Presentations to customers gets checked in and tagged along with the code base used to demo, if there was a demo.
I'm thinking SVN and CVS aren't "friendly" enough for non-technical users, but I'm curious now about the possible uses of version control for non-engineering projects...
A: Most documentation that will be viewed by more than one pair of human eyes. It is incredibly useful, for instance, during project planning phases when the analyst updates the requirements document and you'd like to see what changed since the last time you saw it. Wikis also have this functionality, natch. We use SharePoint for these purposes, but pick your vendor.
A: I use version control a lot for common files, because I have one laptop, a desktop machine at work and a home desktop on which I do a lot of work too (I work from home two days a week).
A new session at any of them starts with a script called 'start' that updates a bunch of checkouts, and ends with a script called 'stop' that commits some things to VCS, or shows me at least the modifications.
I use it for:
*
*my one-file Getting Things Done task list (see yagtd, the tool I use)
*my password database (I should have sent in that suggestion to the StackOverflow podcast in reply to Joel's question)
*all of my random notes and files on projects
*a bunch of spreadsheets (including one that tracks some personal things day by day)
*some images (like the web avatars I use)
In addition, I've written something on top of Subversion to manage configuration files for both systems and my user accounts. I have so many accounts on so many machines, and I was tired of always relearning how to configure my shell/vim/... so I now store most of those things in version control too. That includes email signature files, a bunch of shell scripts in $HOME/bin, ...
A: I use revision control for just about all of my documents for any purpose.
I'm using Mercurial, so setting up a new repository in a given directory is a matter of a simple "hg init", which I found much less of a hassle than setting up a new Subversion repository.
I've also found that revision control is great in any situation where you need to sync files - I'm using it now instead of rsync for all of my syncing needs. It also makes backups easier: cloning a repository to another location/machine/disk means I can just push the changes to that location, which is even easier with a default push repository. If you don't modify in the remote repo, then you don't even need to worry much about setting that up beyond the default.
One of the nicest things for me is that I can have syncing, backups or whatever on any system that I have SSH access to. (Well, if they'd install mercurial for me at Uni, then I could!)
A: In my company, the development group aims to use Subversion for pretty much every electronic document. This depends on being able to "lock" files that can't be merged, such as Excel documents. SVN provides the "requires-lock" feature, and the get-lock, modify, commit workflow is reasonably straight-forward.
The software engineers are on-board, but there is some resistance from mechanical engineers. They want to use the simultaneous collaborative editing features of Excel for example. They haven't adapted to the get-lock, modify, commit workflow.
TortoiseSVN lets you diff Word docs, which I find extremely useful. It also supports merging apparently, although I've been too chicken to try that feature...
I'd like to seriously consider a DVCS such as git or Mercurial. But unless it can lock binary-format (i.e. unmergeable) files, thus becoming more like a centralised model for such files, and/or merge the binary file formats we use, it won't fit my company's usage.
I just wish all software companies provided good diff and merge tools for their proprietary doc formats. That would increase the value of version control systems for those formats.
A: Yes, I have a doc directory in git. I contains a todo list, a calendar and a few other documents.
A: I use SVN to check in changes to Asterisk VOIP Server config files. I have one repository with a folder corresponding to each of several servers. That folder contains the entire contents of /etc/asterisk.
A: I've used Subversion for everything from source control, build environments, and installer scripts to all that developmenty goodness. I've also set up a repository for non-technical users for binary files, in this case old Excel and Word documents. It worked all right, considering we lost any merge functionality. It let all our users get at a whole ton of information that was mostly edited by two or three people pretty easily. And with simple instructions on how to update before doing any editing (locking if necessary) and how to deal with conflicts (check what you updated, then delete your copy and perform an update), they were able to handle the repository pretty well, though I'm not sure they ever actually grew to like it. :)
| {
"language": "en",
"url": "https://stackoverflow.com/questions/14588",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14"
} |
Q: Editing User Profile w/ Forms Authentication We're using Forms Authentication in SharePoint. When the account is created, the administrator can add some information, like name and address. But the required fields are username and email address.
When a user goes to their profile page, all the fields are blank and they are unable to edit them. I have read a number of articles discussing how to import profiles from another data store, or to sync profiles. This doesn't work for us, because we don't have another data store where these profiles are stored.
Will I just have to recreate the edit profile page and build a custom profile editor? Is this information exposed via SharePoint API? I don't think directly editing the database is a good solution.
A: If you log in to the "Shared Services administration" through the "Central Admin Tool" there is an option "Profile services policies". You can define in here what fields are user-overridable.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/14611",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
} |