Q: What ReSharper-like plugins for VS should I consider? My license for Whole Tomato's Visual AssistX is about to expire and I'm not really planning on renewing it. I use it for spell checking but that's about it. The refactoring abilities have been a little disappointing. Before I just jump into ReSharper though, what are your thoughts on other possible plugins?
A: Aside from trying out Visual AssistX, the only other one I've tried is ReSharper (which I highly recommend). If you do decide to go for ReSharper, you'll likely notice that it's missing a spell checker for code though - however the Agent Smith plugin fixes that.
A: You should take a look at Visual Studio Gallery, the one stop shop for Visual Studio extensions.
Here you'll find quite a lot of extensions for Visual Studio in all categories, from intellisense and refactoring to designers and documentation builders.
A: Once you get into ReSharper, you really don't want to leave; it's done a massive amount to improve my productivity.
It depends though on what you are doing. Are you doing a lot of TDD when you write tests, write code, then refactor?
Unless you are pretty intensely into refactoring, I'd suggest that you might not get the best out of R#.
As a plugin for a plugin, I use the RGreatX plugin for R#. It's really handy for shifting string values out to resource files for localization of the software... saves me plenty of time!
A: The other major player would be DevExpress and their CodeRush and Refactor products. Found here.
A: MZ-Tools is really good as well.
A: Have you tried ModelMaker Code Explorer?
It is a great tool in Delphi, and the Visual Studio version looks pretty sweet as well. Still trying to work out what the best option for VS is though.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/11520",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: How can I find unused functions in a PHP project How can I find any unused functions in a PHP project?
Are there features or APIs built into PHP that will allow me to analyse my codebase - for example Reflection, token_get_all()?
Are these APIs feature rich enough for me not to have to rely on a third party tool to perform this type of analysis?
A: 2020 Update
I have used the other methods outlined above, even the 2019 update answer here is outdated.
Tomáš Votruba's answer led me to find Phan as the ECS route has now been deprecated. Symplify have removed the dead public method checker.
Phan is a static analyzer for PHP
We can utilise Phan to search for dead code. Here are the steps to take using composer to install. These steps are also found on the git repo for phan. These instructions assume you're at the root of your project.
Step 1 - Install Phan w/ composer
composer require phan/phan
Step 2 - Install php-ast
PHP-AST is a requirement for Phan
As I'm using WSL, I've been able to use PECL to install it; other install methods for php-ast can be found in its git repo
pecl install ast
Step 3 - Locate and edit php.ini to use php-ast
Locate current php.ini
php -i | grep 'php.ini'
Now take that file location and open it with nano (or whichever editor you prefer). Locate the area of all extensions and ADD the following line:
extension=ast.so
Step 4 - create a config file for Phan
Steps on config file can be found in Phan's documentation on how to create a config file
You'll want to use their sample one as it's a good starting point. Edit the following arrays to add your own paths on both
directory_list & exclude_analysis_directory_list.
Please note that exclude_analysis_directory_list will still be parsed but not validated. For example, adding the WordPress directory here means that false positives for WordPress functions called in your theme will not appear (Phan can still find the function in WordPress), but at the same time it will not validate the functions inside WordPress' folder.
Mine looked like this
......
'directory_list' => [
'public_html'
],
......
'exclude_analysis_directory_list' => [
'vendor/',
'public_html/app/plugins',
'public_html/app/mu-plugins',
'public_html/admin'
],
......
Step 5 - Run Phan with dead code detection
Now that we've installed Phan and php-ast and configured the folders we wish to parse, it's time to run Phan. We'll pass the --dead-code-detection argument, which is self-explanatory.
./vendor/bin/phan --dead-code-detection
This output will need verifying with a fine-tooth comb, but it's certainly the best place to start.
The output will look like this in console
the/path/to/php/file.php:324 PhanUnreferencedPublicMethod Possibly zero references to public method \the\path\to\function::the_function()
the/path/to/php/file.php:324 PhanUnreferencedPublicMethod Possibly zero references to public method \the\path\to\function::the_function()
the/path/to/php/file.php:324 PhanUnreferencedPublicMethod Possibly zero references to public method \the\path\to\function::the_function()
the/path/to/php/file.php:324 PhanUnreferencedPublicMethod Possibly zero references to public method \the\path\to\function::the_function()
Please feel free to add to this answer or correct my mistakes :)
A: You can try Sebastian Bergmann's Dead Code Detector:
phpdcd is a Dead Code Detector (DCD) for PHP code. It scans a PHP project for all declared functions and methods and reports those as being "dead code" that are not called at least once.
Source: https://github.com/sebastianbergmann/phpdcd
Note that it's a static code analyzer, so it might give false positives for methods that are only called dynamically, e.g. it cannot detect $foo = 'fn'; $foo();
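For illustration (not taken from phpdcd's documentation), here are a few common dynamic-call forms that a purely static scan will typically miss; the names are made up:
<?php
// All of these invoke helper() without a literal "helper(" call site,
// so a token-based scan usually reports helper() as dead code.
function helper() { return 42; }

$fn = 'helper';
$fn();                              // variable function
call_user_func('helper');           // callback by name
call_user_func_array('helper', array());

class Job {
    public function helper() { return 42; }
}
$method = 'helper';
(new Job())->$method();             // dynamic method call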
You can install it via PEAR:
pear install phpunit/phpdcd-beta
After that you can use with the following options:
Usage: phpdcd [switches] <directory|file> ...
--recursive Report code as dead if it is only called by dead code.
--exclude <dir> Exclude <dir> from code analysis.
--suffixes <suffix> A comma-separated list of file suffixes to check.
--help Prints this usage information.
--version Prints the version and exits.
--verbose Print progress bar.
More tools:
*
*https://phpqa.io/
Note: as per the repository notice, this project is no longer maintained and its repository is only kept for archival purposes. So your mileage may vary.
A: If I remember correctly you can use phpCallGraph to do that. It'll generate a nice graph (image) for you with all the methods involved. If a method is not connected to any other, that's a good sign that the method is orphaned.
Here's an example: classGallerySystem.png
The method getKeywordSetOfCategories() is orphaned.
Just by the way, you don't have to take an image -- phpCallGraph can also generate a text file, or a PHP array, etc..
A: Because PHP functions/methods can be dynamically invoked, there is no programmatic way to know with certainty if a function will never be called.
The only certain way is through manual analysis.
A: Thanks Greg and Dave for the feedback. Wasn't quite what I was looking for, but I decided to put a bit of time into researching it and came up with this quick and dirty solution:
<?php
$functions = array();
$path = "/path/to/my/php/project";
define_dir($path, $functions);
reference_dir($path, $functions);
echo
"<table>" .
"<tr>" .
"<th>Name</th>" .
"<th>Defined</th>" .
"<th>Referenced</th>" .
"</tr>";
foreach ($functions as $name => $value) {
echo
"<tr>" .
"<td>" . htmlentities($name) . "</td>" .
"<td>" . (isset($value[0]) ? count($value[0]) : "-") . "</td>" .
"<td>" . (isset($value[1]) ? count($value[1]) : "-") . "</td>" .
"</tr>";
}
echo "</table>";
function define_dir($path, &$functions) {
if ($dir = opendir($path)) {
while (($file = readdir($dir)) !== false) {
if (substr($file, 0, 1) == ".") continue;
if (is_dir($path . "/" . $file)) {
define_dir($path . "/" . $file, $functions);
} else {
if (substr($file, - 4, 4) != ".php") continue;
define_file($path . "/" . $file, $functions);
}
}
}
}
function define_file($path, &$functions) {
$tokens = token_get_all(file_get_contents($path));
for ($i = 0; $i < count($tokens); $i++) {
$token = $tokens[$i];
if (is_array($token)) {
if ($token[0] != T_FUNCTION) continue;
$i++;
$token = $tokens[$i];
if ($token[0] != T_WHITESPACE) die("T_WHITESPACE");
$i++;
$token = $tokens[$i];
if ($token[0] != T_STRING) die("T_STRING");
$functions[$token[1]][0][] = array($path, $token[2]);
}
}
}
function reference_dir($path, &$functions) {
if ($dir = opendir($path)) {
while (($file = readdir($dir)) !== false) {
if (substr($file, 0, 1) == ".") continue;
if (is_dir($path . "/" . $file)) {
reference_dir($path . "/" . $file, $functions);
} else {
if (substr($file, - 4, 4) != ".php") continue;
reference_file($path . "/" . $file, $functions);
}
}
}
}
function reference_file($path, &$functions) {
$tokens = token_get_all(file_get_contents($path));
for ($i = 0; $i < count($tokens); $i++) {
$token = $tokens[$i];
if (is_array($token)) {
if ($token[0] != T_STRING) continue;
if ($tokens[$i + 1] != "(") continue;
$functions[$token[1]][1][] = array($path, $token[2]);
}
}
}
?>
I'll probably spend some more time on it so I can quickly find the files and line numbers of the function definitions and references; this information is being gathered, just not displayed.
A: This bit of bash scripting might help:
grep -rhio ^function\ .*\( .|awk -F'[( ]' '{print "echo -n " $2 " && grep -rin " $2 " .|grep -v function|wc -l"}'|bash|grep 0
This basically recursively greps the current directory for function definitions, passes the hits to awk, which forms a command to do the following:
*
*print the function name
*recursively grep for it again
*piping that output to grep -v to filter out function definitions so as to retain calls to the function
*pipes this output to wc -l which prints the line count
This command is then sent for execution to bash and the output is grepped for 0, which would indicate 0 calls to the function.
Note that this will not solve the problem calebbrown cites above, so there might be some false positives in the output.
A: 2019+ Update
I got inspired by Andrey's answer and turned this into a coding standard sniff.
The detection is very simple yet powerful:
*
*finds all methods public function someMethod()
*then finds all method calls ${anything}->someMethod()
*and simply reports those public functions that were never called
It helped me remove over 20 methods I would otherwise have to maintain and test.
3 Steps to Find them
Install ECS:
composer require symplify/easy-coding-standard --dev
Set up ecs.yaml config:
# ecs.yaml
services:
Symplify\CodingStandard\Sniffs\DeadCode\UnusedPublicMethodSniff: ~
Run the command:
vendor/bin/ecs check src
See the reported methods and remove those you don't find useful.
You can read more about it here: Remove Dead Public Methods from Your Code
A: USAGE: find_unused_functions.php <root_directory>
NOTE: This is a ‘quick-n-dirty’ approach to the problem. This script only performs a lexical pass over the files, and does not respect situations where different modules define identically named functions or methods. If you use an IDE for your PHP development, it may offer a more comprehensive solution.
Requires PHP 5
To save you a copy and paste, a direct download, and any new versions, are available here.
#!/usr/bin/php -f
<?php
// ============================================================================
//
// find_unused_functions.php
//
// Find unused functions in a set of PHP files.
// version 1.3
//
// ============================================================================
//
// Copyright (c) 2011, Andrey Butov. All Rights Reserved.
// This script is provided as is, without warranty of any kind.
//
// http://www.andreybutov.com
//
// ============================================================================
// This may take a bit of memory...
ini_set('memory_limit', '2048M');
if ( !isset($argv[1]) )
{
usage();
}
$root_dir = $argv[1];
if ( !is_dir($root_dir) || !is_readable($root_dir) )
{
echo "ERROR: '$root_dir' is not a readable directory.\n";
usage();
}
$files = php_files($root_dir);
$tokenized = array();
if ( count($files) == 0 )
{
echo "No PHP files found.\n";
exit;
}
$defined_functions = array();
foreach ( $files as $file )
{
$tokens = tokenize($file);
if ( $tokens )
{
// We retain the tokenized versions of each file,
// because we'll be using the tokens later to search
// for function 'uses', and we don't want to
// re-tokenize the same files again.
$tokenized[$file] = $tokens;
for ( $i = 0 ; $i < count($tokens) ; ++$i )
{
$current_token = $tokens[$i];
$next_token = safe_arr($tokens, $i + 2, false);
if ( is_array($current_token) && $next_token && is_array($next_token) )
{
if ( safe_arr($current_token, 0) == T_FUNCTION )
{
// Find the 'function' token, then try to grab the
// token that is the name of the function being defined.
//
// For every defined function, retain the file and line
// location where that function is defined. Since different
// modules can define functions with the same name,
// we retain multiple definition locations for each function name.
$function_name = safe_arr($next_token, 1, false);
$line = safe_arr($next_token, 2, false);
if ( $function_name && $line )
{
$function_name = trim($function_name);
if ( $function_name != "" )
{
$defined_functions[$function_name][] = array('file' => $file, 'line' => $line);
}
}
}
}
}
}
}
// We now have a collection of defined functions and
// their definition locations. Go through the tokens again,
// and find 'uses' of the function names.
foreach ( $tokenized as $file => $tokens )
{
foreach ( $tokens as $token )
{
if ( is_array($token) && safe_arr($token, 0) == T_STRING )
{
$function_name = safe_arr($token, 1, false);
$function_line = safe_arr($token, 2, false);
if ( $function_name && $function_line )
{
$locations_of_defined_function = safe_arr($defined_functions, $function_name, false);
if ( $locations_of_defined_function )
{
$found_function_definition = false;
foreach ( $locations_of_defined_function as $location_of_defined_function )
{
$function_defined_in_file = $location_of_defined_function['file'];
$function_defined_on_line = $location_of_defined_function['line'];
if ( $function_defined_in_file == $file &&
$function_defined_on_line == $function_line )
{
$found_function_definition = true;
break;
}
}
if ( !$found_function_definition )
{
// We found usage of the function name in a context
// that is not the definition of that function.
// Consider the function as 'used'.
unset($defined_functions[$function_name]);
}
}
}
}
}
}
print_report($defined_functions);
exit;
// ============================================================================
function php_files($path)
{
// Get a listing of all the .php files contained within the $path
// directory and its subdirectories.
$matches = array();
$folders = array(rtrim($path, DIRECTORY_SEPARATOR));
while( $folder = array_shift($folders) )
{
$matches = array_merge($matches, glob($folder.DIRECTORY_SEPARATOR."*.php", 0));
$moreFolders = glob($folder.DIRECTORY_SEPARATOR.'*', GLOB_ONLYDIR);
$folders = array_merge($folders, $moreFolders);
}
return $matches;
}
// ============================================================================
function safe_arr($arr, $i, $default = "")
{
return isset($arr[$i]) ? $arr[$i] : $default;
}
// ============================================================================
function tokenize($file)
{
$file_contents = file_get_contents($file);
if ( !$file_contents )
{
return false;
}
$tokens = token_get_all($file_contents);
return ($tokens && count($tokens) > 0) ? $tokens : false;
}
// ============================================================================
function usage()
{
global $argv;
$file = (isset($argv[0])) ? basename($argv[0]) : "find_unused_functions.php";
die("USAGE: $file <root_directory>\n\n");
}
// ============================================================================
function print_report($unused_functions)
{
if ( count($unused_functions) == 0 )
{
echo "No unused functions found.\n";
}
$count = 0;
foreach ( $unused_functions as $function => $locations )
{
foreach ( $locations as $location )
{
echo "'$function' in {$location['file']} on line {$location['line']}\n";
$count++;
}
}
echo "=======================================\n";
echo "Found $count unused function" . (($count == 1) ? '' : 's') . ".\n\n";
}
// ============================================================================
/* EOF */
A: phpxref will identify where functions are called from, which would facilitate the analysis - but there's still a certain amount of manual effort involved.
A: AFAIK there is no way. To know which functions "belong to whom" you would need to execute the system (runtime late-binding function lookup).
But refactoring tools are based on static code analysis. I really like dynamically typed languages, but in my view they are difficult to scale. The lack of safe refactoring in large codebases written in dynamically typed languages is a major drawback for maintainability and handling software evolution.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/11532",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "68"
} |
Q: What is the difference between an endpoint, a service, and a port when working with webservices? I've used Apache CXF to expose about ten java classes as web services.
I've generated clients using CXF, Axis, and .NET.
In Axis and CXF a "Service" or "Locator" is generated.
From this service you can get a "Port".
The "Port" is used to make individual calls to the methods exposed by the web service.
In .NET the "Service" directly exposes the calls to the web service.
Can someone explain the difference between a port, a service, a locator, and an endpoint when it comes to web services?
Axis:
PatientServiceImplServiceLocator locator =
new PatientServiceImplServiceLocator();
PatientService service = locator.getPatientServiceImplPort();
CXF:
PatientServiceImplService locator = new PatientServiceImplService();
PatientService service = locator.getPatientServiceImplPort();
.net:
PatientServiceImplService service = new PatientServiceImplService();
A: I'd hop over to http://www.w3.org/TR/wsdl.html which I think explains Port, Service and Endpoint reasonably well. A locator is an implementation specific mechanism that some WS stacks use to provide access to service endpoints.
A: I found the information based on Kevin Kenny's answer, but I figured I'd post it here for others.
A WSDL document defines services as collections of network endpoints, or ports. In WSDL, the abstract definition of endpoints and messages is separated from their concrete network deployment or data format bindings. This allows the reuse of abstract definitions: messages, which are abstract descriptions of the data being exchanged, and port types which are abstract collections of operations. The concrete protocol and data format specifications for a particular port type constitutes a reusable binding. A port is defined by associating a network address with a reusable binding, and a collection of ports define a service. Hence, a WSDL document uses the following elements in the definition of network services:
*
*Types– a container for data type definitions using some type system (such as XSD).
*Message– an abstract, typed definition of the data being communicated.
*Operation– an abstract description of an action supported by the service.
*Port Type–an abstract set of operations supported by one or more endpoints.
*Binding– a concrete protocol and data format specification for a particular port type.
*Port– a single endpoint defined as a combination of a binding and a network address.
*Service– a collection of related endpoints.
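To make those terms concrete, here is a heavily trimmed, hypothetical WSDL 1.1 skeleton; the names echo the question's generated client code, but the structure is illustrative (types and message definitions are omitted) rather than an actual contract:
<definitions name="PatientService"
             targetNamespace="http://example.com/patient"
             xmlns="http://schemas.xmlsoap.org/wsdl/"
             xmlns:soap="http://schemas.xmlsoap.org/wsdl/soap/"
             xmlns:tns="http://example.com/patient">
  <!-- portType: an abstract set of operations -->
  <portType name="PatientServicePortType">
    <operation name="getPatient">
      <input message="tns:getPatientRequest"/>
      <output message="tns:getPatientResponse"/>
    </operation>
  </portType>
  <!-- binding: a concrete protocol and data format for that portType -->
  <binding name="PatientServiceSoapBinding" type="tns:PatientServicePortType">
    <soap:binding transport="http://schemas.xmlsoap.org/soap/http"/>
  </binding>
  <!-- service: a collection of ports; each port = one binding + one network address -->
  <service name="PatientServiceImplService">
    <port name="PatientServiceImplPort" binding="tns:PatientServiceSoapBinding">
      <soap:address location="http://example.com/services/PatientService"/>
    </port>
  </service>
</definitions>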
A: I would like to add that <port> and <endpoint> serve the same purpose, but port is used by WSDL 1.1 and endpoint by WSDL 2.0.
A: As you already mentioned, those terms mean different things in different stacks - there is no one right generic answer for web services.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/11561",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "24"
} |
Q: How to overload std::swap() std::swap() is used by many std containers (such as std::list and std::vector) during sorting and even assignment.
But the std implementation of swap() is very generalized and rather inefficient for custom types.
Thus efficiency can be gained by overloading std::swap() with a custom type specific implementation. But how can you implement it so it will be used by the std containers?
A: Attention Mozza314
Here is a simulation of the effects of a generic std::algorithm calling std::swap, and having the user provide their swap in namespace std. As this is an experiment, this simulation uses namespace exp instead of namespace std.
// simulate <algorithm>
#include <cstdio>
namespace exp
{
template <class T>
void
swap(T& x, T& y)
{
printf("generic exp::swap\n");
T tmp = x;
x = y;
y = tmp;
}
template <class T>
void algorithm(T* begin, T* end)
{
if (end-begin >= 2)
exp::swap(begin[0], begin[1]);
}
}
// simulate user code which includes <algorithm>
struct A
{
};
namespace exp
{
void swap(A&, A&)
{
printf("exp::swap(A, A)\n");
}
}
// exercise simulation
int main()
{
A a[2];
exp::algorithm(a, a+2);
}
For me this prints out:
generic exp::swap
If your compiler prints out something different then it is not correctly implementing "two-phase lookup" for templates.
If your compiler is conforming (to any of C++98/03/11), then it will give the same output I show. And in that case exactly what you fear will happen, does happen. And putting your swap into namespace std (exp) did not stop it from happening.
Dave and I are both committee members and have been working this area of the standard for a decade (and not always in agreement with each other). But this issue has been settled for a long time, and we both agree on how it has been settled. Disregard Dave's expert opinion/answer in this area at your own peril.
This issue came to light after C++98 was published. Starting about 2001 Dave and I began to work this area. And this is the modern solution:
// simulate <algorithm>
#include <cstdio>
namespace exp
{
template <class T>
void
swap(T& x, T& y)
{
printf("generic exp::swap\n");
T tmp = x;
x = y;
y = tmp;
}
template <class T>
void algorithm(T* begin, T* end)
{
if (end-begin >= 2)
swap(begin[0], begin[1]);
}
}
// simulate user code which includes <algorithm>
struct A
{
};
void swap(A&, A&)
{
printf("swap(A, A)\n");
}
// exercise simulation
int main()
{
A a[2];
exp::algorithm(a, a+2);
}
Output is:
swap(A, A)
Update
An observation has been made that:
namespace exp
{
template <>
void swap(A&, A&)
{
printf("exp::swap(A, A)\n");
}
}
works! So why not use that?
Consider the case that your A is a class template:
// simulate user code which includes <algorithm>
template <class T>
struct A
{
};
namespace exp
{
template <class T>
void swap(A<T>&, A<T>&)
{
printf("exp::swap(A, A)\n");
}
}
// exercise simulation
int main()
{
A<int> a[2];
exp::algorithm(a, a+2);
}
Now it doesn't work again. :-(
So you could put swap in namespace std and have it work. But you'll need to remember to put swap in A's namespace for the case when you have a template: A<T>. And since both cases will work if you put swap in A's namespace, it is just easier to remember (and to teach others) to just do it that one way.
A: You're not allowed (by the C++ standard) to overload std::swap; however, you are specifically allowed to add template specializations for your own types to the std namespace. E.g.
namespace std
{
template<>
void swap(my_type& lhs, my_type& rhs)
{
// ... blah
}
}
then the usages in the std containers (and anywhere else) will pick your specialization instead of the general one.
Also note that providing a base class implementation of swap isn't good enough for your derived types. E.g. if you have
class Base
{
// ... stuff ...
}
class Derived : public Base
{
// ... stuff ...
}
namespace std
{
template<>
void swap(Base& lha, Base& rhs)
{
// ...
}
}
this will work for Base classes, but if you try to swap two Derived objects it will use the generic version from std because the templated swap is an exact match (and it avoids the problem of only swapping the 'base' parts of your derived objects).
NOTE: I've updated this to remove the wrong bits from my last answer. D'oh! (thanks puetzk and j_random_hacker for pointing it out)
A: While it's correct that one shouldn't generally add stuff to the std:: namespace, adding template specializations for user-defined types is specifically allowed. Overloading the functions is not. This is a subtle difference :-)
17.4.3.1/1
It is undefined for a C++ program to add declarations or definitions to namespace std or namespaces within namespace std unless otherwise specified. A program may add template specializations for any standard library template to namespace std. Such a specialization (complete or partial) of a standard library template results in undefined behaviour unless the declaration depends on a user-defined name of external linkage and unless the template specialization meets the standard library requirements for the original template.
A specialization of std::swap would look like:
namespace std
{
template<>
void swap(myspace::mytype& a, myspace::mytype& b) { ... }
}
Without the template<> bit it would be an overload, which is undefined, rather than a specialization, which is permitted. @Wilka's suggested approach of changing the default namespace may work with user code (due to Koenig lookup preferring the namespace-less version) but it's not guaranteed to, and in fact isn't really supposed to (the STL implementation ought to use the fully-qualified std::swap).
There is a thread on comp.lang.c++.moderated with a long discussion of the topic. Most of it is about partial specialization, though (which there's currently no good way to do).
A: The right way to overload std::swap's implementation (aka specializing it) is to write it in the same namespace as what you're swapping, so that it can be found via argument-dependent lookup (ADL). One particularly easy thing to do is:
class X
{
// ...
friend void swap(X& a, X& b)
{
using std::swap; // bring in swap for built-in types
swap(a.base1, b.base1);
swap(a.base2, b.base2);
// ...
swap(a.member1, b.member1);
swap(a.member2, b.member2);
// ...
}
};
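As a minimal, self-contained sketch of the same idiom (not part of the original answer), this is roughly how generic code, including typical standard-library implementations, picks up such a friend swap via ADL:
#include <cstdio>
#include <utility>

class X
{
    int member = 0;
    friend void swap(X& a, X& b)
    {
        using std::swap;
        swap(a.member, b.member);
        std::puts("swap(X&, X&) via ADL");
    }
};

template <class T>
void generic_swap_user(T& a, T& b)
{
    using std::swap;   // fallback for built-in and std types
    swap(a, b);        // unqualified call: ADL finds X's friend swap
}

int main()
{
    X x1, x2;
    generic_swap_user(x1, x2);   // prints "swap(X&, X&) via ADL"

    int i = 1, j = 2;
    generic_swap_user(i, j);     // falls back to std::swap
}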
| {
"language": "en",
"url": "https://stackoverflow.com/questions/11562",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "126"
} |
Q: How can I improve performance when adding InDesign XMLElements via AppleScript? I have an AppleScript program which creates XML tags and elements within an Adobe InDesign document. The data is in tables, and tagging each cell takes .5 seconds. The entire script takes several hours to complete.
I can post the inner loop code, but I'm not sure if SO is supposed to be generic or specific. I'll let the mob decide.
[edit]
The code builds a list (prior to this loop) which contains one item per row in the table. There is also a list containing one string for each column in the table. For each cell, the program creates an XML element and an XML tag by concatenating the items in the [row]/[column] positions of the two lists. It also associates the text in that cell to the newly-created element.
I'm completely new to AppleScript so some of this code is crudely modified from Adobe's samples. If the code is atrocious I won't be offended.
Here's the code:
repeat with columnNumber from COL_START to COL_END
select text of cell ((columnNumber as string) & ":" & (rowNumber as string)) of ThisTable
tell activeDocument
set thisXmlTag to make XML tag with properties {name:item rowNumber of symbolList & "_" & item columnNumber of my histLabelList}
tell rootXmlElement
set thisXmlElement to make XML element with properties {markup tag:thisXmlTag}
end tell
set contents of thisXmlElement to (selection as string)
end tell
end repeat
EDIT: I've rephrased the question to better reflect the correct answer.
A: The problem is almost certainly the select. Is there any way you could extract all the text at once, then iterate over internal variables?
A: I figured this one out.
The document contains a bunch of data tables. In all, there are about 7,000 data points that need to be exported. I was creating one root element with 7,000 children.
Don't do that. Adding each child to the root element got slower and slower until at about 5,000 children AppleScript timed out and the program aborted.
The solution was to make my code more brittle by creating ~480 children off the root, with each child having about 16 grandchildren. Same number of nodes, but the code now runs fast enough. (It still takes about 40 minutes to process the document, but that's infinitely less time than infinity.)
Incidentally, the original 7,000 children plan wasn't as stupid or as lazy as it appears. The new solution is forcing me to link the two tables together using data in the tables that I don't control. The program will now break if there's so much as a space where there shouldn't be one. (But it works.)
A:
I can post the inner loop code, but I'm not sure if SO is supposed to be generic or specific. I'll let the mob decide.
The code you post as an example can be as specific as you (or your boss) is comfortable with - more often than not, it's easier to help you with more specific details.
A: If the inner loop code is a reasonable length, I don't see any reason you can't post it. I think Stack Overflow is intended to encompass both general and specific questions.
A: Are you using InDesign or InDesign Server? How many pages is your document (or what other information can you tell us about your document/ID setup)?
I do a lot of InDesign Server development. You could be seeing slow-downs for a couple of reasons that aren't necessarily code related.
Right now, I'm generating 100-300 page documents almost completely from script/xml in about 100 seconds (you may be doing something much larger).
| {
"language": "en",
"url": "https://stackoverflow.com/questions/11574",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
} |
Q: Clearing Page Cache in ASP.NET For my blog I am wanting to use the Output Cache to save a cached version of a particular post for around 10 minutes, and that's fine...
<%@OutputCache Duration="600" VaryByParam="*" %>
However, if someone posts a comment, I want to clear the cache so that the page is refreshed and the comment can be seen.
How do I do this in ASP.Net C#?
A: Use Response.AddCacheItemDependency to clear all output caches.
public class Page : System.Web.UI.Page
{
protected override void OnLoad(EventArgs e)
{
try
{
string cacheKey = "cacheKey";
object cache = HttpContext.Current.Cache[cacheKey];
if (cache == null)
{
HttpContext.Current.Cache[cacheKey] = DateTime.UtcNow.ToString();
}
Response.AddCacheItemDependency(cacheKey);
}
catch (Exception ex)
{
throw new SystemException(ex.Message);
}
base.OnLoad(e);
}
}
// Clear All OutPutCache Method
public void ClearAllOutPutCache()
{
string cacheKey = "cacheKey";
HttpContext.Cache.Remove(cacheKey);
}
This can also be used in ASP.NET MVC's OutputCachedPage.
A: I've found the answer I was looking for:
HttpResponse.RemoveOutputCacheItem("/caching/CacheForever.aspx");
A: The above are fine if you know what pages you want to clear the cache for. In my instance (ASP.NET MVC) I referenced the same data from all over. Therefore, when I did a [save] I wanted to clear cache site wide. This is what worked for me: http://aspalliance.com/668
This is done in the context of an OnActionExecuting filter. It could just as easily be done by overriding OnActionExecuting in a BaseController or something.
HttpContextBase httpContext = filterContext.HttpContext;
httpContext.Response.AddCacheItemDependency("Pages");
Setup:
protected void Application_Start()
{
HttpRuntime.Cache.Insert("Pages", DateTime.Now);
}
Minor Tweak:
I have a helper which adds "flash messages" (Error messages, success messages - "This item has been successfully saved", etc). In order to avoid the flash message from showing up on every subsequent GET, I had to invalidate after writing the flash message.
Clearing Cache:
HttpRuntime.Cache.Insert("Pages", DateTime.Now);
Hope this helps.
A: On the master page load event, please write the following:
Response.Cache.SetExpires(DateTime.UtcNow.AddMinutes(-1));
Response.Cache.SetCacheability(HttpCacheability.NoCache);
Response.Cache.SetNoStore();
and in the logout button click:
Session.Abandon();
Session.Clear();
A: Hmm. You can specify a VaryByCustom attribute on the OutputCache item. The value of this is passed as a parameter to the GetVaryByCustomString method that you can implement in global.asax. The value returned by this method is used as an index into the cached items - if you return the number of comments on the page, for instance, each time a comment is added a new page will be cached.
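For illustration, here is a hedged sketch of what that might look like in Global.asax; GetVaryByCustomString is the real override described above, but CommentRepository.GetCommentCount is a hypothetical helper you would implement against your own data layer:
// In the page directive:
// <%@ OutputCache Duration="600" VaryByParam="id" VaryByCustom="commentCount" %>

// In Global.asax.cs:
public override string GetVaryByCustomString(HttpContext context, string custom)
{
    if (custom == "commentCount")
    {
        int postId;
        if (int.TryParse(context.Request.QueryString["id"], out postId))
        {
            // Hypothetical helper: returns the current number of comments for the post.
            // Each new comment changes the returned string, so a fresh copy gets cached.
            return "comments=" + CommentRepository.GetCommentCount(postId);
        }
    }
    return base.GetVaryByCustomString(context, custom);
}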
The caveat to this is that this does not actually clear the cache. If a blog entry gets heavy comment usage, your cache could explode in size with this method.
Alternatively, you could implement the non-changeable bits of the page (the navigation, ads, the actual blog entry) as user controls and implement partial page caching on each of those user controls.
A: If you change "*" to just the parameters the cache should vary on (PostID?) you can do something like this:
//add dependency
string key = "post.aspx?id=" + PostID.ToString();
Cache[key] = new object();
Response.AddCacheItemDependency(key);
and when someone adds a comment...
Cache.Remove(key);
I guess this would work even with VaryByParam *, since all requests would be tied to the same cache dependency.
A: Why not use SqlCacheDependency on the posts table?
sqlcachedependency msdn
This way you're not implementing custom cache-clearing code, and the cache is simply refreshed as the content changes in the DB.
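As a rough, hedged sketch (assuming SQL Server cache notifications have already been enabled for the Posts table, e.g. via aspnet_regsql, and that "BlogDb" is a matching entry under the sqlCacheDependency section in web.config; both names are hypothetical), the page could take a dependency on the table like this:
using System;
using System.Web.Caching;

protected void Page_Load(object sender, EventArgs e)
{
    // When a row in Posts changes, the cached output for this page
    // is invalidated automatically by the dependency.
    Response.AddCacheDependency(new SqlCacheDependency("BlogDb", "Posts"));
}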
A: HttpRuntime.Close() ... I tried all the methods and this is the only one that worked for me.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/11585",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "52"
} |
Q: Do you use design patterns? What's the penetration of design patterns in the real world? Do you use them in your day to day job - discussing how and where to apply them with your coworkers - or do they remain more of an academic concept?
Do they actually provide actual value to your job? Or are they just something that people talk about to sound smart?
Note: For the purpose of this question ignore 'simple' design patterns like Singleton. I'm talking about designing your code so you can take advantage of Model View Controller, etc.
A: Any large program that is well written will use design patterns, even if they aren't named or recognized as such. That's what design patterns are, designs that repeatedly and naturally occur. If you're interfacing with an ugly API, you'll likely find yourself implementing a Facade to clean it up. If you've got messaging between components that you need to decouple, you may find yourself using Observer. If you've got several interchangeable algorithms, you might end up using Strategy.
It's worth knowing the design patterns because you're more likely to recognize them and then converge on a clean solution more quickly. However, even if you don't know them at all, you'll end up creating them eventually (if you are a decent programmer).
And of course, if you are using a modern language, you'll probably be forced to use them for some things, because they're baked into the standard libraries.
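Purely as an illustration of the "interchangeable algorithms" point above, here is a minimal, hypothetical Strategy sketch in C# (all names invented):
using System;
using System.Collections.Generic;

// Strategy: callers depend on the interface, not on a particular algorithm.
public interface ISortStrategy
{
    void Sort(List<int> items);
}

public class AscendingSortStrategy : ISortStrategy
{
    public void Sort(List<int> items) { items.Sort(); }
}

public class DescendingSortStrategy : ISortStrategy
{
    public void Sort(List<int> items) { items.Sort((a, b) => b.CompareTo(a)); }
}

public class ReportBuilder
{
    private readonly ISortStrategy _sortStrategy;
    public ReportBuilder(ISortStrategy sortStrategy) { _sortStrategy = sortStrategy; }

    public void Build(List<int> data)
    {
        _sortStrategy.Sort(data);   // the algorithm can be swapped without touching this class
        // ... render the report ...
    }
}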
A: Yes. Design patterns can be wonderful when used appropriately. As you mentioned, I am now using Model-View-Controller (MVC) for all of my web projects. It is a very common pattern in the web space which makes server-side code much cleaner and well-organized.
Beyond that, here are some other patterns that may be useful:
*
*MVVM (Model-View-ViewModel): a similar pattern to MVC; used for WPF and Silverlight applications.
*Composition: Great for when you need to use a hierarchy of objects.
*Singleton: More elegant than using globals for storing items that truly need a single instance. As you mentioned, a simple pattern but it does have its uses.
It is worth noting a design pattern can also highlight a lack of language features and/or deficiencies in a language. For example, iterators are now built in as part of newer languages.
In general design patterns are quite useful but you should not use them everywhere; just where they are a good fit for your needs.
A: I try to, yes. They do indeed help maintainability and readability of your code. However, there are people who do abuse them, usually (from what I've seen) by forcing a system into a pattern that doesn't exist.
A: I try to use patterns if they are applicable. I think it's kind of sad seeing developers implement design patterns in code just for the sake of it. For the right task though, design patterns can be very useful and powerful.
A: There are many design patterns beyond the simple that are used in the "real world". A good example: Stack Overflow uses the Model-View-Controller pattern. I have used Class Factories multiple times in projects for my employer, and I have seen many already-written projects using them as well.
I am not saying every design pattern is being used but many are.
A: Yes we do, it usually happens when we start designing something and then someone notices that it resembles an existing pattern. We then take a look at it and see how it would help us achieve our goal.
We also use patterns that are not documented but that emerge from designing a lot.
Mind you, we don't use them a lot.
A: Yes, Factory, Chain of Responsibility, Command, Proxy, Visitor, and Observer, among others, are in use in a codebase I work with daily. As far as MVC goes, this site seems to use it quite well, and the devs couldn't say enough good things in the latest podcast.
A: In my opinion, the question: "Do you use design pattern?", alone is a little flawed because the answer is universally YES.
Let me explain, we, programmers and designers, all use design patterns... we just don't always realise it. I know this sounds cliché, but you don't go to patterns, patterns come to you. You design stuff, it might look like an existing pattern, you name it that way so everyone understand what you are talking about and the rationale behind your design decision is stronger, knowing it has been discussed ad nauseum before.
I personally use patterns as a communication tool. That's it. They are not design solutions, they are not best practices, they are not tools in a toolbox.
Don't get me wrong, if you are a beginner, books on patterns will show you how a solution is best solved "using" their patterns rather than another flawed design. You will probably learn from the exercise. However, you have to realise that this doesn't mean that every situation needs a corresponding pattern to solve it. Every situation has a quirk here and there that will require you to think about alternatives and take a difficult decision with no perfect answer. That's design.
Anti-patterns, however, are in a totally different class. You actually want to actively avoid anti-patterns. That's why the name anti-pattern is so controversial.
To get back to your original question:
"Do I use design patterns?", Yes!
"Do I actively lean toward design patterns?", No.
A: Yes, I use a lot of well known design patterns, but I also end up building some software that I later find out uses a 'named' design pattern. Most elegant, reusable designs could be called a 'pattern'. It's a lot like dance moves. We all know the waltz, and the 2-step, but not everyone has a name for the 'bump and scoot' although most of us do it.
A: MVC is very well known, so yes, we use design patterns quite a lot. Now if you're asking about the Gang of Four patterns, there are several that I use because other maintainers will know the design and what we are working towards in the code. There are several though that remain fairly obscure for what we do, so if I use one I don't get the full benefits of using a pattern.
Are they important? Yes, because they give you a way of talking about software design in a quick, efficient and generally accepted way. Can you do better with custom solutions? Well, yes (sorta).
The original GoF patterns were pulled from production code, so they catalogued what was already being used in the wild. They aren't purely or even mostly an academic thing.
A: I find the MVC pattern really useful to isolate your model logic, which can then be reused or worked on without too much trouble. It also helps decouple your classes and makes unit testing easier. I wrote about it recently (yes, shameless plug here...)
Also, I've recently used a factory pattern from a base class to generate and return the proper DataContext class that I needed on the fly, using LINQ.
Bridges are used when trying to glue together two different technologies (like Cocoa and Ruby on the Mac, for example)
I find, however, that whenever I implement a pattern, it's because I knew about it before hand. Some extra thought generally goes into it as I find I must modify the original pattern slightly to accommodate my needs.
You just need to be careful not to become an architecture astronaut!
A: Yes, design patterns are largely used in the real world - and daily by many of the people I work with.
In my opinion the biggest value provided by design patterns is that they provide a universal, high level language for you to convey software design to other programmers.
For instance instead of describing your new class as a "utility that creates one of several other classes based on some combination of input criteria", you can simply say it's an "abstract factory" and everyone instantly understands what you're talking about.
A: Yes, design patterns, or more abstractly, patterns, are part of my life; wherever I look, I begin to see them. Therefore, I am surrounded by them. But, as you know, a little knowledge is a dangerous thing. Therefore, I strongly recommend you read the GoF book.
One of the main problems with design patterns is that most developers just do not get the idea, or do not believe in them. And most of the time they argue about variables, loops, or switches. But I strongly believe that if you do not speak the pattern language, your software will not go far and you will find yourself in a maintenance nightmare.
As you know, anti-patterns are also a dangerous thing, and they happen when you have little expertise in design patterns. And refactoring anti-patterns is much harder. As a recommended book about this problem, please read "AntiPatterns: Refactoring Software, Architectures, and Projects in Crisis".
A: Yes.
We are even using them in my current job: Mainframe coding with COBOL and PL/I.
So far I have seen Adaptor, Visitor, Facade, Module, Observer and something very close to Composite and Iterator. Due to the nature of the languages it's mostly structural patterns that are used. Also, I'm not always sure that the people who use them do so consciously :D
A: I absolutely use design patterns. At this point I take MVC for granted as a design pattern. My primary reason for using them is that I am humble enough to know that I am likely not the first person to encounter a particular problem. I rarely start a piece of code knowing which pattern I am going to use; I constantly watch the code to see if it naturally develops into an existing pattern.
I am also very fond of Martin Fowler's Patterns of Enterprise Application Architecture. When a problem or task presents itself, I flip to the related section (it's mostly a reference book) and read a few overviews of the patterns. Once I have a better idea of the general problem and the existing solutions, I begin to see the long-term path my code will likely take via the experience of others. I end up making much better decisions.
Design patterns definitely play a big role in all of my "for the future" ideas.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/11586",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "29"
} |
Q: Is there any wiki engine that supports page creation by email? I want to consolidate all the loose information of the company I work for into a knowledge base. A wiki seems to be the way to go, but most of the relevant information is buried inside PST files, and it would take ages to convince people to manually translate their emails one by one (including attachments) into wiki pages. So I'm looking for a wiki engine that supports page creation by email, that is, one capable of receiving email (supporting plain text, HTML and attachments) and then creating the corresponding page. Support for file indexing and duplicate detection would be a huge bonus.
I tried WikiMatrix, but didn't find what I was looking for. I wouldn't mind building my own engine (borrowing a couple of snippets here and there for MIME decoding), but I don't think this is such a rare problem that no implementation already exists.
A: Both Jotspot and MediaWiki allow you to do this. The latter has support for a lot of plugins, of which this is one. The format is essentially PageTitle@something. Jotspot is a hosted solution where you get your own email address, MediaWiki is self-hosted and you give it a mailbox to monitor for incoming.
Articles are appended to pages if they already exist, or a new page is created if it does not. This does require a degree of discipline for naming conventions, but is great for CC'ing.
We use MediaWiki here and I like it a lot. It has the same flaws as many other Wiki packages (e.g difficult to reorganize without orphaning pages) but is as good if not better than other Wiki packages I've used.
A: I don't know if this is exactly what you're looking for, but I know many of 37 Signals' products support adding data through email. I use Highrise to keep track of some of my business correspondence, and I'm able to CC or forward emails to Highrise and they get added to the appropriate contact.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/11612",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
} |
Q: Good Resources for Relational Database Design I'm looking for a book/site/tutorial on best practices for relational database design, tuning for performance etc. It turns out this kind of resource is a bit difficult to find; there's a lot of "here's normalization, here's ER diagrams, have at it," but not much in the way of real examples. Anyone have any ideas?
A: Take a look at the Library of Free Data Models. There are tons of example database designs, with diagrams that cover real-world scenarios (and some just fun/funny ones as well). I haven't ever used one as-is, but it's often been handy to get an idea of how to approach the problem of mapping the needs of the situation into a data model.
A: Check out the "The Art of SQL". A pleasure to read.
A: Book: Database Design for Mere Mortals
A: Here are some resources I could find on the web. They include examples you are looking for:
*
*Designing and creating a Relational Database - Dr Lorna Scammell: Newcastle University Database Adviser
*Sample Data Models for Relational Database Design
| {
"language": "en",
"url": "https://stackoverflow.com/questions/11615",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13"
} |
Q: How do you kill all current connections to a SQL Server 2005 database? I want to rename a database, but keep getting the error that 'couldn't get exclusive lock' on the database, which implies there is some connection(s) still active.
How can I kill all the connections to the database so that I can rename it?
A: Select 'Kill ' + CAST(p.spid AS VARCHAR) AS KillCommand into #temp
from master.dbo.sysprocesses p (nolock)
join master..sysdatabases d (nolock) on p.dbid = d.dbid
Where d.[name] = 'your db name'
Declare @query nvarchar(max)
--Select * from #temp
Select @query =STUFF((
select ' ' + KillCommand from #temp
FOR XML PATH('')),1,1,'')
Execute sp_executesql @query
Drop table #temp
Use the master database and run this query; it will kill all the active connections to your database.
A: Kill it, and kill it with fire:
USE master
go
DECLARE @dbname sysname
SET @dbname = 'yourdbname'
DECLARE @spid int
SELECT @spid = min(spid) from master.dbo.sysprocesses where dbid = db_id(@dbname)
WHILE @spid IS NOT NULL
BEGIN
EXECUTE ('KILL ' + @spid)
SELECT @spid = min(spid) from master.dbo.sysprocesses where dbid = db_id(@dbname) AND spid > @spid
END
A: I usually run into that error when I am trying to restore a database. I just go to the top of the tree in Management Studio, right-click, and restart the database server (because it's on a development machine; this might not be ideal in production). This closes all database connections.
A: In MS SQL Server Management Studio on the object explorer, right click on the database. In the context menu that follows select 'Tasks -> Take Offline'
A: Here's how to reliably do this sort of thing in MS SQL Server Management Studio 2008 (may work for other versions too):
*
*In the Object Explorer Tree, right click the root database server (with the green arrow), then click activity monitor.
*Open the processes tab in the activity monitor, select the 'databases' drop down menu, and filter by the database you want.
*Right click the DB in Object Explorer and start a 'Tasks -> Take Offline' task. Leave this running in the background while you...
*Safely shut down whatever you can.
*Kill all remaining processes from the process tab.
*Bring the DB back online.
*Rename the DB.
*Bring your service back online and point it to the new DB.
A: Another "kill it with fire" approach is to just restart the MSSQLSERVER service.
I like to do stuff from the commandline. Pasting this exactly into CMD will do it:
NET STOP MSSQLSERVER & NET START MSSQLSERVER
Or open "services.msc" and find "SQL Server (MSSQLSERVER)" and right-click, select "restart".
This will "for sure, for sure" kill ALL connections to ALL databases running on that instance.
(I like this better than many approaches that change and change back the configuration on the server/database)
A: The reason that the approach that Adam suggested won't work is that during the time you are looping over the active connections, new ones can be established, and you'll miss those. You could instead use the following approach, which does not have this drawback:
-- set your current connection to use master otherwise you might get an error
use master
ALTER DATABASE YourDatabase SET SINGLE_USER WITH ROLLBACK IMMEDIATE
--do your stuff here
ALTER DATABASE YourDatabase SET MULTI_USER
A: The option working for me in this scenario is as follows:
*
*Start the "Detach" operation on the database in question. This wil open a window (in SQL 2005) displaying the active connections that prevents actions on the DB.
*Kill the active connections, cancel the detach-operation.
*The database should now be available for restoring.
A: Using SQL Management Studio Express:
In the Object Explorer tree drill down under Management to "Activity Monitor" (if you cannot find it there then right click on the database server and select "Activity Monitor"). Opening the Activity Monitor, you can view all process info. You should be able to find the locks for the database you're interested in and kill those locks, which will also kill the connection.
You should be able to rename after that.
A: I've always used:
ALTER DATABASE DB_NAME SET SINGLE_USER WITH ROLLBACK IMMEDIATE
GO
SP_RENAMEDB 'DB_NAME','DB_NAME_NEW'
Go
ALTER DATABASE DB_NAME_NEW SET MULTI_USER -- set back to multi user
GO
A: ALTER DATABASE [Test]
SET OFFLINE WITH ROLLBACK IMMEDIATE
ALTER DATABASE [Test]
SET ONLINE
A: Try this:
ALTER DATABASE [DATABASE_NAME]
SET SINGLE_USER
WITH ROLLBACK IMMEDIATE
A: Right-click on the database name, click Properties to get the properties window, open the Options tab and change the "Restrict Access" property from Multi User to Single User. When you hit the OK button, it will prompt you to close all open connections; select "Yes" and you are set to rename the database.
A: These didn't work for me (SQL 2008 Enterprise); I also couldn't see any running processes or users connected to the DB. Restarting the server (right-click on SQL Server in Management Studio and pick Restart) allowed me to restore the DB.
A: I'm using SQL Server 2008 R2; my DB was already set to single user and there was a connection that restricted any action on the database. Thus SQLMenace's recommended solution responded with an error. Here is one that worked in my case.
A: Taking the database offline takes a while, and sometimes I experience some problems with that.
Most solid way in my opinion:
Detach
Right click DB -> Tasks -> Detach...
check "Drop Connections"
Ok
Reattach
Right click Databases -> Attach..
Add... -> select your database, and change the Attach As column to your desired database name.
Ok
A: Script to accomplish this, replace 'DB_NAME' with the database to kill all connections to:
USE master
GO
SET NOCOUNT ON
DECLARE @DBName varchar(50)
DECLARE @spidstr varchar(8000)
DECLARE @ConnKilled smallint
SET @ConnKilled=0
SET @spidstr = ''
Set @DBName = 'DB_NAME'
IF db_id(@DBName) < 4
BEGIN
PRINT 'Connections to system databases cannot be killed'
RETURN
END
SELECT @spidstr=coalesce(@spidstr,',' )+'kill '+convert(varchar, spid)+ '; '
FROM master..sysprocesses WHERE dbid=db_id(@DBName)
IF LEN(@spidstr) > 0
BEGIN
EXEC(@spidstr)
SELECT @ConnKilled = COUNT(1)
FROM master..sysprocesses WHERE dbid=db_id(@DBName)
END
A: I use sp_who to get a list of all the processes in the database. This is better because you may want to review which processes to kill.
declare @proc table(
SPID bigint,
Status nvarchar(255),
Login nvarchar(255),
HostName nvarchar(255),
BlkBy nvarchar(255),
DBName nvarchar(255),
Command nvarchar(MAX),
CPUTime bigint,
DiskIO bigint,
LastBatch nvarchar(255),
ProgramName nvarchar(255),
SPID2 bigint,
REQUESTID bigint
)
insert into @proc
exec sp_who2
select *, KillCommand = concat('kill ', SPID, ';')
from @proc
Result
You can use the command in the KillCommand column to kill the process you want to.
SPID KillCommand
26 kill 26;
27 kill 27;
28 kill 28;
A: You can use the sp_who command to kill all processes that use your database and then rename your database.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/11620",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "294"
} |
Q: What are the most important functional differences between C# and VB.NET? Certainly there's the difference in general syntax, but what other critical distinctions exist? There are some differences, right?
A: This topic has had a lot of face time since .Net 2.0 was released. See this Wikipedia article for a readable summary.
A: This may be considered syntax, but VB.NET is case insensitive while C# is case sensitive.
A: The linked comparisons are very thorough, but as far as the main differences I would note the following:
*
*C# has anonymous methods (VB has these now, too)
*C# has the yield keyword (iterator blocks); VB 11 added this
*VB supports implicit late binding (C# has explicit late binding now via the dynamic keyword)
*VB supports XML literals
*VB is case insensitive
*More out-of-the-box code snippets for VB
*More out-of-the-box refactoring tools for C# (Visual Studio 2015 now provides the same refactoring tools for both VB and C#)
In general the things MS focuses on for each vary, because the two languages are targeted at very different audiences. This blog post has a good summary of the target audiences. It is probably a good idea to determine which audience you are in, because it will determine what kind of tools you'll get from Microsoft.
A: This is a very comprehensive reference.
A: Since I assume you can google, I don't think a link to more sites is what you are looking for.
My answer: choose based on the history of your developers. C# is more Java-like, and probably more C++-like.
VB.NET was easier for VB programmers, but I guess that is not really an issue anymore since there are no new .NET programmers coming from old VB.
My opinion is that VB is more productive than C#; it seems it is always ahead in terms of productivity tools (such as IntelliSense), and I would recommend VB over C# to someone who asks. Of course, someone who knows he prefers C# won't ask, and C# is probably the right choice for him.
A: Although the syntax sugar on C#3 has really pushed the bar forward, I must say some of the Linq to XML stuff in VB.Net seems quite nice and makes handling complex, deeply nested XML a little bit more tolerable. Just a little bit.
A: One glaring difference is in how they handle extension methods (Vb.Net actually allows something that C# doesn't - passing the type on which the extension method is being defined as ref): http://blog.gadodia.net/extension-methods-in-vbnet-and-c/
A: Apart from syntax not that much any more. They both compile to exactly the same IL, so you can compile something as VB and reflect it into C#.
Most of the apparent differences are syntactic sugar. For instance VB appears to support dynamic types, but really they're just as static as C#'s - the VB compiler figures them out.
Visual Studio behaves differently with VB than with C# - it hides lots of functionality but adds background compiling (great for small projects, resource hogging for large ones) and better snippet support.
With more and more compiler 'magic' in C#3 VB.Net has really fallen behind. The only thing VB now has that C# doesn't is the handles keyword - and that's of debatable benefit.
@Tom - that really useful, but a little out of date - VB.Net now supports XML docs too with '''
@Luke - VB.Net still doesn't have anon-methods, but does now support lambdas.
A: The biggest difference in my opinion is the ability to write unsafe code in C#.
A: Although VB.NET supports try...catch type exception handling, it still has something similar to VB6's ON ERROR. ON ERROR can be seriously abused, and in the vast majority of cases, try...catch is far better; but ON ERROR can be useful when handling COM time-out operations where the error can be trapped, decoded, and the final "try again" is a simple one line.
You can do the same with try...catch but the code is a lot messier.
A: This topic is briefly described at wikipedia and harding.
http://en.wikipedia.org/wiki/Comparison_of_C_Sharp_and_Visual_Basic_.NET
http://www.harding.edu/fmccown/vbnet_csharp_comparison.html
Just go through and make your notes on that.
A: When it gets to IL it's all just bits. The case insensitivity is just a precompiler pass.
But the general consensus is that VB is more verbose.
If you can write C#, why not save your eyes and hands and write the smaller amount of code to do the same thing?
A: Yes VB.NET fixed most of the VB6 problems and made it a proper OOP language - ie. similar in abilities to C#. Although I tend to prefer C#, I do find the old VB ON ERROR construct useful for handling COM interop timeouts. Something to use wisely though - ON ERROR is easily abused!!
| {
"language": "en",
"url": "https://stackoverflow.com/questions/11632",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "34"
} |
Q: Case-insensitive string comparison in C++ What is the best way of doing case-insensitive string comparison in C++ without transforming a string to all uppercase or all lowercase?
Please indicate whether the methods are Unicode-friendly and how portable they are.
A: For my basic case insensitive string comparison needs I prefer not to have to use an external library, nor do I want a separate string class with case insensitive traits that is incompatible with all my other strings.
So what I've come up with is this:
bool icasecmp(const string& l, const string& r)
{
return l.size() == r.size()
&& equal(l.cbegin(), l.cend(), r.cbegin(),
[](string::value_type l1, string::value_type r1)
{ return toupper(l1) == toupper(r1); });
}
bool icasecmp(const wstring& l, const wstring& r)
{
return l.size() == r.size()
&& equal(l.cbegin(), l.cend(), r.cbegin(),
[](wstring::value_type l1, wstring::value_type r1)
{ return towupper(l1) == towupper(r1); });
}
A simple function with one overload for char and another for wchar_t. Doesn't use anything non-standard so should be fine on any platform.
The equality comparison won't consider issues like variable length encoding and Unicode normalization, but basic_string has no support for that that I'm aware of anyway and it isn't normally an issue.
In cases where more sophisticated lexicographical manipulation of text is required, then you simply have to use a third party library like Boost, which is to be expected.
A: Doing this without using Boost can be done by getting the C string pointer with c_str() and using strcasecmp:
std::string str1 ="aBcD";
std::string str2 = "AbCd";;
if (strcasecmp(str1.c_str(), str2.c_str()) == 0)
{
//case insensitive equal
}
A: If you are on a POSIX system, you can use strcasecmp. This function is not part of standard C, though, nor is it available on Windows. This will perform a case-insensitive comparison on 8-bit chars, so long as the locale is POSIX. If the locale is not POSIX, the results are undefined (so it might do a localized compare, or it might not). A wide-character equivalent is not available.
Failing that, a large number of historic C library implementations have the functions stricmp() and strnicmp(). Visual C++ on Windows renamed all of these by prefixing them with an underscore because they aren’t part of the ANSI standard, so on that system they’re called _stricmp or _strnicmp. Some libraries may also have wide-character or multibyte equivalent functions (typically named e.g. wcsicmp, mbcsicmp and so on).
C and C++ are both largely ignorant of internationalization issues, so there's no good solution to this problem, except to use a third-party library. Check out IBM ICU (International Components for Unicode) if you need a robust library for C/C++. ICU is for both Windows and Unix systems.
A: Assuming you are looking for a method and not a magic function that already exists, there is frankly no better way. We could all write code snippets with clever tricks for limited character sets, but at the end of the day, at some point you have to convert the characters.
The best approach for this conversion is to do so prior to the comparison. This allows you a good deal of flexibility when it comes to encoding schemes, which your actual comparison operator should be ignorant of.
You can of course 'hide' this conversion behind your own string function or class, but you still need to convert the strings prior to comparison.
A: I wrote a case-insensitive version of char_traits for use with std::basic_string in order to generate a std::string that is not case-sensitive when doing comparisons, searches, etc using the built-in std::basic_string member functions.
So in other words, I wanted to do something like this.
std::string a = "Hello, World!";
std::string b = "hello, world!";
assert( a == b );
...which std::string can't handle. Here's the usage of my new char_traits:
std::istring a = "Hello, World!";
std::istring b = "hello, world!";
assert( a == b );
...and here's the implementation:
/* ---
Case-Insensitive char_traits for std::string's
Use:
To declare a std::string which preserves case but ignores case in comparisons & search,
use the following syntax:
std::basic_string<char, char_traits_nocase<char> > noCaseString;
A typedef is declared below which simplifies this use for chars:
typedef std::basic_string<char, char_traits_nocase<char> > istring;
--- */
template<class C>
struct char_traits_nocase : public std::char_traits<C>
{
static bool eq( const C& c1, const C& c2 )
{
return ::toupper(c1) == ::toupper(c2);
}
static bool lt( const C& c1, const C& c2 )
{
return ::toupper(c1) < ::toupper(c2);
}
static int compare( const C* s1, const C* s2, size_t N )
{
return _strnicmp(s1, s2, N);
}
static const C* find( const C* s, size_t N, const C& a )
{
for( size_t i=0 ; i<N ; ++i )
{
if( ::toupper(s[i]) == ::toupper(a) )
return s+i ;
}
return 0 ;
}
static bool eq_int_type( const typename std::char_traits<C>::int_type& c1, const typename std::char_traits<C>::int_type& c2 )
{
return ::toupper(c1) == ::toupper(c2) ;
}
};
template<>
struct char_traits_nocase<wchar_t> : public std::char_traits<wchar_t>
{
static bool eq( const wchar_t& c1, const wchar_t& c2 )
{
return ::towupper(c1) == ::towupper(c2);
}
static bool lt( const wchar_t& c1, const wchar_t& c2 )
{
return ::towupper(c1) < ::towupper(c2);
}
static int compare( const wchar_t* s1, const wchar_t* s2, size_t N )
{
return _wcsnicmp(s1, s2, N);
}
static const wchar_t* find( const wchar_t* s, size_t N, const wchar_t& a )
{
for( size_t i=0 ; i<N ; ++i )
{
if( ::towupper(s[i]) == ::towupper(a) )
return s+i ;
}
return 0 ;
}
static bool eq_int_type( const int_type& c1, const int_type& c2 )
{
return ::towupper(c1) == ::towupper(c2) ;
}
};
typedef std::basic_string<char, char_traits_nocase<char> > istring;
typedef std::basic_string<wchar_t, char_traits_nocase<wchar_t> > iwstring;
A: Are you talking about a dumb case insensitive compare or a full normalized Unicode compare?
A dumb compare will not find strings that might be the same but are not binary equal.
Example:
U212B (ANGSTROM SIGN)
U0041 (LATIN CAPITAL LETTER A) + U030A (COMBINING RING ABOVE)
U00C5 (LATIN CAPITAL LETTER A WITH RING ABOVE).
Are all equivalent but they also have different binary representations.
That said, Unicode Normalization should be a mandatory read, especially if you plan on supporting Hangul, Thai, and other Asian languages.
Also, IBM pretty much patented most optimized Unicode algorithms and made them publicly available. They also maintain an implementation : IBM ICU
A: I've had good experience using the International Components for Unicode libraries - they're extremely powerful, and provide methods for conversion, locale support, date and time rendering, case mapping (which you don't seem to want), and collation, which includes case- and accent-insensitive comparison (and more). I've only used the C++ version of the libraries, but they appear to have a Java version as well.
Methods exist to perform normalized compares as referred to by @Coincoin, and can even account for locale - for example (and this a sorting example, not strictly equality), traditionally in Spanish (in Spain), the letter combination "ll" sorts between "l" and "m", so "lz" < "ll" < "ma".
A: Just use strcmp() for case sensitive and strcmpi() or stricmp() for case insensitive comparison. They are all in the header file <string.h>
format:
int strcmp(const char*,const char*); //for case sensitive
int strcmpi(const char*,const char*); //for case insensitive
Usage:
string a="apple",b="ApPlE",c="ball";
if(strcmpi(a.c_str(),b.c_str())==0) //(if it is a match it will return 0)
cout<<a<<" and "<<b<<" are the same"<<"\n";
if(strcmpi(a.c_str(),c.c_str())<0)
cout<<a[0]<<" comes before "<<c[0]<<", so "<<a<<" comes before "<<c;
Output
apple and ApPlE are the same
a comes before b, so apple comes before ball
A: Late to the party, but here is a variant that uses std::locale, and thus correctly handles Turkish:
auto tolower = std::bind1st(
std::mem_fun(
&std::ctype<char>::tolower),
&std::use_facet<std::ctype<char> >(
std::locale()));
gives you a functor that uses the active locale to convert characters to lowercase, which you can then use via std::transform to generate lower-case strings:
std::string left = "fOo";
transform(left.begin(), left.end(), left.begin(), tolower);
This also works for wchar_t based strings.
A: boost::iequals is not UTF-8 compatible when used on std::string.
You can use boost::locale.
comparator<char,collator_base::secondary> cmpr;
cout << (cmpr(str1, str2) ? "str1 < str2" : "str1 >= str2") << endl;
*
*Primary -- ignore accents and character case, comparing base letters only. For example "facade" and "Façade" are the same.
*Secondary -- ignore character case but consider accents. "facade" and "façade" are different but "Façade" and "façade" are the same.
*Tertiary -- consider both case and accents: "Façade" and "façade" are different. Ignore punctuation.
*Quaternary -- consider all case, accents, and punctuation. The words must be identical in terms of Unicode representation.
*Identical -- as quaternary, but compare code points as well.
A: Boost includes a handy algorithm for this:
#include <boost/algorithm/string.hpp>
// Or, for fewer header dependencies:
//#include <boost/algorithm/string/predicate.hpp>
std::string str1 = "hello, world!";
std::string str2 = "HELLO, WORLD!";
if (boost::iequals(str1, str2))
{
// Strings are identical
}
A: My first thought for a non-unicode version was to do something like this:
bool caseInsensitiveStringCompare(const string& str1, const string& str2) {
if (str1.size() != str2.size()) {
return false;
}
for (string::const_iterator c1 = str1.begin(), c2 = str2.begin(); c1 != str1.end(); ++c1, ++c2) {
if (tolower(static_cast<unsigned char>(*c1)) != tolower(static_cast<unsigned char>(*c2))) {
return false;
}
}
return true;
}
A: A simple way to compare two strings in C++ (tested on Windows) is using _stricmp
// Case insensitive (could use equivalent _stricmp)
result = _stricmp( string1, string2 );
If you are looking to use with std::string, an example:
std::string s1 = string("Hello");
if ( _stricmp(s1.c_str(), "HELLO") == 0)
std::cout << "The string are equals.";
For more information here: https://msdn.microsoft.com/it-it/library/e0z9k731.aspx
A: You can use strcasecmp on Unix, or stricmp on Windows.
One thing that hasn't been mentioned so far is that if you are using stl strings with these methods, it's useful to first compare the length of the two strings, since this information is already available to you in the string class. This could prevent doing the costly string comparison if the two strings you are comparing aren't even the same length in the first place.
A: Just a note on whatever method you finally choose, if that method happens to include the use of strcmp that some answers suggest:
strcmp doesn't work with Unicode data in general. In general, it doesn't even work with byte-based Unicode encodings, such as utf-8, since strcmp only makes byte-per-byte comparisons and Unicode code points encoded in utf-8 can take more than 1 byte. The only specific Unicode case strcmp properly handles is when a string encoded with a byte-based encoding contains only code points below U+00FF - then the byte-per-byte comparison is enough.
A: As of early 2013, the ICU project, maintained by IBM, is a pretty good answer to this.
http://site.icu-project.org/
ICU is a "complete, portable Unicode library that closely tracks industry standards." For the specific problem of string comparison, the Collation object does what you want.
The Mozilla Project adopted ICU for internationalization in Firefox in mid-2012; you can track the engineering discussion, including issues of build systems and data file size, here:
*
*https://groups.google.com/forum/#!topic/mozilla.dev.platform/sVVpS2sKODw
*https://bugzilla.mozilla.org/show_bug.cgi?id=724529 (tracker)
*https://bugzilla.mozilla.org/show_bug.cgi?id=724531 (build system)
A: It looks like the above solutions aren't using the compare method but are implementing it all over again, so here is my solution; I hope it works for you (it's working fine).
#include<iostream>
#include<cstring>
#include<cmath>
using namespace std;
string tolow(string a)
{
for(unsigned int i=0;i<a.length();i++)
{
a[i]=tolower(a[i]);
}
return a;
}
int main()
{
string str1,str2;
cin>>str1>>str2;
int temp=tolow(str1).compare(tolow(str2));
if(temp>0)
cout<<1;
else if(temp==0)
cout<<0;
else
cout<<-1;
}
A: I'm trying to cobble together a good answer from all the posts, so help me edit this:
Here is a method of doing this, although it does transform the strings and is not Unicode friendly; it should be portable, which is a plus:
bool caseInsensitiveStringCompare( const std::string& str1, const std::string& str2 ) {
std::string str1Cpy( str1 );
std::string str2Cpy( str2 );
std::transform( str1Cpy.begin(), str1Cpy.end(), str1Cpy.begin(), ::tolower );
std::transform( str2Cpy.begin(), str2Cpy.end(), str2Cpy.begin(), ::tolower );
return ( str1Cpy == str2Cpy );
}
From what I have read this is more portable than stricmp() because stricmp() is not in fact part of the std library, but only implemented by most compiler vendors.
To get a truly Unicode friendly implementation it appears you must go outside the std library. One good 3rd party library is the IBM ICU (International Components for Unicode)
Also boost::iequals provides a fairly good utility for doing this sort of comparison.
A: str1.size() == str2.size() && std::equal(str1.begin(), str1.end(), str2.begin(), [](auto a, auto b){return std::tolower(a)==std::tolower(b);})
You can use the above code in C++14 if you are not in a position to use boost. You have to use std::towlower for wide chars.
A: See std::lexicographical_compare:
// lexicographical_compare example
#include <iostream> // std::cout, std::boolalpha
#include <algorithm> // std::lexicographical_compare
#include <cctype> // std::tolower
// a case-insensitive comparison function:
bool mycomp (char c1, char c2) {
return std::tolower(c1) < std::tolower(c2);
}
int main () {
char foo[] = "Apple";
char bar[] = "apartment";
std::cout << std::boolalpha;
std::cout << "Comparing foo and bar lexicographically (foo < bar):\n";
std::cout << "Using default comparison (operator<): ";
std::cout << std::lexicographical_compare(foo, foo + 5, bar, bar + 9);
std::cout << '\n';
std::cout << "Using mycomp as comparison object: ";
std::cout << std::lexicographical_compare(foo, foo + 5, bar, bar + 9, mycomp);
std::cout << '\n';
return 0;
}
Demo
A: Short and nice. No dependencies other than the extended standard C library.
strcasecmp(str1.c_str(), str2.c_str()) == 0
returns true if str1 and str2 are equal.
strcasecmp may not exist on your platform; there are analogs such as stricmp, strcmpi, etc.
Example code:
#include <iostream>
#include <string>
#include <string.h> //For strcasecmp(). Also could be found in <mem.h>
using namespace std;
/// Simple wrapper
inline bool str_ignoreCase_cmp(std::string const& s1, std::string const& s2) {
if(s1.length() != s2.length())
return false; // optimization since std::string holds length in variable.
return strcasecmp(s1.c_str(), s2.c_str()) == 0;
}
/// Function object - comparator
struct StringCaseInsensetiveCompare {
bool operator()(std::string const& s1, std::string const& s2) {
if(s1.length() != s2.length())
return false; // optimization since std::string holds length in variable.
return strcasecmp(s1.c_str(), s2.c_str()) == 0;
}
bool operator()(const char *s1, const char * s2){
return strcasecmp(s1,s2)==0;
}
};
/// Convert bool to string
inline char const* bool2str(bool b){ return b?"true":"false"; }
int main()
{
cout<< bool2str(strcasecmp("asd","AsD")==0) <<endl;
cout<< bool2str(strcasecmp(string{"aasd"}.c_str(),string{"AasD"}.c_str())==0) <<endl;
StringCaseInsensetiveCompare cmp;
cout<< bool2str(cmp("A","a")) <<endl;
cout<< bool2str(cmp(string{"Aaaa"},string{"aaaA"})) <<endl;
cout<< bool2str(str_ignoreCase_cmp(string{"Aaaa"},string{"aaaA"})) <<endl;
return 0;
}
Output:
true
true
true
true
true
A: Visual C++ string functions supporting unicode: http://msdn.microsoft.com/en-us/library/cc194799.aspx
the one you are probably looking for is _wcsnicmp
A: The trouble with boost is that you have to link with and depend on boost. Not easy in some cases (e.g. android).
And using char_traits means all your comparisons are case insensitive, which isn't usually what you want.
This should suffice. It should be reasonably efficient. Doesn't handle unicode or anything though.
bool iequals(const string& a, const string& b)
{
unsigned int sz = a.size();
if (b.size() != sz)
return false;
for (unsigned int i = 0; i < sz; ++i)
if (tolower(a[i]) != tolower(b[i]))
return false;
return true;
}
Update: Bonus C++14 version (#include <algorithm>):
bool iequals(const string& a, const string& b)
{
return std::equal(a.begin(), a.end(),
b.begin(), b.end(),
[](char a, char b) {
return tolower(a) == tolower(b);
});
}
Update: C++20 version using std::ranges:
#include <ranges>
#include <algorithm>
#include <string>
bool iequals(const std::string_view& lhs, const std::string_view& rhs) {
auto to_lower{ std::ranges::views::transform([](unsigned char c) { return std::tolower(c); }) };
return std::ranges::equal(lhs | to_lower, rhs | to_lower);
}
A: Take advantage of the standard char_traits. Recall that a std::string is in fact a typedef for std::basic_string<char>, or more explicitly, std::basic_string<char, std::char_traits<char> >. The char_traits type describes how characters compare, how they copy, how they cast etc. All you need to do is typedef a new string over basic_string, and provide it with your own custom char_traits that compare case insensitively.
struct ci_char_traits : public char_traits<char> {
static bool eq(char c1, char c2) { return toupper(c1) == toupper(c2); }
static bool ne(char c1, char c2) { return toupper(c1) != toupper(c2); }
static bool lt(char c1, char c2) { return toupper(c1) < toupper(c2); }
static int compare(const char* s1, const char* s2, size_t n) {
while( n-- != 0 ) {
if( toupper(*s1) < toupper(*s2) ) return -1;
if( toupper(*s1) > toupper(*s2) ) return 1;
++s1; ++s2;
}
return 0;
}
static const char* find(const char* s, int n, char a) {
while( n-- > 0 && toupper(*s) != toupper(a) ) {
++s;
}
return s;
}
};
typedef std::basic_string<char, ci_char_traits> ci_string;
The details are on Guru of The Week number 29.
A: FYI, strcmp() and stricmp() are vulnerable to buffer overflow, since they just process until they hit a null terminator. It's safer to use strncmp() and _strnicmp().
A: The Boost.String library has a lot of algorithms for doing case-insenstive comparisons and so on.
You could implement your own, but why bother when it's already been done?
A: If you don't want to use the Boost library, here is a solution using only standard C++ headers.
#include <iostream>
#include <string>
#include <algorithm>
#include <cctype>
struct iequal
{
bool operator()(int c1, int c2) const
{
// case insensitive comparison of two characters.
return std::toupper(c1) == std::toupper(c2);
}
};
bool iequals(const std::string& str1, const std::string& str2)
{
    // compare sizes first, then use std::equal() to compare the characters using the functor above.
    return str1.size() == str2.size()
        && std::equal(str1.begin(), str1.end(), str2.begin(), iequal());
}
int main(void)
{
std::string str_1 = "HELLO";
std::string str_2 = "hello";
if(iequals(str_1,str_2))
{
std::cout<<"String are equal"<<std::endl;
}
else
{
std::cout<<"String are not equal"<<std::endl;
}
return 0;
}
A: If you have to compare a source string with other strings often, one elegant solution is to use regex.
std::wstring first = L"Test";
std::wstring second = L"TEST";
std::wregex pattern(first, std::wregex::icase);
bool isEqual = std::regex_match(second, pattern);
A: An easy way to compare strings that are only different by lowercase and capitalized characters is to do an ASCII comparison. All capital and lowercase letters differ by 32 in the ASCII table; using this information we have the following...
int count = 0;
if (string1.length() == string2.length())
{
    for (unsigned int i = 0; i < string2.length(); i++)
    {
        if (string1[i] == string2[i] ||
            int(string1[i]) == int(string2[i]) + 32 ||
            int(string1[i]) == int(string2[i]) - 32)
        {
            count++;
            continue;
        }
        else
        {
            break;
        }
    }
}
if (count == string2.length())
{
    //then we have a match
}
A: bool insensitive_c_compare(char A, char B){
    static char mid_c = ('Z' + 'a') / 2; /// midpoint between the upper and lower case ranges
    static char up2lo = 'A' - 'a'; /// the offset between upper and lower case letters
    if (('a' <= A and A <= 'z') or ('A' <= A and A <= 'Z'))
        if (('a' <= B and B <= 'z') or ('A' <= B and B <= 'Z'))
        /// check that the characters are in fact letters
        /// (trying to turn a 3 into an E would not be pretty!)
        {
            if (A > mid_c and B > mid_c or A < mid_c and B < mid_c)
            {
                return A == B;
            }
            else
            {
                if (A > mid_c)
                    A = A - 'a' + 'A';
                if (B > mid_c)/// convert any lowercase letters to uppercase ones
                    B = B - 'a' + 'A';
                /// this could be changed to B = B + up2lo;
                return A == B;
            }
        }
    return A == B; /// at least one character is not a letter, so compare directly
}
this could probably be made much more efficient, but here is a bulky version with all its bits bare.
not all that portable, but works well with whatever is on my computer (no idea, I am of pictures not words)
| {
"language": "en",
"url": "https://stackoverflow.com/questions/11635",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "372"
} |
Q: How do I restyle an Adobe Flex Accordion to include a button in each canvas header? Here is the sample code for my accordion:
<mx:Accordion x="15" y="15" width="230" height="599" styleName="myAccordion">
<mx:Canvas id="pnlSpotlight" label="SPOTLIGHT" height="100%" width="100%" horizontalScrollPolicy="off">
<mx:VBox width="100%" height="80%" paddingTop="2" paddingBottom="1" verticalGap="1">
<mx:Repeater id="rptrSpotlight" dataProvider="{aSpotlight}">
<sm:SmallCourseListItem
viewClick="PlayFile(event.currentTarget.getRepeaterItem().fileID);"
Description="{rptrSpotlight.currentItem.fileDescription}"
FileID = "{rptrSpotlight.currentItem.fileID}"
detailsClick="{detailsView.SetFile(event.currentTarget.getRepeaterItem().fileID,this)}"
Title="{rptrSpotlight.currentItem.fileTitle}"
FileIcon="{iconLibrary.getIcon(rptrSpotlight.currentItem.fileExtension)}" />
</mx:Repeater>
</mx:VBox>
</mx:Canvas>
</mx:Accordion>
I would like to include a button in each header like so:
A: Thanks, I got it working using FlexLib's CanvasButtonAccordionHeader.
A: You will have to create a custom header renderer, add a button to it and position it manually. Try something like this:
<mx:Accordion>
<mx:headerRenderer>
<mx:Component>
<AccordionHeader xmlns="mx.containers.accordionClasses.*">
<mx:Script>
<![CDATA[
import mx.controls.Button;
private var extraButton : Button;
override protected function createChildren( ) : void {
super.createChildren();
if ( extraButton == null ) {
extraButton = new Button();
addChild(extraButton);
}
}
override protected function updateDisplayList( unscaledWidth : Number, unscaledHeight : Number ) : void {
super.updateDisplayList(unscaledWidth, unscaledHeight);
extraButton.setActualSize(unscaledHeight - 6, unscaledHeight - 6);
extraButton.move(unscaledWidth - extraButton.width - 3, (unscaledHeight - extraButton.height)/2);
}
]]>
</mx:Script>
</AccordionHeader>
</mx:Component>
</mx:headerRenderer>
<mx:HBox label="1"><Label text="Text 1"/></HBox>
<mx:HBox label="1"><Label text="Text 2"/></HBox>
<mx:HBox label="1"><Label text="Text 3"/></HBox>
</mx:Accordion>
| {
"language": "en",
"url": "https://stackoverflow.com/questions/11665",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Creating a development environment for SharePoint I haven't touched sharepoint in years. If I want to setup a development environment to get up to speed, what options do I have? I don't have an MSDN license, is there anyway I can get up and running for free? (for development only)
A: You need a Windows 2003 Server (or 2008 Server, but I have no experience with that), no way around that. You can then of course use Visual C# 2005 Express and SharePoint Services 3.0 if that's your target.
If you want to do development on Sharepoint 2007, you have to buy a Sharepoint 2007 license, which has a pretty hefty fee attached to it.
As for SQL, SQL Server 2005 Express works fine for development.
There is a good article on how to set up SharePoint on a single server:
http://blogs.msdn.com/martinkearn/archive/2007/03/28/how-to-install-sharepoint-server-2007-on-a-single-machine.aspx
You CAN use a Trial Version of Windows 2003 and SharePoint 2007 though if it's only needed for a limited time (I believe the trials run 180 days).
A: There is no way you can have a MOSS 2007/WSS 3.0 development environment for free, but a Microsoft Action Pack is cheap to get. :)
There is a nice blog to read to get the requirements and the steps to get a full MOSS 2007 image up and running here : How to Create a MOSS 2007 VPC Image: The Whole 9 Yards.
A: The action pack is fantastic value, you can use the Windows Server from that, as well as SharePoint Enterprise / Standard.
A: If you're just (re-)starting out in SharePoint development, there's a lot of value in just using WSS 3.0 and not (yet) using MOSS 2007. The basic vocabulary is going to be exactly the same at the development level, and you can accomplish a huge amount without ever feeling like you need MOSS to learn.
A: You could always download the Sharepoint trial VM here and then install the express version of visual studio.
A: You can download an Office SharePoint Server VHD from Microsoft. This allows you to run a virtual Windows Server & SharePoint Server on your personal machine using Virtual Server.
I recently went through this process and wrote a blog article describing how to setup a virtual Office SharePoint Server.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/11676",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: Design pattern for parsing binary file data and storing in a database Does anybody recommend a design pattern for taking a binary data file, parsing parts of it into objects and storing the resultant data into a database?
I think a similar pattern could be used for taking an XML or tab-delimited file and parse it into their representative objects.
A common data structure would include:
(Header) (DataElement1) (DataElement1SubData1) (DataElement1SubData2)(DataElement2) (DataElement2SubData1) (DataElement2SubData2) (EOF)
I think a good design would include a way to change out the parsing definition based on the file type or some defined metadata included in the header. So a Factory Pattern would be part of the overall design for the Parser part.
A: I fully agree with Orion Edwards, and it is usually the way I approach the problem; but lately I've been starting to see some patterns(!) to the madness.
For more complex tasks I usually use something like an interpreter (or a strategy) that uses some builder (or factory) to create each part of the data.
For streaming data, the entire parser would look something like an adapter, adapting from a stream object to an object stream (which usually is just a queue).
For your example there would probably be one builder for the complete data structure (from head to EOF) which internally uses builders for the internal data elements (fed by the interpreter). Once the EOF is encountered an object would be emitted.
However, objects created in a switch statement in some factory function is probably the simplest way for many lesser tasks. Also, I like keeping my data-objects immutable as you never know when someone shoves concurrency down your throat :)
A: *
*Just write your file parser, using whatever techniques come to mind
*Write lots of unit tests for it to make sure all your edge cases are covered
Once you've done this, you will actually have a reasonable idea of the problem/solution.
Right now you just have theories floating around in your head, most of which will turn out to be misguided.
Step 3: Refactor mercilessly. Your aim should be to delete about half of your code
You'll find that your code at the end will either resemble an existing design pattern, or you'll have created a new one. You'll then be qualified to answer this question :-)
A: The Strategy pattern is maybe one you want to look at. The strategy being the file parsing algorithm.
Then you want a separate strategy for database insertion.
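A minimal C# sketch of how those two strategies and a header-driven factory might fit together; every type name here (DataElement, IFileParser, IDataStore, BinaryV1Parser) and the assumed header layout are hypothetical placeholders rather than a definitive design:
using System;
using System.Collections.Generic;
using System.IO;
// One parsed record; the fields stand in for the real sub-data elements.
public class DataElement
{
    public string Name { get; set; }
    public byte[] Payload { get; set; }
}
// Strategy: each file layout gets its own parser implementation.
public interface IFileParser
{
    IEnumerable<DataElement> Parse(Stream input);
}
// A separate strategy for persistence keeps parsing storage-agnostic.
public interface IDataStore
{
    void Save(IEnumerable<DataElement> elements);
}
// Example parsing strategy for one assumed binary layout: the header holds an
// element count, and each element is a name plus a length-prefixed payload.
public class BinaryV1Parser : IFileParser
{
    public IEnumerable<DataElement> Parse(Stream input)
    {
        using (var reader = new BinaryReader(input))
        {
            int count = reader.ReadInt32();        // header: number of elements
            for (int i = 0; i < count; i++)
            {
                string name = reader.ReadString(); // element name
                int length = reader.ReadInt32();   // payload length
                yield return new DataElement { Name = name, Payload = reader.ReadBytes(length) };
            }
        }
    }
}
// Factory chooses the parsing strategy from metadata found in the header.
public static class ParserFactory
{
    public static IFileParser Create(string formatId)
    {
        switch (formatId)
        {
            case "BIN1": return new BinaryV1Parser();
            default: throw new NotSupportedException("Unknown format: " + formatId);
        }
    }
}
A caller reads the format identifier from the header, asks the factory for the matching parser, and hands the resulting elements to whatever IDataStore implementation writes to the database, so swapping a file format or a storage target never touches the other side.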
A: Use Lex and YACC. Unless you devote the next ten years exclusively to this subject, they will produce better and faster code every time.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/11680",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
} |
Q: Is there any kind of non text interface to MySQL? I have a MySQL query that returns a result with a single column of integers. Is there any way to get the MySQL C API to transfer this as actually integers rather than as ASCII text? For that matter is there a way to get MySQL to do /any/ of the API stuff as other than ASCII text. I'm thinking this would save a bit of time in sprintf/sscanf or whatever else is used as well as in bandwidth.
A: You're probably out of luck, to be honest. Looking at the MySQL C API (http://dev.mysql.com/doc/refman/5.0/en/mysql-fetch-row.html, http://dev.mysql.com/doc/refman/5.0/en/c-api-datatypes.html, look at MYSQL_ROW) there doesn't seem to be a mechanism for returning data in its actual type... the joys of using structs I guess.
You could always implement a wrapper which checks the field's type attribute (from the MYSQL_FIELD metadata, see http://dev.mysql.com/doc/refman/5.0/en/c-api-datatypes.html) and returns a C union, but that's probably poor advice; don't do that.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/11686",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: Design debate: what are good ways to store and manipulate versioned objects? I am intentionally leaving this quite vague at first. I'm looking for discussion and what issues are important more than I'm looking for hard answers.
I'm in the middle of designing an app that does something like portfolio management. The design I have so far is
*
*Problem: a problem that needs to be solved
*Solution: a proposed solution to one or more problems
*Relationship: a relationship among two problems, two solutions, or a problem and a solution. Further broken down into:
*
*Parent-child - some sort of categorization / tree hierarchy
*Overlap - the degree to which two solutions or two problems really address the same concept
*Addresses - the degree to which a problem addresses a solution
My question is about the temporal nature of these things. Problems crop up, then fade. Solutions have an expected resolution date, but that might be modified as they are developed. The degree of a relationship might change over time as problems and solutions evolve.
So, the question: what is the best design for versioning of these things so I can get both a current and an historical perspective of my portfolio?
Later: perhaps I should make this a more specific question, though @Eric Beard's answer is worth an up.
I've considered three database designs. I'll enough of each to show their drawbacks. My question is: which to pick, or can you think of something better?
1: Problems (and separately, Solutions) are self-referential in versioning.
table problems
int id | string name | text description | datetime created_at | int previous_version_id
foreign key previous_version_id -> problems.id
This is problematic because every time I want a new version, I have to duplicate the entire row, including that long description column.
2: Create a new Relationship type: Version.
table problems
int id | string name | text description | datetime created_at
This simply moves the relationship from the Problems and Solutions tables into the Relationships table. Same duplication problem, but perhaps a little "cleaner" since I already have an abstract Relationship concept.
3: Use a more Subversion-like structure; move all Problem and Solution attributes into a separate table and version them.
table problems
int id
table attributes
int id | int thing_id | string thing_type | string name | string value | datetime created_at | int previous_version_id
foreign key (thing_id, thing_type) -> problems.id or solutions.id
foreign key previous_version_id -> attributes.id
This means that to load the current version of a Problem or Solution I have to fetch all versions of the attribute, sort them by date and then use the most current. That might not be terrible. What seems really bad to me is that I can't type-check these attributes in the database. That value column has to be free-text. I can make the name column a reference into a separate attribute_names table that has a type column, but that doesn't force the correct type in the attributes table.
later still: response to @Eric Beard's comments about multi-table foreign keys:
Alas, what I've described is simplistic: there are only two types of Things (Problems and Solutions). I actually have about 9 or 10 different types of Things, so I'd have 9 or 10 columns of foreign keys under your strategy. I wanted to use single-table inheritance, but the Things have so little in common that it would be extremely wasteful to do combine them into one table.
A: Hmm, sounds kind of like this site...
As far as a database design would go, a versioning system kind of like SVN, where you never actually do any updates, just inserts (with a version number) when things change, might be what you need. This is called MVCC, Multiversion Concurrency Control.
A: @Gaius
foreign key (thing_id, thing_type) -> problems.id or solutions.id
Be careful with these kinds of "multidirectional" foreign keys. My experience has shown that query performance suffers dramatically when your join condition has to check the type before figuring out which table to join on. It doesn't seem as elegant but nullable
problem_id and solution_id
will work much better.
Of course, query performance will also suffer with an MVCC design when you have to add the check to get the latest version of a record. The tradeoff is that you never have to worry about contention with updates.
A: What do you think about this:
table problems
int id | string name | text description | datetime created_at
table problems_revisions
int revision | int id | string name | text description | datetime created_at
foreign key id -> problems.id
Before updates you have to perform an additional insert into the revision table. This additional insert is fast; however, it is the price you pay for
*
*efficient access to the current version - select problems as usual
*a schema that is intuitive and close to the reality you want to model
*joins between tables in your schema remain efficient
*using a revision number per business transaction you can do versioning over table records like SVN does over files.
A: I suppose there's
Option 4: the hybrid
Move the common Thing attributes into a single-inheritance table, then add a custom_attributes table. This makes foreign keys simpler, reduces duplication, and allows flexibility. It doesn't solve the problems of type-safety for the additional attributes. It also adds a little complexity since there are two ways for a Thing to have an attribute now.
If description and other large fields stay in the Things table, though, it also doesn't solve the duplication-space problem.
table things
int id | int type | string name | text description | datetime created_at | other common fields...
foreign key type -> thing_types.id
table custom_attributes
int id | int thing_id | string name | string value
foreign key thing_id -> things.id
A: It's a good idea to choose a data structure that makes common questions that you ask of the model easy to answer. It's most likely that you're interested in the current position most of the time. On occasion, you will want to drill into the history for particular problems and solutions.
I would have tables for problem, solution, and relationship that represent the current position. There would also be a problem_history, solution_history, etc table. These would be child tables of problem but also contain extra columns for VersionNumber and EffectiveDate. The key would be (ProblemId, VersionNumber).
When you update a problem, you would write the old values into the problem_history table. Point in time queries are therefore possible as you can pick out the problem_history record that is valid as-at a particular date.
Where I've done this before, I have also created a view to UNION problem and problem_history as this is sometimes useful in various queries.
Option 1 makes it difficult to query the current situation, as all your historic data is mixed in with your current data.
Option 3 is going to be bad for query performance and nasty to code against as you'll be accessing lots of rows for what should just be a simple query.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/11689",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: How can I get Unicode characters to display properly for the tooltip for the IMG ALT in IE7? I've got some Japanese in the ALT attribute, but the tooltip is showing me the ugly block characters in the tooltip. The rest of the content on the page renders correctly. So far, it seems to be limited to the tooltips.
A: This is because the font used in the tooltip doesn't include the characters you are trying to display. Try installing a font pack that includes those characters. I'm afraid you can't do much for your site's visitors other than implementing a tooltip yourself using JavaScript.
A: I'm not sure about the unicode issue but if you want the tooltip effect you should be using the title attribute, not alt.
Alt is for text you want screenreaders to speak, and it's what gets displayed if an image can't be loaded.
A: Where's your Japanese input coming from? It could be that it's in a non-unicode (e.g. http://en.wikipedia.org/wiki/JIS_X_0208) encoding, whereas your file is in unicode so the browser attempts to interpret the non-unicode characters as unicode and gets confused. I tried throwing together an example to reproduce your problem:
<img src="test.png" alt="日本語" />
The tooltip displays properly under IE7 with the Japanese language pack installed.
A: Do note that the alt attribute isn't intended to be a tooltip. Alt is for describing the image where the image itself is not available. If you want to use tooltips, use the title attribute instead.
A: Can you sanitize the alt text so that it doesn't have the characters in it, preferably by replacing the entire text with something useful (rather than just filtering the string)? That's not ideal, but neither is displaying broken characters, or telling your users to install a new font pack.
A: In IE and Firefox on Win2000/WinXP/Vista, with the Japanese Language support installed from Regional Options, this just works. On Win95/98/ME, it only worked on a Japanese OS, at least with IE, because of limitations in the Windows tooltip control in non-NT systems. (Regarding other answers which guide you to the title attribute: the same behavior applied with the title attribute).
However, it's possible that font linking/font mapping won't kick in if you haven't installed the language support, or if you've just copied some font to your fonts folder. It's also possible that your default font choice for tooltips doesn't support Japanese, though GDI font-linking fallback should kick in on Win2000 or above, unless the font lies about what it supports.
The "empty square" phenomenon is typically suggestive of a font mapping problem, though it's remotely possible that the encoding is wrong.
Are your users Japanese-speakers? Does this problem occur on a system with a Japanese default system locale?
| {
"language": "en",
"url": "https://stackoverflow.com/questions/11690",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
} |
Q: EFS encryption key pop up I'm getting notifications to back up my encryption key for EFS in Vista, however i haven't enabled bit locker or drive encryption.
Anyone know how to find out what files may be encrypted or have an explanation for why it would notify me?
A: To find out which files on your system have been encrypted with EFS, you can simply run this command:
CIPHER.EXE /U /N
A: Yes it's EFS:
[Window Title]
Encrypting File System
[Main Instruction]
Back up your file encryption certificate and key
[Content]
Creating this backup file helps you avoid permanently losing access to your encrypted files if the original certificate and key are lost or corrupted.
[Back up now (recommended)] [Back up later] [Never back up] [Cancel]
[Footer]
Why should I backup the certificate and key?
A: EFS encryption is typically achieved via the "Advanced" tab of the "File Properties" dialog and it's best to do it at the folder-level.
But on Vista I remember seeing this message on my new computer, definitely never having encrypted a single file. So I AGREE it's confusing to ask you to back up the key, until the FIRST USE of EFS. Windows-7 has never asked me, so probably that's the way it works in the future.
A: I just got this same message for the first time after using Windows 7 for many months. Running cipher.exe as noted above revealed that a font file I downloaded (Anonymous Pro) had the encryption attribute set (right-click the file, properties, General Tab, click Advanced). It also had security settings granting an unknown account read and execute permissions. (!) I don't know why a font file would have the encryption flag set.
If you just got this message out of the blue, perhaps it is in response to something you just downloaded.
A: Clippy noticed that you have sensitive information in your files and automatically encrypted them.
Are you sure it's for EFS? I've had things prompt me to back up my keys before, but I didn't know exactly what they were for. I was assuming it was like a DRM protected file or something. It was a while ago so I don't remember exactly what the specific details were. I never backed it up and haven't been locked out of anything.
A: I've got the same message after un-zipping DroidDraw (http://www.droiddraw.org/).
It's a normal (I think) zip file. Right click on it, extract all. The resulting folder/files were encrypted. Immediately Windows prompted me to back up the EFS keys.
Same behaviour on Win Vista and Win 7.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/11699",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14"
} |
Q: How can I create virtual machines as part of a build process using MSBuild and MS Virtual Server and/or Hyper-V Server Virtualization? What I would like to do is create a clean virtual machine image as the output of a build of an application.
So a new virtual machine would be created (from a template is fine, with the OS installed, and some base software installed) --- a new web site would be created in IIS, and the web app build output copied to a location on the virtual machine hard disk, and IIS configured correctly, the VM would start up and run.
I know there are MSBuild tasks to script all the administrative actions in IIS, but how do you script all the actions with Virtual machines? Specifically, creating a new virtual machine from a template, naming it uniquely, starting it, configuring it, etc...
Specifically I was wondering if anyone has successfully implemented any VM scripting as part of a build process.
Update: I assume with Hyper-V, there is a different set of libraries/APIs to script virtual machines, anyone played around with this? And anyone with real practical experience of doing something like this?
A: You can actually script a fair number of tasks in MS Virtual Server:
http://www.microsoft.com/technet/scriptcenter/scripts/vs/default.mspx?mfr=true
http://msdn.microsoft.com/en-us/library/aa368876(VS.85).aspx
Also Virtual PC guy has got a ton of stuff on his blog about scripting Virtual Server/PC and now Hyper-V here:
http://blogs.msdn.com/virtual_pc_guy/default.aspx
VMware has similar capabilities:
http://www.vmware.com/support/developer/scripting-API/
A: Check out the PowerShell Management Library for Hyper-V on CodePlex. Some features:
Finding a VM
Connecting to a VM
Discovering and manipulating Machine states
Backing up, exporting and snapshotting VMs
Adding and removing VMs, configuring motherboard settings.
Manipulating Disk controllers, drives and disk images
Manipulating Network Interface Cards
Working with VHD files
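Outside that library, Hyper-V itself is scriptable through its WMI provider in the root\virtualization namespace, so a custom build task can also drive it from managed code. A minimal sketch that only enumerates virtual machines, assuming the Msvm_ComputerSystem class exposed by that provider; it is illustrative rather than a complete provisioning step:
using System;
using System.Management; // add a reference to System.Management.dll
class ListHyperVMachines
{
    static void Main()
    {
        var scope = new ManagementScope(@"\\.\root\virtualization");
        // The host itself also appears as an Msvm_ComputerSystem, so filter on Caption.
        var query = new ObjectQuery(
            "SELECT * FROM Msvm_ComputerSystem WHERE Caption = 'Virtual Machine'");
        using (var searcher = new ManagementObjectSearcher(scope, query))
        {
            foreach (ManagementObject vm in searcher.Get())
            {
                Console.WriteLine("{0} (state {1})", vm["ElementName"], vm["EnabledState"]);
            }
        }
    }
}
Creating a machine from a template or changing its state goes through the provider's management service classes, which is what the PowerShell library above wraps, so calling that library (or a script built on it) from the build is usually the quicker route.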
| {
"language": "en",
"url": "https://stackoverflow.com/questions/11720",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "15"
} |
Q: In Visual Studio you must be a member of Debug Users or Administrators to start debugging. What if you are but it doesn't work? On my Windows XP machine, Visual Studio 2003, 2005 and 2008 all complain that I cannot start debugging my web application because I must either be a member of the Debug Users group or of the Administrators group. So, I am an Administrator and I added Debug Users just in case, and it still complains.
Short of reformatting my machine and starting over, has anyone encountered this and fixed it [with some undocumented command]?
A: Which users and/or groups are in your "Debug programs" right (under User Rights Assignment)? Maybe that setting got overridden by group policy (Daniel's answer), or just got out of whack for some reason. It should, obviously, include the "Debug Users" group.
A: We encountered an issue like this and found that it was a group policy issue. There's a group policy setting for debugging that needs to be enabled. It overrides the fact that you are in the right group.
A: You could try running "VsJITDebugger.exe -p <PID>" on the command line. I've had a simalar situation and been able to debug the application using the above.
"VsJITDebugger.exe /?" will show you all the options.
The PID can be found either in the task manager (view->Select Columns...) or Visual Studio's Attach to Process.
A: Awesome, I'd never really known about the "Administrative Tools -> Local Security Settings -> Local Policies -> User Rights Assignment" under XP. My "Debug programs" policy is set to "Administrators" only, yet trying to debug now just worked and this is several days after installing the .NET framework 3.5, so maybe that installation fixed things in the background.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/11724",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: How to intercept and cancel auto play from an application? I am developing an application to install a large number of data files from multiple DVDs. The application will prompt the user to insert the next disk, however Windows will automatically try to open that disk either in an explorer window or ask the user what to do with the new disk.
How can I intercept and cancel auto play messages from my application?
A: There are two approaches that I know of. The first and simplest is to register the special Windows message "QueryCancelAutoPlay" and simply return 1 when the message is handled. This only works for the current window, and not a background application.
The second approach requires inserting an object that implements the COM interface IQueryCancelAutoPlay COM interface into the Running Object Table.
A: Alternatively, you could just programmatically save the current state of autoplay and turn it off when your program starts, then restore the original state when your program closes. This would be a lot simpler. Check out the NoDriveTypeAutoRun key.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/11734",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Configurable Table Prefixes with a .Net OR/M? In a web application like wiki or forums or blogging software, it is often useful to store your data in a relational database. Since many hosting companies offer a single database with their hosting plans (with additional databases costing extra) it is very useful for your users when your database objects (tables, views, constraints, and stored procedures) have a common prefix. It is typical for applications aware of database scarcity to have a hard-coded table prefix. I want more, however. Specifically, I'd like to have a table prefix that users can designate—say in the web.config file (with an appropriate default, of course).
Since I hate coding CRUD operations by hand, I prefer to work through a competent OR/M and have used (and enjoyed) LINQ to SQL, Subsonic, and ADO.Net. I'm having some thrash in a new project, however, when it comes to putting a table prefix in a user's web.config file. Are there any .Net-based OR/M products that can handle this scenario elegantly?
The best I have been able to come up with so far is using LINQ to SQL with an external mapping file that I'd have to update somehow based on an as-yet hypothetical web.config setting.
Anyone have a better solution? I tried to make it happen in Entity Framework, but that turned into a mess quickly. (Due to my unfamiliarity with EF? Possibly.) How about SubSonic? Does it have an option to apply a table prefix besides at code generation time?
A: I've now researched what it takes to do this in both Entity Framework and LINQ to SQL and documented the steps required in each. It's much longer than answers here tend to be, so I'll be content with a link to the answer rather than duplicating it here. It's relatively involved for each, but LINQ to SQL is the more flexible solution and also the easiest to implement.
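For reference, the heart of the LINQ to SQL approach can be sketched like this: read the prefix from web.config, patch the Table names in the external mapping XML, and build the DataContext from the patched mapping. The appSettings key, the mapping file, and the assumption that mapped names carry no schema qualifier are placeholders for illustration:
using System.Configuration;
using System.Data.Linq.Mapping;
using System.Xml.Linq;
public static class PrefixedMapping
{
    public static MappingSource Load(string mappingFilePath)
    {
        // e.g. <add key="TablePrefix" value="myapp_" /> in web.config
        string prefix = ConfigurationManager.AppSettings["TablePrefix"] ?? "";
        XDocument map = XDocument.Load(mappingFilePath);
        XNamespace ns = map.Root.Name.Namespace;
        // Prepend the prefix to every mapped table name. A fuller version would
        // insert it after any schema qualifier such as "dbo.".
        foreach (XElement table in map.Descendants(ns + "Table"))
        {
            table.SetAttributeValue("Name", prefix + (string)table.Attribute("Name"));
        }
        return XmlMappingSource.FromXml(map.ToString());
    }
}
The generated context is then constructed with new MyDataContext(connectionString, PrefixedMapping.Load(path)), so the prefix never appears in code and each user sets it once in web.config.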
A: LightSpeed allows you to specify an INamingStrategy that lets you resolve table names dynamically at runtime.
A: Rather than using table prefixes, have an application user that belongs to a schema (in MS SQL 2005 or above).
This means that instead of:
select * from dbo.clientAProduct
select * from dbo.clientBProduct
You have:
select * from clientA.Product
select * from clientB.Product
| {
"language": "en",
"url": "https://stackoverflow.com/questions/11740",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: best way to persist data in .NET Web Service I have a web service that queries data from this json file, but I don't want the web service to have to access the file every time. I'm thinking that maybe I can store the data somewhere else (maybe in memory) so the web service can just get the data from there the next time it's trying to query the same data. I kinda understand what needs to be done but I'm just not sure how to actually do it. How do we persist data in a web service?
Update:
Both suggestions, caching and using static variables, look good. Maybe I should just use both so I can look at one first, and if it's not in there, use the second one, if it's not in there either, then I'll look at the json file.
A: Extending on Ice^^Heat's idea, you might want to think about where you would cache - either cache the contents of the json file in the Application cache like so:
Context.Cache.Insert("foo", _
Foo, _
Nothing, _
DateAdd(DateInterval.Minute, 30, Now()), _
System.Web.Caching.Cache.NoSlidingExpiration)
And then generate the results you need from that on every hit. Alternatively you can cache the webservice output on the function definition:
<WebMethod(CacheDuration:=60)> _
Public Function HelloWorld() As String
Return "Hello World"
End Function
Info gathered from XML Web Service Caching Strategies.
A: What about using a global or static collection object? Is that a good idea?
A: To echo klughing, if your JSON data isn't expected to change often, I think the simplest way to cache it is to use a static collection of some kind - perhaps a DataTable.
First, parse your JSON data into a System.Data.DataTable, and make it static in your Web service class. Then, access the static object. The data should stay cached until IIS recycles your application pool.
public class WebServiceClass
{
private static DataTable _myData = null;
public static DataTable MyData
{
get
{
if (_myData == null)
{
_myData = ParseJsonDataReturnDT();
}
return _myData;
}
}
[WebMethod]
public string GetData()
{
//... do some stuff with MyData and return a string ...
return MyData.Rows[0]["MyColumn"].ToString();
}
}
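Putting the update's idea together, checking the ASP.NET cache first, falling back to the static field, and only parsing the json file when neither has the data, might look roughly like this inside the web service class. It reuses the hypothetical ParseJsonDataReturnDT helper from above, and in practice either layer on its own is usually enough:
// additional members for WebServiceClass; needs System.Web and System.Web.Caching
private static DataTable _fallbackData;
private static DataTable GetCachedData()
{
    // 1. ASP.NET cache: fast, but entries can expire or be evicted under memory pressure.
    DataTable data = HttpRuntime.Cache["JsonData"] as DataTable;
    if (data != null)
        return data;
    // 2. Static field: lives until the application pool recycles.
    if (_fallbackData == null)
    {
        // 3. Last resort: read and parse the json file again.
        _fallbackData = ParseJsonDataReturnDT();
    }
    // Re-prime the cache so the next call takes the fast path.
    HttpRuntime.Cache.Insert("JsonData", _fallbackData, null,
        DateTime.Now.AddMinutes(30), Cache.NoSlidingExpiration);
    return _fallbackData;
}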
A: ASP.NET caching works just as well with Web services so you can implement regular caching as explained here: http://msdn.microsoft.com/en-us/library/aa478965.aspx
| {
"language": "en",
"url": "https://stackoverflow.com/questions/11761",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: Why does a bad password cause "Padding is invalid and cannot be removed"? I needed some simple string encryption, so I wrote the following code (with a great deal of "inspiration" from here):
// create and initialize a crypto algorithm
private static SymmetricAlgorithm getAlgorithm(string password) {
SymmetricAlgorithm algorithm = Rijndael.Create();
Rfc2898DeriveBytes rdb = new Rfc2898DeriveBytes(
password, new byte[] {
0x53,0x6f,0x64,0x69,0x75,0x6d,0x20, // salty goodness
0x43,0x68,0x6c,0x6f,0x72,0x69,0x64,0x65
}
);
algorithm.Padding = PaddingMode.ISO10126;
algorithm.Key = rdb.GetBytes(32);
algorithm.IV = rdb.GetBytes(16);
return algorithm;
}
/*
* encryptString
* provides simple encryption of a string, with a given password
*/
public static string encryptString(string clearText, string password) {
SymmetricAlgorithm algorithm = getAlgorithm(password);
byte[] clearBytes = System.Text.Encoding.Unicode.GetBytes(clearText);
MemoryStream ms = new MemoryStream();
CryptoStream cs = new CryptoStream(ms, algorithm.CreateEncryptor(), CryptoStreamMode.Write);
cs.Write(clearBytes, 0, clearBytes.Length);
cs.Close();
return Convert.ToBase64String(ms.ToArray());
}
/*
* decryptString
* provides simple decryption of a string, with a given password
*/
public static string decryptString(string cipherText, string password) {
SymmetricAlgorithm algorithm = getAlgorithm(password);
byte[] cipherBytes = Convert.FromBase64String(cipherText);
MemoryStream ms = new MemoryStream();
CryptoStream cs = new CryptoStream(ms, algorithm.CreateDecryptor(), CryptoStreamMode.Write);
cs.Write(cipherBytes, 0, cipherBytes.Length);
cs.Close();
return System.Text.Encoding.Unicode.GetString(ms.ToArray());
}
The code appears to work fine, except that when decrypting data with an incorrect key, I get a CryptographicException - "Padding is invalid and cannot be removed" - on the cs.Close() line in decryptString.
example code:
string password1 = "password";
string password2 = "letmein";
string startClearText = "The quick brown fox jumps over the lazy dog";
string cipherText = encryptString(startClearText, password1);
string endClearText = decryptString(cipherText, password2); // exception thrown
My question is, is this to be expected? I would have thought that decrypting with the wrong password would just result in nonsense output, rather than an exception.
A: If you want your usage to be correct, you should add authentication to your ciphertext so that you can verify that it is the correct pasword or that the ciphertext hasn't been modified. The padding you are using ISO10126 will only throw an exception if the last byte doesn't decrypt as one of 16 valid values for padding (0x01-0x10). So you have a 1/16 chance of it NOT throwing the exception with the wrong password, where if you authenticate it you have a deterministic way to tell if your decryption is valid.
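To illustrate the idea, here is a minimal sketch of encrypt-then-MAC verification (this is not the exact layout used by the snippet linked below; the class and key names are placeholders):
using System.Security.Cryptography;
static class MacCheck {
    // Recompute an HMAC over the ciphertext with a MAC key derived from the
    // password, and compare it to the tag stored alongside the ciphertext.
    // Only attempt decryption when the tags match.
    public static bool TagIsValid(byte[] macKey, byte[] cipherBytes, byte[] storedTag) {
        using (var hmac = new HMACSHA256(macKey)) {
            byte[] computed = hmac.ComputeHash(cipherBytes);
            if (computed.Length != storedTag.Length) return false;
            int diff = 0;
            for (int i = 0; i < computed.Length; i++)
                diff |= computed[i] ^ storedTag[i]; // constant-time comparison
            return diff == 0;
        }
    }
}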
Using crypto APIs, while seemingly easy, actually makes it rather easy to make mistakes. For example, you use a fixed salt for your key and IV derivation, which means every ciphertext encrypted with the same password will reuse its IV with that key; that breaks semantic security with CBC mode, since the IV needs to be both unpredictable and unique for a given key.
Because it is so easy to make mistakes, I have a code snippet that I try to keep reviewed and up to date (comments, issues welcome):
Modern Examples of Symmetric Authenticated Encryption of a string C#.
If you use its AESThenHMAC.AesSimpleDecryptWithPassword(ciphertext, password), then when the wrong password is used, null is returned; if the ciphertext or IV has been modified post-encryption, null is returned. You will never get junk data back, or a padding exception.
A: If you've ruled out key-mismatch, then besides FlushFinalBlock() (see Yaniv's answer), calling Close() on the CryptoStream will also suffice.
If you are cleaning up resources strictly with using blocks, be sure to nest the block for the CryptoStream itself:
using (MemoryStream ms = new MemoryStream())
using (var enc = RijndaelAlg.CreateEncryptor())
{
using (CryptoStream encStream = new CryptoStream(ms, enc, CryptoStreamMode.Write))
{
encStream.Write(bar2, 0, bar2.Length);
} // implicit close
byte[] encArray = ms.ToArray();
}
I've been bitten by this (or similar):
using (MemoryStream ms = new MemoryStream())
using (var enc = RijndaelAlg.CreateEncryptor())
using (CryptoStream encStream = new CryptoStream(ms, enc, CryptoStreamMode.Write))
{
encStream.Write(bar2, 0, bar2.Length);
byte[] encArray = ms.ToArray();
} // implicit close -- too late!
A: Yes, this is to be expected, or at least, it's exactly what happens when our crypto routines get non-decryptable data
A: Another reason for the exception might be a race condition between several threads using the decryption logic - native implementations of ICryptoTransform are not thread-safe (e.g. SymmetricAlgorithm), so it should be put in an exclusive section, e.g. using lock.
Please refer here for more details: http://www.make-awesome.com/2011/07/system-security-cryptography-and-thread-safety/
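As a rough sketch of that (only relevant if you actually share one SymmetricAlgorithm/ICryptoTransform instance between threads; the key and IV are assumed to have been set elsewhere):
private static readonly object _decryptLock = new object();
private static readonly SymmetricAlgorithm _sharedAlgorithm = Rijndael.Create(); // Key/IV assumed set elsewhere

public static byte[] DecryptShared(byte[] cipherBytes) {
    // Native ICryptoTransform implementations are not thread-safe, so take a
    // lock around any use of the shared algorithm/transform.
    lock (_decryptLock) {
        using (ICryptoTransform decryptor = _sharedAlgorithm.CreateDecryptor())
        using (MemoryStream ms = new MemoryStream())
        using (CryptoStream cs = new CryptoStream(ms, decryptor, CryptoStreamMode.Write)) {
            cs.Write(cipherBytes, 0, cipherBytes.Length);
            cs.FlushFinalBlock();
            return ms.ToArray();
        }
    }
}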
A: Although this has already been answered, I think it would be a good idea to explain why it is to be expected.
A padding scheme is usually applied because most cryptographic filters are not semantically secure and to prevent some forms of cryptoattacks. For example, usually in RSA the OAEP padding scheme is used, which prevents some sorts of attacks (such as a chosen plaintext attack or blinding).
A padding scheme appends some (usually) random garbage to the message m before the message is sent. In the OAEP method, for example, two Oracles are used (this is a simplistic explanation):
*
*Given the size of the modulus, you pad k1 bits with 0 and k0 bits with a random number.
*Then, by applying some transformation to the message, you obtain the padded message which is encrypted and sent.
That provides you with randomization for the messages and with a way to test whether the message is garbage or not. Because the padding scheme is reversible, when you decrypt the message you can't say anything about the integrity of the message itself, but you can, in fact, make some assertion about the padding; thus you can know whether the message has been correctly decrypted, or whether you're doing something wrong (i.e. someone has tampered with the message or you're using the wrong key).
A: I experienced a similar "Padding is invalid and cannot be removed." exception, but in my case the key IV and padding were correct.
It turned out that flushing the crypto stream is all that was missing.
Like this:
MemoryStream msr3 = new MemoryStream();
CryptoStream encStream = new CryptoStream(msr3, RijndaelAlg.CreateEncryptor(), CryptoStreamMode.Write);
encStream.Write(bar2, 0, bar2.Length);
// unless we flush the stream we would get "Padding is invalid and cannot be removed." exception when decoding
encStream.FlushFinalBlock();
byte[] bar3 = msr3.ToArray();
A: There may be some unread bytes in the CryptoStream. Closing before reading the stream completely was causing the error in my program.
A: I had a similar problem; the issue in the decrypt method was initializing an empty memory stream. It worked when I initialized it with the cipher text byte array like this:
MemoryStream ms = new MemoryStream(cipherText)
A: The answer updated by the user "atconway" worked for me.
The problem was not with the padding but the key which was different during encryption and decryption.
The key and IV should be the same when encrypting and decrypting the same value.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/11762",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "40"
} |
Q: Publishing to IIS - Best Practices I'm not new to web publishing, BUT I am new to publishing against a web site that is frequently used. Previously, the apps on this server were not hit very often, but we're rolling out a high demand application. So, what is the best practice for publishing to a live web server?
*
*Is it best to wait until the middle
of the night when people won't be on
it (Yes, I can pretty much rely on
that -- it's an intranet and
therefore will have times of
non-use)
*Publish when new updates are made to
the trunk (dependent on build
success of course)
*If 2 is true, then that seems bad if someone is using that specific page or DLL and it gets overwritten.
...I'm sure there are lots of great places for this kind of thing, but I didn't use the right google search terms.
A:
@Nick DeVore wrote:
If 2 is true, then that seems bad if
someone is using that specific page or
DLL and it gets overwritten.
It's not really an issue if you're using the ASP.NET stack (WebForms, MVC or rolling your own) because all your aspx files get compiled and are therefore not touched by the webserver. The /bin/ folder is completely shadow-copied somewhere else, so the libraries inside are not used by the webserver either.
IIS will wait until all requests are done (there is some timeout though) and then will proceed with compilation (if needed) and a restart of the AppDomain. If only a few files have changed, there won't even be an AppDomain restart. IIS will load the new assemblies (or compiled aspx/asmx/ascx files) into the existing AppDomain.
@Nick DeVore wrote:
Help me understand this a little bit
more. Point me to the place where this
is explained from Microsoft. Thanks!
Try googling for the "IIS AppDomain" keywords. I found What ASP.NET Programmers Should Know About Application Domains.
A: We do most of our updates in the wee small hours.
Handy hint, if this is an ASP.NET site, whatever time of the day you roll out, drop in an App_Offline.htm file with a message explaining to users that the site is down for maintenance.
Scott Guthrie has more info here:
http://weblogs.asp.net/scottgu/archive/2006/04/09/442332.aspx
| {
"language": "en",
"url": "https://stackoverflow.com/questions/11764",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: Browse for a directory in C# How can I present a control to the user that allows him/her to select a directory?
There doesn't seem to be any native .net controls which do this?
A: string folderPath = "";
FolderBrowserDialog folderBrowserDialog1 = new FolderBrowserDialog();
if (folderBrowserDialog1.ShowDialog() == DialogResult.OK) {
folderPath = folderBrowserDialog1.SelectedPath;
}
A: The FolderBrowserDialog class is the best option.
A: You could just use the FolderBrowserDialog class from the System.Windows.Forms namespace.
A: Note: there is no guarantee this code will work in future versions of the .Net framework. Using private .Net framework internals as done here through reflection is probably not good overall. Use the interop solution mentioned at the bottom, as the Windows API is less likely to change.
If you are looking for a Folder picker that looks more like the Windows 7 dialog, with the ability to copy and paste from a textbox at the bottom and the navigation pane on the left with favorites and common locations, then you can get access to that in a very lightweight way.
The FolderBrowserDialog UI is very minimal:
But you can have this instead:
Here's a class that opens a Vista-style folder picker using the .Net private IFileDialog interface, without directly using interop in the code (.Net takes care of that for you). It falls back to the pre-Vista dialog if not in a high enough Windows version. Should work in Windows 7, 8, 9, 10 and higher (theoretically).
using System;
using System.Reflection;
using System.Windows.Forms;
namespace MyCoolCompany.Shuriken {
/// <summary>
/// Present the Windows Vista-style open file dialog to select a folder. Fall back for older Windows Versions
/// </summary>
public class FolderSelectDialog {
private string _initialDirectory;
private string _title;
private string _fileName = "";
public string InitialDirectory {
get { return string.IsNullOrEmpty(_initialDirectory) ? Environment.CurrentDirectory : _initialDirectory; }
set { _initialDirectory = value; }
}
public string Title {
get { return _title ?? "Select a folder"; }
set { _title = value; }
}
public string FileName { get { return _fileName; } }
public bool Show() { return Show(IntPtr.Zero); }
/// <param name="hWndOwner">Handle of the control or window to be the parent of the file dialog</param>
/// <returns>true if the user clicks OK</returns>
public bool Show(IntPtr hWndOwner) {
var result = Environment.OSVersion.Version.Major >= 6
? VistaDialog.Show(hWndOwner, InitialDirectory, Title)
: ShowXpDialog(hWndOwner, InitialDirectory, Title);
_fileName = result.FileName;
return result.Result;
}
private struct ShowDialogResult {
public bool Result { get; set; }
public string FileName { get; set; }
}
private static ShowDialogResult ShowXpDialog(IntPtr ownerHandle, string initialDirectory, string title) {
var folderBrowserDialog = new FolderBrowserDialog {
Description = title,
SelectedPath = initialDirectory,
ShowNewFolderButton = false
};
var dialogResult = new ShowDialogResult();
if (folderBrowserDialog.ShowDialog(new WindowWrapper(ownerHandle)) == DialogResult.OK) {
dialogResult.Result = true;
dialogResult.FileName = folderBrowserDialog.SelectedPath;
}
return dialogResult;
}
private static class VistaDialog {
private const string c_foldersFilter = "Folders|\n";
private const BindingFlags c_flags = BindingFlags.Instance | BindingFlags.Public | BindingFlags.NonPublic;
private readonly static Assembly s_windowsFormsAssembly = typeof(FileDialog).Assembly;
private readonly static Type s_iFileDialogType = s_windowsFormsAssembly.GetType("System.Windows.Forms.FileDialogNative+IFileDialog");
private readonly static MethodInfo s_createVistaDialogMethodInfo = typeof(OpenFileDialog).GetMethod("CreateVistaDialog", c_flags);
private readonly static MethodInfo s_onBeforeVistaDialogMethodInfo = typeof(OpenFileDialog).GetMethod("OnBeforeVistaDialog", c_flags);
private readonly static MethodInfo s_getOptionsMethodInfo = typeof(FileDialog).GetMethod("GetOptions", c_flags);
private readonly static MethodInfo s_setOptionsMethodInfo = s_iFileDialogType.GetMethod("SetOptions", c_flags);
private readonly static uint s_fosPickFoldersBitFlag = (uint) s_windowsFormsAssembly
.GetType("System.Windows.Forms.FileDialogNative+FOS")
.GetField("FOS_PICKFOLDERS")
.GetValue(null);
private readonly static ConstructorInfo s_vistaDialogEventsConstructorInfo = s_windowsFormsAssembly
.GetType("System.Windows.Forms.FileDialog+VistaDialogEvents")
.GetConstructor(c_flags, null, new[] { typeof(FileDialog) }, null);
private readonly static MethodInfo s_adviseMethodInfo = s_iFileDialogType.GetMethod("Advise");
private readonly static MethodInfo s_unAdviseMethodInfo = s_iFileDialogType.GetMethod("Unadvise");
private readonly static MethodInfo s_showMethodInfo = s_iFileDialogType.GetMethod("Show");
public static ShowDialogResult Show(IntPtr ownerHandle, string initialDirectory, string title) {
var openFileDialog = new OpenFileDialog {
AddExtension = false,
CheckFileExists = false,
DereferenceLinks = true,
Filter = c_foldersFilter,
InitialDirectory = initialDirectory,
Multiselect = false,
Title = title
};
var iFileDialog = s_createVistaDialogMethodInfo.Invoke(openFileDialog, new object[] { });
s_onBeforeVistaDialogMethodInfo.Invoke(openFileDialog, new[] { iFileDialog });
s_setOptionsMethodInfo.Invoke(iFileDialog, new object[] { (uint) s_getOptionsMethodInfo.Invoke(openFileDialog, new object[] { }) | s_fosPickFoldersBitFlag });
var adviseParametersWithOutputConnectionToken = new[] { s_vistaDialogEventsConstructorInfo.Invoke(new object[] { openFileDialog }), 0U };
s_adviseMethodInfo.Invoke(iFileDialog, adviseParametersWithOutputConnectionToken);
try {
int retVal = (int) s_showMethodInfo.Invoke(iFileDialog, new object[] { ownerHandle });
return new ShowDialogResult {
Result = retVal == 0,
FileName = openFileDialog.FileName
};
}
finally {
s_unAdviseMethodInfo.Invoke(iFileDialog, new[] { adviseParametersWithOutputConnectionToken[1] });
}
}
}
// Wrap an IWin32Window around an IntPtr
private class WindowWrapper : IWin32Window {
private readonly IntPtr _handle;
public WindowWrapper(IntPtr handle) { _handle = handle; }
public IntPtr Handle { get { return _handle; } }
}
}
}
I developed this as a cleaned up version of .NET Win 7-style folder select dialog by Bill Seddon of lyquidity.com (I have no affiliation). I wrote my own because his solution requires an additional Reflection class that isn't needed for this focused purpose, uses exception-based flow control, and doesn't cache the results of its reflection calls. Note that the nested static VistaDialog class is so that its static reflection variables don't try to get populated if the Show method is never called.
It is used like so in a Windows Form:
var dialog = new FolderSelectDialog {
InitialDirectory = musicFolderTextBox.Text,
Title = "Select a folder to import music from"
};
if (dialog.Show(Handle)) {
musicFolderTextBox.Text = dialog.FileName;
}
You can of course play around with its options and what properties it exposes. For example, it allows multiselect in the Vista-style dialog.
Also, please note that Simon Mourier gave an answer that shows how to do the exact same job using interop against the Windows API directly, though his version would have to be supplemented to use the older style dialog if in an older version of Windows. Unfortunately, I hadn't found his post yet when I worked up my solution. Name your poison!
A: Please don't try and roll your own with a TreeView/DirectoryInfo class. For one thing there are many nice features you get for free (icons/right-click/networks) by using SHBrowseForFolder. For another, there are edge cases/catches you will likely not be aware of.
A: Or, even better, you can put this code in a class file
using System;
using System.IO;
using System.Runtime.CompilerServices;
using System.Runtime.InteropServices;
using System.Windows.Forms;
internal class OpenFolderDialog : IDisposable {
/// <summary>
/// Gets/sets folder in which dialog will be open.
/// </summary>
public string InitialFolder { get; set; }
/// <summary>
/// Gets/sets directory in which dialog will be open if there is no recent directory available.
/// </summary>
public string DefaultFolder { get; set; }
/// <summary>
/// Gets selected folder.
/// </summary>
public string Folder { get; private set; }
internal DialogResult ShowDialog(IWin32Window owner) {
if (Environment.OSVersion.Version.Major >= 6) {
return ShowVistaDialog(owner);
} else {
return ShowLegacyDialog(owner);
}
}
private DialogResult ShowVistaDialog(IWin32Window owner) {
var frm = (NativeMethods.IFileDialog)(new NativeMethods.FileOpenDialogRCW());
uint options;
frm.GetOptions(out options);
options |= NativeMethods.FOS_PICKFOLDERS | NativeMethods.FOS_FORCEFILESYSTEM | NativeMethods.FOS_NOVALIDATE | NativeMethods.FOS_NOTESTFILECREATE | NativeMethods.FOS_DONTADDTORECENT;
frm.SetOptions(options);
if (this.InitialFolder != null) {
NativeMethods.IShellItem directoryShellItem;
var riid = new Guid("43826D1E-E718-42EE-BC55-A1E261C37BFE"); //IShellItem
if (NativeMethods.SHCreateItemFromParsingName(this.InitialFolder, IntPtr.Zero, ref riid, out directoryShellItem) == NativeMethods.S_OK) {
frm.SetFolder(directoryShellItem);
}
}
if (this.DefaultFolder != null) {
NativeMethods.IShellItem directoryShellItem;
var riid = new Guid("43826D1E-E718-42EE-BC55-A1E261C37BFE"); //IShellItem
if (NativeMethods.SHCreateItemFromParsingName(this.DefaultFolder, IntPtr.Zero, ref riid, out directoryShellItem) == NativeMethods.S_OK) {
frm.SetDefaultFolder(directoryShellItem);
}
}
if (frm.Show(owner.Handle) == NativeMethods.S_OK) {
NativeMethods.IShellItem shellItem;
if (frm.GetResult(out shellItem) == NativeMethods.S_OK) {
IntPtr pszString;
if (shellItem.GetDisplayName(NativeMethods.SIGDN_FILESYSPATH, out pszString) == NativeMethods.S_OK) {
if (pszString != IntPtr.Zero) {
try {
this.Folder = Marshal.PtrToStringAuto(pszString);
return DialogResult.OK;
} finally {
Marshal.FreeCoTaskMem(pszString);
}
}
}
}
}
return DialogResult.Cancel;
}
private DialogResult ShowLegacyDialog(IWin32Window owner) {
using (var frm = new SaveFileDialog()) {
frm.CheckFileExists = false;
frm.CheckPathExists = true;
frm.CreatePrompt = false;
frm.Filter = "|" + Guid.Empty.ToString();
frm.FileName = "any";
if (this.InitialFolder != null) { frm.InitialDirectory = this.InitialFolder; }
frm.OverwritePrompt = false;
frm.Title = "Select Folder";
frm.ValidateNames = false;
if (frm.ShowDialog(owner) == DialogResult.OK) {
this.Folder = Path.GetDirectoryName(frm.FileName);
return DialogResult.OK;
} else {
return DialogResult.Cancel;
}
}
}
public void Dispose() { } //just to have possibility of Using statement.
}
internal static class NativeMethods {
#region Constants
public const uint FOS_PICKFOLDERS = 0x00000020;
public const uint FOS_FORCEFILESYSTEM = 0x00000040;
public const uint FOS_NOVALIDATE = 0x00000100;
public const uint FOS_NOTESTFILECREATE = 0x00010000;
public const uint FOS_DONTADDTORECENT = 0x02000000;
public const uint S_OK = 0x0000;
public const uint SIGDN_FILESYSPATH = 0x80058000;
#endregion
#region COM
[ComImport, ClassInterface(ClassInterfaceType.None), TypeLibType(TypeLibTypeFlags.FCanCreate), Guid("DC1C5A9C-E88A-4DDE-A5A1-60F82A20AEF7")]
internal class FileOpenDialogRCW { }
[ComImport(), Guid("42F85136-DB7E-439C-85F1-E4075D135FC8"), InterfaceType(ComInterfaceType.InterfaceIsIUnknown)]
internal interface IFileDialog {
[MethodImpl(MethodImplOptions.InternalCall, MethodCodeType = MethodCodeType.Runtime)]
[PreserveSig()]
uint Show([In, Optional] IntPtr hwndOwner); //IModalWindow
[MethodImpl(MethodImplOptions.InternalCall, MethodCodeType = MethodCodeType.Runtime)]
uint SetFileTypes([In] uint cFileTypes, [In, MarshalAs(UnmanagedType.LPArray)] IntPtr rgFilterSpec);
[MethodImpl(MethodImplOptions.InternalCall, MethodCodeType = MethodCodeType.Runtime)]
uint SetFileTypeIndex([In] uint iFileType);
[MethodImpl(MethodImplOptions.InternalCall, MethodCodeType = MethodCodeType.Runtime)]
uint GetFileTypeIndex(out uint piFileType);
[MethodImpl(MethodImplOptions.InternalCall, MethodCodeType = MethodCodeType.Runtime)]
uint Advise([In, MarshalAs(UnmanagedType.Interface)] IntPtr pfde, out uint pdwCookie);
[MethodImpl(MethodImplOptions.InternalCall, MethodCodeType = MethodCodeType.Runtime)]
uint Unadvise([In] uint dwCookie);
[MethodImpl(MethodImplOptions.InternalCall, MethodCodeType = MethodCodeType.Runtime)]
uint SetOptions([In] uint fos);
[MethodImpl(MethodImplOptions.InternalCall, MethodCodeType = MethodCodeType.Runtime)]
uint GetOptions(out uint fos);
[MethodImpl(MethodImplOptions.InternalCall, MethodCodeType = MethodCodeType.Runtime)]
void SetDefaultFolder([In, MarshalAs(UnmanagedType.Interface)] IShellItem psi);
[MethodImpl(MethodImplOptions.InternalCall, MethodCodeType = MethodCodeType.Runtime)]
uint SetFolder([In, MarshalAs(UnmanagedType.Interface)] IShellItem psi);
[MethodImpl(MethodImplOptions.InternalCall, MethodCodeType = MethodCodeType.Runtime)]
uint GetFolder([MarshalAs(UnmanagedType.Interface)] out IShellItem ppsi);
[MethodImpl(MethodImplOptions.InternalCall, MethodCodeType = MethodCodeType.Runtime)]
uint GetCurrentSelection([MarshalAs(UnmanagedType.Interface)] out IShellItem ppsi);
[MethodImpl(MethodImplOptions.InternalCall, MethodCodeType = MethodCodeType.Runtime)]
uint SetFileName([In, MarshalAs(UnmanagedType.LPWStr)] string pszName);
[MethodImpl(MethodImplOptions.InternalCall, MethodCodeType = MethodCodeType.Runtime)]
uint GetFileName([MarshalAs(UnmanagedType.LPWStr)] out string pszName);
[MethodImpl(MethodImplOptions.InternalCall, MethodCodeType = MethodCodeType.Runtime)]
uint SetTitle([In, MarshalAs(UnmanagedType.LPWStr)] string pszTitle);
[MethodImpl(MethodImplOptions.InternalCall, MethodCodeType = MethodCodeType.Runtime)]
uint SetOkButtonLabel([In, MarshalAs(UnmanagedType.LPWStr)] string pszText);
[MethodImpl(MethodImplOptions.InternalCall, MethodCodeType = MethodCodeType.Runtime)]
uint SetFileNameLabel([In, MarshalAs(UnmanagedType.LPWStr)] string pszLabel);
[MethodImpl(MethodImplOptions.InternalCall, MethodCodeType = MethodCodeType.Runtime)]
uint GetResult([MarshalAs(UnmanagedType.Interface)] out IShellItem ppsi);
[MethodImpl(MethodImplOptions.InternalCall, MethodCodeType = MethodCodeType.Runtime)]
uint AddPlace([In, MarshalAs(UnmanagedType.Interface)] IShellItem psi, uint fdap);
[MethodImpl(MethodImplOptions.InternalCall, MethodCodeType = MethodCodeType.Runtime)]
uint SetDefaultExtension([In, MarshalAs(UnmanagedType.LPWStr)] string pszDefaultExtension);
[MethodImpl(MethodImplOptions.InternalCall, MethodCodeType = MethodCodeType.Runtime)]
uint Close([MarshalAs(UnmanagedType.Error)] uint hr);
[MethodImpl(MethodImplOptions.InternalCall, MethodCodeType = MethodCodeType.Runtime)]
uint SetClientGuid([In] ref Guid guid);
[MethodImpl(MethodImplOptions.InternalCall, MethodCodeType = MethodCodeType.Runtime)]
uint ClearClientData();
[MethodImpl(MethodImplOptions.InternalCall, MethodCodeType = MethodCodeType.Runtime)]
uint SetFilter([MarshalAs(UnmanagedType.Interface)] IntPtr pFilter);
}
[ComImport, Guid("43826D1E-E718-42EE-BC55-A1E261C37BFE"), InterfaceType(ComInterfaceType.InterfaceIsIUnknown)]
internal interface IShellItem {
[MethodImpl(MethodImplOptions.InternalCall, MethodCodeType = MethodCodeType.Runtime)]
uint BindToHandler([In] IntPtr pbc, [In] ref Guid rbhid, [In] ref Guid riid, [Out, MarshalAs(UnmanagedType.Interface)] out IntPtr ppvOut);
[MethodImpl(MethodImplOptions.InternalCall, MethodCodeType = MethodCodeType.Runtime)]
uint GetParent([MarshalAs(UnmanagedType.Interface)] out IShellItem ppsi);
[MethodImpl(MethodImplOptions.InternalCall, MethodCodeType = MethodCodeType.Runtime)]
uint GetDisplayName([In] uint sigdnName, out IntPtr ppszName);
[MethodImpl(MethodImplOptions.InternalCall, MethodCodeType = MethodCodeType.Runtime)]
uint GetAttributes([In] uint sfgaoMask, out uint psfgaoAttribs);
[MethodImpl(MethodImplOptions.InternalCall, MethodCodeType = MethodCodeType.Runtime)]
uint Compare([In, MarshalAs(UnmanagedType.Interface)] IShellItem psi, [In] uint hint, out int piOrder);
}
#endregion
[DllImport("shell32.dll", CharSet = CharSet.Unicode, SetLastError = true)]
internal static extern int SHCreateItemFromParsingName([MarshalAs(UnmanagedType.LPWStr)] string pszPath, IntPtr pbc, ref Guid riid, [MarshalAs(UnmanagedType.Interface)] out IShellItem ppv);
}
And use it like this
using (var frm = new OpenFolderDialog()) {
if (frm.ShowDialog(this)== DialogResult.OK) {
MessageBox.Show(this, frm.Folder);
}
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/11767",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "65"
} |
Q: File Uploads via Web Services Is it possible to upload a file from a client's computer to the server through a web service? The client can be running anything from a native desktop app to a thin ajax client.
A: It's certainly possible to send binary files via web services (eg. SOAP), but you usually have to do some kind of encoding such as base64, which increases the amount of data to send. One of the most efficient ways to send an arbitrary binary file is via an HTTP PUT operation, since there is no encoding overhead. Not all clients necessarily have an easy way to do this, but it's worth looking.
The other side of that coin is how to get the data off the user's disk an on to the network connection. A "thin ajax client" might not have the requisite permissions to read files from the user's disk. On the other hand, a desktop app implementation would be able to do so without any problem.
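For example, a minimal client-side PUT from a desktop app might look like the sketch below (the URL and file path are placeholders, and the server must be configured to accept PUT for that resource):
using System.Net;

byte[] fileBytes = System.IO.File.ReadAllBytes(@"C:\temp\report.pdf"); // placeholder path
using (WebClient client = new WebClient()) {
    // No encoding overhead: the raw bytes go straight over the wire.
    client.UploadData("http://server/uploads/report.pdf", "PUT", fileBytes);
}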
A: I'm not a master of "webservices", but if you develop the webservice (and the client), you can always convert the binary file to Base64 in the client (it can be done in Java... and I suppose in Ajax too) and transfer it as a "string"; on the other side, in the webservice, decode it from Base64 back to binary...
It's one idea that works, but maybe not "correct" in every environment.
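A minimal sketch of that idea in C# (the web method name and file paths are made up):
// Client side: read the file and encode it as a Base64 string.
byte[] fileBytes = System.IO.File.ReadAllBytes(@"C:\temp\photo.jpg");
string payload = Convert.ToBase64String(fileBytes);
// myService.UploadFile("photo.jpg", payload); // hypothetical web method call

// Server side: decode the string back to bytes and store it.
byte[] decoded = Convert.FromBase64String(payload);
System.IO.File.WriteAllBytes(@"C:\uploads\photo.jpg", decoded);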
| {
"language": "en",
"url": "https://stackoverflow.com/questions/11782",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: In .NET, will empty method calls be optimized out? Given an empty method body, will the JIT optimize out the call (I know the C# compiler won't). How would I go about finding out? What tools should I be using and where should I be looking?
Since I'm sure it'll be asked, the reason for the empty method is a preprocessor directive.
@Chris:
Makes sense, but it could optimize out calls to the method. So the method would still exist, but static calls to it could be removed (or at least inlined...)
@Jon:
That just tells me the language compiler doesn't do anything. I think what I need to do is run my dll through ngen and look at the assembly.
A: No, empty methods are never optimized out. Here are a couple reasons why:
*
*The method could be called from a
derived class, perhaps in a
different assembly
*The method could
be called using Reflection (even if
it is marked private)
Edit: Yes, from looking at that (excellent) Code Project doc, the JITer will eliminate calls to empty methods. But the methods themselves will still be compiled and part of your binary for the reasons I listed.
A: This chap has quite a good treatment of JIT optimisations, do a search on the page for 'method is empty', it's about half way down the article -
http://www.codeproject.com/KB/dotnet/JITOptimizations.aspx
Apparently empty methods do get optimised out through inlining what is effectively no code.
@Chris: I do realise the that the methods will still be part of the binary and that these are JIT optimisations :-). On a semi-related note, Scott Hanselman had quite an interesting article on inlining in Release build call stacks:
http://www.hanselman.com/blog/ReleaseISNOTDebug64bitOptimizationsAndCMethodInliningInReleaseBuildCallStacks.aspx
A: I'm guessing your code is like:
void DoSomethingIfCompFlag() {
#if COMPILER_FLAG
//your code
#endif
}
This won't get optimised out, however:
partial void DoSomethingIfCompFlag();
#if COMPILER_FLAG
partial void DoSomethingIfCompFlag() {
//your code
}
#endif
The first empty method is partial, and the C#3 compiler will optimise it out.
By the way: this is basically what partial methods are for. Microsoft added code generators to their Linq designers that need to call methods that by default don't do anything.
Rather than force you to overload the method you can use a partial.
This way the partials are completely optimised out if not used and no performance is lost, rather than adding the overhead of the extra empty method call.
A: All things being equal, yes it should be optimized out. The JIT inlines functions where appropriate and there are few things more appropriate than empty functions :)
If you really want to be sure then change your empty method to throw an exception and print out the stack trace it contains.
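A rough sketch of that check (inlining heuristics differ between debug/release builds and runtime versions, so treat the output as indicative only):
using System;

class InlineProbe {
    static void SupposedlyEmpty() {
        throw new Exception("probe"); // temporarily replace the empty body with this
    }

    static void Main() {
        try { SupposedlyEmpty(); }
        catch (Exception ex) {
            // If the JIT inlined the call, SupposedlyEmpty will not show up as a
            // separate frame in this stack trace.
            Console.WriteLine(ex.StackTrace);
        }
    }
}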
| {
"language": "en",
"url": "https://stackoverflow.com/questions/11783",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "25"
} |
Q: What's the general consensus on supporting Windows 2000? What's the general consensus on supporting Windows 2000 for software distribution? Are people supporting Windows XP SP2+ for new software development or is this too restrictive still?
A: "OK" is a subjective judgement. You'll need to take a look at your client base and see what they're using.
Having said that, I dropped support for Win2K over a year ago with no negative impact.
A: I'd say MS have made the decision for you if they themselves won't support it in .NET 3.5.
A: The latest version of WinRAR still supports Windows 95. Think about it, why is that? It's because WinRAR solves an extremely common problem - unpacking a file. People still use older systems not because they like them, but because they are forced to by the hardware. If you're making a video game, sure, drop support for anything below XP SP2, but if you're making a program that solves a specific task, like converting an RTF to PDF, I don't see a reason not to support other systems.
A: It is not merely "OK"; it is a good idea. Anything to encourage the laggards to keep current is a good thing.
A: A lot of computers at my company use Win2k, so we couldn't really drop support. It all depends on the client base.
A: With XP being 5/6 years old now, I think most home users will be using it, but many business users may still be on older systems. All in all, it depends on your target audience.
Personally I would regard Windows 2000 support as a bonus rather than a requirement.
A: This is very subjective, it really depends who you're selling to.
If it's average Joe then Windows 2K owners are going to be at best a percent or two of your target market. If it's the military (who I believe still run 2K on their toughbooks) then you're in trouble.
A: Its fine by me :)
The company I work for (mining and construction), with <15k employees, doesn't support Win2k and has not for a while.
A: I would say yes, as most have switched to XP or Vista, from what I can tell.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/11801",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: Returning Large Results Via a Webservice I'm working on a web service at the moment and there is the potential that the returned results could be quite large ( > 5mb).
It's perfectly valid for this set of data to be this large and the web service can be called either sync or async, but I'm wondering what people's thoughts are on the following:
*
*If the connection is lost, the
entire resultset will have to be
regenerated and sent again. Is there
any way I can do any sort of
"resume" if the connection is lost
or reset?
*Is sending a result set this large even appropriate? Would it be better to implement some sort of "paging" where the resultset is generated and stored on the server and the client can then download chunks of the resultset in smaller amounts and re-assemble the set at their end?
A: I have seen all three approaches, paged, store and retrieve, and massive push.
I think the solution to your problem depends to some extent on why your result set is so large and how it is generated. Do your results grow over time, are they calculated all at once and then pushed, do you want to stream them back as soon as you have them?
Paging Approach
In my experience, using a paging approach is appropriate when the client needs quick access to reasonably sized chunks of the result set similar to pages in search results. Considerations here are overall chattiness of your protocol, caching of the entire result set between client page requests, and/or the processing time it takes to generate a page of results.
Store and retrieve
Store and retrieve is useful when the results are not random access and the result set grows in size as the query is processed. Issues to consider here are complexity for clients and if you can provide the user with partial results or if you need to calculate all results before returning anything to the client (think sorting of results from distributed search engines).
Massive Push
The massive push approach is almost certainly flawed. Even if the client needs all of the information and it needs to be pushed in a monolithic result set, I would recommend taking the approach of WS-ReliableMessaging (either directly or through your own simplified version) and chunking your results. By doing this you
*
*ensure that the pieces reach the client
*can discard the chunk as soon as you get a receipt from the client
*can reduce the possible issues with memory consumption from having to retain 5MB of XML, DOM, or whatever in memory (assuming that you aren't processing the results in a streaming manner) on the server and client sides.
Like others have said though, don't do anything until you know your result set size, how it is generated, and overall performance to be actual issues.
A: There's no hard law against 5 Mb as a result set size. Over 400 Mb can be hard to send.
You'll automatically get async handlers (since you're using .net)
implement some sort of "paging" where
the resultset is generated and stored
on the server and the client can then
download chunks of the resultset in
smaller amounts and re-assemble the
set at their end
That's already happening for you -- it's called tcp/ip ;-) Re-implementing that could be overkill.
Similarly --
entire resultset will have to be
regenerated and sent again
If it's MS-SQL, for example, that is generating most of the resultset -- then re-generating it will take advantage of some implicit caching in SQL Server, and the subsequent generations will be quicker.
To some extent you can get away with not worrying about these problems, until they surface as 'real' problems -- because the platform(s) you're using take care of a lot of the performance bottlenecks for you.
A: I somewhat disagree with secretGeek's comment:
That's already happening for you -- it's called tcp/ip ;-) Re-implementing that could be overkill.
There are times when you may want to do just this, but really only from a UI perspective. If you implement some way to either stream the data to the client (via something like a pushlets mechanism), or chunk it into pages as you suggest, you can then load some really small subset on the client and then slowly build up the UI with the full amount of data.
This makes for a slicker, speedier UI (from the user's perspective), but you have to evaluate if the extra effort will be worthwhile... because I don't think it will be an insignificant amount of work.
A: So it sounds like you'd be interested in a solution that adds 'starting record number' and 'final record number' parameters to your web method. (or 'page number' and 'results per page')
This shouldn't be too hard if the backing store is SQL Server (or even MySQL) as they have built-in support for row numbering.
Despite this you should be able to avoid doing any session management on the server, avoid any explicit caching of the result set, and just rely on the backing store's caching to keep your life simple.
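A minimal sketch of such a web method against SQL Server (the table, column and connectionString names here are made up, and System.Data / System.Data.SqlClient are assumed):
[WebMethod]
public DataSet GetResultsPage(int startRow, int endRow) {
    // 'Results', 'Id' and 'connectionString' are placeholders.
    const string sql = @"
        SELECT * FROM (
            SELECT ROW_NUMBER() OVER (ORDER BY Id) AS RowNum, *
            FROM Results
        ) AS Numbered
        WHERE RowNum BETWEEN @start AND @end";

    using (SqlConnection conn = new SqlConnection(connectionString))
    using (SqlCommand cmd = new SqlCommand(sql, conn)) {
        cmd.Parameters.AddWithValue("@start", startRow);
        cmd.Parameters.AddWithValue("@end", endRow);
        DataSet ds = new DataSet();
        new SqlDataAdapter(cmd).Fill(ds); // Fill opens/closes the connection itself
        return ds;
    }
}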
| {
"language": "en",
"url": "https://stackoverflow.com/questions/11804",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9"
} |
Q: How do you impersonate an Active Directory user in Powershell? I'm trying to run powershell commands through a web interface (ASP.NET/C#) in order to create mailboxes/etc on Exchange 2007. When I run the page using Visual Studio (Cassini), the page loads up correctly. However, when I run it on IIS (v5.1), I get the error "unknown user name or bad password". The biggest problem that I noticed was that Powershell was logged in as ASPNET instead of my Active Directory Account. How do I force my Powershell session to be authenticated with another Active Directory Account?
Basically, the script that I have so far looks something like this:
RunspaceConfiguration rc = RunspaceConfiguration.Create();
PSSnapInException snapEx = null;
rc.AddPSSnapIn("Microsoft.Exchange.Management.PowerShell.Admin", out snapEx);
Runspace runspace = RunspaceFactory.CreateRunspace(rc);
runspace.Open();
Pipeline pipeline = runspace.CreatePipeline();
using (pipeline)
{
pipeline.Commands.AddScript("Get-Mailbox -identity 'user.name'");
pipeline.Commands.Add("Out-String");
Collection<PSObject> results = pipeline.Invoke();
if (pipeline.Error != null && pipeline.Error.Count > 0)
{
foreach (object item in pipeline.Error.ReadToEnd())
resultString += "Error: " + item.ToString() + "\n";
}
runspace.Close();
foreach (PSObject obj in results)
resultString += obj.ToString();
}
return resultString;
A: Here is a class that I use to impersonate a user.
using System;
using System.Data;
using System.Configuration;
using System.Web;
using System.Web.Security;
using System.Web.UI;
using System.Web.UI.WebControls;
using System.Web.UI.WebControls.WebParts;
using System.Web.UI.HtmlControls;
namespace orr.Tools
{
#region Using directives.
using System.Security.Principal;
using System.Runtime.InteropServices;
using System.ComponentModel;
#endregion
/// <summary>
/// Impersonation of a user. Allows to execute code under another
/// user context.
/// Please note that the account that instantiates the Impersonator class
/// needs to have the 'Act as part of operating system' privilege set.
/// </summary>
/// <remarks>
/// This class is based on the information in the Microsoft knowledge base
/// article http://support.microsoft.com/default.aspx?scid=kb;en-us;Q306158
///
/// Encapsulate an instance into a using-directive like e.g.:
///
/// ...
/// using ( new Impersonator( "myUsername", "myDomainname", "myPassword" ) )
/// {
/// ...
/// [code that executes under the new context]
/// ...
/// }
/// ...
///
/// Please contact the author Uwe Keim (mailto:uwe.keim@zeta-software.de)
/// for questions regarding this class.
/// </remarks>
public class Impersonator :
IDisposable
{
#region Public methods.
/// <summary>
/// Constructor. Starts the impersonation with the given credentials.
/// Please note that the account that instantiates the Impersonator class
/// needs to have the 'Act as part of operating system' privilege set.
/// </summary>
/// <param name="userName">The name of the user to act as.</param>
/// <param name="domainName">The domain name of the user to act as.</param>
/// <param name="password">The password of the user to act as.</param>
public Impersonator(
string userName,
string domainName,
string password)
{
ImpersonateValidUser(userName, domainName, password);
}
// ------------------------------------------------------------------
#endregion
#region IDisposable member.
public void Dispose()
{
UndoImpersonation();
}
// ------------------------------------------------------------------
#endregion
#region P/Invoke.
[DllImport("advapi32.dll", SetLastError = true)]
private static extern int LogonUser(
string lpszUserName,
string lpszDomain,
string lpszPassword,
int dwLogonType,
int dwLogonProvider,
ref IntPtr phToken);
[DllImport("advapi32.dll", CharSet = CharSet.Auto, SetLastError = true)]
private static extern int DuplicateToken(
IntPtr hToken,
int impersonationLevel,
ref IntPtr hNewToken);
[DllImport("advapi32.dll", CharSet = CharSet.Auto, SetLastError = true)]
private static extern bool RevertToSelf();
[DllImport("kernel32.dll", CharSet = CharSet.Auto)]
private static extern bool CloseHandle(
IntPtr handle);
private const int LOGON32_LOGON_INTERACTIVE = 2;
private const int LOGON32_PROVIDER_DEFAULT = 0;
// ------------------------------------------------------------------
#endregion
#region Private member.
// ------------------------------------------------------------------
/// <summary>
/// Does the actual impersonation.
/// </summary>
/// <param name="userName">The name of the user to act as.</param>
/// <param name="domainName">The domain name of the user to act as.</param>
/// <param name="password">The password of the user to act as.</param>
private void ImpersonateValidUser(
string userName,
string domain,
string password)
{
WindowsIdentity tempWindowsIdentity = null;
IntPtr token = IntPtr.Zero;
IntPtr tokenDuplicate = IntPtr.Zero;
try
{
if (RevertToSelf())
{
if (LogonUser(
userName,
domain,
password,
LOGON32_LOGON_INTERACTIVE,
LOGON32_PROVIDER_DEFAULT,
ref token) != 0)
{
if (DuplicateToken(token, 2, ref tokenDuplicate) != 0)
{
tempWindowsIdentity = new WindowsIdentity(tokenDuplicate);
impersonationContext = tempWindowsIdentity.Impersonate();
}
else
{
throw new Win32Exception(Marshal.GetLastWin32Error());
}
}
else
{
throw new Win32Exception(Marshal.GetLastWin32Error());
}
}
else
{
throw new Win32Exception(Marshal.GetLastWin32Error());
}
}
finally
{
if (token != IntPtr.Zero)
{
CloseHandle(token);
}
if (tokenDuplicate != IntPtr.Zero)
{
CloseHandle(tokenDuplicate);
}
}
}
/// <summary>
/// Reverts the impersonation.
/// </summary>
private void UndoImpersonation()
{
if (impersonationContext != null)
{
impersonationContext.Undo();
}
}
private WindowsImpersonationContext impersonationContext = null;
// ------------------------------------------------------------------
#endregion
}
}
A: In your ASP.NET app, you will need to impersonate a valid AD account with the correct permissions:
http://support.microsoft.com/kb/306158
A: Exchange 2007 doesn't allow you to impersonate a user for security reasons. This means that it is impossible (at the moment) to create mailboxes by impersonating a user. In order to get around this problem, I created a web service which runs under an AD user which has permissions to create email accounts, etc. You can then access this webservice to get access to powershell. Please remember to add the necessary security because this could potentially be a huge security hole.
A: You might need a patch.
From: http://support.microsoft.com/kb/943937
An application cannot impersonate a
user and then run Windows PowerShell
commands in an Exchange Server 2007
environment
To resolve this problem, install
Update Rollup 1 for Exchange Server
2007 Service Pack 1.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/11806",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13"
} |
Q: How to create an all browser-compatible hanging indent style in CSS in a span The only thing I've found has been;
.hang {
text-indent: -3em;
margin-left: 3em;
}
The only way for this to work is putting text in a paragraph, which causes those horribly unsightly extra lines. I'd much rather just have them in a <span class="hang"></span> type of thing.
I'm also looking for a way to further indent than just a single level of hanging. Using paragraphs to stack the indentations doesn't work.
A: ysth's answer is best with one debatable exception; the unit of measure should correspond to the size of the font.
p {
text-indent: -2en;
padding-left: 2en;
}
"3" would also work adequately well; "em" is not recommended as it is wider than the average character in an alphabetic set. "px" should only be used if you intended to align hangs of text blocks with differing font sizes.
A: <span> is an inline element. The term hanging indent is meaningless unless you're talking about a paragraph (which generally means a block element). You can, of course, change the margins on <p> or <div> or any other block element to get rid of extra vertical space between paragraphs.
You may want something like display: run-in, where the tag will become either block or inline depending on context... sadly, this is not yet universally supported by browsers.
A: Found a cool way to do just that, minus the nasty span.
p {
padding-left: 20px;
}
p:first-letter {
margin-left: -20px;
}
Nice and simple :D
If the newlines are bothering you in p blocks, you can add
p {
margin-top: 0px;
margin-bottom: 0px;
}
JSFiddle Example
| {
"language": "en",
"url": "https://stackoverflow.com/questions/11809",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "33"
} |
Q: How much extra overhead is generated when sending a file over a web service as a byte array? This question and answer shows how to send a file as a byte array through an XML web service. How much overhead is generated by using this method for file transfer? I assume the data looks something like this:
<?xml version="1.0" encoding="UTF-8" ?>
<bytes>
<byte>16</byte>
<byte>28</byte>
<byte>127</byte>
...
</bytes>
If this format is correct, the bytes must first be converted to UTF-8 characters. Each of these characters allocates 8 bytes. Are the bytes stored in base 10, hex, or binary characters? How much larger does the file appear as it is being sent due to the XML data and character encoding? Is compression built into web services?
A: Typically a byte array is sent as a base64 encoded string, not as individual bytes in tags.
http://en.wikipedia.org/wiki/Base64
The base64 encoded version is about 137% of the size of the original content.
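You can see the expansion directly (this is the raw Base64 growth; XML element names, line breaks and the SOAP envelope add a little more on top):
byte[] original = new byte[5 * 1024 * 1024]; // stand-in for the file's contents
string encoded = Convert.ToBase64String(original);
// Base64 emits 4 characters for every 3 input bytes, so this prints roughly 1.33;
// MIME-style line breaks and the surrounding XML push the total toward ~1.37.
Console.WriteLine((double)encoded.Length / original.Length);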
A: I use this method for some internal corporate webservices, and I haven't noticed any major slow-downs (but that doesn't mean it's not there).
You could probably use any of the numerous network traffic analysis tools to measure the size of the data, and make a judgment call based off that.
A: I'm not sure about all the details (compressing, encoding, etc) but I usually just use WireShark to analyze the network traffic (while trying various methods) which then allows you to see exactly how it's sent.
For example, if it's compressed the data block of the packet shouldn't be readable as plain text...however if it's uncompressed, you will just see plain old xml text...like you would see with HTTP traffic, or even FTP in certain cases.
A: To echo what Kevin said, in .net web services if you have a byte array it is sent as a base64 encoded string by default. You can also specify the encoding of the byte array beforehand.
Obviously, once it gets to the server (or client) you need to manually decode the string back into a byte array as this isn't done automagically for you unfortunately.
A: The main performance hit isn't going to be from the transfer of the encoded file, it's going to be in the processing that the server has to do to encode the file pre-transfer (unless the files don't change often and the encoded version can be cached somehow).
| {
"language": "en",
"url": "https://stackoverflow.com/questions/11820",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: Singletons: good design or a crutch? Singletons are a hotly debated design pattern, so I am interested in what the Stack Overflow community thought about them.
Please provide reasons for your opinions, not just "Singletons are for lazy programmers!"
Here is a fairly good article on the issue, although it is against the use of Singletons:
scientificninja.com: performant-singletons.
Does anyone have any other good articles on them? Maybe in support of Singletons?
A: I think there is a great misunderstanding about the use of the Singleton pattern. Most of the comments here refer to it as a place to access global data. We need to be careful here - Singleton as a pattern is not for accessing globals.
Singleton should be used to have only one instance of the given class. Pattern Repository has great information on Singleton.
A: One of the colleagues I have worked with was very Singleton-minded. Whenever there was something that was kind of a manager or boss like object he would make that into a singleton, because he figured that there should be only one boss. And each time the system took up some new requirements, it turned out there were perfectly valid reasons to allow multiple instances.
I would say that a singleton should be used if the domain model dictates (not 'suggests') that there is one. All other cases are just accidentally single instances of a class.
A: In defense of singletons:
*
*They are not as bad as globals because globals have no standard-enforced initialization order, and you could easily see nondeterministic bugs due to naive or unexpected dependency orders. Singletons (assuming they're allocated on the heap) are created after all globals, and in a very predictable place in the code.
*They're very useful for resource-lazy / -caching systems such as an interface to a slow I/O device. If you intelligently build a singleton interface to a slow device, and no one ever calls it, you won't waste any time. If another piece of code calls it from multiple places, your singleton can optimize caching for both simultaneously, and avoid any double look-ups. You can also easily avoid any deadlock condition on the singleton-controlled resource.
Against singletons:
*
*In C++, there's no nice way to auto-clean-up after singletons. There are work-arounds, and slightly hacky ways to do it, but there's just no simple, universal way to make sure your singleton's destructor is always called. This isn't so terrible memory-wise -- just think of it as more global variables, for this purpose. But it can be bad if your singleton allocates other resources (e.g. locks some files) and doesn't release them.
My own opinion:
I use singletons, but avoid them if there's a reasonable alternative. This has worked well for me so far, and I have found them to be testable, although slightly more work to test.
A: I've been trying to think of a way to come to the poor singelton's rescue here, but I must admit it's hard. I've seen very few legitimate uses of them and with the current drive to do dependency injection andd unit testing they are just hard to use. They definetly are the "cargo cult" manifestation of programming with design patterns I have worked with many programmers that have never cracked the "GoF" book but they know 'Singelton' and thus they know 'Patterns'.
I do have to disagree with Orion though, most of the time I've seen singeltons oversused it's not global variables in a dress, but more like global services(methods) in a dress. It's interesting to note that if you try to use Singeltons in the SQL Server 2005 in safe mode through the CLR interface the system will flag the code. The problem is that you have persistent data beyond any given transaction that may run, of course if you make the instance variable read only you can get around the issue.
That issue lead to a lot of rework for me one year.
A: Holy wars! Ok let me see.. Last time I checked the design police said..
Singletons are bad because they hinder auto testing - instances cannot be created afresh for each test case.
Instead the logic should be in a class (A) that can be easily instantiated and tested. Another class (B) should be responsible for constraining creation. Single Responsibility Principle to the fore! It should be team-knowledge that you're supposed to go via B to access A - sort of a team convention.
I concur mostly..
A: Many applications require that there is only one instance of some class, so the pattern of having only one instance of a class is useful. But there are variations to how the pattern is implemented.
There is the static singleton, in which the class forces that there can only be one instance of the class per process (in Java actually one per ClassLoader). Another option is to create only one instance.
Static singletons are evil - a sort of global variable. They make testing harder, because it's not possible to execute the tests in full isolation. You need complicated setup and teardown code for cleaning the system between every test, and it's very easy to forget to clean some global state properly, which in turn may result in unspecified behaviour in tests.
Creating only one instance is good. You just create one instance when the program starts, and then pass the pointer to that instance to all other objects which need it. Dependency injection frameworks make this easy - you just configure the scope of the object, and the DI framework will take care of creating the instance and passing it to all who need it. For example in Guice you would annotate the class with @Singleton, and the DI framework will create only one instance of the class (per application - you can have multiple applications running in the same JVM). This makes testing easy, because you can create a new instance of the class for each test, and let the garbage collector destroy the instance when it is no longer used. No global state will leak from one test to another.
For more information:
The Clean Code Talks - "Global State and Singletons"
A: Google has a Singleton Detector for Java that I believe started out as a tool that must be run on all code produced at Google. The nutshell reason to remove Singletons:
because they can make testing
difficult and hide problems with your
design
For a more explicit explanation see 'Why Singletons Are Controversial' from Google.
A: A singleton is just a bunch of global variables in a fancy dress.
Global variables have their uses, as do singletons, but if you think you're doing something cool and useful with a singleton instead of using a yucky global variable (everyone knows globals are bad mmkay), you're unfortunately misled.
A: Singleton as an implementation detail is fine. Singleton as an interface or as an access mechanism is a giant PITA.
A static method that takes no parameters returning an instance of an object is only slightly different from just using a global variable. If instead an object has a reference to the singleton object passed in, either via constructor or other method, then it doesn't matter how the singleton is actually created and the whole pattern turns out not to matter.
A: The purpose of a Singleton is to ensure a class has only one instance, and provide a global point of access to it. Most of the time the focus is on the single instance point. Imagine if it were called a Globalton. It would sound less appealing as this emphasizes the (usually) negative connotations of a global variable.
Most of the good arguments against singletons have to do with the difficulty they present in testing as creating test doubles for them is not easy.
A: There are three pretty good blog posts about Singletons by Miško Hevery in the Google Testing blog.
*
*Singletons are Pathological Liars
*Where Have All the Singletons Gone?
*Root Cause of Singletons
A: Singleton is not a horrible pattern, although it is misused a lot. I think this misuse is because it is one of the easier patterns and most new to the singleton are attracted to the global side effect.
Erich Gamma had said the singleton is a pattern he wishes wasn't included in the GOF book and it's a bad design. I tend to disagree.
If the pattern is used in order to create a single instance of an object at any given time then the pattern is being used correctly. If the singleton is used in order to give a global effect, it is being used incorrectly.
Disadvantages:
*
*You are coupling to one class throughout the code that calls the singleton
*
*Creates a hassle with unit testing because it is difficult to replace the instance with a mock object
*If the code needs to be refactored later on because of the need for more than one instance, it is more painful than if the singleton class were passed into the object (using an interface) that uses it
Advantages:
*
*One instance of a class is represented at any given point in time.
*
*By design you are enforcing this
*Instance is created when it is needed
*Global access is a side effect
A:
It was not just a bunch of variables in a fancy dress because this one had dozens of responsibilities, like communicating with the persistence layer to save/retrieve data about the company, dealing with employees and prices collections, etc.
I must say you're not really describing something that should be a single object, and it's debatable whether any of them, other than the data serialization, should have been a singleton.
I can see at least 3 sets of classes that I would normally design in, but I tend to favor smaller simpler objects that do a narrow set of tasks very well. I know that this is not the nature of most programmers. (Yes I work on 5000 line class monstrosities every day, and I have a special love for the 1200 line methods some people write.)
I think the point is that in most cases you don't need a singleton, and often you're just making your life harder.
A: The biggest problem with singletons is that they make unit testing hard, particularly when you want to run your tests in parallel but independently.
The second is that people often believe that lazy initialisation with double-checked locking is a good way to implement them.
Finally, unless your singletons are immutable, then they can easily become a performance problem when you try and scale your application up to run in multiple threads on multiple processors. Contended synchronization is expensive in most environments.
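On the lazy-initialisation point specifically, a commonly cited alternative in Java is the initialization-on-demand holder idiom, sketched here with a made-up Settings class; it gets lazy, thread-safe construction from class loading rather than from double-checked locking.
public final class Settings {
    private Settings() { }
    // The JVM initialises Holder the first time getInstance() touches it,
    // and class initialisation happens exactly once, so no explicit locking
    // (and no double-checked locking) is needed.
    private static final class Holder {
        static final Settings INSTANCE = new Settings();
    }
    public static Settings getInstance() {
        return Holder.INSTANCE;
    }
}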
A: Chicks dig me because I rarely use singleton and when I do it's typically something unusual. No, seriously, I love the singleton pattern. You know why? Because:
*
*I'm lazy.
*Nothing can go wrong.
Sure, the "experts" will throw around a bunch of talk about "unit testing" and "dependency injection" but that's all a load of dingo's kidneys. You say the singleton is hard to unit test? No problem! Just declare everything public and turn your class into a fun house of global goodness. You remember the show Highlander from the 1990's? The singleton is kind of like that because: A. It can never die; and B. There can be only one. So stop listening to all those DI weenies and implement your singleton with abandon. Here are some more good reasons...
*
*Everybody is doing it.
*The singleton pattern makes you invincible.
*Singleton rhymes with "win" (or "fun" depending on your accent).
A: Singletons have their uses, but one must be careful in using and exposing them, because they are way too easy to abuse, difficult to truly unit test, and it is easy to create circular dependencies based on two singletons that accesses each other.
It is helpful, however, when you want to be sure that all your data is synchronized across multiple instances; configurations for a distributed application, for instance, may rely on singletons to make sure that all connections use the same up-to-date set of data.
A: I find you have to be very careful about why you're deciding to use a singleton. As others have mentioned, it's essentially the same issue as using global variables. You must be very cautious and consider what you could be doing by using one.
It's very rare to use them and usually there is a better way to do things. I've run into situations where I've done something with a singleton and then had to sift through my code to take it out after I discovered how much worse it made things (or after I came up with a much better, more sane solution).
A: I've used singletons a bunch of times in conjunction with Spring and didn't consider it a crutch or lazy.
What this pattern allowed me to do was create a single class for a bunch of configuration-type values and then share the single (non-mutable) instance of that specific configuration instance between several users of my web application.
In my case, the singleton contained client configuration criteria - css file location, db connection criteria, feature sets, etc. - specific for that client. These classes were instantiated and accessed through Spring and shared by users with the same configuration (i.e. 2 users from the same company). (I know there's a name for this type of application but it's escaping me.)
I feel it would've been wasteful to create (then garbage collect) new instances of these "constant" objects for each user of the app.
A: I'm reading a lot about "Singleton", its problems, when to use it, etc., and these are my conclusions so far:
*
*Confusion between the classic implementation of Singleton and the real requirement: TO HAVE JUST ONE INSTANCE OF A CLASS!
*It's generally badly implemented. If you want a unique instance, don't use the (anti)pattern of a static GetInstance() method returning a static object. This makes a class responsible for instantiating a single instance of itself and also for performing its logic, which breaks the Single Responsibility Principle. Instead, this should be implemented by a factory class whose responsibility is to ensure that only one instance exists.
*It's used in constructors because it's easy to use and doesn't have to be passed as a parameter. This should be resolved using dependency injection, which is a great pattern for achieving a good, testable object model.
*Not TDD. If you do TDD, dependencies are extracted from the implementation because you want your tests to be easy to write. This makes your object model better. If you use TDD, you won't write a static GetInstance =). BTW, if you think in objects with clear responsibilities instead of classes, you'll get the same effect =). (A small test-side sketch follows this list.)
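Here is the kind of test that last point is driving at; Clock and Greeter are invented for the example, and the only point is that the dependency arrives through the constructor rather than through a static GetInstance().
import static org.junit.Assert.assertEquals;
import org.junit.Test;
public class GreeterTest {
    // Hypothetical collaborator the class under test depends on.
    interface Clock {
        long millisSinceMidnight();
    }
    // The dependency is passed in, so a test can hand over a fake with no global state to reset.
    static class Greeter {
        private final Clock clock;
        Greeter(Clock clock) { this.clock = clock; }
        String greet() {
            return clock.millisSinceMidnight() < 12 * 60 * 60 * 1000L ? "Good morning" : "Good afternoon";
        }
    }
    @Test
    public void greetsInTheMorning() {
        Greeter greeter = new Greeter(() -> 0L);   // fake clock fixed at midnight
        assertEquals("Good morning", greeter.greet());
    }
}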
A: I really disagree on the bunch of global variables in a fancy dress idea. Singletons are really useful when used to solve the right problem. Let me give you a real example.
I once developed a small piece of software for a place I worked, and some forms had to use some info about the company, its employees, services and prices. In its first version, the system kept loading that data from the database every time a form was opened. Of course, I soon realized this approach was not the best one.
Then I created a singleton class, named company, which encapsulated everything about the place, and it was completely filled with data by the time the system was opened.
It was not just a bunch of variables in a fancy dress, because it had dozens of responsibilities, like communicating with the persistence layer to save/retrieve data about the company, dealing with employee and price collections, etc.
Plus, it was a fixed, system-wide, easily accessible point to have the company data.
A: Singletons are very useful, and using them is not in and of itself an anti-pattern. However, they've gotten a bad reputation largely because they force any consuming code to acknowledge that they are a singleton in order to interact with them. That means if you ever need to "un-Singletonize" them, the impact on your codebase can be very significant.
Instead, I'd suggest hiding the Singleton behind a factory. That way, if you need to alter the service's instantiation behavior in the future, you can just change the factory rather than all types that consume the Singleton.
Even better, use an inversion of control container! Most of them allow you to separate instantiation behavior from the implementation of your classes.
A: One scary thing about singletons in, for instance, Java is that you can end up with multiple instances of the same singleton in some cases. The JVM uniquely identifies a class based on two elements: the class's fully qualified name, and the classloader responsible for loading it.
That means the same class can be loaded by two classloaders unaware of each other, and different parts of your application would have different instances of this singleton that they interact with.
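A sketch of how that can happen; the file path and the com.example.MySingleton class are placeholders, and the point is only that a class defined by two unrelated loaders yields two distinct Class objects and therefore two sets of static state.
import java.net.URL;
import java.net.URLClassLoader;
public class TwoLoaders {
    public static void main(String[] args) throws Exception {
        // Hypothetical location of MySingleton.class, outside the application classpath.
        URL[] path = { new URL("file:/some/app/classes/") };
        // Passing null as the parent means neither loader delegates to the other.
        ClassLoader a = new URLClassLoader(path, null);
        ClassLoader b = new URLClassLoader(path, null);
        Class<?> first  = a.loadClass("com.example.MySingleton");
        Class<?> second = b.loadClass("com.example.MySingleton");
        // Same bytes on disk, two distinct Class objects in the JVM,
        // so any static INSTANCE field exists twice, one per loader.
        System.out.println(first == second);   // prints false
    }
}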
A: Write normal, testable, injectable objects and let Guice/Spring/whatever handle the instantiation. Seriously.
This applies even in the case of caches or whatever the natural use cases for singletons are. There's no need to repeat the horror of writing code to try to enforce one instance. Let your dependency injection framework handle it. (I recommend Guice for a lightweight DI container if you're not already using one).
| {
"language": "en",
"url": "https://stackoverflow.com/questions/11831",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "69"
} |
Q: Inheritance and Polymorphism - Ease of use vs Purity In a project our team is using object lists to perform mass operations on sets of data that should all be processed in a similar way. In particular, different objects would ideally act the same, which would be very easily achieved with polymorphism. The problem I have with it is that inheritance implies the is a relationship, rather than the has a relationship. For example, several objects have a damage counter, but to make this easy to use in an object list, polymorphism could be used - except that would imply an is a relationship which wouldn't be true. (A person is not a damage counter.)
The only solution I can think of is to have a member of the class return the proper object type when implicitly cast, instead of relying on inheritance. Would it be better to forgo the is a / has a ideal in exchange for ease of programming?
Edit:
To be more specific, I am using C++, so using polymorphism would allow the different objects to "act the same" in the sense that the derived classes could reside within a single list and be operated upon by a virtual function of the base class. The use of an interface (or imitating them via inheritance) seems like a solution I would be willing to use.
A: I think you should be implementing interfaces to be able to enforce your has a relationships (am doing this in C#):
public interface IDamageable
{
void AddDamage(int i);
int DamageCount {get;}
}
You could implement this in your objects:
public class Person : IDamageable
public class House : IDamageable
And you'd be sure that each has the DamageCount property and a method to allow you to add damage, without implying that a person and a house are related to each other in some sort of hierarchy.
A: This can be accomplished using multiple inheritance. In your specific case (C++), you can use pure virtual classes as interfaces. This allows you to have multiple inheritance without creating scope/ambiguity problems. Example:
class Damage {
public:
    virtual ~Damage() {}
    virtual void addDamage(int d) = 0;
    virtual int getDamage() = 0;
};
class Person : public virtual Damage {
public:
    void addDamage(int d) {
        damage += d * 2;  // people are hurt twice as badly by the same hit
    }
    int getDamage() {
        return damage;
    }
private:
    int damage = 0;
};
class Car : public virtual Damage {
public:
    void addDamage(int d) {
        damage += d;
    }
    int getDamage() {
        return damage;
    }
private:
    int damage = 0;
};
Now both Person and Car 'is-a' Damage, meaning, they implement the Damage interface. The use of pure virtual classes (so that they are like interfaces) is key and should be used frequently. It insulates future changes from altering the entire system. Read up on the Open-Closed Principle for more information.
A: I agree with Jon, but assuming you still have need for a separate damage counter class, you can do:
class IDamageable {
public:
    virtual ~IDamageable() {}
    virtual DamageCounter* damage_counter() = 0;
};
class DamageCounter {
...
};
Each damageable class then needs to provide its own damage_counter() member function. The downside of this is that it creates a vtable for each damageable class. You can instead use:
class Damageable {
public:
    // return a reference so callers modify the real counter, not a copy
    DamageCounter& damage_counter() { return damage_counter_; }
private:
    DamageCounter damage_counter_;
};
But many people are Not Cool with multiple inheritance when multiple parents have member variables.
A: Sometimes it's worth giving up the ideal for the realistic. If it's going to cause a massive problem to "do it right" with no real benefit, then I would do it wrong. With that said, I often think it's worth taking the time to do it right, because unnecessary multiple inheritance increases complexity, and it can contribute to the system being less maintainable. You really have to decide what's best for your circumstance.
One option would be to have these objects implement a Damageable interface, rather than inheriting from DamageCounter. This way, a person has-a damage counter, but is damageable. (I often find interfaces make a lot more sense as adjectives than nouns.) Then you could have a consistent damage interface on Damageable objects, and not expose that a damage counter is the underlying implementation (unless you need to).
If you want to go the template route (assuming C++ or similar), you could do this with mixins, but that can get ugly really quickly if done poorly.
A: Normally when we talk about 'is a' vs 'has a' we're talking about Inheritance vs Composition.
Um...damage counter would just be an attribute of one of your derived classes and wouldn't really be discussed in terms of 'A person is a damage counter' with respect to your question.
See this:
http://www.artima.com/designtechniques/compoinh.html
Which might help you along the way.
@Derek: From the wording, I assumed there was a base class; having re-read the question, I kinda now see what he's getting at.
A: This question is really confusing :/
Your question in bold is very open-ended and has an answer of "it depends", but your example doesn't really give much information about the context from which you are asking. These lines confuse me;
sets of data that should all be processed in a similar way
What way? Are the sets processed by a function? Another class? Via a virtual function on the data?
In particular, different objects would ideally act the same, which would be very easily achieved with polymorphism
The ideal of "acting the same" and polymorphism are absolutely unrelated. How does polymorphism make it easy to achieve?
A: @Kevin
Normally when we talk about 'is a' vs 'has a' we're talking about Inheritance vs Composition.
Um...damage counter would just be an attribute of one of your derived classes and wouldn't really be discussed in terms of 'A person is a damage counter' with respect to your question.
Having the damage counter as an attribute doesn't allow him to put diverse objects with damage counters into a collection. For example, a person and a car might both have damage counters, but you can't have a vector<Person|Car> or a vector<with::getDamage()> or anything similar in most languages. If you have a common Object base class, then you can shove them in that way, but then you can't access the getDamage() method generically.
That was the essence of his question, as I read it. "Should I violate is-a and has-a for the sake of treating certain objects as if they are the same, even though they aren't?"
A: "Doing it right" will have benefits in the long run, if only because someone maintaining the system later will find it easier to comprehend if it was done right to begin with.
Depending on the language, you may well have the option of multiple inheritance, but normally simple interfaces make the most sense. By "simple" I mean an interface that isn't trying to be too much. Better to have lots of simple interfaces than a few monolithic ones. Of course, there is always a trade off, and too many interfaces would probably lead to some being "forgotten" about...
A: @Andrew
The ideal of "acting the same" and polymorphism are absolutely unrelated. How does polymorphism make it easy to achieve?
They all have, e.g., one function in common. Let's call it addDamage(). If you want to do something like this:
foreach (obj in mylist)
obj.addDamage(1)
Then you need either a dynamic language, or you need them to extend from a common parent class (or interface). e.g.:
class Person : DamageCounter {}
class Car : DamageCounter {}
foreach (DamageCounter d in mylist)
d.addDamage(1)
Then, you can treat Person and Car the same in certain very useful circumstances.
A: Polymorphism does not require inheritance. Polymorphism is what you get when multiple objects implement the same message signature (method).
| {
"language": "en",
"url": "https://stackoverflow.com/questions/11854",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
} |
Q: What do you use as a good alternative to Team System? I would like to gauge what solutions other people put in place to get Team System functionality. We all know that Team System can be pricey for some of us. I know they offer a small team edition with five licenses with a MSDN subscription, but what if your team is bigger than five or you don't want to use Team System?
A: I'll second Trac + Subversion. While nothing is perfect, this combination works quite well for me, and the price is right.
Even for projects I work solo on, it's nice to have both of these integrated.
A: I've had a lot of success with the nice integration between SourceGear vault and FogBugz.
MS Build for build automation meets my needs.
A: Took my answer out of the question and posted it as one of the answers per the StackOverflow FAQ.
Here is the solution that I use and it works great:
*
*Subversion for source control
*Warehouse for my Subversion web browser
*FogBugz for feature and bug tracking with it integrated with Subversion, Visual Studio, and Warehouse
*VisualSVN for integrating Subversion into Visual Studio
*CruiseControl.Net with nAnt for my automated build system for .Net projects
*CruiseControl.rb with Capistrano for my automated build system for Ruby on Rails projects
A: Sourcegear's suite of products are a very nice alternative. Vault + Dragnet + Fortress are nice, however if you can't afford all of those, Vault + FogBugz is a pretty decent alternative.
A: Trac
It seems targeted for Open Source / Community type projects but it's working just find as an internal Developer intranet. It integrates a Wiki, Bug tracker and SVN Source browser into one nice package and it's very easy to configure.
A: I'm stunned that nobody has mentioned the free and excellent TeamCity product from JetBrains. It includes:
*
*Continous Integration
*Software Build management
*Project Management, Monitoring and Statistical Reports
*Integration with many IDEs, Sourcecode control systems, and Testing Frameworks
For project management / bug tracking / Git or Subversion repository I also use Unfuddle (free for small personal projects!)
A: I use SourceGear's Fortress on my home computer for personal development. Its free for a single user.
A: I use VisualSVN Server for source control, Mingle for project management and bug tracking, and Team City for continous integration. I'm still getting used to it, but it's working great so far. This is a good free setup for small teams. Licensing Mingle and Team City will cost money for larger teams.
A: Seapine CM - Cross platform issue management and version control
http://www.seapine.com
A: For a lightweight & completely free option, you can use Springloops integrated with Basecamp (+ an SVN client).
*
*Hosted SVN: SpringLoops: http://www.springloops.com/ (free for a single project & user)
*Basecamp: http://basecamphq.com/ (also free for a single project)
Note: SpringLoops integration with Basecamp is not available in their free setup.
A: I develop on Linux also, which is one reason I came up with the solution I have. I was wondering how the SourceGear options work in this respect? I have used Vault before, which in my experience wasn't too bad, but I know it is mostly Windows based. I think I read at one point that they have a client that can work on Linux, but I have never used it. I just want to open the conversation up a little more, so people who come to this question can hopefully find the best answer for them, based on their wants.
If the Vault client can run on Linux and Mac and run well, then using Vault and Fortress will definitely be the accepted answer as a good low cost alternative to Team System.
A: SVN with the TortoiseSVN add-on makes for a solid and easy to use interface. WinMerge is a great tool to thrown in that mix as well.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/11857",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "28"
} |
Q: Best .NET Solution for Frequently Changed Database I am currently architecting a small CRUD application. Their database is a huge mess and will be changing frequently over the course of the next 6 months to a year. What would you recommend for my data layer:
1) ORM (if so, which one?)
2) Linq2Sql
3) Stored Procedures
4) Parametrized Queries
I really need a solution that will be dynamic enough (both fast and easy) where I can replace tables and add/delete columns frequently.
Note: I do not have much experience with ORM (only a little SubSonic) and generally tend to use stored procedures so maybe that would be the way to go. I would love to learn Linq2Sql or NHibernate if either would allow for the situation I've described above.
A: One key thing to be aware of here is that if the database schema is changing frequently, you want to have some level of compile time type safety. I've found this to be a problem with NHibernate because it uses xml mapping files so if you change something in your database schema, you don't know until runtime that the mapping is broken.
It will also be a problem with stored procs.
Using Linq2Sql will give you the advantage of knowing where exactly your code is breaking when you change a schema at compile time. This for me, is something that would take precedence over everything else if I'm working with a frequently changing schema
A: I'd look at SubSonic with the build provider (Website Project) setup. That works great because it automatically regenerates the DAL objects every time you build your project, so if the database changes in a way that breaks your code, you get a build error.
It worked well until the database schema got really complex and we were hitting the limits of the ActiveRecord pattern, but as long as the schema's not hugely complex it works pretty well. Once the schema stabilizes, you can switch so that you're only building the DAL when you want to.
A: You definitely want to use an ORM. Any ORM is ok, but you want something that will generate strongly typed classes. When fields get added, modified or deleted from a table, you want to be able to regenerate those classes, and deal with fixing compile time errors only. If you use a dynamic model, you're likely to have many nasty runtime errors. This is VERY important! I am part of the MyGeneration development team on sourceforge, and I think that is a great solution to your problem. You can generate dOOdads, NHibernate, EasyObjects, EntitySpaces, etc. If you want to go with a more expensive solution, go with CodeSmith or LLBLGen Pro. Good luck - anyone interested in using MyGeneration, feel free to contact me with questions.
A: NHibernate, but only if you would be amenable to having an object-first approach wherein you define your classes, and then define your desired table structure in the mapping files, and then create a database schema using NHibernate's built in schema generation classes.
For doing it the other way around (e.g., you have a bunch of tables and then you base your object design on that) I've found MyGeneration + NHibernate to work, although I'm not too happy with the resulting classes (mainly because I'm such a stickler for true Object Oriented Programming).
A: If I were in your shoes I would try to leverage what I knew (sprocs) with Linq2Sql. Linq2Sql can still use your sprocs, but then you have the added bonus of putting a new tool in your belt. I think having a grasp on the Linq2XXX (X being a random technology, not adult entertainment....which isn't a bad idea now that I think of it) syntax and methodology is going to be a great addition to your skill set; using Linq over a collection of objects is way sweet.
But ultimately something like NHibernate will suit you better in the long run.
A: EntitySpaces can regenerate your DAL/Business Layer in one minute with no code loss; see the trial version.
No Registration Necessary, runs under Visual Studio as well.
A: Use EntitySpaces. you will send me flowers, guaranteed.
Simply awesome. Change the db as you like. Hit the button, bang. All your changes are done, without changing your custom code. I love it.
A: How simple is the application? If I were to be working with schema/design stuff for a couple of months, and not really worry about an actual app . . . I would consider using EDM and a Dynamic Data Entities Web Application project. This gets you going with the least amount of effort, in my opinion. It keeps you focused on schema, data and other groovey things. I hopefully don't get too many neg bumps from this one!
Here's how the new project dialog looks (screenshot not included here).
A: You're already happy with stored procs and they might be enough to abstract away the changing schema. If ORMs aren't happy with stored procs then maybe they'd work with Views that you keep current on top of the changing schema.
A: If the database schema changes often, prefer the Entity Framework over LINQ2SQL. If the schema changes, using L2S you have to
1) Remove and re-add your table (losing your customizations)
2) Modify the model by hand (as done here in stackoverflow)
The EF is a super-set of L2S, giving you more flexibility of usage and dbms-independence
A: Look at why it is changing, and see if you can anticipate and generalize the kinds of changes coming at you so that they don't break your code.
A framework may make accommodating the changes easier, but deeper analysis will have a longer-term benefit.
A: Any solution can work; what you really need is a set of tests which will guarantee that basic operations like insert, select, update and delete work. This way you can simply run your tests and check if mappings are up-to-date.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/11878",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
} |
Q: Is it possible to return objects from a WebService? Instead of returning a common string, is there a way to return classic objects?
If not: what are the best practices? Do you transpose your object to xml and rebuild the object on the other side? What are the other possibilities?
A: As mentioned, you can do this in .net via serialization. By default all native types are serializable so this happens automagically for you.
However, if you have complex types, you need to mark them with the [Serializable] attribute. The same goes for complex types used as properties.
So for example you need to have:
[Serializable]
public class MyClass
{
    public string MyString {get; set;}
    // the attribute goes on MyOtherClass itself, not on the property
    public MyOtherClass MyOtherClassProperty {get; set;}
}
[Serializable]
public class MyOtherClass { /* ... */ }
A: If the object can be serialised to XML and can be described in WSDL then yes it is possible to return objects from a webservice.
A: Yes: in .NET they call this serialization, where objects are serialized into XML and then reconstructed by the consuming service back into its original object type or a surrogate with the same data structure.
A: Where possible, I transpose the objects into XML - this means that the Web Service is more portable - I can then access the service in whatever language, I just need to create the parser/object transposer in that language.
Because we have WSDL files describing the service, this is almost automated in some systems.
(For example, we have a server written in pure python which is replacing a server written in C, a client written in C++/gSOAP, and a client written in Cocoa/Objective-C. We use soapUI as a testing framework, which is written in Java).
A: It is possible to return objects from a web service using XML. But Web Services are supposed to be platform and operating system agnostic. Serializing an object simply allows you to store and retrieve an object from a byte stream, such as a file. For instance, you can serialize a Java object, convert that binary stream (perhaps via Base64 encoding into a CDATA field) and transfer it to the service's client.
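For the Java side of that idea, a rough sketch (my own illustration, with a made-up Payload class) of turning an object into a Base64 string that could sit inside a CDATA section:
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.util.Base64;
public class SerializeToBase64 {
    // Hypothetical object to ship across the wire.
    static class Payload implements Serializable {
        String name = "example";
    }
    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
            out.writeObject(new Payload());           // standard Java binary serialization
        }
        // Base64 keeps the binary stream safe inside an XML document.
        String encoded = Base64.getEncoder().encodeToString(bytes.toByteArray());
        System.out.println("<![CDATA[" + encoded + "]]>");
    }
}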
But the client would only be able to restore that object if it were Java-based. Moreover, a deep copy is required to serialize an object and have it restored exactly. Deep copies can be expensive.
Your best route is to create an XML schema that represents the document and create an instance of that schema with the object specifics.
A: .NET automatically does this with objects that are serializable. I'm pretty sure Java works the same way.
Here is an article that talks about object serialization in .NET:
http://www.codeguru.com/Csharp/Csharp/cs_syntax/serialization/article.php/c7201
A: @Brian: I don't know how things work in Java, but in .net objects get serialized down to XML, not base64 strings. The webservice publishes a wsdl file that contains the method and object definitions required for your webservice.
I would hope that nobody creates webservices that simply create a base64 string
A:
Daniel Auger:
As others have said, it is possible.
However, if both the service and
client use an object that has the
exact same domain behavior on both
sides, you probably didn't need a
service in the first place.
lomax:
I have to disagree with this as it's a
somewhat narrow comment. Using a
webservice that can serialize domain
objects to XML means that it makes it
easy for clients that work with the
same domain objects, but it also means
that those clients are restricted to
using that particular web service
you've exposed and it also works in
reverse by allowing other clients to
have no knowledge of your domain
objects but still interact with your
service via XML.
@ Lomax: You've described two scenarios. Scenario 1: The client is rehydrating the xml message back into the exact same domain object. I consider this to be "returning an object". In my experience this is a bad choice and I'll explain this below. Scenario 2: The client rehydrates the xml message into something other than the exact same domain object: I am 100% behind this, however I don't consider this to be returning a domain object. It's really sending a message or DTO.
Now let me explain why true/pure/not DTO object serialization across a web service is usually a bad idea. An assertion: in order to do this in the first place, you either have to be the owner of both the client and the service, or provide the client with a library to use so that they can rehydrate the object back into its true type. The problem: This domain object as a type now exists in and belongs to two semi-related domains. Over time, behaviors may need to be added in one domain that make no sense in the other domain, and this leads to pollution and potentially painful problems.
I usually default to scenario 2. I only use scenario 1 when there is an overwhelming reason to do so.
I apologize for being so terse with my initial reply. I hope this clears things up to a degree as far as what my opinion is. Lomax, it would seem we half agree ;).
A: JSON is a pretty standard way to pass objects around the web (as a subset of javascript). Many languages feature a library which will convert JSON code into a native object - see for example simplejson in Python.
For more libraries for JSON use, see the JSON webpage
A: As others have said, it is possible. However, if both the service and client use an object that has the exact same domain behavior on both sides, you probably didn't need a service in the first place.
A:
As others have said, it is possible.
However, if both the service and
client use an object that has the
exact same domain behavior on both
sides, you probably didn't need a
service in the first place.
I have to disagree with this as it's a somewhat narrow comment. Using a webservice that can serialize domain objects to XML means that it makes it easy for clients that work with the same domain objects, but it also means that those clients are restricted to using that particular web service you've exposed and it also works in reverse by allowing other clients to have no knowledge of your domain objects but still interact with your service via XML.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/11879",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
} |
Q: sn.exe fails with Access Denied error message I get an Access is Denied error message when I use the strong name tool to create a new key to sign a .NET assembly. This works just fine on a Windows XP machine but it does not work on my Vista machine.
PS C:\users\brian\Dev\Projects\BELib\BELib> sn -k keypair.snk
Microsoft (R) .NET Framework Strong Name Utility Version 3.5.21022.8
Copyright (c) Microsoft Corporation. All rights reserved.
Failed to generate a strong name key pair -- Access is denied.
What causes this problem and how can I fix it?
Are you running your PowerShell or
Command Prompt as an Administrator? I
found this to be the first place to
look until you get used to User Access
Control or by turning User Access
Control off.
Yes I have tried running PS and the regular command prompt as administrator. The same error message comes up.
A:
Yes I have tried running PS and the
regular command prompt as
administrator. The same error message
comes up.
Another possible solution could be that you need to give your user account access to the key container located at C:\Documents and Settings\All Users\Application Data\Microsoft\Crypto\RSA\MachineKeys
A: Are you running your PowerShell or Command Prompt as an Administrator? I found this to be the first place to look until you get used to User Access Control or by turning User Access Control off.
A: Why not fire up sysinternals Process Monitor to see what you can see? It's the first thing I always do when I get any kind of access denied message.
http://technet.microsoft.com/en-us/sysinternals/bb896645.aspx
A: Just to update this a bit: I ran into the same problem on Vista. My local user on the PC had no problem but then we switched to a domain and my domain user (albeit having local admin rights) got "Access Denied".
I granted my domain user access rights to C:\Users\All Users\Microsoft\Crypto\RSA\MachineKeys and that fixed it.
A: Some people rebuild their machines to resolve this problem, but it can be solved by giving the user access to the key container C:\Documents and Settings\All Users\Application Data\Microsoft\Crypto\RSA\MachineKeys
Each container created using sn.exe -i is located in the MachineKeys directory (unless you specify elsewhere). The default key container that is used by sn.exe is also in that location.
In case you reset your key container to a new one and forget where it is, you can reset the key container for the strong name utility using sn.exe -c. So, if the account access fix doesn't work, you may be using an alternate key store and a reset may be in order.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/11887",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "15"
} |
Q: OpenID Attribute Exchange - should I use it? My website will be using only OpenID for authentication. I'd like to pull user details down via attribute exchange, but attribute exchange seems to have caused a lot of grief for StackOverflow.
What is the current state of play in the industry? Does any OpenID provider do a decent job of attribute exchange?
Should I just steer away from OpenID attribute exchange altogether?
How can I deal with inconsistent support for functionality?
A: Here on Stack Overflow, we're just using the Simple Registration extension for now, as there were some issues with Attribute Exchange (AX).
The biggest was OpenID Providers (OP) not agreeing on which attribute type urls to use. The finalized spec for AX says that attribute urls should come from http://www.axschema.org/ However, some OPs, especially our favorite http://myopenid.com, recognize other urls. I wasn't going to keep a list of which ones were naughty and which were nice!
The other problem was that most of the OPs I tried just didn't return information when queried with AX - I might have been doing something wrong (happens quite frequently :) ), but I had made relevant details public on my profiles and we're using the latest, most excellent .NET library, DotNetOpenId.
We'll definitely revisit AX here on Stack Overflow when we get a little more time, as a seamless user experience is very important to us!
A: While Attribute Exchange has its problems (I'm sure someone from SO can tell you more), it does have a lot of benefits. To some extent it depends on whether you really need it or not. Simple Registration seems to do that job, and it might make sense to just ask the user for certain values. Use common sense and don't get stuck shoving everything down the One True Way.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/11903",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10"
} |
Q: RSS Feeds in ASP.NET MVC How would you recommend handling RSS Feeds in ASP.NET MVC? Using a third party library? Using the RSS stuff in the BCL? Just making an RSS view that renders the XML? Or something completely different?
A: I got this from Eran Kampf and a Scott Hanselman vid (forgot the link), so it's only slightly different from some other posts here, but hopefully helpful and copy-paste ready as an example RSS feed.
From my blog
Eran Kampf
using System;
using System.Collections.Generic;
using System.ServiceModel.Syndication;
using System.Web;
using System.Web.Mvc;
using System.Xml;
namespace MVC3JavaScript_3_2012.Rss
{
public class RssFeed : FileResult
{
private Uri _currentUrl;
private readonly string _title;
private readonly string _description;
private readonly List<SyndicationItem> _items;
public RssFeed(string contentType, string title, string description, List<SyndicationItem> items)
: base(contentType)
{
_title = title;
_description = description;
_items = items;
}
protected override void WriteFile(HttpResponseBase response)
{
var feed = new SyndicationFeed(title: this._title, description: _description, feedAlternateLink: _currentUrl,
items: this._items);
var formatter = new Rss20FeedFormatter(feed);
using (var writer = XmlWriter.Create(response.Output))
{
formatter.WriteTo(writer);
}
}
public override void ExecuteResult(ControllerContext context)
{
_currentUrl = context.RequestContext.HttpContext.Request.Url;
base.ExecuteResult(context);
}
}
}
And the Controller Code....
[HttpGet]
public ActionResult RssFeed()
{
var items = new List<SyndicationItem>();
for (int i = 0; i < 20; i++)
{
var item = new SyndicationItem()
{
Id = Guid.NewGuid().ToString(),
Title = SyndicationContent.CreatePlaintextContent(String.Format("My Title {0}", Guid.NewGuid())),
Content = SyndicationContent.CreateHtmlContent("Content The stuff."),
PublishDate = DateTime.Now
};
item.Links.Add(SyndicationLink.CreateAlternateLink(new Uri("http://www.google.com")));//Nothing alternate about it. It is the MAIN link for the item.
items.Add(item);
}
return new RssFeed(title: "Greatness",
items: items,
contentType: "application/rss+xml",
description: String.Format("Sooper Dooper {0}", Guid.NewGuid()));
}
A: Here is what I recommend:
*
*Create a class called RssResult that inherits off the abstract base class ActionResult.
*Override the ExecuteResult method.
*ExecuteResult has the ControllerContext passed to it by the caller and with this you can get the data and content type.
*Once you change the content type to rss, you will want to serialize the data to RSS (using your own code or another library) and write to the response.
*Create an action on a controller that you want to return rss and set the return type as RssResult. Grab the data from your model based on what you want to return.
*Then any request to this action will receive rss of whatever data you choose.
That is probably the quickest and most reusable way of returning RSS as a response to a request in ASP.NET MVC.
A: I agree with Haacked. I am currently implementing my site/blog using the MVC framework and I went with the simple approach of creating a new View for RSS:
<%@ Page ContentType="application/rss+xml" Language="C#" AutoEventWireup="true" CodeBehind="PostRSS.aspx.cs" Inherits="rr.web.Views.Blog.PostRSS" %><?xml version="1.0" encoding="utf-8"?>
<rss version="2.0">
<channel>
<title>ricky rosario's blog</title>
<link>http://<%= Request.Url.Host %></link>
<description>Blog RSS feed for rickyrosario.com</description>
<lastBuildDate><%= ViewData.Model.First().DatePublished.Value.ToUniversalTime().ToString("r") %></lastBuildDate>
<language>en-us</language>
<% foreach (Post p in ViewData.Model) { %>
<item>
<title><%= Html.Encode(p.Title) %></title>
<link>http://<%= Request.Url.Host + Url.Action("ViewPostByName", new RouteValueDictionary(new { name = p.Name })) %></link>
<guid>http://<%= Request.Url.Host + Url.Action("ViewPostByName", new RouteValueDictionary(new { name = p.Name })) %></guid>
<pubDate><%= p.DatePublished.Value.ToUniversalTime().ToString("r") %></pubDate>
<description><%= Html.Encode(p.Content) %></description>
</item>
<% } %>
</channel>
</rss>
For more information, check out (shameless plug) http://rickyrosario.com/blog/creating-an-rss-feed-in-asp-net-mvc
A: The .NET framework exposes classes that handle syndication: SyndicationFeed etc.
So instead of doing the rendering yourself or using some other suggested RSS library why not let the framework take care of it?
Basically you just need the following custom ActionResult and you're ready to go:
public class RssActionResult : ActionResult
{
public SyndicationFeed Feed { get; set; }
public override void ExecuteResult(ControllerContext context)
{
context.HttpContext.Response.ContentType = "application/rss+xml";
Rss20FeedFormatter rssFormatter = new Rss20FeedFormatter(Feed);
using (XmlWriter writer = XmlWriter.Create(context.HttpContext.Response.Output))
{
rssFormatter.WriteTo(writer);
}
}
}
Now in your controller action you can simply return the following:
return new RssActionResult() { Feed = myFeedInstance };
There's a full sample on my blog at http://www.developerzen.com/2009/01/11/aspnet-mvc-rss-feed-action-result/
A: Another crazy approach, but one that has its advantages, is to use a normal .aspx view to render the RSS. In your action method, just set the appropriate content type. The one benefit of this approach is it is easy to understand what is being rendered and how to add custom elements such as geolocation.
Then again, the other approaches listed might be better, I just haven't used them. ;)
| {
"language": "en",
"url": "https://stackoverflow.com/questions/11915",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "115"
} |
Q: How do I get PHP and MySQL working on IIS 7.0? Okay, I've looked all over the internet for a good solution to get PHP and MySQL working on IIS7.0. It's nearly impossible, I've tried it so many times and given up in vain. Please please help by linking some great step-by-step tutorial to adding PHP and MySQL on IIS7.0 from scratch. PHP and MySQL are essential for installing any CMS.
A: Have you taken a look at this:
http://learn.iis.net/page.aspx/246/using-fastcgi-to-host-php-applications-on-iis7/
MySQL should be pretty straight forward.
Let us know what problems you're encountering...
A: I've been given a PHP / MySQL web site that I'm to host with IIS 7.0 on 64-bit Windows Server 2008.
I'm a .NET / MSSQL developer, and am unfamiliar with either PHP or MySQL.
Kev wrote:
Have you taken a look at this…
I don't know if any one implementation of Win64 PHP is more authoratative or popular than another.
I'm going to try following the steps in Kev's Enable FastCGI support in IIS7.0 article with file php-5.2.5-x64-2007-11-12.zip from fusion-x lan.
It's "PHP Version 5.2.5 (x64)", but according to php.net, the latest version is PHP 5.2.6. Oh, well.
*
*Make sure "ISAPI Extensions" are installed in IIS (mine were).
*Download and then unzip php-5.2.5-x64-2007-11-12.zip
*Copy contents of folder php-5.2.5 (x64) into *C:\php*
*Copy file C:\php\php.ini-dist into folder *C:\Windows*
*Rename file C:\Windows\php.ini-dist as php.ini
*Edit php.ini in Notepad. Remove leading semi-colon (;) from line:
;extension=php_mysql.dll
*Save and close
*Copy file C:\php\ext\php_mysql.dll into folder *C:\Windows\System32*
*Within IIS Manager's "Handler Mappings", choose "Add Script Map…"
Request path: *.php
Executable: C:\php\php5isapi.dll
Name: PHP
*Install MySQL (someone had already installed MySQL 5.0 for me).
*Create file C:\inetpub\wwwroot\test.php as
<html>
<head>
<title>PHP Information</title>
</head>
<body>
<?php phpInfo(); ?>
</body>
</html>
*Navigate to http://localhost/test.php in your web browser. You will see a page of information about PHP.
Roadblock: How do I get PHP to work with ADOdb and MySQL?
A: It's supposed to work via FastCGI. But I haven't had great success (using Vista). I can get PHP to run, but it crashes after a page loads (FastCGI does). So I'm modding you up. I'd like to see a reliable answer myself.
A: From my experience with Windows/Apache, it's just a matter of installing MySQL; I can't imagine that IIS/Apache has anything to do with this.
A: Apache is a major pain to get running in Vista. And IIS7 (and 6) are supposed to run PHP fine. So why bother with Apache?
A: I would suggest if you are going for a PHP and MySQL install to instead use WAMP. It works great and is easy to add extensions and modify everything. I use it for work and love it.
A: One of the IIS developers has an excellent walkthrough here:
http://blogs.iis.net/bills/archive/2006/10/31/PHP-on-IIS.aspx
However, for the love of god why?
| {
"language": "en",
"url": "https://stackoverflow.com/questions/11919",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10"
} |
Q: ASP.Net MVC route mapping I'm new to MVC (and ASP.Net routing). I'm trying to map *.aspx to a controller called PageController.
routes.MapRoute(
"Page",
"{name}.aspx",
new { controller = "Page", action = "Index", id = "" }
);
Wouldn't the code above map *.aspx to PageController? When I run this and type in any .aspx page I get the following error:
The controller for path '/Page.aspx' could not be found or it does not implement the IController interface.
Parameter name: controllerType
Is there something I'm not doing here?
A: I just answered my own question. I had the routes backwards (Default was above page). Below is the correct order. So this brings up the next question... how does the "Default" route match (I assume they use regular expressions here) the "Page" route?
routes.MapRoute(
"Page",
"{Name}.aspx",
new { controller = "Page", action = "Display", id = "" }
);
routes.MapRoute(
"Default", // Route name
"{controller}/{action}/{id}", // URL with parameters
new { controller = "Home", action = "Index", id = "" } // Parameter defaults
);
A:
I just answered my own question. I had
the routes backwards (Default was
above page).
Yeah, you have to put all custom routes above the Default route.
So this brings up the next question...
how does the "Default" route match (I
assume they use regular expressions
here) the "Page" route?
The Default route matches based on what we call Convention over Configuration. Scott Guthrie explains it well in his first blog post on ASP.NET MVC. I recommend that you read through it and also his other posts. Keep in mind that these were posted based on the first CTP and the framework has changed. You can also find web cast on ASP.NET MVC on the asp.net site by Scott Hanselman.
*
*http://weblogs.asp.net/scottgu/archive/2007/11/13/asp-net-mvc-framework-part-1.aspx
*http://www.asp.net/MVC/
A: On one of Rob Conery's MVC Storefront screencasts, he encounters this exact issue. It's at around the 23 minute mark if you're interested.
A: Not sure how your controller looks; the error seems to point to the fact that it can't find the controller. Did you inherit off of Controller after creating the PageController class? Is the PageController located in the Controllers directory?
Here is my route in the Global.asax.cs
routes.MapRoute(
"Page",
"{Page}.aspx",
new { controller = "Page", action = "Index", id = "" }
);
Here is my controller, which is located in the Controllers folder:
using System.Web.Mvc;
namespace MvcApplication1.Controllers
{
public class PageController : Controller
{
public void Index()
{
Response.Write("Page.aspx content.");
}
}
}
A: public class AspxRouteConstraint : IRouteConstraint
{
#region IRouteConstraint Members
public bool Match(HttpContextBase httpContext, Route route, string parameterName, RouteValueDictionary values, RouteDirection routeDirection)
{
return values["aspx"].ToString().EndsWith(".aspx");
}
#endregion
}
Register the route for all .aspx requests:
routes.MapRoute("all",
"{*aspx}",//catch all url
new { Controller = "Page", Action = "index" },
new AspxRouteConstraint() //return true when the url is end with ".aspx"
);
And you can test the routes with MvcRouteVisualizer
| {
"language": "en",
"url": "https://stackoverflow.com/questions/11926",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13"
} |
Q: How can I determine the IP of my router/gateway in Java? How can I determine the IP of my router/gateway in Java? I can get my IP easily enough. I can get my internet IP using a service on a website. But how can I determine my gateway's IP?
This is somewhat easy in .NET if you know your way around. But how do you do it in Java?
A: On windows parsing the output of IPConfig will get you the default gateway, without waiting for a trace.
A: String gateway = "";
try {
    Process result = Runtime.getRuntime().exec("netstat -rn");
    BufferedReader output = new BufferedReader(new InputStreamReader(result.getInputStream()));
    String line = output.readLine();
    // the default route line starts with "default" (OSX/Linux) or "0.0.0.0" (Windows/Linux)
    while (line != null) {
        if (line.trim().startsWith("default") || line.trim().startsWith("0.0.0.0"))
            break;
        line = output.readLine();
    }
    if (line == null) // gateway not found
        return;
    StringTokenizer st = new StringTokenizer(line);
    st.nextToken();
    st.nextToken(); // skip the first two fields; this assumes the gateway is the third one
    gateway = st.nextToken();
    System.out.println("gateway is: " + gateway);
} catch (Exception e) {
    System.out.println(e.toString());
}
A: On Windows, OSX, Linux, etc., Chris Bunch's answer can be much improved by using
netstat -rn
in place of a traceroute command.
Your gateway's IP address will appear in the second field of the line that starts either default or 0.0.0.0.
This gets around a number of problems with trying to use traceroute:
*
*on Windows traceroute is actually tracert.exe, so there's no need for O/S dependencies in the code
*it's a quick command to run - it gets information from the O/S, not from the network
*traceroute is sometimes blocked by the network
The only downside is that it will be necessary to keep reading lines from the netstat output until the right line is found, since there'll be more than one line of output.
EDIT: The Default Gateway's IP Address is in the second field of the line that starts with 'default' if you are on a MAC (tested on Lion), or in the third field of the line that starts with '0.0.0.0' (tested on Windows 7)
Windows:
Network Destination Netmask Gateway Interface Metric
0.0.0.0 0.0.0.0 192.168.2.254 192.168.2.46 10
Mac:
Destination Gateway Flags Refs Use Netif Expire
default 192.168.2.254 UGSc 104 4 en1
A: You may be better off using something like checkmyip.org, which will determine your public IP address - not necessarily your first hop router: at Uni I have a "real" IP address, whereas at home it is my local router's public IP address.
You can parse the page that returns, or find another site that allows you to just get the IP address back as the only string.
(I'm meaning load this URL in Java/whatever, and then get the info you need).
This should be totally platform independent.
A: Regarding UPnP: be aware that not all routers support UPnP. And if they do it could be switched off (for security reasons). So your solution might not always work.
You should also have a look at NatPMP.
A simple library for UPnP can be found at http://miniupnp.free.fr/, though it's in C...
A: To overcome the issues mentioned with traceroute (ICMP-based, wide area hit) you could consider:
*
*traceroute to your public IP (avoids wide-area hit, but still ICMP)
*Use a non-ICMP utility like ifconfig/ipconfig (portability issues with this though).
*What seems the best and most portable solution for now is to shell & parse netstat (see the code example here)
A: The output of netstat -rn is locale-specific.
On my system (locale=de) the output looks like:
...
Standardgateway: 10.22.0.1
So there is no line starting with 'default'.
So using netstat might not be a good idea.
A: This version connects to www.whatismyip.com, reads the content of the site, searches for the IP address via regular expressions, and prints it to the console. It's a slight improvement on MosheElisha's code.
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.URL;
import java.util.regex.Matcher;
import java.util.regex.Pattern;
public class Main {
public static void main(String[] args) {
BufferedReader buffer = null;
try {
URL url = new URL(
"http://www.whatismyip.com/tools/ip-address-lookup.asp");
InputStreamReader in = new InputStreamReader(url.openStream());
buffer = new BufferedReader(in);
String line = buffer.readLine();
Pattern pattern = Pattern
.compile("(.*)value=\"(\\d+).(\\d+).(\\d+).(\\d+)\"(.*)");
Matcher matcher;
while (line != null) {
matcher = pattern.matcher(line);
if (matcher.matches()) {
line = matcher.group(2) + "." + matcher.group(3) + "."
+ matcher.group(4) + "." + matcher.group(5);
System.out.println(line);
}
line = buffer.readLine();
}
} catch (IOException e) {
e.printStackTrace();
} finally {
try {
if (buffer != null) {
buffer.close();
}
} catch (IOException e) {
e.printStackTrace();
}
}
}
}
A: Java doesn't make this as pleasant as other languages, unfortunately. Here's what I did:
import java.io.*;
import java.util.*;
public class ExecTest {
public static void main(String[] args) throws IOException {
Process result = Runtime.getRuntime().exec("traceroute -m 1 www.amazon.com");
BufferedReader output = new BufferedReader(new InputStreamReader(result.getInputStream()));
String thisLine = output.readLine();
StringTokenizer st = new StringTokenizer(thisLine);
st.nextToken();
String gateway = st.nextToken();
System.out.printf("The gateway is %s\n", gateway);
}
}
This presumes that the gateway is the second token and not the third. If it's the third, you need to add an extra st.nextToken(); to advance the tokenizer one more spot.
A: That is not as easy as it sounds. Java is platform independent, so I am not sure how to do it in Java. I am guessing that .NET contacts some web site which reports it back. There are a couple ways to go. First, a deeper look into the ICMP protocol may give you the information you need. You can also trace the IP you go through (your route). When you encounter an IP that is not in the following ranges:
*
*10.0.0.0 – 10.255.255.255
*172.16.0.0 – 172.31.255.255
*192.168.0.0 – 192.168.255.255
it is the IP one hop away from yours, and probably shares a few octets of information with your IP.
Best of luck. I'll be curious to hear a definitive answer to this question.
A: Try shelling out to traceroute if you have it.
'traceroute -m 1 www.amazon.com' will emit something like this:
traceroute to www.amazon.com (72.21.203.1), 1 hops max, 40 byte packets
1 10.0.1.1 (10.0.1.1) 0.694 ms 0.445 ms 0.398 ms
Parse the second line. Yes, it's ugly, but it'll get you going until someone posts something nicer.
A: Matthew: Yes, that is what I meant by "I can get my internet IP using a service on a website." Sorry about being glib.
Brian/Nick: Traceroute would be fine except for the fact that lots of these routers have ICMP disabled and thus it always stalls.
I think a combination of traceroute and uPnP will work out. That is what I was planning on doing, I as just hoping I was missing something obvious.
Thank you everyone for your comments, so it sounds like I'm not missing anything obvious. I have begun implementing some bits of uPnP in order to discover the gateway.
A: You can query the URL "http://whatismyip.com/automation/n09230945.asp".
For example:
BufferedReader buffer = null;
try {
URL url = new URL("http://whatismyip.com/automation/n09230945.asp");
InputStreamReader in = new InputStreamReader(url.openStream());
buffer = new BufferedReader(in);
String line = buffer.readLine();
System.out.println(line);
} catch (IOException e) {
e.printStackTrace();
} finally {
try {
if (buffer != null) {
buffer.close();
}
} catch (IOException e) {
e.printStackTrace();
}
}
A: In windows you can just use the following command:
ipconfig | findstr /i "Gateway"
Which will give you output like:
Default Gateway . . . . . . . . . : 192.168.2.1
Default Gateway . . . . . . . . . : ::
However I can't run this command with Java, gonna post when I figure this out.
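In case it helps while you figure that out, here is a rough, untested sketch of shelling out to ipconfig from Java and scraping the gateway lines. Note the assumption: it matches on the literal "Default Gateway" text, so it only works on English-language Windows installs.
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;

public class IpconfigGateway {
    public static void main(String[] args) throws IOException {
        // Run ipconfig and scan its output for the default gateway entries
        Process process = Runtime.getRuntime().exec("ipconfig");
        BufferedReader reader = new BufferedReader(
                new InputStreamReader(process.getInputStream()));
        String line;
        while ((line = reader.readLine()) != null) {
            // Only matches on English-language Windows
            if (line.trim().startsWith("Default Gateway")) {
                String gateway = line.substring(line.indexOf(':') + 1).trim();
                if (gateway.length() > 0) {
                    System.out.println("Gateway: " + gateway);
                }
            }
        }
        reader.close();
    }
}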
A: You can use the netstat -rn command, which is available on Windows, OS X, Linux, and other platforms. Here is my code:
private String getDefaultAddress() {
String defaultAddress = "";
try {
Process result = Runtime.getRuntime().exec("netstat -rn");
BufferedReader output = new BufferedReader(new InputStreamReader(
result.getInputStream()));
String line = output.readLine();
while (line != null) {
if (line.contains("0.0.0.0")) {
StringTokenizer stringTokenizer = new StringTokenizer(line);
stringTokenizer.nextElement(); // first element is 0.0.0.0
stringTokenizer.nextElement(); // second element is 0.0.0.0
defaultAddress = (String) stringTokenizer.nextElement();
break;
}
line = output.readLine();
} // while
} catch (IOException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
return defaultAddress;
} // getDefaultAddress
A: I'm not sure if it works on every system but at least here I found this:
import java.net.InetAddress;
import java.net.UnknownHostException;
public class Main
{
public static void main(String[] args)
{
try
{
//Variables to find out the Default Gateway IP(s)
String canonicalHostName = InetAddress.getLocalHost().getCanonicalHostName();
String hostName = InetAddress.getLocalHost().getHostName();
//"subtract" the hostName from the canonicalHostName, +1 due to the "." in there
String defaultGatewayLeftover = canonicalHostName.substring(hostName.length() + 1);
//Info printouts
System.out.println("Info:\nCanonical Host Name: " + canonicalHostName + "\nHost Name: " + hostName + "\nDefault Gateway Leftover: " + defaultGatewayLeftover + "\n");
System.out.println("Default Gateway Addresses:\n" + printAddresses(InetAddress.getAllByName(defaultGatewayLeftover)));
} catch (UnknownHostException e)
{
e.printStackTrace();
}
}
//simple combined string out of the address array
private static String printAddresses(InetAddress[] allByName)
{
if (allByName.length == 0)
{
return "";
} else
{
String str = "";
int i = 0;
while (i < allByName.length - 1)
{
str += allByName[i] + "\n";
i++;
}
return str + allByName[i];
}
}
}
For me this produces:
Info:
Canonical Host Name: PCK4D-PC.speedport.ip
Host Name: PCK4D-PC
Default Gateway Leftover: speedport.ip
Default Gateway Addresses:
speedport.ip/192.168.2.1
speedport.ip/fe80:0:0:0:0:0:0:1%12
I'd require more tests on other Systems/Configurations/PC-Gateway-Setups to confirm if it works everywhere. Kind of doubt it but this was the first I found.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/11930",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "17"
} |
Q: How do you log errors (Exceptions) in your ASP.NET apps? I'm looking for the best way to log errors in an ASP.NET application.
I want to be able to receive emails when errors occur in my application, with detailed information about the Exception and the current Request.
In my company we used to have our own ErrorMailer, catching everything in the Global.asax Application_Error. It was "Ok" but not very flexible nor configurable.
We switched recently to NLog. It's much more configurable, we can define different targets for the errors, filter them, buffer them (not tried yet). It's a very good improvement.
But I discovered lately that there's a whole Namespace in the .Net framework for this purpose : System.Web.Management and it can be configured in the healthMonitoring section of web.config.
Have you ever worked with .Net health monitoring? What is your solution for error logging?
A: I've been using Log4net, configured to email details of fatal errors. It's also set up to log everything to a log file, which is invaluable when trying to debug problems. The other benefit is that if that standard functionality doesn't do what you want it to, it's fairly easy to write a custom appender which can process the logging information as required.
Having said that, I'm using this in tandem with a custom error handler which sends out a html email with a bit more information than is included in the standard log4net emails - page, session variables, cookies, http server variables, etc.
These are both wired up in the Application_OnError event, where the exception is logged as a fatal exception in log4net (which then causes it to be emailed to a specified email address), and also handled using the custom error handler.
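For anyone curious, that Application_OnError wiring is only a few lines of C#. This is just a rough sketch, assuming the default Global class name in Global.asax.cs; the error page and message text are placeholders, not part of any standard setup:
// Minimal sketch of logging unhandled exceptions via log4net
private static readonly log4net.ILog Log =
    log4net.LogManager.GetLogger(typeof(Global));

protected void Application_Error(object sender, EventArgs e)
{
    Exception ex = Server.GetLastError();
    if (ex == null) return;

    // Logged as Fatal so an SmtpAppender (if configured) emails it out
    Log.Fatal("Unhandled exception for " + Request.RawUrl, ex);

    // Optionally swallow the error and show a friendly page instead
    // Server.ClearError();
    // Response.Redirect("~/Error.aspx");
}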
First heard about Elmah from the Coding Horror blog entry, Crash Responsibly, and although it looks promising I'm yet to implement it any projects.
A: I use elmah. It has some really nice features and here is a CodeProject article on it. I think the StackOverflow team uses elmah also!
A: I've been using the Enterprise Library's Logging objects. It allows you to have different types of logging (flat file, e-mail, and/or database). It's pretty customizable and has a pretty good interface for updating your web.config for the configuration of the logging. Usually I call my logging from the On Error in the Global.asax.
Here's a link to the MSDN
A: I use log4net, and wherever I expect an exception I log it at the appropriate level. I tend not to re-throw the exception because it doesn't really allow for as nice a user experience; there is less info you can provide at that point.
I'll have Application_Error also configured to catch any exception which was not expected and the error is logged as a Fatal priority through log4net (well, 404's are detected and logged as Info as they aren't that high severity).
A: We use a custom homegrown logging util we wrote. It requires you to implement logging on your own everywhere you need it. But, it also allows you to capture a lot more than just the exception.
For example our code would look like this:
Try
Dim p as New Person()
p.Name = "Joe"
p.Age = 30
Catch ex as Exception
Log.LogException(ex,"Err creating person and assigning name/age")
Throw ex
End Try
This way our logger will write all the info we need to a SQL database. We have email alerts set up at the DB level to look for certain errors or frequently occurring errors. It helps us identify exactly where the errors are coming from.
This might not be exactly what you're looking for. Another approach similar to using Global.asax is to use a code injection technique like AOP with PostSharp. This allows you to inject custom code at the beginning and end of every method or on every exception. It's an interesting approach but I believe it may have a heavy performance overhead.
A: My team uses log4net from Apache. It's pretty lightweight and easy to setup. Best of all, it's completely configurable from the web.config file, so once you've got the hooks in your code setup, you can completely change the way logging is done just by changing the web.config file.
log4net supports logging to a wide variety of locations - database, email, text file, Windows event log, etc. My team has it configured to send detailed error information to a database, and also send an email to the entire team with enough information for us to determine in which part of the code the error originated. Then we know who is responsible for that piece of code, and they can go to the database to get more detailed information.
A: I recently built an asp.net webservice with NLog, which I use for all my desktop apps. The logging works fine when I'm debugging in Visual Studio, but as soon as I switch to IIS the log file isn't created; I've not yet determined why, but the fact that I need to look for a solution makes me want to try something else for my asp.net needs!
A: We use EnterpriseLibrary.ExceptionHandling.Logging. I like it a bit better than log4net because not only do we control the logging completely, but we can control the Throw/NoThrow decision within config as well.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/11950",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "30"
} |
Q: Pay for vmware or use Open Source? What should I use to virtualize my desktop, vmx, xen, or vmware?
Needs to work on a linux or windows host, sorry virtual pc.
@Derek Park: Free as in speech, not beer. I want to be able to make a new virtual machine from my own licensed copies of windows, for that vmware is kind of expensive.
A: @ChanChan, I don't think you can claim to be only interested in freedom when you ask if you should "Pay for vmware." I'm forced to assume you are talking about money there, not about freedom. :p
Nonetheless, I gave you a poor link. VMware Server is free (as in beer) and will run Windows VMs just fine.
For what it's worth, I've also used Xen, and it's perfectly good, too.
Edit: I reread this and it sounds really obnoxious and rude. So, I'd just like to apologize, ChanChan, for not taking more care with my reply. (I would have apologized in a private message, but we don't have those yet.)
A: I've only had experience with VMware ESX, and while it's a fairly expensive product, it is also very powerful. I would definitely recommend it if you have the resources. Depending on your needs, they also have a more basic (and free) version, VMware Server.
A: Try VirtualBox. It's free, open source, and it runs on Windows, Linux, Macintosh and OpenSolaris.
A: I've been using vmware for about 8 years or so. Currently I'm using it on a mac and am very happy with it. I still have my old windows 95 in suspended animation, I boot it up every once in a while to show my kids the awesomeness of 32M and 256 colors.
That being said, you should probably try them out with the particular environment and apps you will be using, and see which one is best for you.
One feature of vmware I really like is the ability to snapshot the system. I do this before every software install, and when one of them goes awry I just revert the virtual box back to the pre-install state. It's great!
A: We have been using VMWare Server in production for 2 years now, and are migrating to ESX next year. For your desktop the free VMWare version will work well for you. There is also a utility to convert an existing machine to a VM slice.
A: I've tried VirtualBox, VMWare Server (free) and Virtual PC. Of the three, VMWare seems to be the fastest. The other two were just too slow for me. The one thing I don't like about VMWare is that you only get one snapshot per vm. Of course, I could get more if I bought the VMWorkstation product but, at $200, it's more than I can afford right now.
A: Um, VMware is free.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/11964",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: What is the best way to upload a file via an HTTP POST with a web form? Basically, something better than this:
<input type="file" name="myfile" size="50">
First of all, the browse button looks different on every browser. Unlike the submit button on a form, you have to come up with some hack-y way to style it.
Secondly, there's no progress indicator showing you how much of the file has uploaded. You usually have to implement some kind of client-side way to disable multiple submits (e.g. change the submit button to a disabled button showing "Form submitting... please wait.") or flash a giant warning.
Are there any good solutions to this that don't use Flash or Java?
Yaakov: That product looks to be exactly what I'm looking for, but the cost is $1000 and its specifically for ASP.NET. Are there any open source projects that cover the same or similar functionality?
A: File upload boxes are where we're currently at if you don't want to involve other technologies like Flash, Java or ActiveX.
With plain HTML you are pretty much limited to the experience you've described (no progress bar, double submits, etc). If you are willing to use some javascript, you can solve some of the problems by giving feedback that the upload is in progress and even showing the upload progress (it is a hack because you shouldn't have to do a full round-trip to the server and back, but at least it works).
If you are willing to use Flash (which is available pretty much anywhere and on many platforms), you can overcome pretty much all of these problems. A quick googling turned up two such components, both of them free and open source. I never used any of them, but they look good. BTW, Flash isn't without its problems either, for example when using the multi-file uploader for slide share, the browser kept constantly crashing on me :-(
Probably the best solution currently is to detect dynamically if the user has Flash, and if it's the case, give her the flash version of the uploader, while still making it possible to choose the basic HTML one.
HTH
A: You could have a look at the Fancy Upload script. Though it uses flash it still looks great.
A: The problem here is that the browsers specifically work to block anything that changes the basic file upload input control. You can't change it with javascript for instance.
The reason is security - if I could script it I could build a page that when you visited it sent me various files from your hard disk. Not nice.
There are various workarounds at the moment, but they're different between IE and FX (I don't know about Safari, Opera, etc).
Look at what http://www.gmail.com does in IE and FX when you attach something to an e-mail.
I want to see that rubbish "Browse" button - it tells me that I'm not letting anything unexpected in.
A: It is true, the file upload control is definitely behind the times. Hopefully this will be addressed in a future asp.net version.
Though it costs some money, I have found the Telerik upload control to have all of the functionality that you are looking for, including styling and progress updates (it also optimizes memory for large uploads).
| {
"language": "en",
"url": "https://stackoverflow.com/questions/11974",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: Handling timezones in storage? Store everything in GMT?
Store everything the way it was entered with an embedded offset?
Do the math every time you render?
Display relative Times "1 minutes ago"?
A: Josh is completely correct above, but I have one subtle caveat to explain. This is a case with no correct answer regarding future events and timezones.
Consider the case of a repeating appointment. It occurs at GMT 0000 (for simplicity), which is 1200 NZST (New Zealand Standard Time) and 1000 AEST in Sydney Australia.
When Daylight Savings comes into effect in one zone, what should occur to the appointment? Should it:
1a. If the TZ change is in the zone of the appointment's "owner" (who booked it) then attempt to remain at the same desk clock time (eg 10:00am)?
1b. If the TZ change is in one of the other meeting attendees' zones then no change.
Consequences: It moves for everyone else, unexpectedly, due to the owner's TZ change, but it stays "the 10am meeting" as far as the owner is concerned.
2. As above, but reversed.
Consequences: It moves for the meeting owner (the 10am meeting becomes the 9am meeting, or v/v), which may be expected but inconvenient. It stays at the same desk clock time for the other attendees until they go through their own TZ transition.
Neither is perfect. Consider the case of two appointments, one booked by Person A that occurs at 10am local time, the other booked by Person B with Person A as an attendee that occurs at 9am. If Person A and Person B are in different TZ's then a DST change could easily cause them to become double-booked.
If your mind is a bit bent at this point then I quite understand.
The point behind this example is that to do either of these behaviors properly you need to know not just the UTC version of the local time, but the TZ (and not the offset) that the owner was in when they booked it. Otherwise you have no choice but to take option 2, silently, without even informing anyone that things have changed since GMT times don't change and only the presentation changes...right? (no, this is the trap, presentation matters when your 10am meeting moves by itself)
I have to credit my colleague and friend Jason Pollock for this insight. Read his view here, and the follow-up discussing iCal and VTIMEZONE here.
A: You have to store in UTC - if you don't, your historic reporting and behaviour during things like Daylight Savings goes... funny. GMT is a local time, subject to Daylight Savings relative to UTC (which is not).
Presentation to users in different time-zones can be a real bastard if you're storing local time. It's easy to adjust to local if your raw data is in UTC - just add your user's offset and you're done!
Joel talked about this in one of the podcasts (in a round-about way) - he said to store your data in the highest resolution possible (search for 'fidelity'), because you can always munge it when it goes out again. That's why I say store it as UTC, as local time you need to adjust for anyone who's not in that timezone, and that's a lot of hard work. And you need to store whether, for example, daylight savings was in effect when you stored the time. Yuk.
Often in databases in the past I've stored two - UTC for sorting, local time for display. That way neither the user nor the computer get confused.
Now, as to display: Sure, you can do the "3 minutes ago" thing, but only if you store UTC - otherwise, data entered in different timezones is going to do things like display as "-4 hours ago", which will freak people out. If you're going to display an actual time, people love to have it in their local time - and if data's being entered in multiple timezones you can only do that with ease if you're storing UTC.
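To illustrate the store-UTC/convert-on-display idea in code, here is a small C# sketch. The time zone ID is just an example, and TimeZoneInfo needs .NET 3.5 - on earlier versions you would have to carry the offsets yourself:
// Minimal sketch: persist UTC, convert to the viewer's zone only when rendering
DateTime createdUtc = DateTime.UtcNow;   // this is what goes into the database

// Later, when displaying to a particular user:
TimeZoneInfo userZone =
    TimeZoneInfo.FindSystemTimeZoneById("New Zealand Standard Time");
DateTime createdLocal = TimeZoneInfo.ConvertTimeFromUtc(createdUtc, userZone);

Console.WriteLine("Stored (UTC): {0:u}", createdUtc);
Console.WriteLine("Displayed:    {0} ({1})", createdLocal, userZone.DisplayName);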
A: Storing everything in GMT/UTC seems most logical to me. You can then show the date and time in every timezone you want.
A few caveats:
*
*If a time is only specified as a wall clock time and that is the leading representation, then it is not an absolutely specified time. You should not (and cannot) convert it to any GMT representation. E.g. 9:00 AM every morning. In other words: this is not a (date)time.
*If you save a date and time of a future appointment, you should use the offset to GMT specified by the input timezone and the moment in time itself. So if it is an appointment in summer made in winter in e.g. western Europe, it is +2:00, although the normal (winter time) offset is +1:00. This will solve the calendar problem that Bwooce mentioned.
*Of course, the same that applies to using the right offset while converting to GMT applies when converting back to a date and time in any particular timezone.
Luckily, when used correctly, the (.NET) DateTime type takes care of all the gory details of keeping calendars etc. for you and all of this should be very easy when you know how it works.
A: The answer, as always, is "depends".
It depends on what you are describing with the time, and how the data was provided to you.
The key to deciding how to store time values is deciding if you are losing information by dropping the timezone, as well as not surprising your users.
There are definite benefits in storing data in a UTC time_t - it is a single int, allowing quick sorting and easy storage.
I see the problem as being broken down into specific areas:
*
*Historical Data
*Future, Short Term Data
*Future, Long Term Data
With the following subclasses on each:
*
*System Provided
*User Provided
Let's look at them one at a time.
System Provided: I would recommend running systems in UTC, then you avoid the timezone problem and again, no information loss is seen (it's always UTC).
Historical Data: These are things like system log files, process statistics, tracing, comment dates/times, etc. The data isn't going to change, and the timezone descriptor isn't going to change retroactively. For this type of data, there is no information lost by storing the information in UTC regardless of the timezone it was provided in. So, drop the timezone.
Future, Long Term Data: These are events that are either far enough in the future or will keep happening. If they are kept around long enough, the timezone descriptors are guaranteed change. A good example of this type of data is, "The Weekly Management Meeting". This is data that is entered once, and expected to keep working into perpetuity. For these values, it is important to determine if it is system or user provided. For user-provided data, the time should be stored with the creator's timezone, anything else results in information loss. This information loss becomes apparent when the timezone definition changes and the time is displayed to the creator as having an entirely different value!
As Bwooce has indicated, there is some confusion where the creator and viewer are in different timezones. In that case, I would expect the application to indicate which time values have moved due to a timezone shift from their previous locations.
Future, Short Term Data: This is data that is quickly going to become historical, or is only valid for a short period of time. Examples could be interval timers, rating transitions, etc. For this data, since the likelihood is low that the definition will change between the creation of the value and the time it becomes historical, it might be possible to get away with dropping the timezone. However, I have found that these values have a bad habit of becoming "Future, Long Term Data".
Once you have decided to store the timezone, care must be taken with how it is stored.
*
*Don't store the timezone as an offset, or the full descriptor.
If you store a timezone as an offset, what do you do if the timezone changes? Do you go through the system and do a blanket change on the existing data? If you do, you've now made any historical values incorrect. Good examples of this fault are Oracle and iCal. Oracle stores timezone information as an offset from UTC, and iCal includes the full descriptor for the creation timezone.
*
*Do store it as a name.
This allows the definition of the timezone to change without having to modify the existing values you have. It does make sorting more difficult, since any index that is generated may be invalid if the timezone data changes.
If developers continue to store everything in UTC, irrespective of timezone, we will continue to see the problems that we've seen with the last batch of timezone changes.
At one organisation, the secretaries had to print out the calendars for their teams before the daylight savings date, and then print them out again after the change. Finally, they compared the two and re-created all of the appointments that had moved. Of course, they missed several, and there were several weeks of pain until the old daylight savings date was reached and the times became correct again.
A: Personally, I can't see any reason not to store everything in GMT and then use the user's local timezone to display the time as it relates to them.
If you want to display relative time, you obviously still need the time and do a translation, but if you do want to do the translation I think GMT is still your best option.
A: So I ran a little experiment with MSSQL server.
I created a table and added a row with the current localized timezone (Australia).
Then I changed my datetime to be GMT and added another row.
Even tho those rows were added around 10 seconds apart, they appear in SQL server as tho they're 10 hours apart.
If nothing else, it at least tells me that I should be storing dates in a consistent manner, which, for me, adds weight to the argument for storing them as GMT.
A: MS Dynamics stores GMT and then at a user level knows your time zone relative to GMT. Then it displays items to you in your time zone.
Just thought I'd throw that out there as that's a pretty big group at MS and this is how they decided to handle it.
A: I prefer to store everything with the timezone.
The client can decide which way it should be presented later.
My favorite library for doing the conversion is the PostgreSQL database.
A: Have a look here, the w3c have done an excellent job answering the question.
Look at the use cases.
http://www.w3.org/TR/timezone/
Note that they recommend storing datetimes as UTC not GMT, GMT is subject to daylight savings time.
A: I like storing in GMT and showing only relative ("about 10 seconds ago", "5 months ago"). Users don't need to see actual timestamps for most use cases.
There are certainly exceptions, and an individual application might have many of them, so it can't be a 'one-true-way' answer. Things that need strong audit-ability (e.g. voting), and systems where time is part of the domain of discourse (astronomy, scientific research) might demand true timestamps to be shown to the user.
Most apps, though, are easier to understand with a simple relative time.
A: I usually just use Unix time. Not necessarily future-safe, but it works pretty well.
A: Always store in GMT (or UTC). From there it is easy to convert to any local time zone value.
A: Dates should be stored as UTC UNLESS it is user provided data and you CANNOT know what timezone the user intended that data to be in. Sometimes (very very rarely) you need to just store the hour, minute, second, day, month and year components without any timezone so you can spit it out back to the user. Now for new developers or if you're unsure, store UTC and you will be 99% correct.
But don't be fooled by believing this works 100% of the time for all cases all the time. It does not.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/11975",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "20"
} |
Q: Why does sqlite3-ruby-1.2.2 not work on OS X? I am running
*
*OS X 10.5,
*Ruby 1.8.6,
*Rails 2.1,
*sqlite3-ruby 1.2.2
and I get the following error when trying to rake db:migrate on an app that works fine connected to MySQL.
rake aborted!
no such file to load -- sqlite3/database
A: Looks like there's a bug with 1.2.2. Just roll back to 1.2.1 with:
gem install sqlite3-ruby -v=1.2.1
and that will fix the problem.
A: Jamis has just released 1.2.4, and the comment history on that bug suggests that the fix is in 1.2.3 and later versions. As a quick test, I did the following on an OS X 10.5 box with Ruby 1.8.6:
sudo gem install sqlite3-ruby
(verified version number of 1.2.4)
rails test
(used default database.yml with sqlite3)
cd test
./script/generate model Person name:string
rake db:migrate
Ran fine. The error would have happened when sqlite3 was required before the migration finished, so it looks like they've fixed the issue.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/11986",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: How do you do system integration? I'm curious how different people solve integration of systems. I have a feeling that in recent years more and more work has gone into integrating systems, and that this kind of work will only increase.
I'm wondering whether you solve it by developing your own small services that are then connected, or whether you use some sort of product (WebSphere, BizTalk, Mule etc). I also think it'd be interesting to know how these kinds of solutions are managed and maintained (how do you handle security, instrumentation etc.), what kind of problems you have experienced with your solution, and so on.
A: Wow - OK - I'll have a go at this, but it will be big.
Integration needs to be backed up with a solid understanding by the business of the benefits - get an operating model sorted out - as the business may actually need to standardise instead of integrate, since integration can be costly - it's why most SOAs fail! Enterprise Architecture: Driving Business Benefits from IT
Author(s): Jeanne W. Ross
If integration is needed you then need to settle on the type of integration.
What are the speed and performance metrics?
We have a .NET SOA with a Composite Application that uses BizTalk 2006 and web services with line-of-business applications. Performance of the application at the composite (consuming) end is limited to the speed of the web services (and their implementation) in the line-of-business application! We need a sub-3-second return on results - a list of cases. This could not be achieved in the web services, so we need to go to the database directly for the initial search, then over the web services for case creation. Cost implications and maintenance become an issue here.
The point here is to look at the performance criteria in the specs and business requirements - this will help when looking at the type of integration that you need to do - web services (HTTP), file drop/EDI etc.
Functionally, for integration you then need to look at the points of failure in the proposed architecture - as this will lead to a chain of responsibility in SLA/OLA. You may need to wrap the integration/failure points into things that you control.
A similar point about integration with line-of-business applications is how much you need to know about the other product before you can integrate. Yes, web services are supposed to be design-by-contract, but the implementation is often leaky and you need to understand a lot about what is happening - and if this is a product that you don't control, the abstraction, even with web services, leaks into your integration technology, aka BizTalk.
Couple these two points together and the best advice is to get an integration hub like BizTalk - wrap the line-of-business applications in web services you create - so the BizTalk side can be free from leaky abstractions. You can then also reduce the points of failure, as you have decoupled the line-of-business application from the integration hub and narrowed the point of failure to a single source rather than inside an orchestration.
Instrumentation and diagnostics in SOA and integration projects are hard to achieve! Don't let any shiny salesperson try and tell you differently! Yeah, MOM with MOM Ent can do this, UniCenter can do blah.
The main problem is understanding what the errors (aka burps) in the integration mean and how to recover from them... You end up with messages stuck, and you need to understand what that means to that business process. You can get an alert to say processors are at 100%, RAM at 100%, orchestrations have failed - but with no real meaning. You have to engineer this stuff into the solution from the outset - and hopefully into your points of failure.
Types of integration patterns and how to do them need to be considered too.
The above is a real-world view of a .NET SOA with BizTalk in a LIVE implementation. But it is also due to the architectural limitations of this - BizTalk is mainly a hub-and-spoke pattern.
Check out Enterprise Application Patterns by Martin Fowler
There are many ways to skin the task!
Other considerations... Platform/Developer Languages etc.
One of the big factors for us was the skills needed to start this stuff. We had OO devs with Java and C# understanding, but mainly C#. So we went for the MS stack. But when you choose the integration type and the product to manage it, people will need more skills in understanding that technology. But hey, this is normal for us devs, right? Wrong - many developers, regardless of their experience, can come unstuck with the likes of BizTalk! It's a big shift in paradigm - which in part is due to the shift to messaging rather than code.
Best bit for last!
The number of transactions likely to be faced in the integration is probably the single biggest factor in all of this, as this will guide the pattern, the points of failure, and the tolerance for such things.
You need to select the right one based on anticipated volumes. Something that can scale up and scale out! We selected BizTalk since it can scale up and scale out correctly and with better understanding than some others.
If you don't have the volumes then look at not getting something to manage them, and go for a web-service-to-web-service style with no management - performance and failure understanding will need to be coded into them.
If you're on the Windows platform with .NET 3, take a look at WF/WCF as this can help with web-service-to-web-service scenarios - there's lots more in the actual platform now for all these concerns, without the overhead of BizTalk and others.
Hope this helps.
A: In my experience it depends on what kind of problem you are tackling.
In my experience it's difficult to beat BizTalk 2006 R2 for bang for the buck but it does imply the use of a Microsoft technology stack.
Websphere MQ seems to be an easier sell to larger corporates and it has probably seen greater use at the enterprise level.
Both provide good instrumentation but it's really up to you as a developer to customize this to suit your customer's requirements.
In some cases I've found that a bespoke solution is most appropriate, or have leveraged technologies such as MSMQ to keep costs down.
A: You mentioned WebSphere, BizTalk, Mule. Each of which has very different characteristics with its good and bad points.
If it's just integration you are after, I recommend Mule. I had a very good experience with it and, more importantly, the architecture is non-invasive, so you could always migrate to a different ESB or other buzzword-compliant solution.
One of the sweet spots of Mule is that it can be embedded within your application, and your final artifact can be deployed on WebSphere, WLS, Glassfish etc... without even showing you embedded an ESB. This ESB can then perform all the integration plumbing (managing connection types and protocols), whereas some of the endpoints could be the other integration solutions you mentioned.
A: We have been using Mule for a while (and are now investigating migration from version 1.4 to 2.1.x).
Well, it's one of the best ESBs, with a lively community and quick reactions from the vendor side, but IMO version 2.1.x is still a bit raw (or we are the only company that uses it for calling CXF web services :) see also my post for details http://www.nabble.com/Migration-from-XFire-to-CXF:-Is-Web-Service-Client-in-Mule-2.x-broken--to19969320.html#a19969320)
A: We have an Oracle contract, so we are using the Oracle stack: SOA Suite 10.1.3.4. Mostly BPEL solutions, and for simple transformations we try to use the ESB.
The ESB has a bad fault handling mechanism. For BPEL there are many ways to handle errors. We try to develop Java web services to connect to the SOA Suite, and our main systems are Oracle EBS systems. They communicate with legacy systems or other EBS environments through the default EBS adapters that are shipped with the SOA Suite.
The problems we encountered stem from the lack of knowledge about the EBS adapters. We encountered some problems with a BPEL solution that received information from the EBS systems. It was a hell of a job to get the solution production-ready.
Securing our web services isn't a big issue. With the Oracle stack comes the Oracle Web Service Manager. With that we can secure, log, etc. all the web services.
The biggest problem we encounter is the lack of our own standards, and getting the business to feel that they can also build SOA solutions. We can't explain the benefits they get with a SOA solution. Faster? No! Cheaper? Hell no! Easier solutions? Well, maybe once we get good reusable services... well, that easier part has a problem within it: how do we know which applications use the reusable web services?
We need a register that can display this kind of information. Because we can't find a good open source solution, we are trying to build our own register - a simple APEX solution, again from the Oracle stack. ;)
So does anyone know a good product to register this kind of information?
| {
"language": "en",
"url": "https://stackoverflow.com/questions/12008",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: Piping password to smbpasswd How can I pipe the new password to smbpasswd so I can automate my installation process.
A: Thanks to Mark I found the answer:
(echo newpassword; echo confirmNewPassword) | smbpasswd -s
BTW: (echo oldpasswd; echo newpasswd) | smbpasswd -s does not work.
A: Use this
echo 'somepassword' | tee - | smbpasswd -s
A: I use the following in one of my scripts:
echo -ne "$PASS\n$PASS\n" | smbpasswd -a -s $LOGIN
With echo:
-e : escape sequences, like \n
-n : don't add implicit newline at end
With smbpasswd:
-a : add new user
-s : silent
A: I had to create a new Samba user in a Puppet 5.x Exec resource and for various reasons none of the above worked. Fortunately this rather silly-looking command worked:
yes vagrant|head -n 2|smbpasswd -a -s vagrant
Password here is of course "vagrant".
A: Try something like this:
(echo oldpasswd; echo newpasswd) | smbpasswd -s
A: This unfortunately is not desirable for two reasons:
1) if the user uses a combination of '\n' in the password there will be a mismatch in the input
2) if there are unix users on the system, then a user using the utility ps may see the password
A better way would be to put the names in a file, read them from the file, and use Python pexpect to enter them - not like below, but the simple script is enough to see how to use pexpect:
#!/usr/bin/python
#converted from: http://pexpect.sourceforge.net/pexpect.html
#child = pexpect.spawn('scp foo myname@host.example.com:.')
#child.expect ('Password:')
#child.sendline (mypassword)
import pexpect
import sys
user=sys.argv[1]
passwd=sys.argv[2]
child = pexpect.spawn('/usr/bin/smbpasswd -a '+str(user))
child.expect('New SMB password:')
child.sendline (passwd)
child.expect ('Retype new SMB password:')
child.sendline (passwd)
then try: ./smbpasswd.py userName1 'f#@(&*(_\n895'
A: using either pipelines or redirection.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/12009",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "35"
} |
Q: "Symbols can not be loaded" when trying to read dump I have an application that sometimes causes a BSOD on a Win XP machine. Trying to find out more, I loaded up the resulting *.dmp file (from C:\Windows\Minidump), but get this message when in much of the readout when doing so:
*********************************************************************
* Symbols can not be loaded because symbol path is not initialized. *
* *
* The Symbol Path can be set by: *
* using the _NT_SYMBOL_PATH environment variable. *
* using the -y <symbol_path> argument when starting the debugger. *
* using .sympath and .sympath+ *
*********************************************************************
What does this mean, and how do I "fix" it?
A: Quick answer is to
c:\> set _NT_SYMBOL_PATH=SRV*C:\WINDOWS\Symbols*http://msdl.microsoft.com/download/symbols
before starting windbg.
A: Quicker answer:
!symfix
But it only affects the current windbg/ntsd/cdb/kd.
A: you actually need to either download the symbols to your computer, or configure it to download as you go if you are online while debugging.
Here's the link that talks about this in detail: http://www.microsoft.com/whdc/DevTools/Debugging/debugstart.mspx
A: I usually go to the System control panel, then Advanced tab, then Environment. You can then add the requisite _NT_SYMBOL_PATH variable. Then you don't have to do anything on the command-line before running WinDbg.
The setting of srv*C:\Windows\Symbols*http://msdl.microsoft.com/download/symbols as suggested by staffan is fine. I usually prefer to use my own profile for storing symbols though (so that I don't need to edit the permissions for C:\Windows\Symbols, since I intentionally run as a limited user, for good security hygiene). Thus (in my case) my _NT_SYMBOL_PATH is srv*C:\Documents and Settings\cky\symbols*http://msdl.microsoft.com/download/symbols.
Hope this helps. :-)
A: As @Vaibhav noted, you actually need to download the symbols and configure windbg to use them.
Also note the following:
!sym noisy -- Activates noisy symbol loading
lm v -- Use with "m" parameter to look at information for a loaded module.
lme D sm - List all modules w/o symbols.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/12039",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
} |
Q: Unit testing a timer based application? I am currently writing a simple, timer based mini app in C# that performs an action n times every k seconds.
I am trying to adopt a test driven development style, so my goal is to unit test all parts of the app.
So, my question is: Is there a good way to unit test a timer based class?
The problem, as I see it, is that there is a big risk that the tests will take uncomfortably long to execute, since they must wait so and so long for the desired actions to happen.
Especially if one wants realistic data (seconds), instead of using the minimal time resolution allowed by the framework (1 ms?).
I am using a mock object for the action, to register the number of times the action was called, and so that the action takes practically no time.
A: I think what I would do in this case is test the code that actually executes when the timer ticks, rather than the entire sequence. What you really need to decide is whether it is worthwhile for you to test the actual behaviour of the application (for example, if what happens after every tick changes drastically from one tick to another), or whether it is sufficient (that is to say, the action is the same every time) to just test your logic.
Since the timer's behaviour is guaranteed never to change, it's either going to work properly (ie, you've configured it right) or not; it seems to me to be wasted effort to include that in your test if you don't actually need to.
A: I agree with Danny insofar as it probably makes sense from a unit-testing perspective to simply forget about the timer mechanism and just verify that the action itself works as expected. I would also say that I disagree in that it's wasted effort to include the configuration of the timer in an automated test suite of some kind. There are a lot of edge cases when it comes to working with timing applications and it's very easy to create a false sense of security by only testing the things that are easy to test.
I would recommend having a suite of tests that runs the timer as well as the real action. This suite will probably take a while to run and would likely not be something you would run all the time on your local machine. But setting these types of things up on a nightly automated build can really help root out bugs before they become too hard to find and fix.
So in short my answer to your question is don't worry about writing a few tests that do take a long time to run. Unit test what you can and make that test suite run fast and often but make sure to supplement that with integration tests that run less frequently but cover more of the application and its configuration.
A: What I have done is to mock the timer, and also the current system time, so that my events could be triggered immediately, but as far as the code under test was concerned the time elapsed was seconds.
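For what it's worth, the usual shape of that trick is to hide the clock behind an interface so a test can jump time forward instantly. A rough C# sketch follows; all the names are made up, not from any particular framework. A test then becomes: create a FakeClock, Advance it by n*k seconds, call Poll(), and assert TimesRun == n - no waiting involved.
// Sketch: abstract the clock so tests never have to wait in real time
public interface IClock
{
    DateTime UtcNow { get; }
}

public class SystemClock : IClock
{
    public DateTime UtcNow { get { return DateTime.UtcNow; } }
}

// Test double - the test advances time explicitly
public class FakeClock : IClock
{
    private DateTime now = new DateTime(2008, 1, 1);
    public DateTime UtcNow { get { return now; } }
    public void Advance(TimeSpan delta) { now = now.Add(delta); }
}

public class RepeatingAction
{
    private readonly IClock clock;
    private readonly TimeSpan interval;
    private DateTime nextDue;
    public int TimesRun { get; private set; }

    public RepeatingAction(IClock clock, TimeSpan interval)
    {
        this.clock = clock;
        this.interval = interval;
        nextDue = clock.UtcNow + interval;
    }

    // In production this is called from the real timer's Elapsed handler;
    // in a test it is called directly after advancing the FakeClock
    public void Poll()
    {
        while (clock.UtcNow >= nextDue)
        {
            TimesRun++;          // the real action would be invoked here
            nextDue += interval;
        }
    }
}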
A: Len Holgate has a series of 20 articles on testing timer based code.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/12045",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "20"
} |
Q: Calling the base constructor in C# If I inherit from a base class and want to pass something from the constructor of the inherited class to the constructor of the base class, how do I do that?
For example, if I inherit from the Exception class I want to do something like this:
class MyExceptionClass : Exception
{
public MyExceptionClass(string message, string extraInfo)
{
//This is where it's all falling apart
base(message);
}
}
Basically what I want is to be able to pass the string message to the base Exception class.
A: Note that you can use static methods within the call to the base constructor.
class MyExceptionClass : Exception
{
public MyExceptionClass(string message, string extraInfo) :
base(ModifyMessage(message, extraInfo))
{
}
private static string ModifyMessage(string message, string extraInfo)
{
Trace.WriteLine("message was " + message);
return message.ToLowerInvariant() + Environment.NewLine + extraInfo;
}
}
A: It is true that you use base(something) to call the base class constructor, but in the case of overloading use the this keyword:
public ClassName() : this(par1, par2)
{
    // do not call the other constructor in the body - it is already invoked via 'this'
    // the 'base' keyword is used to call an inherited constructor
}
// Hint: use overloads as often as needed; do not write the same code 2 or more times
A: public class MyExceptionClass : Exception
{
public MyExceptionClass(string message,
Exception innerException): base(message, innerException)
{
//other stuff here
}
}
You can pass inner exception to one of the constructors.
A: From Framework Design Guidelines and FxCop rules:
1. Custom Exception should have a name that ends with Exception
class MyException : Exception
2. Exception should be public
public class MyException : Exception
3. CA1032: Exception should implement standard constructors.
*
*A public parameterless constructor.
*A public constructor with one string argument.
*A public constructor with one string and Exception (as it can wrap another Exception).
*A serialization constructor protected if the type is not sealed and private if the type is sealed.
Based on MSDN:
[Serializable()]
public class MyException : Exception
{
public MyException()
{
// Add any type-specific logic, and supply the default message.
}
public MyException(string message): base(message)
{
// Add any type-specific logic.
}
public MyException(string message, Exception innerException):
base (message, innerException)
{
// Add any type-specific logic for inner exceptions.
}
protected MyException(SerializationInfo info,
StreamingContext context) : base(info, context)
{
// Implement type-specific serialization constructor logic.
}
}
or
[Serializable()]
public sealed class MyException : Exception
{
public MyException()
{
// Add any type-specific logic, and supply the default message.
}
public MyException(string message): base(message)
{
// Add any type-specific logic.
}
public MyException(string message, Exception innerException):
base (message, innerException)
{
// Add any type-specific logic for inner exceptions.
}
private MyException(SerializationInfo info,
StreamingContext context) : base(info, context)
{
// Implement type-specific serialization constructor logic.
}
}
A: Modify your constructor to the following so that it calls the base class constructor properly:
public class MyExceptionClass : Exception
{
public MyExceptionClass(string message, string extrainfo) : base(message)
{
//other stuff here
}
}
Note that a constructor is not something that you can call anytime within a method. That's the reason you're getting errors in your call in the constructor body.
A: You can also do a conditional check with parameters in the constructor, which allows some flexibility.
public MyClass(object myObject=null): base(myObject ?? new myOtherObject())
{
}
or
public MyClass(object myObject=null): base(myObject==null ? new myOtherObject(): myObject)
{
}
A: As per some of the other answers listed here, you can pass parameters into the base class constructor. It is advised to call your base class constructor at the beginning of the constructor for your inherited class.
public class MyException : Exception
{
public MyException(string message, string extraInfo) : base(message)
{
}
}
I note that in your example you never made use of the extraInfo parameter, so I assumed you might want to concatenate the extraInfo string parameter to the Message property of your exception (it seems that this is being ignored in the accepted answer and the code in your question).
This is simply achieved by invoking the base class constructor, and then updating the Message property with the extra info.
public class MyException: Exception
{
public MyException(string message, string extraInfo) : base($"{message} Extra info: {extraInfo}")
{
}
}
A: Example, using this base class you want to derive from:
public abstract class BaseClass
{
protected BaseClass(int a, int b, int c)
{
}
}
The non-compiling pseudo code you want to execute:
public class DerivedClass : BaseClass
{
private readonly object fatData;
public DerivedClass(int m)
{
var fd = new { A = 1 * m, B = 2 * m, C = 3 * m };
base(fd.A, fd.B, fd.C); // base-constructor call
this.fatData = fd;
}
}
2020 version (see below for even more stringent solution)
Using newer C# features, namely out var, you can get rid of the public static factory-method.
I just found out (by accident) that out var parameters of methods called inside the base "call" flow into the constructor body (maybe it's a C# quirk; see the 2023 version for a C# 1.0 compatible solution).
Using a static private helper method which produces all required base arguments (plus additional data if needed) to the outside it is just a public plain constructor:
public class DerivedClass : BaseClass
{
private readonly object fatData;
public DerivedClass(int m)
: base(PrepareBaseParameters(m, out var b, out var c, out var fatData), b, c)
{
this.fatData = fatData;
Console.WriteLine(new { b, c, fatData }.ToString());
}
private static int PrepareBaseParameters(int m, out int b, out int c, out object fatData)
{
var fd = new { A = 1 * m, B = 2 * m, C = 3 * m };
(b, c, fatData) = (fd.B, fd.C, fd); // Tuples not required but nice to use
return fd.A;
}
}
2023 update
All you need is an additional private constructor and an accompanying private static factory method which prepares the data for the new private constructor using the same input as for the public ctor:
public class DerivedClass : BaseClass
{
private readonly FatData fatData;
public DerivedClass(int m)
: this(PrepareBaseParameters(m))
{
Console.WriteLine(this.fatData.ToString());
}
private DerivedClass(FatData fd)
: base(fd.A, fd.B, fd.C)
{
this.fatData = fd;
}
private static FatData PrepareBaseParameters(int m)
{
// might be any (non-async) code which e.x. calls factory methods
var fd = new FatData(A: 1 * m, B: 2 * m, C: 3 * m);
return fd;
}
private readonly record struct FatData(int A, int B, int C);
}
No special C# version needed, the C#10 record struct just for shortness, will work with any C#1.0 class, too.
This version seems to be slightly longer but it is far easier to read and understand.
A: public class MyException : Exception
{
public MyException() { }
public MyException(string msg) : base(msg) { }
public MyException(string msg, Exception inner) : base(msg, inner) { }
}
A: If you need to call the base constructor, but not right away because your new (derived) class needs to do some data manipulation first, the best solution is to resort to a factory method. What you need to do is mark your derived constructor private, then make a static method in your class that does all the necessary work, then calls the constructor and returns the object.
public class MyClass : BaseClass
{
private MyClass(string someString) : base(someString)
{
//your code goes in here
}
public static MyClass FactoryMethod(string someString)
{
//whatever you want to do with your string before passing it in
return new MyClass(someString);
}
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/12051",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1800"
} |
Q: Should I be worried about obfuscating my .NET code? I'm sure many readers on SO have used Lutz Roeder's .NET reflector to decompile their .NET code.
I was amazed just how accurately our source code could be reconstructed from our compiled assemblies.
I'd be interested in hearing how many of you use obfuscation, and for what sort of products?
I'm sure that this is a much more important issue for, say, a .NET application that you offer for download over the internet as opposed to something that is built bespoke for a particular client.
A: We currently obfuscate all our output, even though we are a small outfit who sells specialist software to a small number of clients.
We made this decision for one simple reason - we discovered a disgruntled ex-employee was actively approaching our clients requesting binaries - there was some concern he was intending to reverse engineer newer features in order to offer competing functionality.
Of course he is still able to do this if he uses the software, but there is no reason to make it easy for him.
A: No new obfuscation, but lots of compiler tricks since 1.1
For instance every time you use an anonymous type you get IL that compiles back with a pretty obscure name. Every time you use yield you get a whole new class that implements both IEnumerable and IEnumerator (clever optimisation, unreadable code). Every time you use an anonymous delegate you get a new method with a name that's invalid in every .Net language that I know of, but that's fine in the IL.
A: @Rob Cooper
Having had some discussions with my
manager at work, he said he doesn't
obfuscate, but does NGEN on install,
apparently that should be enough to
stop Reflector working on your
assemblies, but I have no idea if this
is true and to what extent, so please
don't take it as gospel :)
This doesn't offer any kind of protection against disassembly. First I imagine its quite possible to extract raw files from any installation package like an MSI or a CAB file.
But more importantly, Ngen runs on the client machine after the assembly has been installed. Ngen just forces the assembly to compile now instead of later using the JIT. The original assembly remains and is unmodified and it must remain because Ngen might not be able to compile the entire assembly.
Ngen is for performance, not security, and does nothing to prevent disassembly or make it even slightly more difficult.
A: Easy for me - if you need to protect intellectual property, obfuscate - if not, don't.
Easy to do with the right tools.
A: I wouldn't worry about it too much. I'd rather focus on putting out an awesome product, getting a good user base, and treating your customers right than worry about the minimal percentage of users concerned with stealing your code or looking at the source.
A: I think to some extent we should ALL be worrying about our IP :)
Good question though as its something I am keen to know more about (I currently do not obfuscate).
Having had some discussions with my manager at work, he said he doesn't obfuscate, but does NGEN on install; apparently that should be enough to stop Reflector working on your assemblies, but I have no idea if this is true and to what extent, so please don't take it as gospel :)
Good question :) +1
A: We don't use obfuscation for "non public" applications but we use it for public available applications. The obfuscated app contains plenty of highly sophisticated code which took us an exorbitant amount of time to write and that's the reason that let me think that obfuscation is a must - at least in that case.
A: Remember, obfuscation is not encryption. IMHO, if somebody perceives value in reverse-engineering your code, they will do it. That's true for managed code or native code, obfuscated or not. Sure, obfuscation deters the casual observer, but is your business actually threatened by such people? Every .NET obfuscation method I've seen makes your life as a developer harder.
There are services that offer true encryption, such as SLPS from Microsoft. See http://www.microsoft.com/slps/default.aspx
A: Obfuscation is limited in its effectiveness; it might keep the casual guy away. The most effective obfuscation is making only the smallest amount of code available to the user. If you can, make your app depend heavily on a fat server.
A: Agree, most people who know how to code even a little bit do not need to steal your code!
| {
"language": "en",
"url": "https://stackoverflow.com/questions/12075",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "26"
} |
Q: Do you obfuscate your commercial Java code? I wonder if anyone uses commercial/free java obfuscators on his own commercial product. I know only about one project that actually had an obfuscating step in the ant build step for releases.
Do you obfuscate? And if so, why do you obfuscate?
Is it really a way to protect the code or is it just a better feeling for the developers/managers?
edit: OK, to be exact about my point: Do you obfuscate to protect your IP (your algorithms, the work you've put into your product)? I won't obfuscate for security reasons; that doesn't feel right. So I'm only talking about protecting your application's code against competitors.
@staffan has a good point:
The reason to stay away from chaining code flow is that some of those changes makes it impossible for the JVM to efficiently optimize the code. In effect it will actually degrade the performance of your application.
A: I guess it really comes down to what your Java code is for, how it's distributed and who your clients are. We don't obfuscate anything, as we've never found one that was particularly good and it tends to be more trouble than it's worth. If someone has access to our JAR files and has the knowledge to be able to sniff around inside them, then there's far more worrying things that they can do than rip off our source code.
A: If you do obfuscate, stay away from obfuscators that modify the code by changing code flow and/or adding exception blocks and such to make it hard to disassemble it. To make the code unreadable it is usually enough to just change all names of methods, fields and classes.
The reason to stay away from changing code flow is that some of those changes makes it impossible for the JVM to efficiently optimize the code. In effect it will actually degrade the performance of your application.
A: I think that the old (classical) way of obfuscation is gradually losing its relevance, because in most cases a classical obfuscator breaks the stack trace (which is not good for supporting your clients).
Nowadays the main point is not to protect algorithms, but to protect sensitive data: API logins/passwords/keys, code responsible for licensing (piracy is still here, especially in Western Europe, Russia and Asia, IMHO), advertisement account IDs, etc.
Interesting fact: we keep all this sensitive data in Strings. Actually, Strings are about 50-80% of the logic of our applications.
It seems to me that the future of obfuscation is "String encryption tools".
But now "String encryption" feature is available only in commercial obfuscators, such as: Allatori, Zelix KlassMaster, Smokescreen, Stringer Java Obfuscation Toolkit, DashO.
N.B.
I'm CEO at Licel LLC. Developer of Stringer Java Obfuscator.
A: I use proguard for JavaME development. It's not only very very good at making jar files smaller (Essential for mobile) but it is useful as a nicer way of doing device-specific code without resorting to IDE-unfriendly preprocessing tools such as antenna.
E.g.
public void doSomething()
{
/* Generated config class containing static finals: */
if (Configuration.ISMOTOROLA)
{
System.out.println("This is a motorola phone");
}
else
{
System.out.println("This is not a motorola phone");
}
}
This gets compiled, obfuscated, and the class file ends up as though you had written:
public void doSomething()
{
System.out.println("This is a motorola phone");
}
So you can have variants of code to work around manufacturer bugs in JVM/library implementations without bulking out the final executable class files.
I believe that some commercial obfuscators can also merge class files together in certain cases. This is useful because the more classes you have, the larger the size overhead you have in the zip (jar) file.
A: I spent some time this year trying out various Java obfuscators, and I found one to be miles ahead of the rest: JBCO. It's unfortunately a bit cumbersome to set up, and has no GUI, but in terms of the level of obfuscation it produces, it is unparalleled. You try feeding it a simple loop, and if your decompiler doesn't crash trying to load it, you will see something like this:
if(i < ll1) goto _L6; else goto _L5
_L5:
char ac[] = run(stop(lI1l));
l7 = (long)ac.length << 32 & 0xffffffff00000000L ^ l7 & 0xffffffffL;
if((int)((l7 & 0xffffffff00000000L) >> 32) != $5$)
{
l = (long)III << 50 & 0x4000000000000L ^ l & 0xfffbffffffffffffL;
} else
{
for(l3 = (long)III & 0xffffffffL ^ l3 & 0xffffffff00000000L; (int)(l3 & 0xffffffffL) < ll1; l3 = (long)(S$$ + (int)(l3 & 0xffffffffL)) ^ l3 & 0xffffffff00000000L)
{
for(int j = III; j < ll1; j++)
{
l2 = (long)actionevent[j][(int)(l3 & 0xffffffffL)] & 65535L ^ l2 & 0xffffffffffff0000L;
l6 = (long)(j << -351) & 0xffffffffL ^ l6 & 0xffffffff00000000L;
l1 = (long)((int)(l6 & 0xffffffffL) + j) & 0xffffffffL ^ l1 & 0xffffffff00000000L;
l = (long)((int)(l1 & 0xffffffffL) + (int)(l3 & 0xffffffffL)) << 16 & 0xffffffff0000L ^ l & 0xffff00000000ffffL;
l = (long)ac[(int)((l & 0xffffffff0000L) >> 16)] & 65535L ^ l & 0xffffffffffff0000L;
if((char)(int)(l2 & 65535L) != (char)(int)(l & 65535L))
{
l = (long)III << 50 & 0x4000000000000L ^ l & 0xfffbffffffffffffL;
}
}
}
}
You didn't know Java had goto's? Well, the JVM supports them =)
A: I use ProGuard and highly recommend it. While obfuscation does protect your code from casual attackers, its main benefit is the minimizing effect of removing unused classes and methods and shortening all identifiers to 1 or 2 characters.
A: I think that for the most part obfuscation is pointless: even with full source code it's generally hard enough to figure out what the heck the intention was (assuming there are no comments, and no meaningful names for local variables -- which is the case when re-generating sources from byte code). Obfuscation just decorates the cake.
I think developers and especially their managers tend to greatly over-exaggerate risk of someone seeing the source code. While good decompilers can produce nice looking source code, it's not trivial to work with it, and costs associated (not to mention legal risks) are high enough to make this approach seldom useful. I have only decompiled to debug problems with closed-source vendors' products (deadlocks in DB abstraction layer, ugh).
Bytecode was actually obfuscated, I think, but we nonetheless found the underlying problem -- it was an actual design problem.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/12088",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "51"
} |
Q: Windows Forms Threading and Events - ListBox updates promptly but progressbar experiences huge delay Our team is creating a new recruitment workflow system to replace an old one. I have been tasked with migrating the old data into the new schema. I have decided to do this by creating a small Windows Forms project as the schema are radically different and straight TSQL scripts are not an adequate solution.
The main sealed class 'ImportController' that does the work declares the following delegate event:
public delegate void ImportProgressEventHandler(object sender, ImportProgressEventArgs e);
public static event ImportProgressEventHandler importProgressEvent;
The main window starts a static method in that class using a new thread:
Thread dataProcessingThread = new Thread(new ParameterizedThreadStart(ImportController.ImportData));
dataProcessingThread.Name = "Data Importer: Data Processing Thread";
dataProcessingThread.Start(settings);
The ImportProgressEventArgs carries a string message, a max int value for the progress bar and a current progress int value. The Windows form subscribes to the event:
ImportController.importProgressEvent += new ImportController.ImportProgressEventHandler(ImportController_importProgressEvent);
And responds to the event in this manner using its own delegate:
private delegate void TaskCompletedUIDelegate(string completedTask, int currentProgress, int progressMax);
private void ImportController_importProgressEvent(object sender, ImportProgressEventArgs e)
{
this.Invoke(new TaskCompletedUIDelegate(this.DisplayCompletedTask), e.CompletedTask, e.CurrentProgress, e.ProgressMax);
}
Finally the progress bar and listbox are updated:
private void DisplayCompletedTask(string completedTask, int currentProgress, int progressMax)
{
string[] items = completedTask.Split(new string[] { Environment.NewLine }, StringSplitOptions.RemoveEmptyEntries);
foreach (string item in items)
{
this.lstTasks.Items.Add(item);
}
if (currentProgress >= 0 && progressMax > 0 && currentProgress <= progressMax)
{
this.ImportProgressBar.Maximum = progressMax;
this.ImportProgressBar.Value = currentProgress;
}
}
The thing is, the ListBox seems to update very quickly, but the progress bar never moves until the batch is almost complete anyway... What gives?
A: Maybe you can try the BackgroundWorker component. It makes threading easier. Examples here:
*
*BackgroundWorker Threads and Supporting Cancel
*Using the BackgroundWorker Component in .NET 2 applications
*BackgroundWorker Sample
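A minimal BackgroundWorker sketch of the idea (assuming the form still has the lstTasks ListBox and ImportProgressBar from the question; this is an illustration, not the original import code):
// using System.ComponentModel;
var worker = new BackgroundWorker { WorkerReportsProgress = true };
worker.DoWork += (s, e) =>
{
    // Runs on a thread-pool thread; do the import here and report progress as you go.
    for (int i = 1; i <= 100; i++)
    {
        // ... import one batch ...
        worker.ReportProgress(i, "Imported batch " + i);
    }
};
worker.ProgressChanged += (s, e) =>
{
    // Raised on the UI thread, so no Invoke is needed here.
    lstTasks.Items.Add((string)e.UserState);
    ImportProgressBar.Value = e.ProgressPercentage;
};
worker.RunWorkerAsync();
The DoWork handler replaces the manual Thread, and ProgressChanged replaces the Invoke/delegate plumbing.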
A: Maybe outside of the scope, but sometimes it's useful to do an Application.DoEvents(); to make the GUI parts react to user input, such as pressing the cancel button on a status bar dialog.
A: Do you by any chance run Windows Vista? I've noticed exactly the same thing in some work-related applications. Somehow, there seems to be a delay when the progress bar "animates".
A: @John
Thanks for the links.
@Will
There's no gain from threadpooling as I know it will only ever spawn one thread. The use of a thread is purely to have a responsive UI while SQL Server is being pounded with reads and writes. It's certainly not a short lived thread.
Regarding sledge-hammers, you're right. But, as it turns out, my problem was between screen and chair after all. I seem to have an unusual batch of data that has many, many more foreign key records than the other batches and just happens to get selected early in the process, meaning the currentProgress doesn't get ++'d for a good 10 seconds.
@All
Thanks for all your input, it got me thinking, which got me looking elsewhere in the code, which led to my ahaa moment of humility where I prove yet again the error is usually human :)
A: Are you sure that the UI thread is running freely during all this process? i.e. it's not sitting blocked-up on a Join or some other wait? That's what it looks like to me.
The suggestion of using BackgroundWorker is a good one - definitely superior to trying to sledge-hammer your way out of the problem with a load of Refresh/Update calls.
And BackgroundWorker will use a pool thread, which is a friendlier way to behave than creating your own short-lived thread.
A:
There's no gain from threadpooling as
I know it will only ever spawn one
thread. The use of a thread is purely
to have a responsive UI while SQL
Server is being pounded with reads and
writes. It's certainly not a short
lived thread.
OK, I appreciate that, and glad you found your bug, but have you looked at BackgroundWorker? It does pretty much exactly what you're doing, but in a standardised fashion (i.e. without your own delegates) and without the need to create a new thread - both of which are (perhaps small, but maybe still useful) advantages.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/12095",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: Why doesn't my Perl map return anything? When I am running the following statement:
@filtered = map {s/ //g} @outdata;
it is returning an empty list instead of the filtered list that I expected. What I am trying to do is remove every occurrence of from an array of string (which is an XML file).
Obviously, I am not understanding something. Can anyone tell me the correct way to do this might be, and why this isn't working for me as is?
A: Greg's answer has the problem that it will modify the original array as the $_ are passed aliased. You need:
@filtered = map { (my $new = $_) =~ s/ //g; $new} @outdata;
A: To follow up on Tithonium's point, this will also do the trick:
@filtered = map {local $_=$_; s/ //g; $_} @outdata;
The "local" ensures you're working on a copy, not the original.
A: In perl 5.14 you could use the /r regex modifier to make non-destructive substitution.
@filtered = map {s/ //gr} @outdata;
A: use Algorithm::Loops "Filter";
@filtered = Filter { s/ //g } @outdata;
A: As a counterpoint to Greg's answer, you could misuse grep:
@filtered = grep {s/ //g; 1} @outdata;
Don't do this.
A: Note that map is going to modify your source array as well. So you could either do:
map {s/ //g} @outdata;
and skip the @filtered variable altogether, or if you need to retain the originals,
@filtered = @outdata;
map {s/ //g} @filtered;
Although, in that case, it might be more readable to use foreach:
s/ //g foreach @filtered;
A: Try this:
@filtered = map {s/ //g; $_} @outdata;
The problem is that the s/// operator in Perl modifies $_ but actually returns the number of changes it made. So, the extra $_ at the end causes Perl to return the modified string for each element of @outdata.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/12103",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: Configuring VisualSVN Server to use _svn instead of .svn We were having a problem with our build server not checking out modifications from source control despite recognizing that there had been changes.
It was traced to the control folder (not sure what its real name is); the existing working builds were using _svn. Clearing the working folder forced a new complete checkout and I noticed that now the control folder is .svn. It looks like originally our integration routines were checking out code using _svn but now it is using .svn.
The svn.exe being used during integration is from VisualSVN Server can I set this up to use _svn again?
How the original working copies were using _svn I don't know! - we only ever used VisualSVN Server and haven't changed this.
We had setup TortoiseSVN to use _svn following the recommendation that this works better for Visual Studio and have also installed TortoiseSVN on the build server in case it is ever needed. Could this be the cause?
Also is this really necessary? As MSBuild is Microsoft's is it recommended as it is for Visual Studio?
A: The business about _svn vs. .svn was an issue with Visual Studio web projects only (and I'm fairly sure it was fixed in VS2005 anyway); it's not a general "_svn works better with VS" thing.
It's also only a working-copy issue, not a repository issue - i.e. it doesn't matter if some users of SVN are using clients set up to do _svn and some are using .svn - the repository won't know or care (unless somehow you end up with a load of these _svn/.svn files actually checked into the repository, which would be confusing in the extreme).
Unless you have absolute concrete evidence that .SVN is causing you problems, I would stick with that wherever you can.
A: I've been using .svn with Visual Studio 2008 and 2005 as well as on our CC.Net integration server (with MSBuild) with no issues. I'd stick with the .svn format.
A:
http://subversion.tigris.org/svn_1.3_releasenotes.html
Need to read the "Official support for
Windows '_svn' directories (client and
language bindings)" section
And need to be aware that you're reading documentation which is several years old, a fact which might or might not be pertinent.
A: As far as I know _svn is needed because web applications have problems when one of their directories begins with a dot.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/12107",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: FileNotFoundException for mscorlib.XmlSerializers.DLL, which doesn't exist I'm using an XmlSerializer to deserialize a particular type in mscorlib.dll
XmlSerializer ser = new XmlSerializer( typeof( [.Net type in System] ) );
return ([.Net type in System]) ser.Deserialize( new StringReader( xmlValue ) );
This throws a caught FileNotFoundException when the assembly is loaded:
"Could not load file or assembly
'mscorlib.XmlSerializers,
Version=2.0.0.0, Culture=neutral,
PublicKeyToken=b77a5c561934e089' or
one of its dependencies. The system
cannot find the file specified."
FusionLog:
=== Pre-bind state information ===
LOG: User = ###
LOG: DisplayName = mscorlib.XmlSerializers, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089, processorArchitecture=x86
(Fully-specified)
LOG: Appbase = file:///C:/localdir
LOG: Initial PrivatePath = NULL
Calling assembly : System.Xml, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089.
===
LOG: This bind starts in default load context.
LOG: Using application configuration file: C:\localdir\bin\Debug\appname.vshost.exe.Config
LOG: Using machine configuration file from c:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\config\machine.config.
LOG: Post-policy reference: mscorlib.XmlSerializers, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089, processorArchitecture=x86
LOG: Attempting download of new URL file:///C:/localdir/bin/Debug/mscorlib.XmlSerializers.DLL.
LOG: Attempting download of new URL file:///C:/localdir/bin/Debug/mscorlib.XmlSerializers/mscorlib.XmlSerializers.DLL.
LOG: Attempting download of new URL file:///C:/localdir/bin/Debug/mscorlib.XmlSerializers.EXE.
LOG: Attempting download of new URL file:///C:/localdir/bin/Debug/mscorlib.XmlSerializers/mscorlib.XmlSerializers.EXE.
As far as I know there is no mscorlib.XmlSerializers.DLL; I think the DLL name has been auto-generated by .NET looking for the serializer.
You have the option of creating a myApplication.XmlSerializers.DLL when compiling to optimise serializations, so I assume this is part of the framework's checking for it.
The problem is that this appears to be causing a delay in loading the application - it seems to hang for a few seconds at this point.
Any ideas how to avoid this or speed it up?
A: The delay is because, having been unable to find the custom serializer dll, the system is building the equivalent code (which is very time-consuming) on the fly.
The way to avoid the delay is to have the system build the DLL, and make sure it's available to the .EXE - have you tried this?
A: Okay, so I ran into this problem and have found a solution for it specific to my area.
This occurred because I was trying to serialize a list into an XML document (file) without an XML root attribute. Once I added the following code, the error went away.
XmlRootAttribute rootAttribute = new XmlRootAttribute();
rootAttribute.ElementName = "SomeRootName";
rootAttribute.IsNullable = true;
Dunno if it'll fix your problem, but it fixed mine.
A: I'm guessing now. but:
*
*The system might be generating a serializer for the whole of mscorlib, which could be very slow.
*You could probably avoid this by wrapping the system type in your own type and serialising that instead - then you'd get a serializer for your own assembly.
*You might be able to build the serializer for mscorlib with sgen.exe, which was the old way of building serializer dlls before it got integrated into VS.
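On the last point, sgen is normally run against your own assembly rather than mscorlib; a hedged example (the assembly name is a placeholder, and I haven't verified whether sgen will accept mscorlib itself):
sgen.exe /assembly:MyApplication.exe
If it succeeds, it should produce a MyApplication.XmlSerializers.dll next to the input assembly, which you can then deploy alongside the .EXE.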
| {
"language": "en",
"url": "https://stackoverflow.com/questions/12135",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12"
} |
Q: Access to global application settings A database application that I'm currently working on stores all sorts of settings in the database. Most of those settings are there to customize certain business rules, but there's also some other stuff in there.
The app contains objects that specifically do a certain task, e.g., a certain complicated calculation. Those non-UI objects are unit-tested, but also need access to lots of those global settings. The way we've implemented this right now, is by giving the objects properties that are filled by the Application Controller at runtime. When testing, we create the objects in the test and fill in values for testing (not from the database).
This works better - in any case much better than having all those objects need some global Settings object, which of course effectively makes unit testing impossible :) A disadvantage can be that you sometimes need to set a dozen properties, or that you need to let those properties 'percolate' into sub-objects.
So the general question is: how do you provide access to global application settings in your projects, without the need for global variables, while still being able to unit test your code? This must be a problem that's been solved 100's of times...
(Note: I'm not too much of an experienced programmer, as you'll have noticed; but I love to learn! And of course, I've already done research into this topic, but I'm really looking for some first-hand experiences)
A: You could use Martin Fowlers ServiceLocator pattern. In php it could look like this:
class ServiceLocator {
private static $soleInstance;
private $globalSettings;
public static function load($locator) {
self::$soleInstance = $locator;
}
// Setter used below; it was implied but missing from the original snippet.
public function setGlobalSettings($globalSettings) {
$this->globalSettings = $globalSettings;
}
public static function globalSettings() {
if (!isset(self::$soleInstance->globalSettings)) {
self::$soleInstance->setGlobalSettings(new GlobalSettings());
}
return self::$soleInstance->globalSettings;
}
}
Your production code then initializes the service locator like this:
ServiceLocator::load(new ServiceLocator());
In your test-code, you insert your mock-settings like this:
$s = new ServiceLocator();
$s->setGlobalSettings(new MockGlobalSettings());
ServiceLocator::load($s);
It's a repository for singletons that can be exchanged for testing purposes.
A: I like to model my configuration access off of the Service Locator pattern. This gives me a single point to get any configuration value that I need and by putting it outside the application in a separate library, it allows reuse and testability. Here is some sample code, I am not sure what language you are using, but I wrote it in C#.
First I create a generic class that will models my ConfigurationItem.
public class ConfigurationItem<T>
{
private T item;
public ConfigurationItem(T item)
{
this.item = item;
}
public T GetValue()
{
return item;
}
}
Then I create a class that exposes public static readonly variables for the configuration item. Here I am just reading the ConnectionStringSettings from a config file, which is just xml. Of course for more items, you can read the values from any source.
public class ConfigurationItems
{
public static ConfigurationItem<ConnectionStringSettings> ConnectionSettings = new ConfigurationItem<ConnectionStringSettings>(RetrieveConnectionString());
private static ConnectionStringSettings RetrieveConnectionString()
{
// In .Net, we store our connection string in the application/web config file.
// We can access those values through the ConfigurationManager class.
return ConfigurationManager.ConnectionStrings[ConfigurationManager.AppSettings["ConnectionKey"]];
}
}
Then when I need a ConfigurationItem for use, I call it like this:
ConfigurationItems.ConnectionSettings.GetValue();
And it will return me a type safe value, which I can then cache or do whatever I want with.
Here's a sample test:
[TestFixture]
public class ConfigurationItemsTest
{
[Test]
public void ShouldBeAbleToAccessConnectionStringSettings()
{
ConnectionStringSettings item = ConfigurationItems.ConnectionSettings.GetValue();
Assert.IsNotNull(item);
}
}
Hope this helps.
A: Usually this is handled by an ini file or XML configuration file. Then you just have a class that reads the setting when needed.
.NET has this built in with the ConfigurationManager classes, but it's quite easy to implement yourself: just read text files, or load XML into a DOM, or parse them by hand in code.
Having config files in the database is OK, but it does tie you to the database, and creates an extra dependency for your app that ini/XML files avoid.
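A minimal sketch of the built-in approach (assuming a "GreetingText" key in the appSettings section of app.config and a reference to System.Configuration; the names are just illustrative):
using System.Configuration;

public static class AppConfig
{
    // Reads a value from <appSettings> in app.config/web.config,
    // falling back to a supplied default when the key is missing.
    public static string Get(string key, string fallback)
    {
        string value = ConfigurationManager.AppSettings[key];
        return value ?? fallback;
    }
}
Usage would then look something like: string greeting = AppConfig.Get("GreetingText", "Hello");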
A: I did this:
public class MySettings
{
public static double Setting1
{ get { return SettingsCache.Instance.GetDouble("Setting1"); } }
public static string Setting2
{ get { return SettingsCache.Instance.GetString("Setting2"); } }
}
I put this in a separate infrastructure module to remove any issues with circular dependencies.
Doing this I am not tied to any specific configuration method, and have no strings running havoc in my applications code.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/12140",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: What is the worst database accident that happened to you in production? For example: Updating all rows of the customer table because you forgot to add the where clause.
*
*What was it like, realizing it and reporting it to your coworkers or customers?
*What were the lessons learned?
A: I work for a small e-commerce company; there are 2 developers and a DBA, me being one of the developers. I'm normally not in the habit of updating production data on the fly; if we have stored procedures we've changed, we put them through source control and have an official deployment routine set up.
Well anyways a user came to me needing an update done to our contact database, batch updating a bunch of facilities. So I wrote out the query in our test environment, something like
update facilities set address1 = '123 Fake Street'
where facilityid in (1, 2, 3)
Something like that. Ran it in test, 3 rows updated. Copied it to clipboard, pasted it in terminal services on our production sql box, ran it, watched in horror as it took 5 seconds to execute and updated 100000 rows. Somehow I copied the first line and not the second, and wasn't paying attention as I CTRL + V, CTRL + E'd.
My DBA, an older Greek gentleman, probably the grumpiest person I've met, was not thrilled. Luckily we had a backup and it didn't break any pages, and luckily that field is only really for display purposes (and billing/shipping).
Lesson learned was pay attention to what you're copying and pasting, probably some others too.
A: A junior DBA meant to do:
delete from [table] where [condition]
Instead they typed:
delete [table] where [condition]
Which is valid T-Sql but basically ignores the where [condition] bit completely (at least it did back then on MSSQL 2000/97 - I forget which) and wipes the entire table.
That was fun :-/
A: About 7 years ago, I was generating a change script for a client's DB after working late. I had only changed stored procedures but when I generated the SQL I had "script dependent objects" checked. I ran it on my local machine and all appeared to work well. I ran it on the client's server and the script succeeded.
Then I loaded the web site and the site was empty. To my horror, the "script dependent objects" setting did a DROP TABLE for every table that my stored procedures touched.
I immediately called the lead dev and boss letting them know what happened and asking where the latest backup of the DB could be located. 2 other devs were conferenced in and the conclusion we came to was that no backup system was even in place and no data could be restored. The client lost their entire website's content and I was the root cause. The result was a $5000 credit given to our client.
For me it was a great lesson, and now I am super-cautious about running any change scripts, and backing up DBs first. I'm still with the same company today, and whenever the jokes come up about backups or database scripts someone always brings up the famous "DROP TABLE" incident.
A: Something to the effect of:
update email set processedTime=null,sentTime=null
on a production newsletter database, resending every email in the database.
A: I once managed to write an updating cursor that never exited. On a 2M+ row table. The locks just escalated and escalated until this 16-core, 8GB RAM (in 2002!) box actually ground to a halt (of the blue screen variety).
A: update Customers set ModifyUser = 'Terrapin'
I forgot the where clause - pretty innocent, but on a table with 5000+ customers, my name will be on every record for a while...
Lesson learned: use transaction commit and rollback!
A: We were trying to fix a busted node on an Oracle cluster.
The storage management module was having problems, so we clicked the un-install button with the intention of re-installing and copying the configuration over from another node.
Hmm, it turns out the un-install button applied to the entire cluster, so it cheerfully removed the storage management module from all the nodes in the system.
Causing every node in the production cluster to crash. And since none of the nodes had a storage manager, they wouldn't come up!
Here's an interesting fact about backups... the oldest backups get rotated off-site, and you know what your oldest files on a database are? The configuration files that got set up when the system was installed.
So we had to have the offsite people send a courier with that tape, and a couple of hours later we had everything reinstalled and running. Now we keep local copies of the installation and configuration files!
A: I thought I was working in the testing DB (which wasn't the case apparently), so when I finished 'testing' I run a script to reset all data back to the standard test data we use... ouch!
Luckily this happened on a database that had backups in place, so after figuring out I did something wrong we could easily bring back the original database.
However, this incident did teach the company I worked for to really separate the production and the test environment.
A: I don't remember all the sql statements that ran out of control but I have one lesson learned - do it in a transaction if you can (beware of the big logfiles!).
In production, if you can, proceed the old fashioned way:
*
*Use a maintenance window
*Backup
*Perform your change
*verify
*restore if something went wrong
Pretty uncool, but generally working and even possible to give this procedure to somebody else to run it during their night shift while you're getting your well deserved sleep :-)
A: I did exactly what you suggested. I updated all the rows in a table that held customer documents because I forgot to add the "where ID = 5" at the end. That was a mistake.
But I was smart and paranoid. I knew I would screw up one day. I had issued a "start transaction". I issued a rollback and then checked the table was OK.
It wasn't.
Lesson learned in production: despite the fact we like to use InnoDB tables in MySQL for many MANY reasons... be SURE you haven't managed to find one of the few MyISAM tables that doesn't respect transactions and you can't roll back on. Don't trust MySQL under any circumstances, and habitually issuing a "start transaction" is a good thing. Even in the worst case scenario (what happened here) it didn't hurt anything and it would have protected me on the InnoDB tables.
I had to restore the table from a backup. Luckily we have nightly backups, the data almost never changes, and the table is a few dozen rows so it was near instantaneous. For reference, no one knew that we still had non-InnoDB tables around, we thought we converted them all long ago. No one told me to look out for this gotcha, no one knew it was there. My boss would have done the same exact thing (if he had hit enter too early before typing the where clause too).
A: I think my worst mistake was
truncate table Customers
truncate table Transactions
I didn't see what MSSQL server I was logged into; I wanted to clear my local copy out... The familiar "OH s**t" when it was taking significantly longer than about half a second to delete - my boss noticed I went visibly white, and asked what I just did. About half a minute later, our site monitor went nuts and started emailing us saying the site was down.
Lesson learned? Never keep a connection open to a live DB longer than absolutely needed.
I was only up till 4am restoring the data from the backups too! My boss felt sorry for me, and bought me dinner...
A: I dropped the live database and deleted it.
Lesson learned: ensure you know your SQL - and make sure that you back up before you touch stuff.
A: I discovered I didn't understand Oracle redo log files (terminology? it was a long time ago) and lost a week's trade data, which had to be manually re-keyed from paper tickets.
There was a silver lining - during the weekend I spent inputting, I learned a lot about the usability of my trade input screen, which improved dramatically thereafter.
A: Worst case scenario for most people is production data loss, but if they're not running nightly backups or replicating data to a DR site, then they deserve everything they get!
@Keith in T-SQL, isn't the FROM keyword optional for a DELETE? Both of those statements do exactly the same thing...
A: The worst thing that happened to me was that a production server consumed all the space on the HD. I was using SQL Server, so I looked at the database files and saw that the log was about 10 GB, so I decided to do what I always do when I want to truncate a log file: I detached the database, deleted the log file and then attached it again. Well, I realized that if the log file is not closed properly this procedure does not work, so I ended up with an mdf file and no log file. Thankfully, on the Microsoft site I found a way to restore the database as recovery and move it to another database.
A:
Updating all rows of the customer table because you forgot to add the where clause.
That was exactly what I did :|. I had updated the password column for all users to a sample string I had typed onto the console. The worst part of it was that I was accessing the production server and was checking out some queries when I did this. My seniors then had to revert to an old backup and had to field some calls from some really disgruntled customers. Of course, there was another time when I did use the delete statement, which I don't even want to talk about ;-)
A:
Truncate table T_DAT_STORE
T_DAT_STORE was the fact table of the department I work in. I think I was connected to the development database. Fortunately, we have a daily backup, which hadn't been used until that day, and the data was restored in six hours.
Since then I review everything before a truncate, and periodically I ask for a backup restoration of minor tables just to check the backup is doing well (backups aren't done by my department).
A: This didn't happen to me, just a customer of ours whos mess I had to clean up.
They had a SQL server running on a RAID5 disk array - nice hotswap drives complete with lighted disk status indicators. Green = Good, Red = Bad.
One of their drives turned from green to red, and the genius who was told to pull and replace the (Red) bad drive took a (Green) good one out instead. Well, this didn't quite manage to bring down the RAID set completely - it opted for the somewhat readable (Red) over the unavailable (Green) for several minutes... after realizing the mistake and swapping the drives back, any data blocks that were written during this time became gibberish as disk synchronization was lost... 24 straight hours later, writing meta-programs to recover readable data and reconstruct a medium-sized schema, they were back up and running.
Morals of this story include... never use RAID5, always maintain backups, and be careful who you hire.
I made a major mistake on a customer's production system once -- luckily, while wondering why the command was taking so long to execute, I realized what I had done and canceled it before the world came to an end.
Morals of this story include ... always start a new transaction before changing ANYTHING, test that the results are what you expect, and then and only then commit the transaction.
As a general observation, many classes of rm -rf / type errors can be prevented by properly defining foreign key constraints on your schema and staying far away from any command labeled 'CASCADE'.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/12141",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
} |
Q: Update database schema in Entity Framework I installed VS SP1 and played around with Entity Framework.
I created a schema from an existing database and tried some basic operations.
Most of it went well, except the database schema update.
I changed the database in every basic way:
*
*added a new table
*deleted a table
*added a new column to an existing table
*deleted a column from an existing table
*changed the type of an existing column
The first three went well, but the type change and the column deletion did not follow the database changes.
Is there any way to make it work from the designer? Or is it not supported at the moment? I haven't found any related material yet, but I'm still searching.
A: I would guess that possibly those don't happen because they would break the build for existing code, but that's just a guess on my part.
Here's my logic:
First, EF is supposed to be more than 1:1 table mapping, so it's quite possible that just because you are deleting a column from table A, the entity shouldn't necessarily lose its Description property - you might just map that property to another table.
Second, changing a type could just break builds. That's the only rationale there.
A: I've found that, in general, there are still quite a few bugs with the 'Update Model from Database' functionality.
Keys are the killer for me - I've yet to have any modification I make to a foreign-key relationship or to add a Primary Key to a table and have the updater work correctly (in that it will give a compile error on the generated code) - but to solve the problem it's a simple matter of deleting the model and re-importing (only takes a minute) - this is less than ideal obviously, but I've never had a failure from a 'fresh' import.
A: From the demos of the designer I've seen, it's not a flawless tool. It is a version 1.0 product, so it's bound to have some pain points. The change type is one of them it seems. From watching the designer and the code generation, I figured that one would break either at compile time (not likely) or at run-time (when the model is actually executed).
A: You need to delete the column by yourself from the designer or the XML file.
A: As mentioned before, you can just delete the column from the designer. As far as changing the data type of the column: just refresh the model from the database, then go to the table mappings and select the column that you changed in the DB. The values on the right represent your model; oddly enough this does not get updated automatically, so just select the column to the right, go to properties and change the data type there. It should become a drop-down menu.
Cheers.
Ruddy
A: I built a similar application to the one you describe, but my solution was quite hard.
I will try to explain:
*
*You have to create your own database management classes, and these objects will be responsible for creating and updating the database schema (I created that manually).
*I saw a good article and source code on the ADO.NET Team blog; you can also download EDMTools from that blog - it's open source. You can also take the model generation and update routines from it and implement them in your project.
*Finally, when your schema changes you should recreate and rebind your model and rebuild your data assembly at runtime. But you have to know the most important thing: you should tie your data model assembly to your project in a loosely coupled way (check out this post).
Otherwise, you should wait for the EF 4.0 release (it's at CTP 1 now); they have announced that they will provide create/delete/update DatabaseScript functions.
Good luck
A: The way I'm doing this (and I'm doing all of the things you mention, plus renaming columns) is by making changes to the database and regenerating the EF code using EF Code First.
I'm not tampering with the EF Code First classes for the good or the bad (including nonsensically named columns for relations) to ease the process.
No designer or ORM schema generator will be able to make changes to your production database if it has constrained data in it. This is why you should always start with checking if your changes to the DB are feasible, try them on a development database and then adapt your code to reflect the changes.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/12142",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
} |
Q: Application configuration files OK, so I don't want to start a holy-war here, but we're in the process of trying to consolidate the way we handle our application configuration files and we're struggling to make a decision on the best approach to take. At the moment, every application we distribute is using its own ad-hoc configuration files, whether it's property files (ini style), XML or JSON (internal use only at the moment!).
Most of our code is Java at the moment, so we've been looking at Apache Commons Config, but we've found it to be quite verbose. We've also looked at XMLBeans, but it seems like a lot of faffing around. I also feel as though I'm being pushed towards XML as a format, but my clients and colleagues are apprehensive about trying something else. I can understand it from the client's perspective, everybody's heard of XML, but at the end of the day, shouldn't we be using the right tool for the job?
What formats and libraries are people using in production systems these days, is anyone else trying to avoid the angle bracket tax?
Edit: it really needs to be a cross-platform solution: Linux, Windows, Solaris etc., and the choice of library used to interface with configuration files is just as important as the choice of format.
A: @Guy
But application config isn't always just key/value pairs. Look at something like the tomcat configuration for what ports it listens on. Here's an example:
<Connector port="80" maxHttpHeaderSize="8192"
maxThreads="150" minSpareThreads="25" maxSpareThreads="75"
enableLookups="false" redirectPort="8443" acceptCount="100"
connectionTimeout="20000" disableUploadTimeout="true" />
<Connector port="8009"
enableLookups="false" redirectPort="8443" protocol="AJP/1.3" />
You can have any number of connectors. Define more in the file and more connectors exist. Don't define any more and no more exist. There's no good way (imho) to do that with plain old key/value pairs.
If your app's config is simple, then something simple like an INI file that's read into a dictionary is probably fine. But for something more complex like server configuration, an INI file would be a huge pain to maintain, and something more structural like XML or YAML would be better. It all depends on the problem set.
A: YAML, for the simple reason that it makes for very readable configuration files compared to XML.
XML:
<user id="babooey" on="cpu1">
<firstname>Bob</firstname>
<lastname>Abooey</lastname>
<department>adv</department>
<cell>555-1212</cell>
<address password="xxxx">ahunter@example1.com</address>
<address password="xxxx">babooey@example2.com</address>
</user>
YAML:
babooey:
computer : cpu1
firstname: Bob
lastname: Abooey
cell: 555-1212
addresses:
- address: babooey@example1.com
password: xxxx
- address: babooey@example2.com
password: xxxx
The examples were taken from this page: http://www.kuro5hin.org/story/2004/10/29/14225/062
A: We are using ini style config files. We use the Nini library to manage them. Nini makes it very easy to use. Nini was originally for .NET but it has been ported to other platforms using Mono.
A: XML, JSON, INI.
They all have their strengths and weaknesses.
In an application context, I feel that the abstraction layer is the important thing.
If you can choose a way to structure the data that is a good middle ground between human readability and how you want to access/abstract the data in code, you're golden.
We mostly use XML where I work, and I can't really believe that a configuration file loaded into a cache as objects when first read or after it has been written to, and then abstracted away from the rest of the program, really is that much of a hit on either CPU or disk space.
And it is pretty readable too, as long as you structure the file right.
And all languages on all platforms support XML through some pretty common libraries.
A: First: This is a really big debate issue, not a quick Q+A.
My favourite right now is to simply include Lua, because
*
*I can permit things like width=height*(1+1/3)
*I can make custom functions available
*I can forbid anything else. (impossible in, for instance, Python (including pickles.))
*I'll probably want a scripting language somewhere else in the project anyway.
Another option, if there's a lot of data is to use sqlite3, because they're right to claim
*
*Small.
*Fast.
*Reliable.
Choose any three.
To which I would like to add:
*
*backups are a snap. (just copy the db file.)
*easier to switch to another db, ODBC, whatever. (than it is from fugly-file)
But again, this is a bigger issue. A "big" answer to this probably involves some kind of feature matrix or list of situations like:
Amount of data, or short runtime
*
*For large amounts of data, you might want efficient storage, like a db.
*For short runs (often), you might want something that you don't need to do a lot of parsing for, consider something that can be mmap:ed in directly.
What does the configuration relate to?
*
*Host:
*
*I like YAML in /etc. Is that reimplemented in windows?
*User:
*
*Do you permit users to edit config with text editor?
*Should it be centrally manageable? Registry / gconf / remote db?
*May the user have several different profiles?
*Project:
*
*File(s) in project directory? (Version control usually follows this model...)
Complexity
*
*Are there only a few flat values? Consider YAML.
*Is the data nested, or dependent in some way? (This is where it gets interesting.)
*Might it be a desirable feature to permit some form of scripting?
*Templates can be viewed as a kind of configuration files..
A: XML XML XML XML. We're talking config files here. There is no "angle bracket tax" if you're not serializing objects in a performance-intense situation.
Config files must be human readable and human understandable, in addition to machine readable. XML is a good compromise between the two.
If your shop has people that are afraid of that new-fangled XML technology, I feel bad for you.
A: Without starting a new holy war, the sentiments of the 'angle bracket tax' post is one area where I majorly disagree with Jeff. There's nothing wrong with XML, it's reasonably human readable (as much as YAML or JSON or INI files are) but remember its intent is to be read by machines. Most language/framework combos come with an XML parser of some sort for free which makes XML a pretty good choice.
Also, if you're using a good IDE like Visual Studio, and if the XML comes with a schema, you can give the schema to VS and magically you get intellisense (you can get one for NHibernate for example).
Ultimately you need to think about how often you're going to be touching these files once in production, probably not that often.
This still says it all for me about XML and why it's still a valid choice for config files (from Tim Bray):
"If you want to provide general-purpose data that the receiver might want to do unforeseen weird and crazy things with, or if you want to be really paranoid and picky about i18n, or if what you’re sending is more like a document than a struct, or if the order of the data matters, or if the data is potentially long-lived (as in, more than seconds) XML is the way to go.
It also seems to me that the combination of XML and XPath hits a sweet spot for data formats that need to be extensible; that is to say, it’s pretty easy to write XML-processing code that won’t fail in the presence of changes to the message format that don’t touch the piece you care about."
A: @Herms
What I really meant was to stick to the recommended way software should store configuration values for any given platform.
What you often get then is also the recommended ways these should/can be modified. Like a configuration menu in a program or a configuration panel in a "system prefs" application (for system services software, for instance). Not letting the end users modify them directly via RegEdit or NotePad...
Why?
*
*The end users (=customers) are used to their platforms
*System for backups can better save "safe setups" etc
@ninesided
About " choice of library ", try to link in (static link) any selected library to lower the risk of getting into a version-conflict-war on end users machines.
A: If your configuration file is write-once, read-only-at-bootup, and your data is a bunch of name value pairs, your best choice is the one your developer can get working first.
If your data is a bit more complicated, with nesting etc, you are probably better off with YAML, XML, or SQLite.
If you need nested data and/or the ability to query the configuration data after bootup, use XML or SQLite. Both have pretty good query languages (XPATH and SQL) for structured/nested data.
If your configuration data is highly normalized (e.g. 5th normal form) you are better off with SQLite because SQL is better for dealing with highly normalized data.
If you are planning to write to the configuration data set during program operation, then you are better off going with SQLite. For example, if you are downloading configuration data from another computer, or if you are basing future program execution decisions on data collected in previous program execution. SQLite implements a very robust data storage engine that is extremely difficult to corrupt when you have power outages or programs that are hung in an inconsistent state due to errors. Corruptible data leads to high field support costs, and SQLite will do much better than any home-grown solution or even popular libraries around XML or YAML.
Check out my page for more information on SQLite.
A: As far as I know, the Windows registry is no longer the preferred way of storing configuration if you are using .NET - most applications now make use of System.Configuration [1, 2]. Since this is also XML based it seems to be that everything is moving in the direction of using XML for configuration.
If you want to stay cross-platform I would say that using some sort of a text file would be the best route to go. As for the formatting of said file, you might want to take into account if a human is going to be manipulating it or not. XML seems to be a bit more friendly to manual manipulation than INI files due to the visible structure of the file.
As for the angle bracket tax - I don't worry about it too often as the XML libraries take care of abstracting it. The only time it might be a consideration is if you have very little storage space to work with and every byte counts.
[1] System.Configuration Namespace - http://msdn.microsoft.com/en-us/library/system.configuration.aspx
[2] Using Application Configuration Files in .NET - http://www.developer.com/net/net/article.php/3396111
A: We are using properties files, simply because Java supports them natively. A couple of months ago I saw that SpringSource Application Platform uses JSON to configure their server and it looks very interesting. I compared various configuration notations and came to the conclusion that XML seems to be the best fit at the moment. It has nice tools support and is rather platform independent.
A: Re: epatel's comment
I think the original question was asking about application configuration that an admin would be doing, not just storing user preferences. The suggestions you gave seem more for user prefs than application config, and aren't usually something that the user would ever deal with directly (the app should provide the configuration options in the UI, and then update the files). I really hope you'd never make the user have to view/edit the Registry. :)
As for the actual question, I'd say XML is probably OK, as plenty of people will be used to using that for configuration. As long as you organize the configuration values in an easy to use manner then the "angle bracket tax" shouldn't be too bad.
A: Maybe a bit of a tangent here, but my opinion is that the config file should be read into a key/value dictionary/hash table when the app first starts up and always accessed via this object from then on for speed. Typically the key/value table starts off as string to string, but helper functions in the object do things such as DateTime GetConfigDate(string key) etc...
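A rough sketch of that approach in C# (the file format and key names are invented for illustration):
using System;
using System.Collections.Generic;
using System.IO;

public class ConfigStore
{
    private readonly Dictionary<string, string> values = new Dictionary<string, string>();

    // Loads simple "key=value" lines once at startup; later reads hit the in-memory table.
    public ConfigStore(string path)
    {
        foreach (string line in File.ReadAllLines(path))
        {
            string trimmed = line.Trim();
            if (trimmed.Length == 0 || trimmed.StartsWith("#")) continue;
            int eq = trimmed.IndexOf('=');
            if (eq > 0)
                values[trimmed.Substring(0, eq).Trim()] = trimmed.Substring(eq + 1).Trim();
        }
    }

    public string GetString(string key) { return values[key]; }

    // Typed helper of the kind described above.
    public DateTime GetConfigDate(string key) { return DateTime.Parse(values[key]); }
}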
A: I think the only important thing is to choose a format that you prefer and can navigate quickly. XML and JSON are both fine formats for configs and are widely supported--technical implementation isn't at the crux of the issue, methinks. It's 100% about what makes the task of config files easier for you.
I have started using JSON, because I work quite a bit with it as a data transport format, and the serializers make it easy to load into any development framework. I find JSON easier to read than XML, which makes handling multiple services, each using a config file that is modified quite frequently, that much easier for me!
A: What platform are you working on? I'd recommend trying to use the preferred/common method for it.
*
*MacOSX - plists
*Win32 - Registry (or is there a new one here? It's been a long time since I developed on it)
*Linux/Unix - ~/.apprc (name-value perhaps)
| {
"language": "en",
"url": "https://stackoverflow.com/questions/12144",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "42"
} |
Q: How should I unit test multithreaded code? I have thus far avoided the nightmare that is testing multi-threaded code since it just seems like too much of a minefield. I'd like to ask how people have gone about testing code that relies on threads for successful execution, or just how people have gone about testing those kinds of issues that only show up when two threads interact in a given manner?
This seems like a really key problem for programmers today, it would be useful to pool our knowledge on this one imho.
A: Tough one indeed! In my (C++) unit tests, I've broken this down into several categories along the lines of the concurrency pattern used:
*
*Unit tests for classes that operate in a single thread and aren't thread aware -- easy, test as usual.
*Unit tests for Monitor objects (those that execute synchronized methods in the callers' thread of control) that expose a synchronized public API -- instantiate multiple mock threads that exercise the API. Construct scenarios that exercise internal conditions of the passive object. Include one longer running test that basically beats the heck out of it from multiple threads for a long period of time. This is unscientific I know but it does build confidence.
*Unit tests for Active objects (those that encapsulate their own thread or threads of control) -- similar to #2 above with variations depending on the class design. Public API may be blocking or non-blocking, callers may obtain futures, data may arrive at queues or need to be dequeued. There are many combinations possible here; white box away. Still requires multiple mock threads to make calls to the object under test.
As an aside:
In internal developer training that I do, I teach the Pillars of Concurrency and these two patterns as the primary framework for thinking about and decomposing concurrency problems. There's obviously more advanced concepts out there but I've found that this set of basics helps keep engineers out of the soup. It also leads to code that is more unit testable, as described above.
A: I handle unit tests of threaded components the same way I handle any unit test, that is, with inversion of control and isolation frameworks. I develop in the .Net-arena and, out of the box, the threading (among other things) is very hard (I'd say nearly impossible) to fully isolate.
Therefore, I've written wrappers that looks something like this (simplified):
public interface IThread
{
void Start();
...
}
public class ThreadWrapper : IThread
{
private readonly Thread _thread;
public ThreadWrapper(ThreadStart threadStart)
{
_thread = new Thread(threadStart);
}
public void Start()
{
_thread.Start();
}
}
public interface IThreadingManager
{
IThread CreateThread(ThreadStart threadStart);
}
public class ThreadingManager : IThreadingManager
{
public IThread CreateThread(ThreadStart threadStart)
{
return new ThreadWrapper(threadStart);
}
}
From there, I can easily inject the IThreadingManager into my components and use my isolation framework of choice to make the thread behave as I expect during the test.
That has so far worked great for me, and I use the same approach for the thread pool, things in System.Environment, Sleep etc. etc.
A: I have faced this issue several times in recent years when writing thread handling code for several projects. I'm providing a late answer because most of the other answers, while providing alternatives, do not actually answer the question about testing. My answer is addressed to the cases where there is no alternative to multithreaded code; I do cover code design issues for completeness, but also discuss unit testing.
Writing testable multithreaded code
The first thing to do is to separate your production thread handling code from all the code that does actual data processing. That way, the data processing can be tested as singly threaded code, and the only thing the multithreaded code does is to coordinate threads.
The second thing to remember is that bugs in multithreaded code are probabilistic; the bugs that manifest themselves least frequently are the bugs that will sneak through into production, will be difficult to reproduce even in production, and will thus cause the biggest problems. For this reason, the standard coding approach of writing the code quickly and then debugging it until it works is a bad idea for multithreaded code; it will result in code where the easy bugs are fixed and the dangerous bugs are still there.
Instead, when writing multithreaded code, you must write the code with the attitude that you are going to avoid writing the bugs in the first place. If you have properly removed the data processing code, the thread handling code should be small enough - preferably a few lines, at worst a few dozen lines - that you have a chance of writing it without writing a bug, and certainly without writing many bugs, if you understand threading, take your time, and are careful.
Writing unit tests for multithreaded code
Once the multithreaded code is written as carefully as possible, it is still worthwhile writing tests for that code. The primary purpose of the tests is not so much to test for highly timing dependent race condition bugs - it's impossible to test for such race conditions repeatably - but rather to test that your locking strategy for preventing such bugs allows for multiple threads to interact as intended.
To properly test correct locking behavior, a test must start multiple threads. To make the test repeatable, we want the interactions between the threads to happen in a predictable order. We don't want to externally synchronize the threads in the test, because that will mask bugs that could happen in production where the threads are not externally synchronized. That leaves the use of timing delays for thread synchronization, which is the technique that I have used successfully whenever I've had to write tests of multithreaded code.
If the delays are too short, then the test becomes fragile, because minor timing differences - say between different machines on which the tests may be run - may cause the timing to be off and the test to fail. What I've typically done is start with delays that cause test failures, increase the delays so that the test passes reliably on my development machine, and then double the delays beyond that so the test has a good chance of passing on other machines. This does mean that the test will take a macroscopic amount of time, though in my experience, careful test design can limit that time to no more than a dozen seconds. Since you shouldn't have very many places requiring thread coordination code in your application, that should be acceptable for your test suite.
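As a rough illustration (C#, NUnit-style; SharedCounter is a stand-in for whatever class is under test, and the delay is only a starting value to be tuned as described above):
[Test]
public void WriterRunsBeforeReader()
{
    var subject = new SharedCounter();   // stand-in for the real class under test
    int observed = -1;
    var writer = new Thread(() => subject.Increment());
    var reader = new Thread(() =>
    {
        Thread.Sleep(200);               // delay so the writer is expected to have finished first
        observed = subject.Value;
    });
    writer.Start();
    reader.Start();
    writer.Join();
    reader.Join();
    // Assert on the test thread, not inside the worker threads.
    Assert.AreEqual(1, observed);
}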
Finally, keep track of the number of bugs caught by your test. If your test has 80% code coverage, it can be expected to catch about 80% of your bugs. If your test is well designed but finds no bugs, there's a reasonable chance that you don't have additional bugs that will only show up in production. If the test catches one or two bugs, you might still get lucky. Beyond that, and you may want to consider a careful review of or even a complete rewrite of your thread handling code, since it is likely that code still contains hidden bugs that will be very difficult to find until the code is in production, and very difficult to fix then.
A: Pete Goodliffe has a series on the unit testing of threaded code.
It's hard. I take the easier way out and try to keep the threading code abstracted from the actual test. Pete does mention that the way I do it is wrong but I've either got the separation right or I've just been lucky.
A: I like to write two or more test methods to execute on parallel threads, and each of them make calls into the object under test. I've been using Sleep() calls to coordinate the order of the calls from the different threads, but that's not really reliable. It's also a lot slower because you have to sleep long enough that the timing usually works.
I found the Multithreaded TC Java library from the same group that wrote FindBugs. It lets you specify the order of events without using Sleep(), and it's reliable. I haven't tried it yet.
The biggest limitation to this approach is that it only lets you test the scenarios you suspect will cause trouble. As others have said, you really need to isolate your multithreaded code into a small number of simple classes to have any hope of thoroughly testing them.
Once you've carefully tested the scenarios you expect to cause trouble, an unscientific test that throws a bunch of simultaneous requests at the class for a while is a good way to look for unexpected trouble.
Update: I've played a bit with the Multithreaded TC Java library, and it works well. I've also ported some of its features to a .NET version I call TickingTest.
A: For Java, check out chapter 12 of JCIP. There are some concrete examples of writing deterministic, multi-threaded unit tests to at least test the correctness and invariants of concurrent code.
"Proving" thread-safety with unit tests is much dicier. My belief is that this is better served by automated integration testing on a variety of platforms/configurations.
A: Have a look at my related answer at
Designing a Test class for a custom Barrier
It's biased towards Java but has a reasonable summary of the options.
In summary though (IMO) its not the use of some fancy framework that will ensure correctness but how you go about designing you multithreaded code. Splitting the concerns (concurrency and functionality) goes a huge way towards raising confidence. Growing Object Orientated Software Guided By Tests explains some options better than I can.
Static analysis and formal methods (see, Concurrency: State Models and Java Programs) is an option but I've found them to be of limited use in commercial development.
Don't forget that any load/soak style tests are rarely guaranteed to highlight problems.
Good luck!
A: I just recently discovered (for Java) a tool called Threadsafe. It is a static analysis tool much like findbugs but specifically to spot multi-threading issues. It is not a replacement for testing but I can recommend it as part of writing reliable multi-threaded Java.
It even catches some very subtle potential issues around things like class subsumption, accessing unsafe objects through concurrent classes and spotting missing volatile modifiers when using the double checked locking paradigm.
If you write multithreaded Java give it a shot.
A: The following article suggests two solutions: wrapping a semaphore (CountDownLatch) and adding functionality such as externalizing data from the internal thread. Another way of achieving this purpose is to use a thread pool (see Points of Interest).
Sprinkler - Advanced synchronization object
A: Look, there's no easy way to do this. I'm working on a project that is inherently multithreaded. Events come in from the operating system and I have to process them concurrently.
The simplest way to deal with testing complex, multithreaded application code is this: If it's too complex to test, you're doing it wrong. If you have a single instance that has multiple threads acting upon it, and you can't test situations where these threads step all over each other, then your design needs to be redone. It's both as simple and as complex as this.
There are many ways to program for multithreading that avoid threads running through instances at the same time. The simplest is to make all your objects immutable. Of course, that's not usually possible. So you have to identify those places in your design where threads interact with the same instance and reduce the number of those places. By doing this, you isolate a few classes where multithreading actually occurs, reducing the overall complexity of testing your system.
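For instance, a class along these lines (purely illustrative) can be handed to any number of threads without locking, because nothing about an instance can change after construction:
public sealed class OrderLine
{
    private readonly string _product;
    private readonly int _quantity;
    public OrderLine(string product, int quantity)
    {
        _product = product;
        _quantity = quantity;
    }
    public string Product { get { return _product; } }
    public int Quantity { get { return _quantity; } }
    // "Changing" the object means creating a new one, so readers never observe a half-updated state.
    public OrderLine WithQuantity(int quantity)
    {
        return new OrderLine(_product, quantity);
    }
}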
But you have to realize that even by doing this, you still can't test every situation where two threads step on each other. To do that, you'd have to run two threads concurrently in the same test, then control exactly what lines they are executing at any given moment. The best you can do is simulate this situation. But this might require you to code specifically for testing, and that's at best a half step towards a true solution.
Probably the best way to test code for threading issues is through static analysis of the code. If your threaded code doesn't follow a finite set of thread safe patterns, then you might have a problem. I believe Code Analysis in VS does contain some knowledge of threading, but probably not much.
Look, as things stand currently (and probably will stand for a good time to come), the best way to test multithreaded apps is to reduce the complexity of threaded code as much as possible. Minimize areas where threads interact, test as best as possible, and use code analysis to identify danger areas.
A: I also had serious problems testing multi- threaded code. Then I found a really cool solution in "xUnit Test Patterns" by Gerard Meszaros. The pattern he describes is called Humble object.
Basically it describes how you can extract the logic into a separate, easy-to-test component that is decoupled from its environment. After you tested this logic, you can test the complicated behaviour (multi- threading, asynchronous execution, etc...)
A: I have had the unfortunate task of testing threaded code and they are definitely the hardest tests I have ever written.
When writing my tests, I used a combination of delegates and events. Basically it is all about using PropertyChanged notification events with a WaitCallback or some kind of ConditionalWaiter that polls.
I am not sure if this was the best approach, but it has worked out for me.
A: I spent most of last week at a university library studying debugging of concurrent code. The central problem is concurrent code is non-deterministic. Typically, academic debugging has fallen into one of three camps here:
*
*Event-trace/replay. This requires an event monitor and then reviewing the events that were sent. In a UT framework, this would involve manually sending the events as part of a test, and then doing post-mortem reviews.
*Scriptable. This is where you interact with the running code with a set of triggers. "On x > foo, baz()". This could be interpreted into a UT framework where you have a run-time system triggering a given test on a certain condition.
*Interactive. This obviously won't work in an automatic testing situation. ;)
Now, as above commentators have noticed, you can design your concurrent system into a more deterministic state. However, if you don't do that properly, you're just back to designing a sequential system again.
My suggestion would be to focus on having a very strict design protocol about what gets threaded and what doesn't get threaded. If you constrain your interface so that there are minimal dependencies between elements, it is much easier.
Good luck, and keep working on the problem.
A: There are a few tools around that are quite good. Here is a summary of some of the Java ones.
Some good static analysis tools include FindBugs (gives some useful hints), JLint, Java Pathfinder (JPF & JPF2), and Bogor.
MultithreadedTC is quite a good dynamic analysis tool (integrated into JUnit) where you have to set up your own test cases.
ConTest from IBM Research is interesting. It instruments your code by inserting all kinds of thread modifying behaviours (e.g. sleep & yield) to try to uncover bugs randomly.
SPIN is a really cool tool for modelling your Java (and other) components, but you need to have some useful framework. It is hard to use as is, but extremely powerful if you know how to use it. Quite a few tools use SPIN underneath the hood.
MultithreadedTC is probably the most mainstream, but some of the static analysis tools listed above are definitely worth looking at.
A: Awaitility can also be useful to help you write deterministic unit tests. It allows you to wait until some state somewhere in your system is updated. For example:
await().untilCall( to(myService).myMethod(), greaterThan(3) );
or
await().atMost(5,SECONDS).until(fieldIn(myObject).ofType(int.class), equalTo(1));
It also has Scala and Groovy support.
await until { something() > 4 } // Scala example
A: Another way to (kinda) test threaded code, and very complex systems in general is through Fuzz Testing.
It's not great, and it won't find everything, but its likely to be useful and its simple to do.
Quote:
Fuzz testing or fuzzing is a software testing technique that provides random data("fuzz") to the inputs of a program. If the program fails (for example, by crashing, or by failing built-in code assertions), the defects can be noted. The great advantage of fuzz testing is that the test design is extremely simple, and free of preconceptions about system behavior.
...
Fuzz testing is often used in large software development projects that employ black box testing. These projects usually have a budget to develop test tools, and fuzz testing is one of the techniques which offers a high benefit to cost ratio.
...
However, fuzz testing is not a substitute for exhaustive testing or formal methods: it can only provide a random sample of the system's behavior, and in many cases passing a fuzz test may only demonstrate that a piece of software handles exceptions without crashing, rather than behaving correctly. Thus, fuzz testing can only be regarded as a bug-finding tool rather than an assurance of quality.
A: Testing MT code for correctness is, as already stated, quite a hard problem. In the end it boils down to ensuring that there are no incorrectly synchronised data races in your code. The problem with this is that there are infinitely many possibilities of thread execution (interleavings) over which you do not have much control (be sure to read this article, though). In simple scenarios it might be possible to actually prove correctness by reasoning but this is usually not the case. Especially if you want to avoid/minimize synchronization and not go for the most obvious/easiest synchronization option.
An approach that I follow is to write highly concurrent test code in order to make potentially undetected data races likely to occur. And then I run those tests for some time :) I once stumbled upon a talk where a computer scientist was showing off a tool that kind of does this (randomly devising tests from specs and then running them wildly, concurrently, checking for the defined invariants to be broken).
By the way, I think this aspect of testing MT code has not been mentioned here: identify invariants of the code that you can check for randomly. Unfortunately, finding those invariants is quite a hard problem, too. Also they might not hold all the time during execution, so you have to find/enforce execution points where you can expect them to be true. Bringing the code execution to such a state is also a hard problem (and might itself incur concurrency issues). Whew, it's damn hard!
Some interesting links to read:
*
*Deterministic interleaving: A framework that allows to force certain thread interleavings and then check for invariants
*jMock Blitzer : Stress test synchronization
*assertConcurrent : JUnit version of stress testing synchronization
*Testing concurrent code : Short overview of the two primary methods of brute force (stress test) or deterministic (going for the invariants)
A: I've done a lot of this, and yes it sucks.
Some tips:
*
*GroboUtils for running multiple test threads
*alphaWorks ConTest to instrument classes to cause interleavings to vary between iterations
*Create a throwable field and check it in tearDown (see Listing 1). If you catch a bad exception in another thread, just assign it to throwable.
*I created the utils class in Listing 2 and have found it invaluable, especially waitForVerify and waitForCondition, which will greatly increase the performance of your tests.
*Make good use of AtomicBoolean in your tests. It is thread safe, and you'll often need a final reference type to store values from callback classes and suchlike. See example in Listing 3.
*Make sure to always give your test a timeout (e.g., @Test(timeout=60*1000)), as concurrency tests can sometimes hang forever when they're broken.
Listing 1:
@After
public void tearDown() throws Throwable {
if ( throwable != null )
throw throwable;
}
Listing 2:
import static org.junit.Assert.fail;
import java.io.File;
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;
import java.util.Random;
import org.apache.commons.collections.Closure;
import org.apache.commons.collections.Predicate;
import org.apache.commons.lang.time.StopWatch;
import org.easymock.EasyMock;
import org.easymock.classextension.internal.ClassExtensionHelper;
import static org.easymock.classextension.EasyMock.*;
import ca.digitalrapids.io.DRFileUtils;
/**
* Various utilities for testing
*/
public abstract class DRTestUtils
{
static private Random random = new Random();
/** Calls {@link #waitForCondition(Integer, Integer, Predicate, String)} with
* default max wait and check period values.
*/
static public void waitForCondition(Predicate predicate, String errorMessage)
throws Throwable
{
waitForCondition(null, null, predicate, errorMessage);
}
/** Blocks until a condition is true, throwing an {@link AssertionError} if
* it does not become true during a given max time.
* @param maxWait_ms max time to wait for true condition. Optional; defaults
* to 30 * 1000 ms (30 seconds).
* @param checkPeriod_ms period at which to try the condition. Optional; defaults
* to 100 ms.
* @param predicate the condition
* @param errorMessage message use in the {@link AssertionError}
* @throws Throwable on {@link AssertionError} or any other exception/error
*/
static public void waitForCondition(Integer maxWait_ms, Integer checkPeriod_ms,
Predicate predicate, String errorMessage) throws Throwable
{
waitForCondition(maxWait_ms, checkPeriod_ms, predicate, new Closure() {
public void execute(Object errorMessage)
{
fail((String)errorMessage);
}
}, errorMessage);
}
/** Blocks until a condition is true, running a closure if
* it does not become true during a given max time.
* @param maxWait_ms max time to wait for true condition. Optional; defaults
* to 30 * 1000 ms (30 seconds).
* @param checkPeriod_ms period at which to try the condition. Optional; defaults
* to 100 ms.
* @param predicate the condition
* @param closure closure to run
* @param argument argument for closure
* @throws Throwable on {@link AssertionError} or any other exception/error
*/
static public void waitForCondition(Integer maxWait_ms, Integer checkPeriod_ms,
Predicate predicate, Closure closure, Object argument) throws Throwable
{
if ( maxWait_ms == null )
maxWait_ms = 30 * 1000;
if ( checkPeriod_ms == null )
checkPeriod_ms = 100;
StopWatch stopWatch = new StopWatch();
stopWatch.start();
while ( !predicate.evaluate(null) ) {
Thread.sleep(checkPeriod_ms);
if ( stopWatch.getTime() > maxWait_ms ) {
closure.execute(argument);
}
}
}
/** Calls {@link #waitForVerify(Integer, Object)} with <code>null</code>
* for {@code maxWait_ms}
*/
static public void waitForVerify(Object easyMockProxy)
throws Throwable
{
waitForVerify(null, easyMockProxy);
}
/** Repeatedly calls {@link EasyMock#verify(Object[])} until it succeeds, or a
* max wait time has elapsed.
* @param maxWait_ms Max wait time. <code>null</code> defaults to 30s.
* @param easyMockProxy Proxy to call verify on
* @throws Throwable
*/
static public void waitForVerify(Integer maxWait_ms, Object easyMockProxy)
throws Throwable
{
if ( maxWait_ms == null )
maxWait_ms = 30 * 1000;
StopWatch stopWatch = new StopWatch();
stopWatch.start();
for(;;) {
try
{
verify(easyMockProxy);
break;
}
catch (AssertionError e)
{
if ( stopWatch.getTime() > maxWait_ms )
throw e;
Thread.sleep(100);
}
}
}
/** Returns a path to a directory in the temp dir with the name of the given
* class. This is useful for temporary test files.
* @param object the test class (or instance) for which to create the dir
* @return the path
*/
static public String getTestDirPathForTestClass(Object object)
{
String filename = object instanceof Class ?
((Class)object).getName() :
object.getClass().getName();
return DRFileUtils.getTempDir() + File.separator +
filename;
}
static public byte[] createRandomByteArray(int bytesLength)
{
byte[] sourceBytes = new byte[bytesLength];
random.nextBytes(sourceBytes);
return sourceBytes;
}
/** Returns <code>true</code> if the given object is an EasyMock mock object
*/
static public boolean isEasyMockMock(Object object) {
try {
InvocationHandler invocationHandler = Proxy
.getInvocationHandler(object);
return invocationHandler.getClass().getName().contains("easymock");
} catch (IllegalArgumentException e) {
return false;
}
}
}
Listing 3:
@Test
public void testSomething() {
final AtomicBoolean called = new AtomicBoolean(false);
subject.setCallback(new SomeCallback() {
public void callback(Object arg) {
// check arg here
called.set(true);
}
});
subject.run();
assertTrue(called.get());
}
A: It's been a while since this question was posted, but it's still not answered ...
kleolb02's answer is a good one. I'll try going into more details.
There is a way, which I practice for C# code. For unit tests you should be able to program reproducible tests, which is the biggest challenge in multithreaded code. So my answer aims toward forcing asynchronous code into a test harness, which works synchronously.
It's an idea from Gerard Meszaros's book "xUnit Test Patterns" and is called "Humble Object" (p. 695): You have to separate core logic code and anything which smells like asynchronous code from each other. This would result in a class for the core logic, which works synchronously.
This puts you into the position to test the core logic code in a synchronous way. You have absolute control over the timing of the calls you are doing on the core logic and thus can make reproducible tests. And this is your gain from separating core logic and asynchronous logic.
This core logic needs to be wrapped by another class, which is responsible for receiving calls to the core logic asynchronously and delegating these calls to the core logic. Production code will only access the core logic via that class. Because this class should only delegate calls, it's a very "dumb" class without much logic. So you can keep your unit tests for this asynchronously working class to a minimum.
Anything above that (testing interaction between classes) is component testing. Also in this case, you should be able to have absolute control over timing, if you stick to the "Humble Object" pattern.
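A minimal sketch of that split (the names and the calculation are made up; only the shape matters): the core class is plain synchronous code that ordinary unit tests drive directly, and the wrapper does nothing but delegate.
using System;
using System.Threading;
public class PriceCalculator                 // core logic: synchronous, fully unit-testable
{
    public decimal Calculate(decimal net, decimal taxRate)
    {
        return net * (1 + taxRate);
    }
}
public class AsyncPriceCalculator            // the "humble" part: only delegation, so it needs minimal tests
{
    private readonly PriceCalculator _core = new PriceCalculator();
    public void BeginCalculate(decimal net, decimal taxRate, Action<decimal> callback)
    {
        ThreadPool.QueueUserWorkItem(state => callback(_core.Calculate(net, taxRate)));
    }
}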
A: Assuming under "multi-threaded" code was meant something that is
*
*stateful and mutable
*AND accessed/modified by multiple threads
concurrently
In other words we are talking about testing custom stateful thread-safe class/method/unit - which should be a very rare beast nowadays.
Because this beast is rare, first of all we need to make sure that there are all valid excuses to write it.
Step 1. Consider modifying state in same synchronization context.
Today it is easy to write compose-able concurrent and asynchronous code where IO or other slow operations offloaded to background but shared state is updated and queried in one synchronization context. e.g. async/await tasks and Rx in .NET etc. - they are all testable by design, "real" Tasks and schedulers can be substituted to make testing deterministic (however this is out of scope of the question).
It may sound very constrained but this approach works surprisingly well. It is possible to write whole apps in this style without need to make any state thread-safe (I do).
Step 2. If manipulating of shared state on single synchronization context is absolutely not possible.
Make sure the wheel is not being reinvented and that there is definitely no standard alternative that can be adapted for the job. The code should be very cohesive and contained within one unit; with a good chance it is a special case of some standard thread-safe data structure like a hash map or collection.
Note: if code is large / spans across multiple classes AND needs multi-thread state manipulation then there's a very high chance that design is not good, reconsider Step 1
Step 3. If this step is reached then we need to test our own custom stateful thread-safe class/method/unit.
I'll be dead honest: I never had to write proper tests for such code. Most of the time I get away at Step 1, sometimes at Step 2. The last time I had to write custom thread-safe code was so many years ago that it was before I adopted unit testing; probably I wouldn't have to write it with my current knowledge anyway.
If I really had to test such code (finally, the actual answer) then I would try a couple of the things below
*
*Non-deterministic stress testing. e.g. run 100 threads simultaneously and check that end result is consistent.
This is more typical for higher level / integration testing of multiple users scenarios but also can be used at the unit level.
*Expose some test 'hooks' where the test can inject some code to help make deterministic scenarios where one thread must perform an operation before the other (see the sketch after this list).
As ugly as it is, I can't think of anything better.
*Delay-driven testing to make threads run and perform operations in particular order. Strictly speaking such tests are non-deterministic too (there's a chance of system freeze / stop-the-world GC collection which can distort otherwise orchestrated delays), also it is ugly but allows to avoid hooks.
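To illustrate the 'hooks' idea from the second bullet, here is a rough sketch (C#, names invented): the class under test exposes an optional hook that is a no-op in production, and the test assigns it to hold one thread at the critical point until the other thread has done its part.
using System;
using System.Threading;
public class Publisher
{
    // Test hook: does nothing in production, assigned only by tests.
    internal Action BeforePublishHook = delegate { };
    private readonly object _gate = new object();
    private int _published;
    public void Publish()
    {
        BeforePublishHook();          // lets a test pause this thread at the interesting moment
        lock (_gate)
        {
            _published++;
        }
    }
    public int PublishedCount
    {
        get { lock (_gate) { return _published; } }
    }
}
// In a test, the hook can block thread A until thread B has run, e.g.:
//   var gate = new ManualResetEventSlim(false);
//   publisher.BeforePublishHook = () => gate.Wait();
//   ...start thread A (Publish), do thread B's work, then gate.Set();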
A: Running multiple threads is not difficult; it is a piece of cake. Unfortunately, threads usually need to communicate with each other; that's what's difficult.
The mechanism that was originally invented to allow communication between modules was function calls; when module A wants to communicate with module B, it just invokes a function in module B. Unfortunately, this does not work with threads, because when you call a function, that function still runs in the current thread.
To overcome this problem, people decided to fall back to an even more primitive mechanism of communication: just declare a certain variable, and let both threads have access to that variable. In other words, allow the threads to share data. Sharing data is literally the first thing that naturally comes to mind, and it appears like a good choice because it seems very simple. I mean, how hard can it be, right? What could possibly go wrong?
Race conditions. That's what can, and will, go wrong.
When people realized their software was suffering from random, non-reproducible catastrophic failures due to race conditions, they started inventing elaborate mechanisms such as locks and compare-and-swap, aiming to protect against such things happening. These mechanisms fall under the broad category of "synchronization". Unfortunately, synchronization has two problems:
*
*It is very difficult to get it right, so it is very prone to bugs.
*It is completely untestable, because you cannot test for a race condition.
The astute reader might notice that "Very prone to bugs" and "Completely untestable" is a deadly combination.
Now, the mechanisms I mentioned above were being invented and adopted by large parts of the industry before the concept of automated software testing became prevalent; So, nobody could see how deadly the problem was; they just regarded it as a difficult topic which requires guru programmers, and everyone was okay with that.
Nowadays, whatever we do, we put testing first. So, if some mechanism is untestable, then the use of that mechanism is just out of the question, period. Thus, synchronization has fallen out of grace; very few people still practice it, and they are becoming fewer and fewer every day.
Without synchronization threads cannot share data; however, the original requirement was not to share data; it was to allow threads to communicate with each other. Besides sharing data, there exist other, more elegant mechanisms for inter-thread communication.
One such mechanism is message-passing, otherwise known as events.
With message passing, there is only one place in the entire software system which utilizes synchronization, and that is the concurrent blocking queue collection class that we use for storing messages. (The idea is that we should be able to get at least that little part right.)
The great thing about message passing is that it does not suffer from race conditions and is fully testable.
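A minimal C# sketch of the arrangement (class name and message type are illustrative), where the framework's BlockingCollection is the single synchronized piece:
using System.Collections.Concurrent;
using System.Threading;
public class Worker
{
    private readonly BlockingCollection<string> _inbox = new BlockingCollection<string>();
    // Any thread may post a message; the queue is the only synchronized object involved.
    public void Post(string message)
    {
        _inbox.Add(message);
    }
    // Runs on the worker's own thread; every piece of state it touches stays on that one thread.
    public void Run(CancellationToken token)
    {
        foreach (string message in _inbox.GetConsumingEnumerable(token))
        {
            Handle(message);
        }
    }
    private void Handle(string message)
    {
        // single-threaded processing: no locks needed here
    }
}
Testing such a worker then boils down to posting messages and asserting on what it produces, which is deterministic.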
A: For J2EE code, I've used SilkPerformer, LoadRunner and JMeter for concurrency testing of threads. They all do the same thing. Basically, they give you a relatively simple interface for administering their version of the proxy server that is required in order to analyze the TCP/IP data stream and simulate multiple users making simultaneous requests to your app server. The proxy server can give you the ability to do things like analyze the requests made, by presenting the whole page and URL sent to the server, as well as the response from the server after processing the request.
You can find some bugs in insecure HTTP mode, where you can at least analyze the form data that is being sent and systematically alter that for each user. But the true tests are when you run over HTTPS (Secure Sockets Layer). Then you also have to contend with systematically altering the session and cookie data, which can be a little more convoluted.
The best bug I ever found, while testing concurrency, was when I discovered that the developer had relied upon Java garbage collection to close the connection request to the LDAP server that was established at login. This resulted in users being exposed to other users' sessions and very confusing results when trying to analyze what happened when the server was brought to its knees, barely able to complete one transaction every few seconds.
In the end, you or someone will probably have to buckle down and analyze the code for blunders like the one I just mentioned. And an open discussion across departments, like the one that occurred when we unfolded the problem described above, is most useful. But these tools are the best solution to testing multi-threaded code. JMeter is open source. SilkPerformer and LoadRunner are proprietary. If you really want to know whether your app is thread safe, that's how the big boys do it. I've done this for very large companies professionally, so I'm not guessing. I'm speaking from personal experience.
A word of caution: it does take some time to understand these tools. It will not be a matter of simply installing the software and firing up the GUI, unless you've already had some exposure to multi-threaded programming. I've tried to identify the 3 critical categories of areas to understand (forms, session and cookie data), with the hope that at least starting with understanding these topics will help you focus on quick results, as opposed to having to read through the entire documentation.
A: Concurrency is a complex interplay between the memory model, hardware, caches and our code. In the case of Java at least such tests have been partly addressed mainly by jcstress. The creators of that library are known to be authors of many JVM, GC and Java concurrency features.
But even this library needs good knowledge of the Java Memory Model specification so that we know exactly what we are testing. But I think the focus of this effort is microbenchmarks. Not huge business applications.
A: There is an article on the topic, using Rust as the language in the example code:
https://medium.com/@polyglot_factotum/rust-concurrency-five-easy-pieces-871f1c62906a
In summary, the trick is to write your concurrent logic so that it is robust to the non-determinism involved with multiple threads of execution, using tools like channels and condvars.
Then, if that is how you've structured your "components", the easiest way to test them is by using channels to send messages to them, and then block on other channels to assert that the component sends certain expected messages.
The linked-to article is fully written using unit-tests.
A: It's not perfect, but I wrote this helper for my tests in C#:
using System;
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;
namespace Proto.Promises.Tests.Threading
{
public class ThreadHelper
{
public static readonly int multiThreadCount = Environment.ProcessorCount * 100;
private static readonly int[] offsets = new int[] { 0, 10, 100, 1000 };
private readonly Stack<Task> _executingTasks = new Stack<Task>(multiThreadCount);
private readonly Barrier _barrier = new Barrier(1);
private int _currentParticipants = 0;
private readonly TimeSpan _timeout;
public ThreadHelper() : this(TimeSpan.FromSeconds(10)) { } // 10 second timeout should be enough for most cases.
public ThreadHelper(TimeSpan timeout)
{
_timeout = timeout;
}
/// <summary>
/// Execute the action multiple times in parallel threads.
/// </summary>
public void ExecuteMultiActionParallel(Action action)
{
for (int i = 0; i < multiThreadCount; ++i)
{
AddParallelAction(action);
}
ExecutePendingParallelActions();
}
/// <summary>
/// Execute the action once in a separate thread.
/// </summary>
public void ExecuteSingleAction(Action action)
{
AddParallelAction(action);
ExecutePendingParallelActions();
}
/// <summary>
/// Add an action to be run in parallel.
/// </summary>
public void AddParallelAction(Action action)
{
var taskSource = new TaskCompletionSource<bool>();
lock (_executingTasks)
{
++_currentParticipants;
_barrier.AddParticipant();
_executingTasks.Push(taskSource.Task);
}
new Thread(() =>
{
try
{
_barrier.SignalAndWait(); // Try to make actions run in lock-step to increase likelihood of breaking race conditions.
action.Invoke();
taskSource.SetResult(true);
}
catch (Exception e)
{
taskSource.SetException(e);
}
}).Start();
}
/// <summary>
/// Runs the pending actions in parallel, attempting to run them in lock-step.
/// </summary>
public void ExecutePendingParallelActions()
{
Task[] tasks;
lock (_executingTasks)
{
_barrier.SignalAndWait();
_barrier.RemoveParticipants(_currentParticipants);
_currentParticipants = 0;
tasks = _executingTasks.ToArray();
_executingTasks.Clear();
}
try
{
if (!Task.WaitAll(tasks, _timeout))
{
throw new TimeoutException($"Action(s) timed out after {_timeout}, there may be a deadlock.");
}
}
catch (AggregateException e)
{
// Only throw one exception instead of aggregate to try to avoid overloading the test error output.
throw e.Flatten().InnerException;
}
}
/// <summary>
/// Run each action in parallel multiple times with differing offsets for each run.
/// <para/>The number of runs is 4^actions.Length, so be careful if you don't want the test to run too long.
/// </summary>
/// <param name="expandToProcessorCount">If true, copies each action on additional threads up to the processor count. This can help test more without increasing the time it takes to complete.
/// <para/>Example: 2 actions with 6 processors, runs each action 3 times in parallel.</param>
/// <param name="setup">The action to run before each parallel run.</param>
/// <param name="teardown">The action to run after each parallel run.</param>
/// <param name="actions">The actions to run in parallel.</param>
public void ExecuteParallelActionsWithOffsets(bool expandToProcessorCount, Action setup, Action teardown, params Action[] actions)
{
setup += () => { };
teardown += () => { };
int actionCount = actions.Length;
int expandCount = expandToProcessorCount ? Math.Max(Environment.ProcessorCount / actionCount, 1) : 1;
foreach (var combo in GenerateCombinations(offsets, actionCount))
{
setup.Invoke();
for (int k = 0; k < expandCount; ++k)
{
for (int i = 0; i < actionCount; ++i)
{
int offset = combo[i];
Action action = actions[i];
AddParallelAction(() =>
{
for (int j = offset; j > 0; --j) { } // Just spin in a loop for the offset.
action.Invoke();
});
}
}
ExecutePendingParallelActions();
teardown.Invoke();
}
}
// Input: [1, 2, 3], 3
// Output: [
// [1, 1, 1],
// [2, 1, 1],
// [3, 1, 1],
// [1, 2, 1],
// [2, 2, 1],
// [3, 2, 1],
// [1, 3, 1],
// [2, 3, 1],
// [3, 3, 1],
// [1, 1, 2],
// [2, 1, 2],
// [3, 1, 2],
// [1, 2, 2],
// [2, 2, 2],
// [3, 2, 2],
// [1, 3, 2],
// [2, 3, 2],
// [3, 3, 2],
// [1, 1, 3],
// [2, 1, 3],
// [3, 1, 3],
// [1, 2, 3],
// [2, 2, 3],
// [3, 2, 3],
// [1, 3, 3],
// [2, 3, 3],
// [3, 3, 3]
// ]
private static IEnumerable<int[]> GenerateCombinations(int[] options, int count)
{
int[] indexTracker = new int[count];
int[] combo = new int[count];
for (int i = 0; i < count; ++i)
{
combo[i] = options[0];
}
// Same algorithm as picking a combination lock.
int rollovers = 0;
while (rollovers < count)
{
yield return combo; // No need to duplicate the array since we're just reading it.
for (int i = 0; i < count; ++i)
{
int index = ++indexTracker[i];
if (index == options.Length)
{
indexTracker[i] = 0;
combo[i] = options[0];
if (i == rollovers)
{
++rollovers;
}
}
else
{
combo[i] = options[index];
break;
}
}
}
}
}
}
Example usage:
[Test]
public void DeferredMayBeBeResolvedAndPromiseAwaitedConcurrently_void0()
{
Promise.Deferred deferred = default(Promise.Deferred);
Promise promise = default(Promise);
int invokedCount = 0;
var threadHelper = new ThreadHelper();
threadHelper.ExecuteParallelActionsWithOffsets(false,
// Setup
() =>
{
invokedCount = 0;
deferred = Promise.NewDeferred();
promise = deferred.Promise;
},
// Teardown
() => Assert.AreEqual(1, invokedCount),
// Parallel Actions
() => deferred.Resolve(),
() => promise.Then(() => { Interlocked.Increment(ref invokedCount); }).Forget()
);
}
A: One simple test pattern that can work for some (not all!) cases is to repeat the same test many times. For example, suppose you have a method:
def process(input):
# Spawns several threads to do the job
# ...
return output
Create a bunch of tests:
process(input1) -> expect to return output1
process(input2) -> expect to return output2
...
Now run each of those tests many times.
If the implementation of process contains a subtle bug (e.g. deadlock, race condition, etc.) that has a 0.1% chance to emerge, running the test 1000 times gives roughly a 63% probability for the bug to emerge at least once. Running the test 10000 times gives >99% probability.
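In C#/NUnit terms the same pattern is just a loop inside the test; Process, input1 and output1 below are placeholders for your own code, and NUnit also offers a [Repeat] attribute for the same purpose.
[Test]
public void ProcessReturnsExpectedOutputEveryTime()
{
    for (int i = 0; i < 1000; i++)
    {
        var output = Process(input1);      // placeholders for your own method and test data
        Assert.AreEqual(output1, output);
    }
}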
A: You may use EasyMock.makeThreadSafe to make testing instance threadsafe
A: (if possible) don't use threads, use actors / active objects. Easy to test.
A: If you are testing a simple new Thread(runnable).start()
You can mock Thread to run the runnable sequentially
For instance, if the code of the tested object invokes a new thread like this
class TestedClass {
public void doAsychOp() {
new Thread(new myRunnable()).start();
}
}
Then mocking new Thread instances and running the runnable argument sequentially can help
@Mock
private Thread threadMock;
@Test
public void myTest() throws Exception {
PowerMockito.mockStatic(Thread.class);
//when new thread is created execute runnable immediately
PowerMockito.whenNew(Thread.class).withAnyArguments().then(new Answer<Thread>() {
@Override
public Thread answer(InvocationOnMock invocation) throws Throwable {
// immediately run the runnable
Runnable runnable = invocation.getArgumentAt(0, Runnable.class);
if(runnable != null) {
runnable.run();
}
return threadMock;//return a mock so Thread.start() will do nothing
}
});
TestedClass testcls = new TestedClass();
testcls.doAsychOp(); //will invoke myRunnable.run in current thread
//.... check expected
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/12159",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "789"
} |
Q: SVN Revision Version in .NET Assembly w/ out CC.NET Is there any way to include the SVN repository revision number in the version string of a .NET assembly? Something like Major.Minor.SVNRev
I've seen mention of doing this with something like CC.NET (although on ASP.NET actually), but is there any way to do it without any extra software? I've done similar things in C/C++ before using build batch scripts, but it was accomplished by reading the version number, then having the script write out a file called "ver.h" every time with something to the effect of:
#define MAJORVER 4
#define MINORVER 23
#define SOURCEVER 965
We would then use these defines to generate the version string.
Is something like this possible for .NET?
A: Have a look at SubWCRev - http://tortoisesvn.net/docs/release/TortoiseSVN_en/tsvn-subwcrev.html
The assembly version numbers are usually in assemblyinfo.cs
A: Read/skim these docs:
Accessing the Subversion repository from .NET using DotSVN
How to: Write a Task
Insert SVN version and Build number in your C# AssemblyInfo file
Compiling Apps With Custom Tasks For The Microsoft Build Engine
The MSBuildCommunityTasks svnversion mentioned in the third reference would not perform with svn on Mac 10.5.6 and a VS2008 C# project build inside Parallels hosting Vista (i.e., across OSes).
Write your own task to retrieve revision from repository using DotSVN:
using System;
using Microsoft.Build.Framework;
using Microsoft.Build.Utilities;
using DotSVN.Common;
using DotSVN.Common.Entities;
using DotSVN.Common.Util;
using DotSVN.Server.RepositoryAccess;
namespace GetSVNVersion
{
public class GetRevision : Task
{
[Required]
public string Repository { get; set; }
[Output]
public string Revision { get; set; }
public override bool Execute()
{
ISVNRepository repo;
bool connected = true;
try
{
repo = SVNRepositoryFactory.Create(new SVNURL(Repository));
repo.OpenRepository();
Revision = repo.GetLatestRevision().ToString();
Log.LogCommandLine(Repository + " is revision " + Revision);
repo.CloseRepository();
}
catch(Exception e)
{
Log.LogError("Error retrieving revision number for " + Repository + ": " + e.Message);
connected = false;
}
return connected;
}
}
}
This way allows the repository path to be "file:///Y:/repo" where Y: is a Mac directory mapped into Vista.
A: Another answer mentioned that SVN revision number might not be a good idea because of the limit on the size of the number.
The following link provides not only an SNV revision number, but also a date version info template.
Adding this to a .NET project is simple - very little work needs to be done.
Here is a github project that addresses this
https://github.com/AndrewFreemantle/When-The-Version/downloads
The following url may load slowly but is a step-by-step explanation of how to make this work (easy and short 3 or 4 steps)
http://www.fatlemon.co.uk/2011/11/wtv-automatic-date-based-version-numbering-for-net-with-whentheversion/
A: Here's a C# example for updating the revision info in the assembly automatically. It is based on the answer by Will Dean, which is not very elaborate.
Example :
*
*Copy AssemblyInfo.cs to AssemblyInfoTemplate.cs in the project's
folder Properties.
*Change the Build Action to None for AssemblyInfoTemplate.cs.
*Modify the line with the AssemblyFileVersion to:
[assembly: AssemblyFileVersion("1.0.0.$WCREV$")]
*Consider adding:
[assembly: AssemblyInformationalVersion("Build date: $WCNOW=%Y-%m-%d %H:%M:%S$; Revision date: $WCDATE=%Y-%m-%d %H:%M:%S$; Revision(s) in working copy: $WCRANGE$$WCMODS?; WARNING working copy had uncommitted modifications:$.")],
which will give details about the revision status of the source the assembly was build from.
*Add the following Pre-build event to the project file properties:
subwcrev "$(SolutionDir)." "$(ProjectDir)Properties\AssemblyInfoTemplate.cs" "$(ProjectDir)Properties\AssemblyInfo.cs" -f
*Consider adding AssemblyInfo.cs to the svn ignore list. Substituted revision numbers and dates will modify the file, which results in insignificant changes and revisions and $WCMODS$ will evaluate to true. AssemblyInfo.cs must, of course, be included in the project.
In response to the objections by Wim Coenen, I noticed that, in contrast to what was suggested by Darryl, the AssemblyFileVersion also does not support numbers above 2^16. The build will complete, but the property File Version in the actual assembly will be AssemblyFileVersion modulo 65536. Thus, 1.0.0.65536 as well as 1.0.0.131072 will yield 1.0.0.0, etc. In this example, there is always the true revision number in the AssemblyInformationalVersion property. You could leave out step 3, if you consider this a significant issue.
Edit: some additional info after having used this solution for a while.
*
*I now use AssemblyInfo.cst rather than AssemblyInfoTemplate.cs, because it will automatically have the Build Action option None, and it will not clutter your Error list, but you'll lose syntax highlighting.
*I've added two tests to my AssemblyInfo.cst files:
#if(!DEBUG)
$WCMODS?#error Working copy has uncommitted modifications, please commit all modifications before creating a release build.:$
#endif
#if(!DEBUG)
$WCMIXED?#error Working copy has multiple revisions, please update to the latest revision before creating a release build.:$
#endif
Using this, you will normally have to perform a complete SVN Update after a commit and before you can do a successful release build. Otherwise, $WCMIXED will be true. This seems to be caused by the fact that the committed files are at head revision after the commit, but other files are not.
*I have had some doubts whether the first parameter to subwcrev, "$(SolutionDir)", which sets the scope for checking svn version info, always works as desired. Maybe it should be $(ProjectDir), if you are content with each individual assembly being at a consistent revision.
Addition
To answer the comment by @tommylux.
SubWCRev can be used for any file in your project. If you want to display revision info in a web page, you could use this VersionInfo template:
public class VersionInfo
{
public const int RevisionNumber = $WCREV$;
public const string BuildDate = "$WCNOW=%Y-%m-%d %H:%M:%S$";
public const string RevisionDate = "$WCDATE=%Y-%m-%d %H:%M:%S$";
public const string RevisionsInWorkingCopy = "$WCRANGE$";
public const bool UncommitedModification = $WCMODS?true:false$;
}
Add a pre-build event just like the one for AssemblyInfo.cst and you will have easy access to all relevant SubVersion info.
A: svn info tells you the revision you are on; you can make a "pre-build" event in VS on your project to generate the AssemblyInfo.cs by running svn info and parsing its results with a home-grown command line app.
I have done this before, but quickly switched to just having ccnet pass it as a variable to nant.
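As a sketch of that home-grown approach (the file names and the $REVISION$ token are illustrative, not a standard tool), a small console app run as a pre-build event can parse the Revision line from svn info and expand a template:
using System;
using System.Diagnostics;
using System.IO;
class GetSvnRevision
{
    static void Main(string[] args)
    {
        string workingCopy = args.Length > 0 ? args[0] : ".";
        var psi = new ProcessStartInfo("svn", "info")
        {
            WorkingDirectory = workingCopy,
            RedirectStandardOutput = true,
            UseShellExecute = false
        };
        string revision = "0";
        using (var svn = Process.Start(psi))
        {
            string line;
            while ((line = svn.StandardOutput.ReadLine()) != null)
            {
                if (line.StartsWith("Revision: "))
                    revision = line.Substring("Revision: ".Length).Trim();
            }
            svn.WaitForExit();
        }
        // Expand a checked-in template into the generated AssemblyInfo.cs
        string template = File.ReadAllText(Path.Combine(workingCopy, @"Properties\AssemblyInfoTemplate.cs"));
        File.WriteAllText(Path.Combine(workingCopy, @"Properties\AssemblyInfo.cs"),
                          template.Replace("$REVISION$", revision));
    }
}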
A: If you want to update the version number in a projects AssemblyInfo.cs you may be interested in this article:
CodeProject: Use Subversion Revision numbers in your Visual Studio Projects
If you enable SVN Keywords then every time you check in the project Subversion scans your files for certain "keywords" and replaces the keywords with some information.
For example, At the top of my source files I would create a header contain the following keywords:
'$Author:$
'$Id:$
'$Rev:$
When I check this file into Subversion these keywords are replaced with the following:
'$Author: paulbetteridge $
'$Id: myfile.vb 145 2008-07-16 15:24:29Z paulbetteridge $
'$Rev: 145 $
A: It is possible but you shouldn't: the components of the assembly version string are limited to 16-bit numbers (max 65535). Subversion revision numbers can easily become bigger than that so at some point the compiler is suddenly going to complain.
A: You can use a shared Assembly Version file that you can reference in all of your projects.
UppercuT does this - http://ferventcoder.com/archive/2009/05/21/uppercut---automated-builds---versionbuilder.aspx
This will give you an idea of what you can do to get versions in your assemblies.
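A typical shape for this (names illustrative) is a single SharedAssemblyInfo.cs at the solution root:
using System.Reflection;
[assembly: AssemblyCompany("MyCompany")]
[assembly: AssemblyVersion("1.0.0.0")]
[assembly: AssemblyFileVersion("1.0.0.0")]
which each project then includes as a linked file, e.g. in the .csproj:
<Compile Include="..\SharedAssemblyInfo.cs">
  <Link>Properties\SharedAssemblyInfo.cs</Link>
</Compile>
so bumping the version in one place updates every assembly in the solution.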
| {
"language": "en",
"url": "https://stackoverflow.com/questions/12176",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "26"
} |
Q: Problem databinding an ASP.Net AJAX toolkit MaskedEditExtender I have a database that contains a date and we are using the MaskedEditExtender (MEE) and MaskedEditValidator to make sure the dates are appropriate. However, we want the Admins to be able to go in and change the data (specifically the date) if necessary.
How can I have the MEE field pre-populate with the database value when the data is shown on the page? I've tried to use 'bind' in the 'InitialValue' property but it doesn't populate the textbox.
Thanks.
A: We found out this morning why our code was mishandling the extender. Since the db was handling the date as a date/time it was returning the date in this format 99/99/9999 99:99:99 but we had the extender mask looking for this format 99/99/9999 99:99
Mask="99/99/9999 99:99:99"
The above code fixed the problem.
Thanks to everyone for their help.
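For reference, the markup ends up looking roughly like this (IDs and the bound field are illustrative); the key point is that the Mask matches the full date/time format coming back from the database, and one common approach is to bind the value to the TextBox's Text property rather than the extender's InitialValue:
<asp:TextBox ID="txtDate" runat="server" Text='<%# Bind("StartDate") %>' />
<ajaxToolkit:MaskedEditExtender ID="meeDate" runat="server"
    TargetControlID="txtDate"
    MaskType="DateTime"
    Mask="99/99/9999 99:99:99" />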
A: Are you referring to the asp.Net Ajax toolkit extensions at:
http://www.asp.net/AJAX/AjaxControlToolkit/Samples/MaskedEdit/MaskedEdit.aspx
If so have you checked that your data is coming back in the correct format? It will have to match your date format in order to be displayed.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/12225",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Namespace/solution structure I apologize for asking such a generalized question, but it's something that can prove challenging for me. My team is about to embark on a large project that will hopefully drag together all of the random one-off codebases that have evolved through the years. Given that this project will cover standardizing logical entities across the company ("Customer", "Employee"), small tasks, large tasks that control the small tasks, and utility services, I'm struggling to figure out the best way to structure the namespaces and code structure.
Though I guess I'm not giving you enough specifics to go on, do you have any resources or advice on how to approach splitting your domains up logically? In case it helps, most of this functionality will be revealed via web services, and we're a Microsoft shop with all the latest gizmos and gadgets.
*
*I'm debating one massive solution with subprojects to make references easier, but will that make it too unwieldy?
*Should I wrap up legacy application functionality, or leave that completely agnostic in the namespace (making an OurCRMProduct.Customer class versus a generic Customer class, for instance)?
*Should each service/project have its own BAL and DAL, or should that be an entirely separate assembly that everything references?
I don't have experience with organizing such far-reaching projects, only one-offs, so I'm looking for any guidance I can get.
A: My advice, having embarked on a similar undertaking, is to not agonize over the namespaces.
Just start developing with a few important loose guidelines, because however you start out, your project is organic, and you will end up reorganizing the name spaces and classes over time.
Don't waste time talking too much about your project. Just do it.
A: I recently experienced the exact same at work. Lots of ad-hoc code that needed to be structured and organised.
It's really hard at first, since there is so much. I think the best advice I could give is to just invest time in it on the wind-down on a Friday afternoon; for a couple of weeks I would just pick an app/chunk of code, examine what was there, think about what we could make generic, copy it, and put it into the new library wherever I thought it should be. Once I had all the code within an application migrated, I would then work on refactoring the application to work from the common framework. This sometimes caused problems that needed to be fixed, but so long as you're thorough it shouldn't be too big a deal.
Piece by piece, thats the only way to do it.
In terms of structure, I tried to kind of mimic the MS namespacing since for the most part its pretty logical (e.g. Company.Data , Company.Web , Company.Web.UI and so on.
One of the major benefits is probably the amount of code dupe removed. Yeah a little refactoring was required in the apps, but the code base is a lot leaner, and in many ways "smarter".
Another thing I noticed is that I would often have problems trying to figure out where to put stuff (in terms of namespacing) since I wasn't sure what it belonged to. Now this really concerned me; I viewed it as such a bad smell. Since the re-org everything now falls into place much more nicely. And with the (now very small) amount of application-specific code, it gets put into Company.Applications.ApplicationName. This helps me really think about business objects a lot more since I don't want too much within this namespace, so I come up with more flexible designs.
Sorry for the long post.. It's kind of rambling!
A: We name the assemblies in .NET the following way Company.Project.XXXX.YYYY where XXXX is Project and YYYYY is subproject, for example:
*
*LCP.AdmCom.Common
*LCP.AdmCom.BusinessObjects
*LCP.AdmCom.Common.Dal
We take this from a book called Framework Design Guidelines by Krzysztof Cwalina (Author), Brad Abrams (Author)
A: For large projects the approach I like to take is to have one Domain namespace for my business objects and then use Data Transfer Objects (DTO's) in my layers where storage and retrieval of the business object is needed. A DTO is a simple object that doesn't contain any business logic.
Here is a link that explains a DTO:
http://martinfowler.com/eaaCatalog/dataTransferObject.html
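For example, a Customer DTO is nothing more than a bag of properties that the layers can pass around (the fields shown are illustrative):
public class CustomerDto
{
    public int Id { get; set; }
    public string Name { get; set; }
    public string Email { get; set; }
}
Mapping between the rich Domain Customer and this DTO happens at the layer boundary, so the business rules stay in one place.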
A: There's a million ways to skin a cat. However, the simplest one is always the best. Which way is the simplest for you? Depends on your requirements. But there are some general rules of thumb I follow.
First, reduce the overall number of projects as much as possible. When you compile twenty times a day, that extra minute adds up.
If your app is designed for extensibility, consider splitting your assemblies along the lines of design vs. implementation. Place your interfaces and base classes in a public assembly. Create an assembly for your company's implementations of these classes.
For large applications, keep your UI logic and business logic separate.
SIMPLIFY your solution. If it looks too complex, it probably is. Combine, reduce.
A: Large solutions with lots of projects can be quite slow to compile, but are easier to manage together.
I often have Unit test assemblies in the same solution as the ones they're testing, as you tend to make changes to them together.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/12243",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10"
} |
Q: Have you ever reflected Reflector? Lutz Roeder's Reflector, that is.
It's obfuscated.
I still don't understand this. Can somebody please explain?
A: It would have been kind of ironic if it weren't ;-)
A: I'm curious what product he uses to obfuscate Reflector. Or maybe it's his custom solution - he obviously knows tons about IL.
A: Of course, I did. For example to find out that .NET Reflector is obfuscated with Dotfuscator.
DotfuscatorAttribute in Reflector.exe (version 5.1.6.0) http://www.freeimagehosting.net/uploads/7f6eda286f.png
A: It's always been the case that its been obfuscated. It was one of the first things I tried with it years ago ;).
A: What needs explaining? Reflector isn't open source; Lutz decided to obfuscate it to protect his IP. Fair game.
A: I'll accept Keith's answer, but he's 180 degrees off. It's ironic that the tool used to peer at the source of assemblies is obfuscated.
Also, I'm surprised how serious some of you are. Lighten up! What are you, COBOL programmers?
<-- (edit: Maybe some of you are!)
A: It may have been obfuscated by tools such as Xenocode or Dotfuscator. Or as someone said, Lutz may know a lot about IL.
A: Are you allowed to reflect it according to the EULA (if any) ? I would guess not, and not surprised that you can't.
A: I think you have your answer right here: Reflector sold to Red Gate
| {
"language": "en",
"url": "https://stackoverflow.com/questions/12268",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Creating Visual Studio templates under the "Windows" category. I have created a template for Visual Studio 2008 and it currently shows up under File->New Project->Visual C#. However, it is only really specific to Visual C#/Windows but I can't work out how to get it to show up under the "Windows" category and not the more general "Visual C#".
A: Check out MSDN "How to: Locate and Organize Project and Item Templates"
Create a folder within one of these
<VisualStudioInstallDir>\Common7\IDE\ItemTemplates\CSharp\
My Documents\Visual Studio 2008\Templates\ProjectTemplates\CSharp\
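For example (hypothetical file name), dropping the template's .zip into a Windows subfolder of the C# project templates folder makes it appear under that sub-category:
My Documents\Visual Studio 2008\Templates\ProjectTemplates\CSharp\Windows\MyWindowsTemplate.zip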
A: Categorization of templates depends on settings (for example, if you choose "C#" settings, all of a sudden all other languages move to an "other languages" tree).
What folder is your template in?
| {
"language": "en",
"url": "https://stackoverflow.com/questions/12271",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Considering N2 CMS but worried about performance. Is this justified? Hi, has anyone worked with the N2 Content Management System (http://www.codeplex.com/n2)?
If yes, how does it perform, performance-wise (under heavy load)?
It seems pretty simple and easy to use.
Adrian
A: Maybe try this question at http://www.codeplex.com/n2/Thread/List.aspx
They might be able to tell you about performance limitations or bottlenecks.
A: http://whocanhelpme.codeplex.com/ and http://www.fancydressoutfitters.co.uk/ are n2cms based.
Read more on James Broome's blog http://jamesbroo.me/integrating-n2cms-into-who-can-help-me/
A: We've built numerous sites in N2 and we love it.
Many of these sites have in excess of 20,000 users accessing on a daily basis. We've also load tested up to 50,000 users with no problems.
It's running on fairly modest hardware - one web server, one db server.
With caching enabled it is extremely fast, as the database hardly gets hit!
A: I tried it and it looked promising at first but quickly had issues actually deploying it to a Medium Trust host.
A: I have 2 high-traffic web sites, both using N2 CMS; it's fast and reliable.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/12290",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
} |
Q: Any good tools for creating timelines? I need to create a historical timeline starting from the 1600s to the present day. I also need to have some way of showing events on the timeline so that they do not appear cluttered when many events are close together.
I have tried using Visio 2007 as well as Excel 2007 Radar Charts, but I could not get the results I wanted. The timeline templates in Visio are not great, and using Radar charts in Excel leads to cluttered data.
Are there any other tools or techniques I could use to create these?
@Darren:
The first link looks great. Thanks! The second link did not work in Firefox and was rendered as ASCII. It opened up fine in IE.
And yes, this is for the end users. So I want it to look as presentable as possible, if you know what I mean.
Thanks again!
A: SIMILE Timeline would probably suit your needs.
http://simile.mit.edu/timeline/
Timeline .NET: http://www.codeplex.com/timelinenet
Oh, I guess I should ask... is this for personal use or for display to end users? That might change what I would suggest, but this could work for internal purposes too, I suppose.
A: Lifehacker has a good overview and tutorial of SIMILE Timeline. They seem to like it quite a bit.
A: If you need a timeline from RSS feeds, give xTimeline a try. I just used it.
http://lifehacker.com/software/rss/create-a-timeline-from-rss-feeds-with-xtimeline-283098.php
A: @Pascal this page? http://tools.mscorlib.com/timeline/Default.aspx. If it's looking like ASCII, maybe look for a JS error, but it renders fine on my system. If all else fails, it's a decent JS library by the MIT team as it is, so you could wire up your own implementation.
A: I also recommend Simile Timeline... I just implemented a webpage that uses it and JQuery and produces fantastic results. The downside is that you need to implement it through some html page, hook it up with the js and create some xml files, so it probably won't do for a presentational tool.
http://infosthetics.com/ is a good data visualization blog, maybe you find something there. Also check flowingdata.com
For webbased timelines, there is also:
circavie: http://flowingdata.com/2007/10/25/create-share-and-embed-custom-timelines-with-circavie/
dipity (looks killer): http://flowingdata.com/2008/08/18/tell-stories-with-interactive-timelines-from-dipity/
A: You can use this great timeline tool built with JavaScript.
You can download it for free here: http://timeline.verite.co/#examples
| {
"language": "en",
"url": "https://stackoverflow.com/questions/12294",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
} |
Q: How can I remove nodes from a SiteMapNodeCollection? I've got a Repeater that lists all the web.sitemap child pages on an ASP.NET page. Its DataSource is a SiteMapNodeCollection. But, I don't want my registration form page to show up there.
Dim Children As SiteMapNodeCollection = SiteMap.CurrentNode.ChildNodes
'remove registration page from collection
For Each n As SiteMapNode In SiteMap.CurrentNode.ChildNodes
If n.Url = "/Registration.aspx" Then
Children.Remove(n)
End If
Next
RepeaterSubordinatePages.DataSource = Children
The SiteMapNodeCollection.Remove() method throws a
NotSupportedException: "Collection is read-only".
How can I remove the node from the collection before DataBinding the Repeater?
A: Using Linq and .Net 3.5:
'this will now be an enumeration, rather than a read-only collection
Dim children = SiteMap.CurrentNode.ChildNodes.Cast(Of SiteMapNode)().Where( _
Function(x) x.Url <> "/Registration.aspx" )
RepeaterSubordinatePages.DataSource = children
Without Linq, but using .Net 2:
Function IsShown( n as SiteMapNode ) as Boolean
Return n.Url <> "/Registration.aspx"
End Function
...
'get a generic list (SiteMapNodeCollection is not generic, so copy the nodes across)
Dim children As New List(Of SiteMapNode)()
For Each node As SiteMapNode In SiteMap.CurrentNode.ChildNodes
    children.Add(node)
Next
'use the generic list's FindAll method
RepeaterSubordinatePages.DataSource = children.FindAll( AddressOf IsShown )
Avoid removing items from collections as that's always slow. Unless you're going to be looping through multiple times you're better off filtering.
A: You shouldn't need CType
Dim children = _
From n In SiteMap.CurrentNode.ChildNodes.Cast(Of SiteMapNode)() _
Where n.Url <> "/Registration.aspx" _
Select n
A: I got it to work with code below:
Dim children = From n In SiteMap.CurrentNode.ChildNodes _
Where CType(n, SiteMapNode).Url <> "/Registration.aspx" _
Select n
RepeaterSubordinatePages.DataSource = children
Is there a better way where I don't have to use CType()?
Also, this sets children to a System.Collections.Generic.IEnumerable(Of Object). Is there a good way to get back something more strongly typed like a System.Collections.Generic.IEnumerable(Of System.Web.SiteMapNode) or even better a System.Web.SiteMapNodeCollection?
| {
"language": "en",
"url": "https://stackoverflow.com/questions/12297",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Why do Sql Server 2005 maintenance plans use the wrong database for dbcc checkdb? This is a problem I have seen other people besides myself having, and I haven't found a good explanation.
Let's say you have a maintenance plan with a task to check the database, something like this:
USE [MyDb]
GO
DBCC CHECKDB with no_infomsgs, all_errormsgs
If you go look in your logs after the task executes, you might see something like this:
08/15/2008 06:00:22,spid55,Unknown,DBCC CHECKDB (mssqlsystemresource) executed by NT AUTHORITY\SYSTEM found 0 errors and repaired 0 errors. Elapsed time: 0 hours 0 minutes 0 seconds.
08/15/2008 06:00:21,spid55,Unknown,DBCC CHECKDB (master) executed by NT AUTHORITY\SYSTEM found 0 errors and repaired 0 errors. Elapsed time: 0 hours 0 minutes 0 seconds.
Instead of checking MyDb, it checked master and mssqlsystemresource.
Why?
My workaround is to create a Sql Server Agent Job with this:
dbcc checkdb ('MyDb') with no_infomsgs, all_errormsgs;
That always works fine.
08/15/2008 04:26:04,spid54,Unknown,DBCC CHECKDB (MyDb) WITH all_errormsgs<c/> no_infomsgs executed by NT AUTHORITY\SYSTEM found 0 errors and repaired 0 errors. Elapsed time: 0 hours 26 minutes 3 seconds.
A: For starters, always remember that GO is not a SQL keyword; it is merely a batch separator that is (generally) implemented/recognized by the client, not the server. So, depending on context and client, there really is no guarantee that the current database is preserved between batches.
A: If you are using a maintenance plan you'd probably be better off using the Check Database Integrity task. If you really want to run your own maintenance written in T-SQL, then run it as a step in a job, not in a maintenance plan, and the code above will work OK. Like Stu said, the GO statement is a client directive, not a SQL keyword, and only seems to be respected by the isql, wsql, osql, etc. clients and the SQL Agent. I think it works in DTS packages. Obviously not in DTSX, though.
A: You have a Check Database Integrity task, you double-clicked it and chose MyDb, and when the plan runs it only checks master?? Weird. Are you sure you don't have another plan running?
| {
"language": "en",
"url": "https://stackoverflow.com/questions/12304",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Can I serialize a C# Type object? I'm trying to serialize a Type object in the following way:
Type myType = typeof (StringBuilder);
var serializer = new XmlSerializer(typeof(Type));
TextWriter writer = new StringWriter();
serializer.Serialize(writer, myType);
When I do this, the call to Serialize throws the following exception:
"The type System.Text.StringBuilder was not expected. Use the
XmlInclude or SoapInclude attribute to specify types that are not
known statically."
Is there a way for me to serialize the Type object? Note that I am not trying to serialize the StringBuilder itself, but the Type object containing the metadata about the StringBuilder class.
A: According to the MSDN documentation of System.Type [1] you should be able to serialize the System.Type object. However, as the error is explicitly referring to System.Text.StringBuilder, that is likely the class that is causing the serialization error.
[1] Type Class (System) - http://msdn.microsoft.com/en-us/library/system.type.aspx
A: I came across this issue trying to do binary serialization in .net standard 2.0. I ended up solving the problem using a custom SurrogateSelector and SerializationBinder.
The TypeSerializationBinder was required because the framework was having trouble resolving System.RuntimeType before it got SurrogateSelector. I don't really understand why the type must be resolved before this step though...
Here is the code:
// Serializes and deserializes System.Type
public class TypeSerializationSurrogate : ISerializationSurrogate {
public void GetObjectData(object obj, SerializationInfo info, StreamingContext context) {
info.AddValue(nameof(Type.FullName), (obj as Type).FullName);
}
public object SetObjectData(object obj, SerializationInfo info, StreamingContext context, ISurrogateSelector selector) {
return Type.GetType(info.GetString(nameof(Type.FullName)));
}
}
// Just a stub, doesn't need an implementation
public class TypeStub : Type { ... }
// Binds "System.RuntimeType" to our TypeStub
public class TypeSerializationBinder : SerializationBinder {
public override Type BindToType(string assemblyName, string typeName) {
if(typeName == "System.RuntimeType") {
return typeof(TypeStub);
}
return Type.GetType($"{typeName}, {assemblyName}");
}
}
// Selected out TypeSerializationSurrogate when [de]serializing Type
public class TypeSurrogateSelector : ISurrogateSelector {
public virtual void ChainSelector(ISurrogateSelector selector) => throw new NotSupportedException();
public virtual ISurrogateSelector GetNextSelector() => throw new NotSupportedException();
public virtual ISerializationSurrogate GetSurrogate(Type type, StreamingContext context, out ISurrogateSelector selector) {
if(typeof(Type).IsAssignableFrom(type)) {
selector = this;
return new TypeSerializationSurrogate();
}
selector = null;
return null;
}
}
Usage Example:
byte[] bytes;
var serializeFormatter = new BinaryFormatter() {
    SurrogateSelector = new TypeSurrogateSelector()
};
using (var stream = new MemoryStream()) {
    serializeFormatter.Serialize(stream, typeof(string));
    bytes = stream.ToArray();
}
var deserializeFormatter = new BinaryFormatter() {
    SurrogateSelector = new TypeSurrogateSelector(),
    Binder = new TypeSerializationBinder()
};
using (var stream = new MemoryStream(bytes)) {
    var type = (Type)deserializeFormatter.Deserialize(stream);
    Assert.Equal(typeof(string), type);
}
A: I had the same problem, and my solution was to create a SerializableType class. It freely converts to and from System.Type, but it serializes as a string. All you have to do is declare the variable as a SerializableType, and from then on you can refer to it as System.Type.
Here is the class:
// a version of System.Type that can be serialized
[DataContract]
public class SerializableType
{
public Type type;
// when serializing, store as a string
[DataMember]
string TypeString
{
get
{
if (type == null)
return null;
return type.FullName;
}
set
{
if (value == null)
type = null;
else
{
type = Type.GetType(value);
}
}
}
// constructors
public SerializableType()
{
type = null;
}
public SerializableType(Type t)
{
type = t;
}
// allow SerializableType to implicitly be converted to and from System.Type
static public implicit operator Type(SerializableType stype)
{
return stype.type;
}
static public implicit operator SerializableType(Type t)
{
return new SerializableType(t);
}
// overload the == and != operators
public static bool operator ==(SerializableType a, SerializableType b)
{
// If both are null, or both are same instance, return true.
if (System.Object.ReferenceEquals(a, b))
{
return true;
}
// If one is null, but not both, return false.
if (((object)a == null) || ((object)b == null))
{
return false;
}
// Return true if the fields match:
return a.type == b.type;
}
public static bool operator !=(SerializableType a, SerializableType b)
{
return !(a == b);
}
// we don't need to overload operators between SerializableType and System.Type because we already enabled them to implicitly convert
public override int GetHashCode()
{
return type.GetHashCode();
}
// overload the .Equals method
public override bool Equals(System.Object obj)
{
// If parameter is null return false.
if (obj == null)
{
return false;
}
// If parameter cannot be cast to SerializableType return false.
SerializableType p = obj as SerializableType;
if ((System.Object)p == null)
{
return false;
}
// Return true if the fields match:
return (type == p.type);
}
public bool Equals(SerializableType p)
{
// If parameter is null return false:
if ((object)p == null)
{
return false;
}
// Return true if the fields match:
return (type == p.type);
}
}
and an example of usage:
[DataContract]
public class A
{
...
[DataMember]
private Dictionary<SerializableType, B> _bees;
...
public B GetB(Type type)
{
return _bees[type];
}
...
}
You might also consider using AssemblyQualifiedName instead of Type.FullName - see comment by @GreyCloud
A: I wasn't aware that a Type object could be created with only a string containing the fully-qualified name. To get the fully qualified name, you can use the following:
string typeName = typeof (StringBuilder).FullName;
You can then persist this string however needed, then reconstruct the type like this:
Type t = Type.GetType(typeName);
If you need to create an instance of the type, you can do this:
object o = Activator.CreateInstance(t);
If you check the value of o.GetType(), it will be StringBuilder, just as you would expect.
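If you need this to round-trip through XmlSerializer (as in the original question), one approach is to wrap the type name in a small serializable class. This is only a minimal sketch, assuming you control the containing type - the class and property names here are made up for illustration:
using System;
using System.Xml.Serialization;

public class TypeReference
{
    // XmlSerializer can't handle System.Type itself, so this property is skipped...
    [XmlIgnore]
    public Type Type { get; set; }

    // ...and this string is what actually gets written to the XML.
    public string TypeName
    {
        get { return Type == null ? null : Type.AssemblyQualifiedName; }
        set { Type = value == null ? null : System.Type.GetType(value); }
    }
}
Serializing new TypeReference { Type = typeof(StringBuilder) } then works with a plain XmlSerializer(typeof(TypeReference)), and deserializing rebuilds the Type via Type.GetType on the stored name.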
A: Brian's answer works well if the type is in the same assembly as the call (like GreyCloud pointed out in one of the comments).
So if the type is in another assembly you need to use the AssemblyQualifiedName as GreyCloud also pointed out.
However, as the AssemblyQualifiedName includes the version, if your assemblies have a different version than the one in the string where you have the type, it won't work.
In my case this was an issue and I solved it like this:
string typeName = typeof (MyClass).FullName;
Type type = GetTypeFrom(typeName);
object myInstance = Activator.CreateInstance(type);
GetTypeFrom Method
private Type GetTypeFrom(string valueType)
{
var type = Type.GetType(valueType);
if (type != null)
return type;
try
{
var assemblies = AppDomain.CurrentDomain.GetAssemblies();
//To speed things up, we check first in the already loaded assemblies.
foreach (var assembly in assemblies)
{
type = assembly.GetType(valueType);
if (type != null)
break;
}
if (type != null)
return type;
var loadedAssemblies = assemblies.ToList();
foreach (var loadedAssembly in assemblies)
{
foreach (AssemblyName referencedAssemblyName in loadedAssembly.GetReferencedAssemblies())
{
var alreadyLoaded = loadedAssemblies.Any(x => x.GetName().FullName == referencedAssemblyName.FullName);
if (!alreadyLoaded)
{
try
{
var referencedAssembly = Assembly.Load(referencedAssemblyName);
type = referencedAssembly.GetType(valueType);
if (type != null)
break;
loadedAssemblies.Add(referencedAssembly);
}
catch
{
//We will ignore this, because the Type might still be in one of the other Assemblies.
}
}
}
}
}
catch(Exception exception)
{
//throw my custom exception
}
if (type == null)
{
//throw my custom exception.
}
return type;
}
I am posting this in case anyone needs it.
A: Just looked at its definition; it is not marked as Serializable. If you really need this data to be serialized, then you may have to convert it to a custom class that is marked as such.
public abstract class Type : System.Reflection.MemberInfo
Member of System
Summary:
Represents type declarations: class types, interface types, array types, value types, enumeration types, type parameters, generic type definitions, and open or closed constructed generic types.
Attributes:
[System.Runtime.InteropServices.ClassInterfaceAttribute(0),
System.Runtime.InteropServices.ComDefaultInterfaceAttribute(System.Runtime.InteropServices._Type),
System.Runtime.InteropServices.ComVisibleAttribute(true)]
| {
"language": "en",
"url": "https://stackoverflow.com/questions/12306",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "60"
} |
Q: _wfopen equivalent under Mac OS X I'm looking to the equivalent of Windows _wfopen() under Mac OS X. Any idea?
I need this in order to port a Windows library that uses wchar* for its File interface. As this is intended to be a cross-platform library, I am unable to rely on how the client application will get the file path and give it to the library.
A: You just want to open a file handle using a path that may contain Unicode characters, right? Just pass the path in filesystem representation to fopen.
*
*If the path came from the stock Mac OS X frameworks (for example, an Open panel whether Carbon or Cocoa), you won't need to do any conversion on it and will be able to use it as-is.
*If you're generating part of the path yourself, you should create a CFStringRef from your path and then get that in filesystem representation to pass to POSIX APIs like open or fopen.
Generally speaking, you won't have to do a lot of that for most applications. For example, many applications may have auxiliary data files stored in the user's Application Support directory, but as long as the names of those files are ASCII, and you use standard Mac OS X APIs to locate the user's Application Support directory, you don't need to do a bunch of paranoid conversion of a path constructed with those two components.
Edited to add: I would strongly caution against arbitrarily converting everything to UTF-8 using something like wcstombs because filesystem encoding is not necessarily identical to the generated UTF-8. Mac OS X and Windows both use specific (but different) canonical decomposition rules for the encoding used in filesystem paths.
For example, they need to decide whether "é" will be stored as one or two code units (either LATIN SMALL LETTER E WITH ACUTE or LATIN SMALL LETTER E followed by COMBINING ACUTE ACCENT). These will result in two different — and different-length — byte sequences, and both Mac OS X and Windows work to avoid putting multiple files with the same name (as the user perceives them) in the same directory.
The rules for how to perform this canonical decomposition can get pretty hairy, so rather than try to implement it yourself it's best to leave it to the functions the system frameworks have provided for you to do the heavy lifting.
A: @JKP:
Not all functions in MacOS X accept UTF8, but filenames and filepaths may be UTF8, thus all POSIX functions dealing with file access (open, fopen, stat, etc.) accept UTF8.
See here. Quote:
How a file name looks at the API level
depends on the API. Current Carbon
APIs handle file names as an array of
UTF-16 characters; POSIX ones handle
them as an array of UTF-8, which is
why UTF-8 works well in Terminal. How
it's stored on disk depends on the
disk format; HFS+ uses UTF-16, but
that's not important in most cases.
Some other POSIX functions handle UTF8 as well. E.g. functions dealing with user names, group names or user passwords use UTF8 to store the information (thus a user name can be Japanese and your password can be Chinese, no problem).
But not all handle UTF8. E.g. for all string functions an UTF8 string is just a normal C String and characters above 126 have no special meaning. They don't understand the concept of multiple bytes (chars in C) forming a single Unicode character. How other APIs handle char * pointer being passed to them is different from API to API. However, as a rule as the thumb you can say:
Either the function only accepts C strings with pure ASCII characters (only in the range 0 to 126) or it will accept UTF8. Usually functions don't allow characters above 126 and interpret them in any other encoding than UTF8. If this really was the case, it is documented and then there must be a way to pass the encoding along with the string.
A: The POSIX APIs in Mac OS X are usable with UTF-8 strings. In order to convert a wchar_t string to UTF-8, it is possible to use the CoreFoundation framework from Mac OS X.
Here is a class that will wrap an UTF-8 generated string from a wchar_t string.
class Utf8
{
public:
Utf8(const wchar_t* wsz): m_utf8(NULL)
{
// OS X uses 32-bit wchar
const int bytes = wcslen(wsz) * sizeof(wchar_t);
// comp_bLittleEndian is in the lib I use in order to detect PowerPC/Intel
CFStringEncoding encoding = comp_bLittleEndian ? kCFStringEncodingUTF32LE
: kCFStringEncodingUTF32BE;
CFStringRef str = CFStringCreateWithBytesNoCopy(NULL,
(const UInt8*)wsz, bytes,
encoding, false,
kCFAllocatorNull
);
const int bytesUtf8 = CFStringGetMaximumSizeOfFileSystemRepresentation(str);
m_utf8 = new char[bytesUtf8];
CFStringGetFileSystemRepresentation(str, m_utf8, bytesUtf8);
CFRelease(str);
}
~Utf8()
{
if( m_utf8 )
{
delete[] m_utf8;
}
}
public:
operator const char*() const { return m_utf8; }
private:
char* m_utf8;
};
Usage:
const wchar_t* wsz = L"Here is some Unicode content: éà€œæ";
const Utf8 utf8 = wsz;
FILE* file = fopen(utf8, "r");
This will work for reading or writing files.
A: If you're using Cocoa it's fairly easy with NSString. Just load the UTF16 data in using -initWithBytes:length:encoding: (or perhaps -initWithCString:encoding:) and then get a UTF8 version by calling UTF8String on the result. Then, just call fopen with your new UTF8 string as the param.
You can definitely call fopen with a UTF-8 string, regardless of language - can't help with C++ on OSX though - sorry.
A: I have read a file name from a UTF-8 configuration file through wifstream (it uses a wchar_t buffer).
The Mac implementation is different from Linux and Windows.
wifstream reads each byte from the file into a separate wchar_t cell in the buffer, so we end up with 3 empty bytes per character, although open requires a char string. The programmer can therefore use the wcstombs function to convert the wide character string to a multi-byte string.
The API supports UTF-8. For a better understanding, use a memory watcher and a hex editor on your file.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/12319",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "18"
} |
Q: Programmatically list WMI classes and their properties Is there any known way of listing the WMI classes and their properties available for a particular system? I'm interested in a VBScript approach, but please suggest anything really :)
P.S. Great site.
A: I believe this is what you want.
WMI Code Creator
A part of this nifty utility allows you to browse namespaces/classes/properties on the local and remote PCs, not to mention generating WMI code in VBScript/C#/VB on the fly. Very useful.
Also, the source code used to create the utility is included in the download, which could provide a reference if you wanted to create your own browser like interface.
A: This MSDN page walks through enumerating the available classes: How to: List the Classes in a WMI Namespace
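As a rough illustration of what that walkthrough describes, here is a minimal C# sketch. It assumes a reference to System.Management.dll and queries the root\cimv2 namespace; the schema query "SELECT * FROM meta_class" returns every class in the namespace:
using System;
using System.Management;   // add a reference to System.Management.dll

class ListWmiClasses
{
    static void Main()
    {
        var searcher = new ManagementObjectSearcher(
            new ManagementScope(@"root\cimv2"),
            new WqlObjectQuery("SELECT * FROM meta_class"));

        foreach (ManagementClass wmiClass in searcher.Get())
        {
            // __CLASS is the system property holding the class name
            Console.WriteLine(wmiClass["__CLASS"]);
            foreach (PropertyData prop in wmiClass.Properties)
            {
                Console.WriteLine("    " + prop.Name);
            }
        }
    }
}
From memory, the same idea can be done in VBScript through SWbemServices (GetObject("winmgmts:root\cimv2") and its SubclassesOf/ExecQuery methods), though I'd double-check the exact calls against the WMI scripting documentation.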
for retrieving properties from a class:
ManagementPath l_Path = new ManagementPath(l_className);
ManagementClass l_Class = new ManagementClass(myScope, l_Path, null);
foreach (PropertyData l_PropertyData in l_Class.Properties)
{
string l_type = l_PropertyData.Type.ToString();
int l_length = Convert.ToInt32(l_PropertyData.Qualifiers["maxlen"].Value);
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/12330",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
} |
Q: In a C/C++ program, how does the system (Windows, Linux, and Mac OS X) call the main() function? I am looking for a more technical explanation than the OS calls the function.
Is there a website or book?
A: main() is part of the C runtime library and is not a system function. I don't know about OS X or Linux, but Windows usually starts a program with WinMainCRTStartup(). This function initializes your process, extracts the command line arguments and environment (argc, argv, envp) and calls main(). It is also responsible for calling any code that should run after main(), like atexit() handlers.
By looking in the CRT source that ships with Visual Studio, you should be able to find the default implementation of WinMainCRTStartup to see what it does.
You can also define a function of your own to call at startup; this is done by changing the "entry point" in the linker options. This is often a function that takes no arguments and returns void.
A: As far as Windows goes, the entry point functions are:
*
*Console: void __cdecl mainCRTStartup( void ) {}
*GUI: void __stdcall WinMainCRTStartup( void ) {}
*DLL: BOOL __stdcall _DllMainCRTStartup(HINSTANCE hinstDLL,DWORD fdwReason,void* lpReserved) {}
The only reason to use these over the normal main, WinMain, and DllMain is if you wanted to use your own run time library. (If you want smaller file size or custom features.)
For custom run-time implementations and other tricks to get smaller PE files, see:
*
*http://www.microsoft.com/msj/archive/S569.aspx
*http://www.codeproject.com/KB/tips/aggressiveoptimize.aspx
*http://www.catch22.net/tuts/minexe.asp
*http://www.hailstorm.net/papers/smallwin32.htm
A: The .exe file (or equivalent on other platforms) contains an 'entry point' address. To a first approximation, the OS loads the relevant sections of the .EXE file into RAM, and then jumps to the entry point.
As others have said, this entry point will not be 'main', but will instead be a part of the runtime library - it will do things like initialising static objects, setting up the argc and argv parameters, setting up standard input, standard output, standard error, etc. When it's done all that, it will call your main() function. When main exits, the runtime goes through an analogous process of passing your return code back to the environment, calling static destructors, calling _atexit routines, etc.
If you have Microsoft tools (perhaps not the freebie ones), then you have all the runtime source, and an easy way to look at it is to put a breakpoint on the closing brace of your main() method, and single step back up into the runtime.
A: It's OS dependent.
In OS X, there's a frame in the mach header that contains the start address for the EIP (instruction pointer) register.
Once the binary is loaded, the OS launches execution from this address:
cristi:test diciu$ otool -l ./a.out | grep -A 10 LC_UNIXTHREAD
cmd LC_UNIXTHREAD
cmdsize 80
flavor i386_THREAD_STATE
count i386_THREAD_STATE_COUNT
[..]
ss 0x00000000 eflags 0x00000000 eip 0x00001f8c cs 0x00000000
[..]
The address is the address of the "start" function from the binary:
cristi:test diciu$ nm ./a.out
0000200c D _NXArgc
00002008 D _NXArgv
00002000 D ___progname
00001fe0 t __dyld_func_lookup
00001000 A __mh_execute_header
[..]
00001f8c T start
In Mac OS X, it's the "start" function that gets called first, even before the "main" function:
(gdb) b start
Breakpoint 1 at 0x1f90
(gdb) b main
Breakpoint 2 at 0x1ff4
(gdb) r
Starting program: /Users/diciu/Programming/test/a.out
Reading symbols for shared libraries ++. done
Breakpoint 1, 0x00001f90 in start ()
A: Expert C++/CLI (check around page 279) has very specific details of the different bootstrap scenarios for native, mixed, and pure CLR assemblies.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/12332",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "19"
} |
Q: PHP / cURL on Windows install: "The specified module could not be found." I'm running PHP 5.2.3 on Windows 2000 Server with IIS 5. I'm trying
to get cURL working, so in my php.ini file, I have this line:
extension_dir ="F:\PHP\ext"
And later, I have:
extension=php_curl.dll
The file F:\PHP\ext\php_curl.dll exists, but when I try to run any PHP
script, I get this in the error log:
PHP Warning: PHP Startup: Unable to load dynamic library 'F:\PHP\ext
\php_curl.dll' - The specified module could not be found.
in Unknown on line 0
A: A tip is to use the WAMP-installer. Everything just works. It's not IIS though - so if it is important - you should ignore my advice. ;)
EDIT: I saw that you found the solution so I voted it up. +1
A: Problem solved!
Although the error message said The specified module could not be found, this is a little misleading -- it's not that it couldn't find php_curl.dll, but rather it couldn't find a module that php_curl.dll required. The 2 DLLs it requires are libeay32.dll and SSLeay32.dll.
So, you have to put those 2 DLLs somewhere in your PATH (e.g., C:\Windows\system32). That's all there is to it.
However, even that did not work for me initially. So I downloaded the Windows zip of the latest version of PHP, which includes all the necessary DLLs. I didn't reinstall PHP, I just copied all of the DLLs in the "ext" folder to my PHP extensions folder (as specified in the extension_dir variable in php.ini), and I copied the versions of libeay32.dll and SSLeay32.dll from the PHP download into my System32 directory.
I also did an iisreset, but I don't know if that was necessary.
A: libeay32.dll and ssleay32.dll have to be path-accessible for php_curl.dll to work correctly.
In Control Panel, search for Advanced System Settings and use the Environment Variables button.
Under System Variables, find Path, add the c:/php folder (or whatever path contains those DLLs) and restart Apache.
A: I kept having the same problem. Although I followed the suggestion above and many others suggested on the internet, I still got
Sorry, but this plugin requires libcurl to be activated on your server.
when I tried to activate my plugin.
Edited: I was using PHP 5.3.13 on 64-bit Windows 7 and none of the solutions were working for me.
1. I tried to copy libeay32.dll and SSLeay32.dll into the windows\system32 folder - did not work.
2. Edited and uncommented both php.ini files - did not work.
3. Activated php_curl in the PHP extensions - did not work.
4. Copied and replaced the www.anindya.com version of php_curl.dll several times, but it seems I was downloading the wrong version. The version that worked for me was the second file, php_curl-5.3.13-VC9-x64, in the "Fixed curl extensions" section.
Hope this will help anyone else.
A: Faced this problem when I upgraded the php in UwAmp to 7.2.*. The only solution that worked for me was to download the latest version of apache at the time (Apache/2.4.37 (Win32)) and replace the one that came with UwAmp. That also involved editing the sample httpd.conf to produce an httpd_uwamp.conf file. UwAmp needs this template to then generate the actual httpd.conf when it starts up. All other suggestions above didn't resolve it for me unfortunately. Also note that as of OpenSSL 1.1, libeay32.dll and ssleay32.dll are no longer required (see http://php.net/manual/en/curl.installation.php)
A: In your case, just add "F:\PHP\ext" to the PATH environment variable.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/12348",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "30"
} |
Q: How to dispose a class in .net? The .NET garbage collector will eventually free up memory, but what if you want that memory back immediately? What code do you need to use in a class MyClass to call
MyClass.Dispose()
and free up all the used space by variables and objects in MyClass?
A: Take a look at this article
Implementing the Dispose pattern, IDisposable, and/or a finalizer has absolutely nothing to do with when memory gets reclaimed; instead, it has everything to do with telling the GC how to reclaim that memory. When you call Dispose() you are in no way interacting with the GC.
The GC will only run when it determines the need to (called memory pressure) and then (and only then) will it deallocate memory for unused objects and compact the memory space.
You could call GC.Collect() but you really shouldn't unless there is a very good reason to (which is almost always "Never"). When you force an out-of-band collection cycle like this you actually cause the GC to do more work and ultimately can end up hurting your applications performance. For the duration of the GC collection cycle your application is actually in a frozen state...the more GC cycles that run, the more time your application spends frozen.
There are also some native Win32 API calls you can make to free your working set, but even those should be avoided unless there is a very good reason to do it.
The whole premise behind a garbage-collected runtime is that you don't need to worry (as much) about when the runtime allocates/deallocates actual memory; you only need to worry about making sure that your object knows how to clean up after itself when asked.
A: I wrote a summary of Destructors and Dispose and Garbage collection on http://codingcraftsman.wordpress.com/2012/04/25/to-dispose-or-not-to-dispose/
To answer the original question:
*
*Don't try to manage your memory
*Dispose is not about memory management, it's about unmanaged resource management
*Finalizers are an innate part of the Dispose pattern and actually slow down memory freeing of managed objects (as they have to go into the finalization queue unless already Disposed)
*GC.Collect is bad as it makes some short-lived objects appear to be required for longer and so slows them down from being collected.
However, GC.Collect could be useful if you had a performance critical section of code and wanted to reduce the likelihood of Garbage Collection slowing it down. You call that before.
On top of that, there is an argument in favour of this pattern:
var myBigObject = new MyBigObject(1);
// something happens
myBigObject = new MyBigObject(2);
// at the above line, there are temporarily two big objects in memory and neither can be collected
vs
myBigObject = null; // so it could now be collected
myBigObject = new MyBigObject(2);
But the main answer is that Garbage Collection just works unless you mess around with it!
A: public class MyClass : IDisposable
{
public void Dispose()
{
// cleanup here
}
}
then you can do something like this
MyClass todispose = new MyClass();
todispose.Dispose(); // instance is disposed right here
or
using (MyClass instance = new MyClass())
{
}
// instance will be disposed right here as it goes out of scope
A: Complete explanation by Joe Duffy on "Dispose, Finalization, and Resource Management":
Earlier in the .NET Framework’s
lifetime, finalizers were consistently
referred to as destructors by C#
programmers. As we become smarter over
time, we are trying to come to terms
with the fact that the Dispose method
is really more equivalent to a C++
destructor (deterministic), while the
finalizer is something entirely
separate (nondeterministic). The fact
that C# borrowed the C++ destructor
syntax (i.e. ~T()) surely had at least
a little to do with the development of
this misnomer.
A: You can only dispose instances that implement the IDisposable interface.
To force a garbage collect to free up the (unmanaged) memory immediately:
GC.Collect();
GC.WaitForPendingFinalizers();
This is normally bad practice, but there is for example a bug in the x64-version of the .NET framework that makes the GC behave strange in some scenarios, and then you might want to do this. I don't know if the bug have been resolved yet. Does anyone know?
To dispose a class you do this:
instance.Dispose();
or like this:
using(MyClass instance = new MyClass())
{
// Your cool code.
}
that will translate at compile-time to:
MyClass instance = null;
try
{
instance = new MyClass();
// Your cool code.
}
finally
{
if(instance != null)
instance.Dispose();
}
You can implement the IDisposable interface like this:
public class MyClass : IDisposable
{
private bool disposed;
/// <summary>
/// Construction
/// </summary>
public MyClass()
{
}
/// <summary>
/// Destructor
/// </summary>
~MyClass()
{
this.Dispose(false);
}
/// <summary>
/// The dispose method that implements IDisposable.
/// </summary>
public void Dispose()
{
this.Dispose(true);
GC.SuppressFinalize(this);
}
/// <summary>
/// The virtual dispose method that allows
/// classes inherithed from this one to dispose their resources.
/// </summary>
/// <param name="disposing"></param>
protected virtual void Dispose(bool disposing)
{
if (!disposed)
{
if (disposing)
{
// Dispose managed resources here.
}
// Dispose unmanaged resources here.
}
disposed = true;
}
}
A: You can't really force the GC to clean up an object when you want; although there are ways to force it to run, nothing says it will clean up all the objects you want/expect. It's best to call Dispose in a Try ... Catch ex ... Finally Dispose() ... End Try (VB.NET rulz) way. But Dispose is for cleaning up system resources (memory, handles, db connections, etc.) allocated by the object in a deterministic way. Dispose doesn't (and can't) clean up the memory used by the object itself; only the GC can do that.
A: The responses to this question have got more than a little confused.
The title asks about disposal, but then says that they want memory back immediately.
.Net is managed, which means that when you write .Net apps you don't need to worry about memory directly, the cost is that you don't have direct control over memory either.
.Net decides when it's best to clean up and free memory, not you as the .Net coder.
The Dispose is a way to tell .Net that you're done with something, but it won't actually free up the memory until it's the best time to do so.
Basically .Net will actually collect the memory back when it's easiest for it to do so - it's very good at deciding when. Unless you're writing something very memory intensive you normally don't need to overrule it (this is part of the reason games aren't often written in .Net yet - they need complete control)
In .Net you can use GC.Collect() to force it to immediately, but that is almost always bad practise. If .Net hasn't cleaned it up yet that means it isn't a particularly good time for it to do so.
GC.Collect() picks up the objects that .Net identifies as done with. If you haven't disposed an object that needs it .Net may decide to keep that object. This means that GC.Collect() is only effective if you correctly implement your disposable instances.
GC.Collect() is not a replacement for correctly using IDisposable.
So Dispose and memory are not directly related, but they don't need to be. Correctly disposing will make your .Net apps more efficient and therefore use less memory though.
99% of the time in .Net the following is best practice:
Rule 1: If you don't deal with anything unmanaged or that implements IDisposable then don't worry about Dispose.
Rule 2: If you have a local variable that implements IDisposable make sure that you get rid of it in the current scope:
//using is best practice
using( SqlConnection con = new SqlConnection("my con str" ) )
{
//do stuff
}
//this is what 'using' actually compiles to:
SqlConnection con = new SqlConnection("my con str" ) ;
try
{
//do stuff
}
finally
{
con.Dispose();
}
Rule 3: If a class has a property or member variable that implements IDisposable then that class should implement IDisposable too. In that class's Dispose method you can also dispose of your IDisposable properties:
//rather basic example
public sealed MyClass :
IDisposable
{
//this connection is disposable
public SqlConnection MyConnection { get; set; }
//make sure this gets rid of it too
public Dispose()
{
//if we still have a connection dispose it
if( MyConnection != null )
MyConnection.Dispose();
//note that the connection might have already been disposed
//always write disposals so that they can be called again
}
}
This isn't really complete, which is why the example is sealed. Inheriting classes may need to observe the next rule...
Rule 4: If a class uses an unmanaged resource then implement IDispose and add a finaliser.
.Net can't do anything with the unmanaged resource, so now we are talking about memory. If you don't clean it up you can get a memory leak.
The Dispose method needs to deal with both managed and unmanaged resources.
The finaliser is a safety catch - it ensures that if someone else creates and instance of your class and fails to dispose it the 'dangerous' unmanaged resources can still be cleaned up by .Net.
~MyClass()
{
//calls a protected method
//the false tells this method
//not to bother with managed
//resources
this.Dispose(false);
}
public void Dispose()
{
//calls the same method
//passed true to tell it to
//clean up managed and unmanaged
this.Dispose(true);
//as dispose has been correctly
//called we don't need the
//'backup' finaliser
GC.SuppressFinalize(this);
}
Finally this overload of Dispose that takes a boolean flag:
protected virtual void Dispose(bool disposing)
{
//check this hasn't been called already
//remember that Dispose can be called again
if (!disposed)
{
//this is passed true in the regular Dispose
if (disposing)
{
// Dispose managed resources here.
}
//both regular Dispose and the finaliser
//will hit this code
// Dispose unmanaged resources here.
}
disposed = true;
}
Note that once this is all in place other managed code creating an instance of your class can just treat it like any other IDisposable (Rules 2 and 3).
A: Would it be appropriate to also mention that Dispose doesn't always refer to memory? I dispose resources such as references to files more often than memory. GC.Collect() directly relates to the CLR garbage collector and may or may not free memory (in Task Manager). It will likely impact your application in negative ways (e.g. performance).
At the end of the day why do you want the memory back immediately? If there is memory pressure from elsewhere the OS will get you memory in most cases.
A: IDisposable has nothing to do with freeing memory. IDisposable is a pattern for freeing unmanaged resources -- and memory is quite definitely a managed resource.
The links pointing to GC.Collect() are the correct answer, though use of this function is generally discouraged by the Microsoft .NET documentation.
Edit: Having earned a substantial amount of karma for this answer, I feel a certain responsibility to elaborate on it, lest a newcomer to .NET resource management get the wrong impression.
Inside a .NET process, there are two kinds of resource -- managed and unmanaged. "Managed" means that the runtime is in control of the resource, while "unmanaged" means that it's the programmer's responsibility. And there really is only one kind of managed resource that we care about in .NET today -- memory. The programmer tells the runtime to allocate memory and after that it's up to the runtime to figure out when the memory can freed. The mechanism that .NET uses for this purpose is called garbage collection and you can find plenty of information about GC on the internet simply by using Google.
For the other kinds of resources, .NET doesn't know anything about cleaning them up so it has to rely on the programmer to do the right thing. To this end, the platform gives the programmer three tools:
*
*The IDisposable interface and the "using" statement in VB and C#
*Finalizers
*The IDisposable pattern as implemented by many BCL classes
The first of these allows the programmer to efficiently acquire a resource, use it and then release it all within the same method.
using (DisposableObject tmp = DisposableObject.AcquireResource()) {
// Do something with tmp
}
// At this point, tmp.Dispose() will automatically have been called
// BUT, tmp may still a perfectly valid object that still takes up memory
If "AcquireResource" is a factory method that (for instance) opens a file and "Dispose" automatically closes the file, then this code cannot leak a file resource. But the memory for the "tmp" object itself may well still be allocated. That's because the IDisposable interface has absolutely no connection to the garbage collector. If you did want to ensure that the memory was freed, your only option would be to call GC.Collect() to force a garbage collection.
However, it cannot be stressed enough that this is probably not a good idea. It's generally much better to let the garbage collector do what it was designed to do, which is to manage memory.
What happens if the resource is being used for a longer period of time, such that its lifespan crosses several methods? Clearly, the "using" statement is no longer applicable, so the programmer would have to manually call "Dispose" when he or she is done with the resource. And what happens if the programmer forgets? If there's no fallback, then the process or computer may eventually run out of whichever resource isn't being properly freed.
That's where finalizers come in. A finalizer is a method on your class that has a special relationship with the garbage collector. The GC promises that -- before freeing the memory for any object of that type -- it will first give the finalizer a chance to do some kind of cleanup.
So in the case of a file, we theoretically don't need to close the file manually at all. We can just wait until the garbage collector gets to it and then let the finalizer do the work. Unfortunately, this doesn't work well in practice because the garbage collector runs non-deterministically. The file may stay open considerably longer than the programmer expects. And if enough files are kept open, the system may fail when trying to open an additional file.
For most resources, we want both of these things. We want a convention to be able to say "we're done with this resource now" and we want to make sure that there's at least some chance for the cleanup to happen automatically if we forget to do it manually. That's where the "IDisposable" pattern comes into play. This is a convention that allows IDispose and a finalizer to play nicely together. You can see how the pattern works by looking at the official documentation for IDisposable.
Bottom line: If what you really want to do is to just make sure that memory is freed, then IDisposable and finalizers will not help you. But the IDisposable interface is part of an extremely important pattern that all .NET programmers should understand.
A: This article has a pretty straightforward walkthrough. However, having to call the GC instead of letting it take its natural course is generally a sign of bad design/memory management, especially if no limited resources are being consumed (connections, handles, anything else that typically leads to implementing IDisposable).
What's causing you to need to do this?
A: Sorry but the selected answer here is incorrect. As a few people have stated subsequently Dispose and implementing IDisposable has nothing to do with freeing the memory associated with a .NET class. It is mainly and traditionally used to free unmanaged resources such as file handles etc.
While your application can call GC.Collect() to try to force a collection by the garbage collector, this will only really have an effect on those items that are at the correct generation level in the freachable queue. So it is possible that, even if you have cleared all references to the object, it might still take a couple of calls to GC.Collect() before the actual memory is freed.
You don't say in your question WHY you feel the need to free up memory immediately. I understand that sometimes there can be unusual circumstances but seriously, in managed code it is almost always best to let the runtime deal with memory management.
Probably the best advice: if you think your code is using up memory quicker than the GC is freeing it, then you should review your code to ensure that no objects that are no longer needed are referenced in any data structures you have lying around in static members, etc. Also try to avoid situations where you have circular object references, as it is possible that these may not be freed either.
A: @Keith,
I agree with all of your rules except #4. Adding a finalizer should only be done under very specific circumstances. If a class uses unmanaged resources, those should be cleaned up in your Dispose(bool) function. This same function should only cleanup managed resources when bool is true. Adding a finalizer adds a complexity cost to using your object as each time you create a new instance it must also be placed on the finalization queue, which is checked each time the GC runs a collection cycle. Effectively, this means that your object survives one cycle/generation longer than it should so the finalizer can be run. The finalizer should not be thought of as a "safety net".
The GC will only run a collection cycle when it determines that there is not enough available memory in the Gen0 heap to perform the next allocation, unless you "help" it by calling GC.Collect() to force an out-of-band collection.
The bottom line is that, no matter what, the GC only knows how to release resources by calling the Dispose method (and possibly the finalizer if one is implemented). It is up to that method to "do the right thing" and clean up any unmanaged resources used and instruct any other managed resources to call their Dispose method. It is very efficient at what it does and can self-optimize to a large extent as long as it isn't helped by out-of-band collection cycles. That being said, short of calling GC.Collect explicitly you have no control over when and in what order objects will be disposed of and memory released.
A: If MyClass implements IDisposable you can do just that.
MyClass.Dispose();
Best practice in C# is:
using( MyClass x = new MyClass() ) {
//do stuff
}
As that wraps up the dispose in a try-finally and makes sure that it's never missed.
A: If you don't want to (or can't) implement IDisposable on your class, you can force garbage collection like this (but it's slow) -
GC.Collect();
A: IDisposable interface is really for classes that contain unmanaged resources. If your class doesn't contain unmanaged resources, why do you need to free up resources before the garbage collector does it? Otherwise, just ensure your object is instantiated as late as possible and goes out of scope as soon as possible.
A: You can have deterministic object destruction in c++
You never want to call GC.Collect, it messes with the self tuning of the garbage collector to detect memory pressure and in some cases do nothing other than increase the current generation of every object on the heap.
For those posting IDisposable answers. Calling a Dispose method doesn't destroy an object as the asker describes.
A: @Curt Hagenlocher - that's back to front. I've no idea why so many have voted it up when it's wrong.
IDisposable is for managed resources.
Finalisers are for unmanaged resources.
As long as you only use managed resources both @Jon Limjap and myself are entirely correct.
For classes that use unmanaged resources (and bear in mind that the vast majority of .Net classes don't) Patrik's answer is comprehensive and best practice.
Avoid using GC.Collect - it is a slow way to deal with managed resources, and doesn't do anything with unmanaged ones unless you have correctly built your ~Finalizers.
I've removed the moderator comment from the original question in line with https://stackoverflow.com/questions/14593/etiquette-for-modifying-posts
A: @Keith:
IDisposable is for managed resources.
Finalisers are for unmanaged resources.
Sorry but that's just wrong. Normally, the finalizer does nothing at all. However, if the dispose pattern has been correctly implemented, the finalizer tries to invoke Dispose.
Dispose has two jobs:
*
*Free unmanaged resources, and
*free nested managed resources.
And here your statement comes into play because it's true that while finalizing, an object should never try to free nested managed resources as these may have already been freed. It must still free unmanaged resources though.
Still, finalizers have no job other than to call Dispose and tell it not to touch managed objects. Dispose, when called manually (or via Using), shall free all unmanaged resources and pass the Dispose message on to nested objects (and base class methods) but this will never free any (managed) memory.
A: Konrad Rudolph - yup, normally the finaliser does nothing at all. You shouldn't implement it unless you are dealing with unmanaged resources.
Then, when you do implement it, you use Microsoft's dispose pattern (as already described)
*
*public Dispose() calls protected Dispose(true) - deals with both managed and unmanaged resources. Calling Dispose() should suppress finalisation.
*~Finalize calls protected Dispose(false) - deals with unmanaged resources only. This prevents unmanaged memory leaks if you fail to call the public Dispose()
~Finalize is slow, and shouldn't be used unless you do have unmanaged resources to deal with.
Managed resources can't leak memory; they can only waste resources for the current application and slow its garbage collection. Unmanaged resources can leak, and ~Finalize is best practice to ensure that they don't.
In either case using is best practice.
A: In answer to the original question, with the information given so far by the original poster, it is 100% certain that he does not know enough about programming in .NET to even be given the answer: use GC.Collect(). I would say it is 99.99% likely that he really doesn't need to use GC.Collect() at all, as most posters have pointed out.
The correct answer boils down to 'Let the GC do its job. Period. You have other stuff to worry about. But you might want to consider whether and when you should dispose of or clean up specific objects, and whether you need to implement IDisposable and possibly Finalize in your class.'
Regarding Keith's post and his Rule #4:
Some posters are confusing rule 3 and rule 4. Keith's rule 4 is absolutely correct, unequivocally. It's the one rule of the four that needs no editing at all. I would slightly rephrase some of his other rules to make them clearer, but they are essentially correct if you parse them correctly, and actually read the whole post to see how he expands on them.
*
*If your class doesn't use an unmanaged resource AND it also never instantiates another object of a class that itself uses, directly or ultimately, an unmanaged object (i.e., a class that implements IDisposable), then there would be no need for your class to either implement IDisposable itself, or even call .dispose on anything. (In such a case, it is silly to think you actually NEED to immediately free up memory with a forced GC, anyway.)
*If your class uses an unmanaged resource, OR instantiates another object that itself implements IDisposable, then your class should either:
a) dispose/release these immediately in a local context in which they were created, OR...
b) implement IDisposable in the pattern recommended within Keith's post, or a few thousand places on the internet, or in literally about 300 books by now.
b.1) Furthermore, if (b), and it is an unmanaged resource that has been opened, both IDisposable AND Finalize SHOULD ALWAYS be implemented, per Keith's Rule #4.
In this context, Finalize absolutely IS a safety net in one sense: if someone instantiates YOUR IDisposable object that uses an unmanaged resource, and they fail to call dispose, then Finalize is the last chance for YOUR object to close the unmanaged resource properly.
(Finalize should do this by calling Dispose in such a way that the Dispose method skips over releasing anything BUT the unmanaged resource. Alternatively, if your object's Dispose method IS called properly by whatever instantiated your object, then it BOTH passes on the Dispose call to all IDisposable objects it has instantiated, AND releases the unmanaged resources properly, ending with a call to suppress the Finalize on your object, which means that the impact of using Finalize is reduced if your object is disposed properly by the caller. All of these points are included in Keith's post, BTW.)
b.2) IF your class is only implementing IDisposable because it needs to essentially pass on a Dispose to an IDisposable object it has instantiated, then don't implement a Finalize method in your class in that case. Finalize is for handling the case that BOTH Dispose was never called by whatever instantiated your object, AND an unmanaged resource was utilized that's still unreleased.
In short, regarding Keith's post, he is completely correct, and that post is the most correct and complete answer, in my opinion. He may use some short-hand statements that some find 'wrong' or object to, but his full post expands on the usage of Finalize completely, and he is absolutely correct. Be sure to read his post completely before jumping on one of the rules or preliminary statements in his post.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/12368",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "44"
} |
Q: VS2008 SP1 crashes when debugging an XSLT file I'm using VS2008 SP1 - don't know if this would have happened before I applied SP1 as I never tried it before yesterday. I attempted to debug a fairly simple XSLT file using VS2008 SP1 and got this crash from VS2008 SP1:
Microsoft Visual Studio
Unexpected error encountered. It is recommended that you restart the application as soon as possible.
Error: Unspecified error
File: vsee\pkgs\vssprovider\sccprj.cpp
A: Yes, sounds so.
To use it, I had to disable VSS temporarily each time by setting Tools, Options, Source Control, Plug-in selection, Current source control plug-in to "None".
A: We have reproduced this issue and will fix it in the next release of Visual Studio.
You are welcome to use Microsoft Connect site for reporting any issues related to Visual Studio.
Best regards,
Anton Lapounov
Data Programmability Team @ Microsoft
A: The same problem: after the stylesheet finishes processing, I get an Unspecified error (and everything seems ok after closing the error message box). Setting the source control plugin to "None" in VS options gets rid of the problem.
A: Here's the link to Microsoft's bug and "fix" report, which includes info on the work around (disabling your source control plugin): VS2008 sp1 - XSLT Debugging Error in sccprj.cpp
| {
"language": "en",
"url": "https://stackoverflow.com/questions/12369",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: Has anyone had any success in unit testing SQL stored procedures? We’ve found that the unit tests we’ve written for our C#/C++ code have really paid off.
But we still have thousands of lines of business logic in stored procedures, which only really get tested in anger when our product is rolled out to a large number of users.
What makes this worse is that some of these stored procedures end up being very long, because of the performance hit when passing temporary tables between SPs. This has prevented us from refactoring to make the code simpler.
We have made several attempts at building unit tests around some of our key stored procedures (primarily testing the performance), but have found that setting up the test data for these tests is really hard. For example, we end up copying around test databases. In addition to this, the tests end up being really sensitive to change, and even the smallest change to a stored proc. or table requires a large amount of changes to the tests. So after many builds breaking due to these database tests failing intermittently, we’ve just had to pull them out of the build process.
So, the main part of my questions is: has anyone ever successfully written unit tests for their stored procedures?
The second part of my questions is whether unit testing would be/is easier with linq?
I was thinking that rather than having to set up tables of test data, you could simply create a collection of test objects, and test your linq code in a “linq to objects” situation? (I am a totally new to linq so don’t know if this would even work at all)
A: If you think about the kind of code that unit testing tends to promote: small highly-cohesive and lowly-coupled routines, then you should pretty much be able to see where at least part of the problem might be.
In my cynical world, stored procedures are part of the RDBMS world's long-standing attempt to persuade you to move your business processing into the database, which makes sense when you consider that server license costs tend to be related to things like processor count. The more stuff you run inside your database, the more they make from you.
But I get the impression you're actually more concerned with performance, which isn't really the preserve of unit testing at all. Unit tests are supposed to be fairly atomic and are intended to check behaviour rather than performance. And in that case you're almost certainly going to need production-class loads in order to check query plans.
I think you need a different class of testing environment. I'd suggest a copy of production as the simplest, assuming security isn't an issue. Then for each candidate release, you start with the previous version, migrate using your release procedures (which will give those a good testing as a side-effect) and run your timings.
Something like that.
A: The key to testing stored procedures is writing a script that populates a blank database with data that is planned out in advance to result in consistent behavior when the stored procedures are called.
I have to put my vote in for heavily favoring stored procedures and placing your business logic where I (and most DBAs) think it belongs, in the database.
I know that we as software engineers want beautifully refactored code, written in our favorite language, to contain all of our important logic, but the realities of performance in high volume systems, and the critical nature of data integrity, require us to make some compromises. Sql code can be ugly, repetitive, and hard to test, but I can't imagine the difficulty of tuning a database without having complete control over the design of the queries.
I am often forced to completely redesign queries, to include changes to the data model, to get things to run in an acceptable amount of time. With stored procedures, I can assure that the changes will be transparent to the caller, since a stored procedure provides such excellent encapsulation.
A: I am assuming that you want unit testing in MSSQL. Looking at DBUnit there are some limitations in its support for MSSQL. It doesn't support NVarChar for instance. Here are some real users and their problems with DBUnit.
A: Good question.
I have similar problems, and I have taken the path of least resistance (for me, anyway).
There are a bunch of other solutions, which others have mentionned. Many of them are better / more pure / more appropriate for others.
I was already using Testdriven.NET/MbUnit to test my C#, so I simply added tests to each project to call the stored procedures used by that app.
I know, I know. This sounds terrible, but what I need is to get off the ground with some testing, and go from there. This approach means that although my coverage is low I am testing some stored procs at the same time as I am testing the code which will be calling them. There is some logic to this.
A: I'm in the exact same situation as the original poster. It comes down to performance versus testability. My bias is towards testability (make it work, make it right, make it fast), which suggests keeping business logic out of the database. Databases not only lack the testing frameworks, code factoring constructs, and code analysis and navigation tools found in languages like Java, but highly factored database code is also slow (where highly factored Java code is not).
However, I do recognize the power of database set processing. When used appropriately, SQL can do some incredibly powerful stuff with very little code. So, I'm ok with some set-based logic living in the database even though I will still do everything I can to unit test it.
On a related note, it seems that very long and procedural database code is often a symptom of something else, and I think such code can be converted to testable code without incurring a performance hit. The theory is that such code often represents batch processes that periodically process large amounts of data. If these batch processes were to be converted into smaller chunks of real-time business logic that runs whenever the input data is changed, this logic could be run on the middle-tier (where it can be tested) without taking a performance hit (since the work is done in small chunks in real-time). As a side-effect, this also eliminates the long feedback-loops of batch process error handling. Of course this approach won't work in all cases, but it may work in some. Also, if there is tons of such untestable batch processing database code in your system, the road to salvation may be long and arduous. YMMV.
A:
But I get the impression you're actually more concerned with performance, which isn't really the preserve of unit testing at all. Unit tests are supposed to be fairly atomic and are intended to check behaviour rather than performance. And in that case you're almost certainly going to need production-class loads in order to check query plans.
I think there are two quite distinct testing areas here: the performance, and the actual logic of the stored procedures.
I gave the example of testing the db performance in the past and, thankfully, we have reached a point where the performance is good enough.
I completely agree that the situation with all the business logic in the database is a bad one, but it's something that we've inherited from before most of our developers joined the company.
However, we're now adopting the web services model for our new features, and we've been trying to avoid stored procedures as much as possible, keeping the logic in the C# code and firing SQLCommands at the database (although linq would now be the preferred method). There is still some use of the existing SPs which was why I was thinking about retrospectively unit testing them.
A: You can also try Visual Studio for Database Professionals. It's mainly about change management but also has tools for generating test data and unit tests.
It's pretty expensive tho.
A: I ran into this same issue a while back and found that if I created a simple abstract base class for data access that allowed me to inject a connection and transaction, I could unit test my sprocs to see if they did the work in SQL that I asked them to do and then rollback so none of the test data is left in the db.
This felt better than the usual "run a script to setup my test db, then after the tests run do a cleanup of the junk/test data". This also felt closer to unit testing because these tests could be run alone w/out having a great deal of "everything in the db needs to be 'just so' before I run these tests".
Here is a snippet of the abstract base class used for data access
Public MustInherit Class Repository(Of T As Class)
Implements IRepository(Of T)
Private mConnectionString As String = ConfigurationManager.ConnectionStrings("Northwind.ConnectionString").ConnectionString
Private mConnection As IDbConnection
Private mTransaction As IDbTransaction
Public Sub New()
mConnection = Nothing
mTransaction = Nothing
End Sub
Public Sub New(ByVal connection As IDbConnection, ByVal transaction As IDbTransaction)
mConnection = connection
mTransaction = transaction
End Sub
Public MustOverride Function BuildEntity(ByVal cmd As SqlCommand) As List(Of T)
Public Function ExecuteReader(ByVal Parameter As Parameter) As List(Of T) Implements IRepository(Of T).ExecuteReader
Dim entityList As List(Of T)
If Not mConnection Is Nothing Then
Using cmd As SqlCommand = mConnection.CreateCommand()
cmd.Transaction = mTransaction
cmd.CommandType = Parameter.Type
cmd.CommandText = Parameter.Text
If Not Parameter.Items Is Nothing Then
For Each param As SqlParameter In Parameter.Items
cmd.Parameters.Add(param)
Next
End If
entityList = BuildEntity(cmd)
If Not entityList Is Nothing Then
Return entityList
End If
End Using
Else
Using conn As SqlConnection = New SqlConnection(mConnectionString)
Using cmd As SqlCommand = conn.CreateCommand()
cmd.CommandType = Parameter.Type
cmd.CommandText = Parameter.Text
If Not Parameter.Items Is Nothing Then
For Each param As SqlParameter In Parameter.Items
cmd.Parameters.Add(param)
Next
End If
conn.Open()
entityList = BuildEntity(cmd)
If Not entityList Is Nothing Then
Return entityList
End If
End Using
End Using
End If
Return Nothing
End Function
End Class
next you will see a sample data access class using the above base to get a list of products
Public Class ProductRepository
Inherits Repository(Of Product)
Implements IProductRepository
Private mCache As IHttpCache
'This constructor is what you will use in your app
Public Sub New(ByVal cache As IHttpCache)
MyBase.New()
mCache = cache
End Sub
'This constructor is only used for testing so we can inject a connection/transaction and have them rolled back after the test
Public Sub New(ByVal cache As IHttpCache, ByVal connection As IDbConnection, ByVal transaction As IDbTransaction)
MyBase.New(connection, transaction)
mCache = cache
End Sub
Public Function GetProducts() As System.Collections.Generic.List(Of Product) Implements IProductRepository.GetProducts
Dim Parameter As New Parameter()
Parameter.Type = CommandType.StoredProcedure
Parameter.Text = "spGetProducts"
Dim productList As List(Of Product)
productList = MyBase.ExecuteReader(Parameter)
Return productList
End Function
'This function is used in each class that inherits from the base data access class so we can keep all the boring left-right mapping code in 1 place per object
Public Overrides Function BuildEntity(ByVal cmd As System.Data.SqlClient.SqlCommand) As System.Collections.Generic.List(Of Product)
Dim productList As New List(Of Product)
Using reader As SqlDataReader = cmd.ExecuteReader()
Dim product As Product
While reader.Read()
product = New Product()
product.ID = reader("ProductID")
product.SupplierID = reader("SupplierID")
product.CategoryID = reader("CategoryID")
product.ProductName = reader("ProductName")
product.QuantityPerUnit = reader("QuantityPerUnit")
product.UnitPrice = reader("UnitPrice")
product.UnitsInStock = reader("UnitsInStock")
product.UnitsOnOrder = reader("UnitsOnOrder")
product.ReorderLevel = reader("ReorderLevel")
productList.Add(product)
End While
If productList.Count > 0 Then
Return productList
End If
End Using
Return Nothing
End Function
End Class
And now in your unit test you can also inherit from a very simple base class that does your setup / rollback work - or keep this on a per unit test basis
below is the simple testing base class I used
Imports System.Configuration
Imports System.Data
Imports System.Data.SqlClient
Imports Microsoft.VisualStudio.TestTools.UnitTesting
Public MustInherit Class TransactionFixture
Protected mConnection As IDbConnection
Protected mTransaction As IDbTransaction
Private mConnectionString As String = ConfigurationManager.ConnectionStrings("Northwind.ConnectionString").ConnectionString
<TestInitialize()> _
Public Sub CreateConnectionAndBeginTran()
mConnection = New SqlConnection(mConnectionString)
mConnection.Open()
mTransaction = mConnection.BeginTransaction()
End Sub
<TestCleanup()> _
Public Sub RollbackTranAndCloseConnection()
mTransaction.Rollback()
mTransaction.Dispose()
mConnection.Close()
mConnection.Dispose()
End Sub
End Class
and finally - the below is a simple test using that test base class that shows how to test the entire CRUD cycle to make sure all the sprocs do their job and that your ado.net code does the left-right mapping correctly
I know this doesn't test the "spGetProducts" sproc used in the above data access sample, but you should see the power behind this approach to unit testing sprocs
Imports SampleApplication.Library
Imports System.Collections.Generic
Imports Microsoft.VisualStudio.TestTools.UnitTesting
<TestClass()> _
Public Class ProductRepositoryUnitTest
Inherits TransactionFixture
Private mRepository As ProductRepository
<TestMethod()> _
Public Sub Should_Insert_Update_And_Delete_Product()
mRepository = New ProductRepository(New HttpCache(), mConnection, mTransaction)
'** Create a test product to manipulate throughout **'
Dim Product As New Product()
Product.ProductName = "TestProduct"
Product.SupplierID = 1
Product.CategoryID = 2
Product.QuantityPerUnit = "10 boxes of stuff"
Product.UnitPrice = 14.95
Product.UnitsInStock = 22
Product.UnitsOnOrder = 19
Product.ReorderLevel = 12
'** Insert the new product object into SQL using your insert sproc **'
mRepository.InsertProduct(Product)
'** Select the product object that was just inserted and verify it does exist **'
'** Using your GetProductById sproc **'
Dim Product2 As Product = mRepository.GetProduct(Product.ID)
Assert.AreEqual("TestProduct", Product2.ProductName)
Assert.AreEqual(1, Product2.SupplierID)
Assert.AreEqual(2, Product2.CategoryID)
Assert.AreEqual("10 boxes of stuff", Product2.QuantityPerUnit)
Assert.AreEqual(14.95, Product2.UnitPrice)
Assert.AreEqual(22, Product2.UnitsInStock)
Assert.AreEqual(19, Product2.UnitsOnOrder)
Assert.AreEqual(12, Product2.ReorderLevel)
'** Update the product object **'
Product2.ProductName = "UpdatedTestProduct"
Product2.SupplierID = 2
Product2.CategoryID = 1
Product2.QuantityPerUnit = "a box of stuff"
Product2.UnitPrice = 16.95
Product2.UnitsInStock = 10
Product2.UnitsOnOrder = 20
Product2.ReorderLevel = 8
mRepository.UpdateProduct(Product2) '**using your update sproc
'** Select the product object that was just updated to verify it completed **'
Dim Product3 As Product = mRepository.GetProduct(Product2.ID)
Assert.AreEqual("UpdatedTestProduct", Product2.ProductName)
Assert.AreEqual(2, Product2.SupplierID)
Assert.AreEqual(1, Product2.CategoryID)
Assert.AreEqual("a box of stuff", Product2.QuantityPerUnit)
Assert.AreEqual(16.95, Product2.UnitPrice)
Assert.AreEqual(10, Product2.UnitsInStock)
Assert.AreEqual(20, Product2.UnitsOnOrder)
Assert.AreEqual(8, Product2.ReorderLevel)
'** Delete the product and verify it does not exist **'
mRepository.DeleteProduct(Product3.ID)
'** The above will use your delete product by id sproc **'
Dim Product4 As Product = mRepository.GetProduct(Product3.ID)
Assert.AreEqual(Nothing, Product4)
End Sub
End Class
I know this is a long example, but it helped to have a reusable class for the data access work, and yet another reusable class for my testing so I didn't have to do the setup/teardown work over and over again ;)
A: Have you tried DBUnit? It's designed to unit test your database, and just your database, without needing to go through your C# code.
A: We use DataFresh to rollback changes between each test, then testing sprocs is relatively easy.
What is still lacking is code coverage tools.
A: I do poor man's unit testing. If I'm lazy, the test is just a couple of valid invocations with potentially problematic parameter values.
/*
--setup
Declare @foo int Set @foo = (Select top 1 foo from mytable)
--test
execute wish_I_had_more_Tests @foo
--look at rowcounts/look for errors
If @@rowcount=1 Print 'Ok!' Else Print 'Nokay!'
--Teardown
Delete from mytable where foo = @foo
*/
create procedure wish_I_had_more_Tests
as
select....
A: LINQ will simplify this only if you remove the logic from your stored procedures and reimplement it as linq queries. Which would be much more robust and easier to test, definitely. However, it sounds like your requirements would preclude this.
TL;DR: Your design has issues.
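To illustrate the "linq to objects" idea from the original question, here is a hedged C# sketch. The Product type, the reorder rule, and the MSTest test are all invented for the example; the point is only that set-based business logic can be exercised against an in-memory list instead of a database.
using System.Collections.Generic;
using System.Linq;
using Microsoft.VisualStudio.TestTools.UnitTesting;
// Hypothetical type and rule, purely to show the approach.
public class Product
{
    public string Name { get; set; }
    public int UnitsInStock { get; set; }
    public int ReorderLevel { get; set; }
}
public static class ReorderRules
{
    // The logic under test: which products need reordering.
    public static IEnumerable<Product> NeedsReorder(IEnumerable<Product> products)
    {
        return products.Where(p => p.UnitsInStock <= p.ReorderLevel);
    }
}
[TestClass]
public class ReorderRulesTest
{
    [TestMethod]
    public void LowStockProductsAreFlagged()
    {
        // No database and no test-data scripts: just an in-memory collection.
        var products = new List<Product>
        {
            new Product { Name = "Chai",  UnitsInStock = 2,  ReorderLevel = 5 },
            new Product { Name = "Chang", UnitsInStock = 40, ReorderLevel = 5 }
        };
        var result = ReorderRules.NeedsReorder(products).ToList();
        Assert.AreEqual(1, result.Count);
        Assert.AreEqual("Chai", result[0].Name);
    }
}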
A: We unit test the C# code that calls the SPs.
We have build scripts, creating clean test databases.
And bigger ones we attach and detach during test fixture.
These tests could take hours, but I think it's worth it.
A: One option to re-factor the code (I'll admit an ugly hack) would be to generate it via CPP (the C preprocessor), M4 (never tried it), or the like. I have a project that is doing just that and it is actually mostly workable.
The only cases I think that might be valid for are 1) as an alternative to KLOC+ stored procedures, and 2) — and this is my case — when the point of the project is to see how far (into insanity) you can push a technology.
A: Oh, boy. sprocs don't lend themselves to (automated) unit testing. I sort-of "unit test" my complex sprocs by writing tests in t-sql batch files and hand checking the output of the print statements and the results.
A: The problem with unit testing any kind of data-related programming is that you have to have a reliable set of test data to start with. A lot also depends on the complexity of the stored proc and what it does. It would be very hard to automate unit testing for a very complex procedure that modified many tables.
Some of the other posters have noted some simple ways to automate manually testing them, and also some tools you can use with SQL Server. On the Oracle side, PL/SQL guru Steven Feuerstein worked on a free unit testing tool for PL/SQL stored procedures called utPLSQL.
However, he dropped that effort and then went commercial with Quest's Code Tester for PL/SQL. Quest offers a free downloadable trial version. I'm on the verge of trying it out; my understanding is that it is good at taking care of the overhead in setting up a testing framework so that you can focus on just the tests themselves, and it keeps the tests so you can reuse them in regression testing, one of the great benefits of test-driven-development. In addition, it is supposed to be good at more than just checking an output variable and does have provision for validating data changes, but I still have to take a closer look myself. I thought this info might be of value for Oracle users.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/12374",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "37"
} |
Q: How to attach a ChangeEvent handler to an inherited dependency property? How would you attach a propertychanged callback to a property that is inherited? Like such:
class A {
DependencyProperty prop;
}
class B : A {
//...
prop.AddListener(PropertyChangeCallback);
}
A: (edited to remove recommendation to use DependencyPropertyDescriptor, which is not available in Silverlight)
PropertyDescriptor AddValueChanged Alternative
A: Have you tried a two way data binding between the two dependency properties?
A: @MojoFilter,
Jon's last suggestion link will give you what you're looking for: it uses weak references to register listening to changes by wrapping properties in a new object. Scroll to the bottom of "PropertyDescriptor AddValueChanged Alternative". You'll have to change the Binding code around a bit since BindingOperations doesn't exist.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/12385",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: .NET VirtualPathProviders and Pre-Compilation We've been working on an application that quite heavily relies on VirtualPathProviders in ASP.NET.
We've just come to put the thing on a live server to demonstrate it and it appears that the VirtualPathProviders simply don't work when the site is pre-compiled!!
I've been looking at the workaround which has been posted here: http://sunali.com/2008/01/09/virtualpathprovider-in-precompiled-web-sites/, but so far I haven't been able to get that to work, either! (Well - it works fine in visual studio's web development server - just not on our IIS box - again!).
Does anybody here have any more information on the problem? Is it fixed in .NET v3.5 (we're currently building for v2.0)?
A: Unfortunately that is not officially supported. See the following MSDN article.
If a Web site is precompiled for deployment, content provided by a VirtualPathProvider instance is not compiled, and no VirtualPathProvider instances are used by the precompiled site.
The site you referred to is an unofficial workaround. I don't think it's been fixed in .NET 3.5 SP1
| {
"language": "en",
"url": "https://stackoverflow.com/questions/12397",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
} |
Q: FOSS ASP.Net Session Replication Solution? I've been searching (with little success) for a free/opensource session clustering and replication solution for asp.net. I've run across the usual suspects (indexus sharedcache, memcached), however, each has some limitations.
*
*Indexus - Very immature, stubbed session interface implementation. It's otherwise a great caching solution, though.
*Memcached - Little replication/failover support without going to a db backend.
*Several SF.Net projects - All aborted in the early stages... nothing that appears to have any traction, and one which seems to have gone all commercial.
*Microsoft Velocity - Not OSS, but seems nice. Unfortunately, I didn't see where CTP1 supported failover, and there is no clear roadmap for this one. I fear that this one could fall off into the ether like many other MS dev projects.
I am fairly used to the Java world where it is kind of taken for granted that many solutions to problems such as this will be available from the FOSS world.
Are there any suitable alternatives available on the .Net world?
A: As far as Velocity is concerned I have heard some great things about that project lately. It's still in the developing stages and probably not primetime ready yet. But I think the project has a solid footing and will become a strong mature product from Microsoft and not fall off into the ether like you predict.
Recently I've heard podcasts from Scott Hanselman and Polymorphic Podcast regarding Velocity.
A: Just a quick update on this thread for the sake of completion.
Velocity (now known as Windows Server AppFabric) is already out in production and offers a great distributed caching platform. More details are available on the MSDN site:
http://msdn.microsoft.com/en-us/windowsserver/ee695849.aspx
A: BTW, Windows Server AppFabric is out of beta. That's what I mentioned in my previous post.
Here is the link on general availability: http://blogs.technet.com/b/appfabric/archive/2010/06/07/windows-server-appfabric-now-generally-available.aspx
Which specific features do you think one can get on NCache and not on AppFabric?
A: Although Velocity has made progress from CTP1 to CTP2, it still leaves much to be desired. It will be some time before they provide all the important features in a distributed cache and even longer before it is tested in the market. I wish them good luck.
In the meantime, NCache already provides all CTP2 & V1, and many more features. NCache is the first, the most mature, and the most feature-rich distributed cache in the .NET space. NCache is an enterprise level in-memory distributed cache for .NET and also provides a distributed ASP.NET Session State. Check it out at Distributed Cache.
NCache Express is a totally free version of NCache. Check it out at Free Distributed Cache.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/12401",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
} |
Q: Is it possible to slipstream the Visual Studio 2008 SP1 install? From what I've read, VS 2008 SP1 and Team Foundation Server SP1 packages are traditional service packs that require you to first install the original versions before you will be able to install the SP.
Is there a way, supported or not, to slipstream the install?
A: Here's an MSDN forum post in which an MSFTie indicates it will be possible and that details are forthcoming. Another poster is relaying results of her almost-successful attempt. Looks like this will be doable soon.
Related: how to slipstream Team Foundation Server 2008 SP1 (TFS 2008 SP1)
A: Here are the steps for slipstreaming visual studio 2008 with service pack 1
*
*Consider you have the Visual Studio 2008 ISO file or DVD mounted on the G: drive. If your drive letter is different then don't worry; I will get back to this in step 4.
*You have enough space, about 8 GB, on a hard-drive partition, say D:.
*Extract the Visual Studio 2008 Service Pack 1 to the D:\VS\SP1 folder.
*Copy the below commands to a batch file, name it "integrate.bat", and place it in the "D:\VS\" folder. If your DVD drive letter is different, then suitably modify the G: in the batch file to the corresponding drive letter.
::Extract the original visual studio 2008 installation to directory VS2k8WithSP1.
msiexec.exe /a "g:\vs_setup.msi" TARGETDIR="%CD%\VS2k8WithSP1"
::Copy some file to make slipstream integration successful.
copy "VS2k8WithSP1\Program Files\Microsoft Visual Studio 9.0\Common7\1033\*.chm" "VS2k8WithSP1\Program Files\Microsoft Visual Studio 9.0\Common7\IDE\"
::Extract each .msp files to directory VS2k8WithSP1.
msiexec.exe /a "%cd%\VS2k8WithSP1\vs_setup.msi" /p "%cd%\SP1\vs90sp1\VS90sp1-KB945140-X86-ENU.msp"
msiexec.exe /a "%cd%\VS2k8WithSP1\vs_setup.msi" /p "%cd%\SP1\vs90sp1\VC90sp1-KB947888-x86-enu.msp"
msiexec.exe /a "%cd%\VS2k8WithSP1\vs_setup.msi" /p "%cd%\SP1\vs90sp1\VC90sp1-KB948484-x86_x64-enu.msp"
msiexec.exe /a "%cd%\VS2k8WithSP1\vs_setup.msi" /p "%cd%\SP1\vs90sp1\VC90sp1-KB948560-x86_IA64-enu.msp"
::Copy the product key file
copy "VS2k8WithSP1\Setup\Setup.sdb"
::Copy the setup bootstrapper files
copy "VS2k8WithSP1\Program Files\Microsoft Visual Studio 9.0\CSetupMM\*.*" "VS2k8WithSP1\Setup"
::Copy VC runtime files
md VS2k8WithSP1\wcu\VCRuntimes
copy SP1\vs90sp1\vc_*runtime.exe VS2k8WithSP1\wcu\VCRuntimes
::copy SQL Server Database Publishing Wizard
copy SP1\vs90sp1\SqlPubWizInstaller.exe VS2k8WithSP1\wcu\SqlPub
::copy SQL Server 2008 Management Objects and SQL Server System CLR Types configuration.
md VS2k8WithSP1\wcu\SMO
copy SP1\vs90sp1\SharedManagementObjects.msi VS2k8WithSP1\wcu\SMO
copy SP1\vs90sp1\SQLSysClrTypes.msi VS2k8WithSP1\wcu\SMO
::copy SQL Server Compact 3.5 SP1 English with the Microsoft SQL Server Compact 3.5 SP1 Design Tools English.
copy /Y SP1\vs90sp1\SSCERuntime-enu.msi VS2k8WithSP1\wcu\SSCE
copy /Y SP1\vs90sp1\SSCEVSTools-enu.msi VS2k8WithSP1\wcu\SSCE
::Extract the dotnetfx35.exe manually to a %tmp% folder. Copy all files and subdirectories from %tmp%\wcu\dotnetframework to vs2k8WithSP1\wcu\dotnetframework and overwrite files
::Sorry I could not able to automate this step as /extract option is disabled in the dotnetfx35.exe file.
*Go to command prompt and navigate to "D:\VS\".
*With "D:\VS" as the current directory execute the integrate.bat batch file. This will take approximately 1hr. So relax and work in parallel with other stuff.
*After the batch file executes completely, extract the dotnetfx35.exe manually to the "D:\VS\tmp" folder. Copy and overwrite all files and subdirectories from "D:\VS\tmp\wcu\dotnetframework" to "D:\VS\vs2k8WithSP1\wcu\dotnetframework"
*Now you have successfully slipstreamed Visual Studio 2008 with Service Pack 1. The D:\VS\VS2k8WithSP1 folder now contains the slipstreamed copy of Visual Studio 2008. You can delete the ISO files (if you have any) and any other files or folders other than D:\VS\VS2k8WithSP1.
A: The steps posted above work, with some minor adjustments:
::Copy some file to make slipstream integration successful.
copy "VS2k8WithSP1\Program Files\Microsoft Visual Studio 9.0\Common7\1033*.chm" "VS2k8WithSP1\Program Files\Microsoft Visual Studio 9.0\Common7\IDE\"
I recommend copying the multiple files in the 1033 folder manually. There's only two of them. There seems to be some confusion surrounding the use of asterisks (*) in batch files.
I also recommend copying the following files manually:
::Copy the setup bootstrapper files
copy "VS2k8WithSP1\Program Files\Microsoft Visual Studio 9.0\CSetupMM*.*" "VS2k8WithSP1\Setup"
And for the SQL Publishing Wizard 1.3:
::copy SQL Server Database Publishing Wizard
copy SP1\vs90sp1\SqlPubWizInstaller.exe VS2k8WithSP1\wcu\SqlPub
Visual Studio 2008 setup will tell you at the end of the installation that this component failed to install.
VS70pgui: [2] DepCheck indicates Microsoft SQL Publishing Wizard 1.3 is not installed.
However, upon checking the Event Viewer as well as the Visual Studio setup log, this is not true. All indications say it installed successfully. And the true test of actually using the wizard, works well.
Apart from that, this tutorial works great!
Many thanks to the author(s).
A: ::Copy some file to make slipstream integration successful. copy "VS2k8WithSP1\Program Files\Microsoft Visual Studio 9.0\Common7\1033*.chm" "VS2k8WithSP1\Program Files\Microsoft Visual Studio 9.0\Common7\IDE\"
there is an error here; it should be:
::Copy some file to make slipstream integration successful. copy "VS2k8WithSP1\Program Files\Microsoft Visual Studio 9.0\Common7\1033\*.chm" "VS2k8WithSP1\Program Files\Microsoft Visual Studio 9.0\Common7\IDE\"
"\" must be doubled in this case, coz one of them used as escape character of "*"
| {
"language": "en",
"url": "https://stackoverflow.com/questions/12406",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9"
} |
Q: Automate adding entries to a wiki Once I have my renamed files I need to add them to my project's wiki page. This is a fairly repetitive manual task, so I guess I could script it but I don't know where to start.
The process is:
Got to appropriate page on the wiki
for each team member (DeveloperA, DeveloperB, DeveloperC)
{
for each of two files ('*_current.jpg', '*_lastweek.jpg')
{
Select 'Attach' link on page
Select the 'manage' link next to the file to be updated
Click 'Browse' button
Browse to the relevant file (which has the same name as the previous version)
Click 'Upload file' button
}
}
Not necessarily looking for the full solution as I'd like to give it a go myself.
Where to begin? What language could I use to do this and how difficult would it be?
A: Check if the wiki you mean to talk to supports XMLRPC, because if it does it should be a snap. I wrote a tool called WikiUp to solve a similar problem (updating a delineated section on a wiki page).
A: If you're writing in C#, the WebClient classes might be a good place to start. I bet people could give more specific advice if you mentioned which wiki platform you are using, and whether it requires authentication, though.
I'd probably start by downloading fiddler and watching the http requests from doing it manually. Then you could use some simple scripts and regexes to build your http requests for automating the process.
Of course, if you're wildly lucky, your wiki will have a backend simple enough that you could just plug them into its db directly. :)
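As a rough illustration of the WebClient suggestion above, here is a hedged C# sketch. The wiki URL, credentials, folder paths, and the assumption that a plain multipart file POST is enough are all placeholders; you would need to adapt them to whatever your wiki's attach form actually expects (captured with Fiddler as described).
using System;
using System.IO;
using System.Net;
class WikiAttachmentUploader
{
    static void Main()
    {
        // All of these values are placeholders for the example.
        string attachUrl = "http://wiki.example.com/page/attach";
        string[] members = { "DeveloperA", "DeveloperB", "DeveloperC" };
        string[] suffixes = { "_current.jpg", "_lastweek.jpg" };
        using (var client = new WebClient())
        {
            client.Credentials = new NetworkCredential("user", "password");
            foreach (string member in members)
            {
                foreach (string suffix in suffixes)
                {
                    string file = Path.Combine(@"C:\charts", member + suffix);
                    // UploadFile does a multipart/form-data POST of the file;
                    // the exact URL and form fields depend on your wiki.
                    client.UploadFile(attachUrl, file);
                    Console.WriteLine("Uploaded {0}", file);
                }
            }
        }
    }
}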
A: You might find CoScripter useful -- it's a Firefox extension that allows you to automate tasks you perform on websites. I'm not certain how you'd integrate this with the list of files you're changing on your local system, but it can certainly handle the file uploading through a web form.
A better bet is probably using cURL or a similar HTTP library with your programming language of choice. If you're on *nix, you can use the cURL command-line program inside your shell script to get this done fairly easily. (Like @jsight said, you will need to analyze the actual forms you're using on the webpage, using Fiddler or just looking at the form elements, and re-create the POST through cURL.)
| {
"language": "en",
"url": "https://stackoverflow.com/questions/12428",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Why is my asp.net application throwing ThreadAbortException? This is a self-explanatory question:
Why does this thing bubble into my try catch's even when nothing is wrong?
Why is it showing up in my log, hundreds of times?
I know its a newb question, but if this site is gonna get search ranking and draw in newbs we have to ask them
A: The most common reason for a ThreadAbortException is calling Response.End, Response.Redirect, or Server.Transfer. Microsoft has published some suggested functions that should be used instead of those functions.
A: This is probably coming from a Response.Redirect call. Check this link for an explanation:
http://dotnet.org.za/armand/archive/2004/11/16/7088.aspx
(In most cases, calling Response.Redirect(url, false) fixes the problem)
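In code, the usual workaround looks something like the following sketch (assuming it lives in a page's code-behind; the method name is made up):
// Redirect without aborting the thread.
private void SafeRedirect(string url)
{
    // false = don't call Response.End, so no ThreadAbortException is thrown
    Response.Redirect(url, false);
    // Tell ASP.NET to skip the rest of the pipeline and jump to EndRequest
    Context.ApplicationInstance.CompleteRequest();
}
Note that because execution is no longer cut short for you, the calling code should return immediately after the redirect.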
A: As others have said, it occurs when you call Response.End() (which occurs when you call Response.Redirect without passing false as the second parameter). This is working as designed; typically, if you call Response.Redirect, you want the redirect to happen immediately. See this for more information:
Response.Redirect and the ThreadAbortException
A: Knowing that there are (at least) three APIs that internally use Thread.Abort, I'd like to answer in more practical terms, how to work out what to do about it.
For us, this error started being logged all-of-a-sudden. What changed? We fixed a bug in some database procedure that was dealing with sitemaps.
The log4net logs showed the X-Forwarded-For header (we're behind an NLB) was Googlebot's IP address, 66.249.78.x which bolstered my theory about the sitemap change leading to Google crawling our site more aggressively looking for images.
The first thing was to find out why only the Googlebot was able to cause this problem. No other client was triggering whatever code path uses Response.Redirect, or whatever.
So in the HttpApplication.Error handler, I added some code to log extra detailed output with all headers, and most data in the HttpResponse and HttpContext spewed to log.
This let me see that the problem was that Googlebot is using an iPhone user agent string and armed with that, I was able to search the codebase for "iPhone" and come up with:
private void CheckIPhoneAccess() { ... }
And that uses a Redirect.
What to do about it?
Well, for this aged codebase, it's not worth retro-patching all the Response.Redirect calls, so I'm going to lower the logging level for ThreadAbortException for the application.
I will change the behaviour for Googlebot's mobile crawler; that won't lead to 'lies' about what our site serves to mobiles, since it only redirects on the first hit and subsequently reads a cookie and shows the image. Googlebot does not seem to cache that cookie.
It's not perfect, but the site is due to be rebuilt, probably by another team using Scala or something, so in practical terms I think this is a good choice. I'll add comments and may revisit the issue later, perhaps building a Response.SafeRedirect extension that encapsulates this advice:
Why Response.Redirect causes System.Threading.ThreadAbortException?
Luke
A: The reason Response.Redirect gives this exception is that ASP.NET internally implements this API with Thread.Abort(). When this method is called, a special ThreadAbortException is thrown. This exception won't be swallowed by any catch block; it will be re-thrown at the end of each catch block.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/12476",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "24"
} |
Q: How can you publish a ClickOnce application through CruiseControl.NET? I have CruiseControl.NET Version 1.4 set up on my development server. Whenever a developer checks in code, it kicks off a compile.
Now we're at a place where we can start giving our application to the testers. We'd like to use ClickOnce to distribute the application, with the idea being that when a tester goes to test the application, they have the latest build.
I can't find a way to make that happen with CruiseControl.NET. We're using MSBUILD to perform the builds.
A: I remember doing this last year for a ClickOnce project I was working on. I remember it taking me forever to figure out but here it is. What I wanted my scripts to do was to generate a different installer that pointed to our dev env and a different one for prod. Not only that but i needed it to inject the right versioning information so the existing clients would 'realize' there is a new version out there which is the whole point of clickOnce.
In this script you have to replace with your own server names etc. The trick is to save the publish.htm and project.publish file and inject the new version number based on the version that is provided to you by CC.NET.
Here is what my build script looked like:
<target name="deployProd">
<exec program="<framework_dir>\msbuild.exe" commandline="<project>/<project>.csproj /property:Configuration=PublishProd /property:ApplicationVersion=${build.label}.*;PublishUrl=\\<prod_location>\binups$\;InstallUrl=\\<prod_location>\binups$\;UpdateUrl=\\<prod_location>\binups$\;BootstrapperComponentsUrl=\\<prod_location>\prereqs$\ /target:publish"/>
<copy todir="<project>\bin\PublishProd\<project>.publish">
<fileset basedir=".">
<include name="publish.htm"/>
</fileset>
<filterchain>
<replacetokens>
<token key="CURRENT_VERSION" value="${build.label}"/>
</replacetokens>
</filterchain>
</copy>
</target>
Hope this helps
A: We've done this and can give you some pointers to start.
2 things you should be aware of:
*
*MSBuild can generate the necessary deployment files for you.
*MSBuild won't deploy the files to the FTP or UNC share. You'll need a separate step for this.
To use MSBuild to generate the ClickOnce manifests, here's the command you'll need to issue:
msbuild /target:publish /p:Configuration=Release /p:Platform=AnyCPU; "c:\yourProject.csproj"
That will tell MSBuild to build your project and generate ClickOnce deployment files inside the bin\Release\YourProject.publish directory.
All that's left is to copy those files to the FTP/UNC share/wherever, and you're all set.
You can tell CruiseControl.NET to build using those MSBuild parameters.
You'll then need a CruiseControl.NET build task to take the generated deployment files and copy them to the FTP or UNC share. We use a custom little C# console program for this, but you could just as easily use a Powershell script.
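In case it helps, here is a hedged sketch of what such a little console copier could look like; the default source and destination paths are invented for the example and would normally be passed in from the CruiseControl.NET task.
using System;
using System.IO;
class DeployClickOnce
{
    static void Main(string[] args)
    {
        // Example paths only; pass real ones in from the CC.NET exec task.
        string source = args.Length > 0 ? args[0] : @"C:\Project\bin\Release\Project.publish";
        string target = args.Length > 1 ? args[1] : @"\\server\clickonce$\Project";
        CopyDirectory(source, target);
        Console.WriteLine("ClickOnce files copied to {0}", target);
    }
    // Recursively copy the publish output (manifests plus the Application Files folder).
    static void CopyDirectory(string sourceDir, string targetDir)
    {
        Directory.CreateDirectory(targetDir);
        foreach (string file in Directory.GetFiles(sourceDir))
            File.Copy(file, Path.Combine(targetDir, Path.GetFileName(file)), true);
        foreach (string dir in Directory.GetDirectories(sourceDir))
            CopyDirectory(dir, Path.Combine(targetDir, Path.GetFileName(dir)));
    }
}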
A: Thanks for all the help. The final solution we implemented took a bit from every answer.
We found it easier to handle working with multiple environments using simple batch files. I'm not suggesting this is the best way to do this, but for our given scenario and requirements, this worked well. Supplement "Project" with your project name and "Environment" with your environment name (dev, test, stage, production, whatever).
Here is the tasks area of our "ccnet.config" file.
<!-- override settings -->
<exec>
<executable>F:\Source\Project\Environment\CruiseControl\CopySettings.bat</executable>
</exec>
<!-- compile -->
<msbuild>
<executable>C:\WINDOWS\Microsoft.NET\Framework\v3.5\MSBuild.exe</executable>
<workingDirectory>F:\Source\Project\Environment\</workingDirectory>
<projectFile>Project.sln</projectFile>
<buildArgs>/noconsolelogger /p:Configuration=Debug /v:diag</buildArgs>
<targets>Rebuild</targets>
<timeout>0</timeout>
<logger>ThoughtWorks.CruiseControl.MsBuild.XmlLogger,ThoughtWorks.CruiseControl.MsBuild.dll</logger>
</msbuild>
<!-- clickonce publish -->
<exec>
<executable>F:\Source\Project\Environment\CruiseControl\Publish.bat</executable>
</exec>
The first thing you will notice is that CopySettings.bat runs. This copies specific settings for the environment, such as database connections.
Next, the standard MSBUILD task runs. Any compile errors are caught here and handled as normal.
The last thing to execute is Publish.bat. This actually performs an MSBUILD "rebuild" again from the command line, and parameters from CruiseControl are automatically passed in. Next, MSBUILD is called for the "publish" target. The exact same parameters are given to the publish as were given to the rebuild. This keeps the build numbers in sync. Also, our executables are named differently (i.e., ProjectDev and ProjectTest). We end up with different version numbers and names, and this allows ClickOnce to do its thing.
The last part of Publish.bat copies the actual files to their new homes. We don't use the publish.htm as all our users are on the network, we just give them a shortcut to the manifest file on their desktop and they can click and always be running the correct executable with a version number that ties out in CruiseControl.
Here is CopySettings.bat
XCOPY "F:\Source\Project\Environment\CruiseControl\Project\app.config" "F:\Source\Project\Environment\Project" /Y /I /R
XCOPY "F:\Source\Project\Environment\CruiseControl\Project\My Project\Settings.Designer.vb" "F:\Source\Project\Environment\Project\My Project" /Y /I /R
XCOPY "F:\Source\Project\Environment\CruiseControl\Project\My Project\Settings.settings" "F:\Source\Project\Environment\Project\My Project" /Y /I /R
And lastly, here is Publish.bat
C:\WINDOWS\Microsoft.NET\Framework\v3.5\MSBuild.exe /target:rebuild "F:\Source\Project\Environment\Project\Project.vbproj" /property:ApplicationRevision=%CCNetLabel% /property:AssemblyName="ProjectEnvironment" /property:PublishUrl="\\Server\bin\Project\Environment\\"
C:\WINDOWS\Microsoft.NET\Framework\v3.5\MSBuild.exe /target:publish "F:\Source\Project\Environment\Project\Project.vbproj" /property:ApplicationVersion="1.0.0.%CCNetLabel%" /property:AssemblyVersion="1.0.0.%CCNetLabel%" /property:AssemblyName="ProjectEnvironment"
XCOPY "F:\Source\Project\Environment\Project\bin\Debug\app.publish" "F:\Binary\Project\Environment" /Y /I
XCOPY "F:\Source\Project\Environment\Project\bin\Debug\app.publish\Application Files" "F:\Binary\Project\Environment\Application Files" /Y /I /S
Like I said, it's probably not done the way that CruiseControl and MSBUILD developers had intended things to work, but it does work. If you need to get this working yesterday, it might be the solution you're looking for. Good luck!
A: Just being able to pass the ${CCNetLabel} in the CCNET.config msbuild task would be a great improvement.
A: You want to use the ClickOnce manifest generation tasks in msbuild. The process is a little long winded, so I am just going to point you to a couple of links. Here is the reference on msdn and a sample article to hopefully get you started.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/12482",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "24"
} |
Q: How do you get a directory listing in C? How do you scan a directory for folders and files in C? It needs to be cross-platform.
A: GLib is a portability/utility library for C which forms the basis of the GTK+ graphical toolkit. It can be used as a standalone library.
It contains portable wrappers for managing directories. See Glib File Utilities documentation for details.
Personally, I wouldn't even consider writing large amounts of C-code without something like GLib behind me. Portability is one thing, but it's also nice to get data structures, thread helpers, events, mainloops etc. for free
Jikes, I'm almost starting to sound like a sales guy :) (don't worry, glib is open source (LGPL) and I'm not affiliated with it in any way)
A: The following POSIX program will print the names of the files in the current directory:
#define _XOPEN_SOURCE 700
#include <stdio.h>
#include <sys/types.h>
#include <dirent.h>
int main (void)
{
DIR *dp;
struct dirent *ep;
dp = opendir ("./");
if (dp != NULL)
{
while ((ep = readdir (dp)) != NULL)
puts (ep->d_name);
(void) closedir (dp);
return 0;
}
else
{
perror ("Couldn't open the directory");
return -1;
}
}
Credit: http://www.gnu.org/software/libtool/manual/libc/Simple-Directory-Lister.html
Tested in Ubuntu 16.04.
A: opendir/readdir are POSIX. If POSIX is not enough for the portability you want to achieve, check Apache Portable Runtime
A: The strict answer is "you can't", as the very concept of a folder is not truly cross-platform.
On MS platforms you can use _findfirst, _findnext and _findclose for a 'c' sort of feel, and FindFirstFile and FindNextFile for the underlying Win32 calls.
Here's the C-FAQ answer:
http://c-faq.com/osdep/readdir.html
A: I've created an open source (BSD) C header that deals with this problem. It currently supports POSIX and Windows. Please check it out:
https://github.com/cxong/tinydir
tinydir_dir dir;
tinydir_open(&dir, "/path/to/dir");
while (dir.has_next)
{
tinydir_file file;
tinydir_readfile(&dir, &file);
printf("%s", file.name);
if (file.is_dir)
{
printf("/");
}
printf("\n");
tinydir_next(&dir);
}
tinydir_close(&dir);
A: Directory listing varies greatly according to the OS/platform under consideration. This is because, various Operating systems using their own internal system calls to achieve this.
A solution to this problem would be to look for a library which masks this problem and portable. Unfortunately, there is no solution that works on all platforms flawlessly.
On POSIX compatible systems, you could use the library to achieve this using the code posted by Clayton (which is referenced originally from the Advanced Programming under UNIX book by W. Richard Stevens). this solution will work under *NIX systems and would also work on Windows if you have Cygwin installed.
Alternatively, you could write a code to detect the underlying OS and then call the appropriate directory listing function which would hold the 'proper' way of listing the directory structure under that OS.
A: The most similar method to readdir is probably using the little-known _find family of functions.
A: There is no standard C (or C++) way to enumerate files in a directory.
Under Windows you can use the FindFirstFile/FindNextFile functions to enumerate all entries in a directory. Under Linux/OSX use the opendir/readdir/closedir functions.
A: You can find the sample code on the wikibooks link
/**************************************************************
* A simpler and shorter implementation of ls(1)
* ls(1) is very similar to the DIR command on DOS and Windows.
**************************************************************/
#include <stdio.h>
#include <dirent.h>
int listdir(const char *path)
{
struct dirent *entry;
DIR *dp;
dp = opendir(path);
if (dp == NULL)
{
perror("opendir");
return -1;
}
while((entry = readdir(dp)))
puts(entry->d_name);
closedir(dp);
return 0;
}
int main(int argc, char **argv) {
int counter = 1;
if (argc == 1)
listdir(".");
while (++counter <= argc) {
printf("\nListing %s...\n", argv[counter-1]);
listdir(argv[counter-1]);
}
return 0;
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/12489",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "69"
} |
Q: Pretty printing XML files on Emacs I use emacs to edit my xml files (nxml-mode) and the files were generated by machine don't have any pretty formatting of the tags.
I have searched for pretty printing the entire file with indentation and saving it, but wasn't able to find an automatic way.
Is there a way? Or at least some editor on Linux which can do it.
A: If you only need pretty indenting without introducing any new line-breaks, you can apply the indent-region command to the entire buffer with these keystrokes:
C-x h
C-M-\
If you also need to introduce line-breaks, so that opening and closing tags are on separate lines, you could use the following very nice elisp function, written by Benjamin Ferrari. I found it on his blog and hope it's ok for me to reproduce it here:
(defun bf-pretty-print-xml-region (begin end)
"Pretty format XML markup in region. You need to have nxml-mode
http://www.emacswiki.org/cgi-bin/wiki/NxmlMode installed to do
this. The function inserts linebreaks to separate tags that have
nothing but whitespace between them. It then indents the markup
by using nxml's indentation rules."
(interactive "r")
(save-excursion
(nxml-mode)
(goto-char begin)
(while (search-forward-regexp "\>[ \\t]*\<" nil t)
(backward-char) (insert "\n") (setq end (1+ end)))
(indent-region begin end))
(message "Ah, much better!"))
This doesn't rely on an external tool like Tidy.
A: here's a few tweaks I made to Benjamin Ferrari's version:
*
*the search-forward-regexp didn't specify an end, so it would operate on stuff from beginning of region to end of buffer (instead of end of region)
*Now increments end properly, as Cheeso noted.
*it would insert a break between <tag></tag>, which modifies its value. Yes, technically we're modifying values of everything here, but an empty start/end is much more likely to be significant. Now uses two separate, slightly more strict searches to avoid that.
Still has the "doesn't rely on external tidy", etc. However, it does require cl for the incf macro.
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;; pretty print xml region
(defun pretty-print-xml-region (begin end)
"Pretty format XML markup in region. You need to have nxml-mode
http://www.emacswiki.org/cgi-bin/wiki/NxmlMode installed to do
this. The function inserts linebreaks to separate tags that have
nothing but whitespace between them. It then indents the markup
by using nxml's indentation rules."
(interactive "r")
(save-excursion
(nxml-mode)
(goto-char begin)
;; split <foo><foo> or </foo><foo>, but not <foo></foo>
(while (search-forward-regexp ">[ \t]*<[^/]" end t)
(backward-char 2) (insert "\n") (incf end))
;; split <foo/></foo> and </foo></foo>
(goto-char begin)
(while (search-forward-regexp "<.*?/.*?>[ \t]*<" end t)
(backward-char) (insert "\n") (incf end))
(indent-region begin end nil)
(normal-mode))
(message "All indented!"))
A: One way of doing it is as follows.
If you have something in the below format
<abc> <abc><abc> <abc></abc> </abc></abc> </abc>
In Emacs, try
M-x nxml-mode
M-x replace-regexp RET > *< RET >C-q C-j< RET
C-M-\ to indent
This will indent above xml example to below
<abc>
<abc>
<abc>
<abc>
</abc>
</abc>
</abc>
</abc>
In VIM you can do this by
:set ft=xml
:%s/>\s*</>\r</g
ggVG=
Hope this helps.
A: Emacs can run arbitrary commands with M-|. If you have xmllint installed:
"M-| xmllint --format -" will format the selected region
"C-u M-| xmllint --format -" will do the same, replacing the region with the output
A: I use nXML mode for editing and Tidy when I want to format and indent XML or HTML. There is also an Emacs interface to Tidy.
A: For introducing line breaks and then pretty printing
M-x sgml-mode
M-x sgml-pretty-print
A: Thanks to Tim Helmstedt above, I made something like this:
(defun nxml-pretty-format ()
  (interactive)
  (save-excursion
    (shell-command-on-region (point-min) (point-max) "xmllint --format -" (buffer-name) t)
    (nxml-mode)
    (indent-region (point-min) (point-max))))
fast and easy. Many thanks.
A: *
*Emacs nxml-mode can work on presented format, but you'll have to split the lines.
*For longer files that simply isn't worth it. Run this stylesheet (ideally with Saxon
which IMHO gets the line indents about right) against longer files
to get a nice pretty print. For any elements where you want to retain white space
add their names alongside 'programlisting' as in 'programlisting yourElementName'
HTH
A: I took Jason Viers' version and added logic to put xmlns declarations on their own lines. This assumes that you have xmlns= and xmlns: with no intervening whitespace.
(defun cheeso-pretty-print-xml-region (begin end)
"Pretty format XML markup in region. You need to have nxml-mode
http://www.emacswiki.org/cgi-bin/wiki/NxmlMode installed to do
this. The function inserts linebreaks to separate tags that have
nothing but whitespace between them. It then indents the markup
by using nxml's indentation rules."
(interactive "r")
(save-excursion
(nxml-mode)
;; split <foo><bar> or </foo><bar>, but not <foo></foo>
(goto-char begin)
(while (search-forward-regexp ">[ \t]*<[^/]" end t)
(backward-char 2) (insert "\n") (incf end))
;; split <foo/></foo> and </foo></foo>
(goto-char begin)
(while (search-forward-regexp "<.*?/.*?>[ \t]*<" end t)
(backward-char) (insert "\n") (incf end))
;; put xml namespace decls on newline
(goto-char begin)
(while (search-forward-regexp "\\(<\\([a-zA-Z][-:A-Za-z0-9]*\\)\\|['\"]\\) \\(xmlns[=:]\\)" end t)
(goto-char (match-end 0))
(backward-char 6) (insert "\n") (incf end))
(indent-region begin end nil)
(normal-mode))
(message "All indented!"))
A: as of 2017 emacs already comes with this capability by default, but you have to write this little function into your ~/.emacs.d/init.el:
(require 'sgml-mode)
(defun reformat-xml ()
(interactive)
(save-excursion
(sgml-pretty-print (point-min) (point-max))
(indent-region (point-min) (point-max))))
then just call M-x reformat-xml
source: https://davidcapello.com/blog/emacs/reformat-xml-on-emacs/
A: You don't even need to write your own function - sgml-mode (a gnu emacs core module) has a built-in pretty printing function called (sgml-pretty-print ...) which takes region beginning and end arguments.
If you are cutting and pasting xml and you find your terminal is chopping the lines in arbitrary places you can use this pretty printer which fixes broken lines first.
A: Tidy looks like a good mode. Must look at it. Will use it if I really need all the features it offers.
Anyway, this problem was nagging me for about a week and I wasn't searching properly. After posting, I started searching and found one site with an elisp function which does it pretty good. The author also suggests using Tidy.
Thanks for answer Marcel (too bad I don't have enough points to upmod you).
Will post about it soon on my blog. Here is a post about it (with a link to Marcel's site).
A: I use xml-reformat-tags from xml-parse.el. Usually you will want to have the point at the beginning of the file when running this command.
It's interesting that the file is incorporated into Emacspeak. When I was using Emacspeak on a day-by-day basis, I thought xml-reformat-tags was an Emacs builtin. One day I lost it and had to make an internet search for it, and thus entered the wiki page mentioned above.
I'm attaching also my code to start xml-parse. Not sure if this is the best piece of Emacs code, but seems to work for me.
(if (file-exists-p "~/.emacs.d/packages/xml-parse.el")
(let ((load-path load-path))
(add-to-list 'load-path "~/.emacs.d/packages")
(require 'xml-parse))
)
A: If you use spacemacs, just use command 'spacemacs/indent-region-or-buffer'.
M-x spacemacs/indent-region-or-buffer
A: I'm afraid I like Benjamin Ferrari's version much better. The internal pretty print always places the end tag on a new line after the value, inserting unwanted CRs into the tag values.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/12492",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "86"
} |
Q: PowerShell's Invoke-Expression missing param I thought that I had the latest CTP of PowerShell 2 but when I try the command:
invoke-expression -computername Server01 -command 'get-process PowerShell'
I get an error message:
A parameter cannot be found that matches parameter name 'computername'.
So the question is: How can I tell which version of PowerShell I have installed? And what the latest version is?
A: From last night's build (which means you might have this in CTP3 but if not, you'll get it in the next public drop):
[4120:0]PS> $psversiontable
Name Value
---- -----
CLRVersion 2.0.50727.3521
BuildVersion 6.1.7047.0
PSVersion 2.0
WSManStackVersion 2.0
PSCompatibleVersions {1.0, 2.0}
SerializationVersion 1.1.0.1
PSRemotingProtocolVersion 2.0
Experiment! Enjoy! Engage!
Jeffrey Snover [MSFT]
Windows Management Partner Architect
A: $host.version.tostring() will return the version number.
RTM of v1 is 1.0.0.0
Couldn't honestly tell you what the latest version of the previews are because I haven't had a chance to play yet.
A: The problem is that from CTP 1 to CTP2, they switched up the Invoke stuff, all the remoting stuff is done through Invoke-Command now, and Invoke-Expression is solely for turning a string into a script ;)
P.S.: If you're on v2 you can run $PSVersionTable to see a list of versions including the CLR and Build versions.
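For reference, the v2 remoting equivalent of the command in the question looks something like this (a sketch; it assumes WinRM/remoting is already configured on both machines):
Invoke-Command -ComputerName Server01 -ScriptBlock { Get-Process PowerShell }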
A: The latest CTP is CTP2 released on 05/02/08 and can be found here. Remoting requires WinRM to be installed on both the calling machine and the target machine. Included in the CTP is a script to configure WS-Management called Configure-WSMan.ps1.
This command should get you the version number of PowerShell that you have installed.
Get-Command "$PSHome\powershell.exe" | Format-List FileVersionInfo
V1.0 is 6.0.5430.0
CTP2 is 6.1.6585.1
I don't have the version number for the first CTP on hand, but I can find it if you really need it.
A: I'm guessing that this is a change to the cmdlet made during the configuration process Configure-Wsman.ps1. I don't have an environment setup to test right now, but I'm guessing something went wrong with the configuration. I can verify that on XP the parameter is not available (duh). I'd assume that you will find the same on Vista/08 without the configuration completed.
A: If the $PSVersionTable variable doesn't exist, then you are running V1.
If it exists, then the version will be available as $PSVersionTable.PSVersion.
function Get-PSVersion {
if (test-path variable:psversiontable)
{$psversiontable.psversion}
else
{[version]"1.0.0.0"}
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/12501",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Why Are People Still Creating RSS Feeds? ...instead of using the Atom syndication format?
Atom is a well-defined, general-purpose XML syndication format. RSS is fractured into four different versions. All the major feed readers have supported Atom for as long as I can remember, so why isn't its use more prevalent?
Worst of all are sites that provide feeds in both formats - what's the point?!
*
*UPDATE (18 August): Interestingly,
this site itself is using Atom for
its feeds rather than RSS.
A: Worse is better.
A: If you are asking why the Atom syndication format is not more widely adopted than the various versions of the RSS format, I think it would be difficult to come to an objective answer. A variety of factors like the amount of investment in RSS prior to Atom, the relative ease in implementing RSS versus Atom, and 'marketing' all come into play.
I can, however, think of a few things that should be considered when choosing what syndication format to use in representing resources:
Atom
*
*Atom is an official Internet standards track protocol.
*Atom has a registered content media type for its feed and entry representations.
*Without the use of syndication extensions Atom tends to be a more robust format, especially in the link relations arena.
*Representing resources using Atom allows you to leverage the Atom Publishing Protocol (AtomPub) to provide a RESTful API.
*The RFC-3339 DateTime format used by Atom is easy to parse.
*There is only one deprecated version of Atom (0.3) you might conceivably have to support.
*Implementing Atom typically takes more time to do correctly than RSS, as there are more restrictions and the technical specification can be a bit dense.
RSS
*
*RSS is a relatively simple format to implement in a short amount of time.
*There are not as many constraints/rules placed on RSS as there are on Atom.
*RSS is not an official Internet standards track protocol. However, RSS has enough adoption that you can reasonably expect it to be consumable by a variety of clients.
*As there are quite a few 'deprecated' versions of RSS, you might conceivably have to support RSS feeds that vary quite a bit in their formatting details.
*RSS does not have a registered media content type. However enough publishers use the same unregistered content type that it is almost a defacto MIME type.
*The RFC-822 DateTime format utilized by RSS is more difficult to parse as this particular timestamp format allows a lot more possible variation in the format that is still considered valid.
*You will need to extend the RSS format using a variety of published syndication extensions when you start trying to represent resources with complex link relations.
I think it is important to remember that to the end-user, what syndication format you choose to use is not very important, as most feed readers and browsers handle either format equally well. The choice of syndication format however can be very important to the developer, as there are technical details that can impact the software development side of things.
A: The 'why' was fairly well answered, but I would suggest going forward that developers only implement Atom on sites. There's no reason to have multiple formats for a site available and any modern feed client can now parse Atom feeds.
Atom has quite a few technical advantages over RSS and is being widely supported and utilized by major companies such as Google and Microsoft.
As for branding, I don't really care what acronym is used. I think the universal orange broadcast icon or the word "feed" is what people care about. Despite the proliferation of RSS and Atom feeds, I'd say the average web user still has no idea what they are. Looking at it in that way, the whole concept of syndication feeds is still in its infancy.
A: For the same reason that every "better" solution did not succeed for mass market. RSS is widely deployed and it solves the same problem Atom is trying to solve.
Personaly, I have a large number of RSS feeds that I generate myself. They are working today and solving a problem. I wonder how you could convince me to rewrite all those feeds to Atom just to use a "better" format.
Now if you consider how the REST architecture is gaining visibility these days because of better and simpler caching and scalability, these are real arguments. I understand that Atom is closely related to the REST hype and it may be the best way to market it. As REST will be gaining visibility, so are its related formats like Atom.
A: The fundamental thing that the Atom creators didn't understand (and that the Atom supporters still don't understand), is that Atom isn't somehow separate from RSS. There's this idea that RSS fractured, and that somehow Atom fixes that problem. But it doesn't. Atom is just another RSS splinter. A new name doesn't change the fact that it's just one more standard competing to do the same job, a job for which any of the competing standards are sufficient.
No one outside a fairly small group of people care at all which standard is used. They just want it to work. Atom, RSS 2.0, RSS 1.0, RSS 401(k), whatever. As long as it works, the users are happy. The RSS "brand" very much defines the entire feed category, though, so on the rare occasion that someone does know enough to choose, they will tend to choose RSS, because it's got "the name." They will also tend to choose RSS 2.0, because it's got the bigger number.
RSS, and especially RSS 2.0, are very much entrenched in the feed "industry." Atom hasn't taken off because it doesn't bring much except a new name. Why switch away from RSS when it works just fine? And why even bother using Atom on new projects if RSS is sufficient? Switching to a new feed format mostly means extra time spent learning the new format.
If nothing else Apple's exclusive use of RSS 2.0 for podcasts means that RSS 2.0 is here for the foreseeable future.
A: The same reason that people are still writing HTML 4 loose, strict, XHTML transitional, XHTML strict, etc. Legacy code / working with what you already know.
Besides, both formats have their merits. Better to support a couple different formats than have one be-all-end-all-subscribe-to-everything feed that becomes bloated.
A: Because for the majority of purposes either will work, and RSS has the advantage of being the acronym that defines the category.
Beyond that you would have to email individual sites and ask them.
A: Vincent, I'm not suggesting that anyone rewrite existing RSS feeds just for the sake of it, that would be a big waste of time! In terms of RSS being the acronym that defines the category, I'm guessing that most users now identify with the orange feed icon, rather than the specific flavour of XML behind it.
A: I think RSS has the more marketable name :-) RSS is easy to say, serious-sounding, and virtually meaningless; it really sounds like a silver-bullet technology. The word "Atom" has meaning, but hardly anyone associates it with some cosmic hi-tech; the more usual associations are students, high-school physics, weapons, and the hypeware of "web 2.0". Very few people out there really connect it with what it actually is and why it got its name: an atomically precise, descriptive, structured knowledge representation framework. If I say "RSS" in a non-IT-pro crowd, I've got a very good chance of being instantly understood; if I say "Atom", hardly anyone will get it as fast or as clearly. RSS is de facto the name for syndication feeds. That's why I think it can be a good idea to label it RSS while technically serving 100% Atom and only Atom: software doesn't care what's drawn on a button, and end-users will get what they were meant to get. IMHO.
A: There are a lot of RSS feed readers out there that people are used to using, and most importantly, RSS is very well known and has been around much longer. Why mess with something if it works?
A: Because the developers of said sites got reamed by their marketing manager for "excluding" people by not providing RSS. Since Mr. Marketing has never heard of Atom, you just provide both.
If you are RESTfully implementing it, it's not a big deal to just do both and not get yelled at by other departments.
A: RSS is simpler, and that's where its strength is. Atom is better defined, yes, but that's the problem: they made it easy to write a very complex feed when at the end of the day you want a simplified summary.
Why do so many people prefer RSS? It's easy and gets the job done. You can edit it by hand; not so easy with Atom. Have you ever tried to write a feed reader?
IMHO, the only thing Atom did good was multiple enclosures.
Why do some websites offer two feeds, RSS and Atom ? Because they can and because some ancient feed reader may not support Atom.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/12509",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "40"
} |
Q: Expression Evaluation and Tree Walking using polymorphism? (ala Steve Yegge) This morning, I was reading Steve Yegge's When Polymorphism Fails, when I came across a question that a co-worker of his used to ask potential employees when they came for their interview at Amazon.
As an example of polymorphism in
action, let's look at the classic
"eval" interview question, which (as
far as I know) was brought to Amazon
by Ron Braunstein. The question is
quite a rich one, as it manages to
probe a wide variety of important
skills: OOP design, recursion, binary
trees, polymorphism and runtime
typing, general coding skills, and (if
you want to make it extra hard)
parsing theory.
At some point, the candidate hopefully
realizes that you can represent an
arithmetic expression as a binary
tree, assuming you're only using
binary operators such as "+", "-",
"*", "/". The leaf nodes are all
numbers, and the internal nodes are
all operators. Evaluating the
expression means walking the tree. If
the candidate doesn't realize this,
you can gently lead them to it, or if
necessary, just tell them.
Even if you tell them, it's still an
interesting problem.
The first half of the question, which
some people (whose names I will
protect to my dying breath, but their
initials are Willie Lewis) feel is a
Job Requirement If You Want To Call
Yourself A Developer And Work At
Amazon, is actually kinda hard. The
question is: how do you go from an
arithmetic expression (e.g. in a
string) such as "2 + (2)" to an
expression tree. We may have an ADJ
challenge on this question at some
point.
The second half is: let's say this is
a 2-person project, and your partner,
who we'll call "Willie", is
responsible for transforming the
string expression into a tree. You get
the easy part: you need to decide what
classes Willie is to construct the
tree with. You can do it in any
language, but make sure you pick one,
or Willie will hand you assembly
language. If he's feeling ornery, it
will be for a processor that is no
longer manufactured in production.
You'd be amazed at how many candidates
boff this one.
I won't give away the answer, but a
Standard Bad Solution involves the use
of a switch or case statement (or just
good old-fashioned cascaded-ifs). A
Slightly Better Solution involves
using a table of function pointers,
and the Probably Best Solution
involves using polymorphism. I
encourage you to work through it
sometime. Fun stuff!
So, let's try to tackle the problem all three ways. How do you go from an arithmetic expression (e.g. in a string) such as "2 + (2)" to an expression tree using cascaded-if's, a table of function pointers, and/or polymorphism?
Feel free to tackle one, two, or all three.
[update: title modified to better match what most of the answers have been.]
A:
The problem, I think, is that we need to parse parentheses, and yet they are not a binary operator? Should we take (2) as a single token, that evaluates to 2?
The parens don't need to show up in the expression tree, but they do affect its shape. E.g., the tree for (1+2)+3 is different from 1+(2+3):
+
/ \
+ 3
/ \
1 2
versus
+
/ \
1 +
/ \
2 3
The parentheses are a "hint" to the parser (e.g., per superjoe30, to "recursively descend")
A: This gets into parsing/compiler theory, which is kind of a rabbit hole... The Dragon Book is the standard text for compiler construction, and takes this to extremes. In this particular case, you want to construct a context-free grammar for basic arithmetic, then use that grammar to parse out an abstract syntax tree. You can then iterate over the tree, reducing it from the bottom up (it's at this point you'd apply the polymorphism/function pointers/switch statement to reduce the tree).
I've found these notes to be incredibly helpful in compiler and parsing theory.
A: Representing the Nodes
If we want to include parentheses, we need 5 kinds of nodes:
*
*the binary nodes: Add, Minus, Mul, Div. These have two children, a left and a right side
+
/ \
node node
*a node to hold a value: Val. No child nodes, just a numeric value
*a node to keep track of the parens: Paren. A single child node for the subexpression
( )
|
node
For a polymorphic solution, we need to have this kind of class relationship:
*
*Node
*BinaryNode : inherit from Node
*Plus : inherit from Binary Node
*Minus : inherit from Binary Node
*Mul : inherit from Binary Node
*Div : inherit from Binary Node
*Value : inherit from Node
*Paren : inherit from node
There is a virtual function for all nodes called eval(). If you call that function, it will return the value of that subexpression.
A: String Tokenizer + LL(1) Parser will give you an expression tree... the polymorphism way might involve an abstract Arithmetic class with an "evaluate(a,b)" function, which is overridden for each of the operators involved (Addition, Subtraction etc) to return the appropriate value, and the tree contains Integers and Arithmetic operators, which can be evaluated by a post(?)-order traversal of the tree.
A:
I won't give away the answer, but a
Standard Bad Solution involves the use
of a switch or case statement (or just
good old-fashioned cascaded-ifs). A
Slightly Better Solution involves
using a table of function pointers,
and the Probably Best Solution
involves using polymorphism.
The last twenty years of evolution in interpreters can be seen as going the other way - polymorphism (eg naive Smalltalk metacircular interpreters) to function pointers (naive lisp implementations, threaded code, C++) to switch (naive byte code interpreters), and then onwards to JITs and so on - which either require very big classes, or (in singly polymorphic languages) double-dispatch, which reduces the polymorphism to a type-case, and you're back at stage one. What definition of 'best' is in use here?
For simple stuff a polymorphic solution is OK - here's one I made earlier, but either stack and bytecode/switch or exploiting the runtime's compiler is usually better if you're, say, plotting a function with a few thousand data points.
A: Hm... I don't think you can write a top-down parser for this without backtracking, so it has to be some sort of a shift-reduce parser. LR(1) or even LALR will of course work just fine with the following (ad-hoc) language definition:
Start -> E1
E1 -> E1+E1 | E1-E1
E1 -> E2*E2 | E2/E2 | E2
E2 -> number | (E1)
Separating it out into E1 and E2 is necessary to maintain the precedence of * and / over + and -.
But this is how I would do it if I had to write the parser by hand:
*
*Two stacks, one storing nodes of the tree as operands and one storing operators
*Read the input left to right, make leaf nodes of the numbers and push them into the operand stack.
*If you have >= 2 operands on the stack, pop 2, combine them with the topmost operator in the operator stack and push this structure back onto the operand stack, unless
*The next operator has higher precedence than the one currently on top of the stack.
This leaves us the problem of handling brackets. One elegant (I thought) solution is to store the precedence of each operator as a number in a variable. So initially,
*
*int plus, minus = 1;
*int mul, div = 2;
Now every time you see a left bracket, increment all these variables by 2, and every time you see a right bracket, decrement all the variables by 2.
This will ensure that the + in 3*(4+5) has higher precedence than the *, and 3*4 will not be pushed onto the stack. Instead it will wait for 5, push 4+5, then push 3*(4+5).
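Here's a rough Python sketch of that two-stack idea with the precedence-bump trick (my own illustrative names; tokens are assumed to be pre-split, numbers are single tokens, and internal nodes are just (op, left, right) tuples):
def build_tree(tokens):
    operands, operators = [], []      # operators holds (op, effective precedence) pairs
    paren_bump = 0                    # parens only adjust precedence, they never hit the stacks

    def reduce_once():
        op, _ = operators.pop()
        right, left = operands.pop(), operands.pop()
        operands.append((op, left, right))    # internal node

    for tok in tokens:
        if tok == '(':
            paren_bump += 2
        elif tok == ')':
            paren_bump -= 2
        elif tok in '+-*/':
            prec = (1 if tok in '+-' else 2) + paren_bump
            while operators and operators[-1][1] >= prec:
                reduce_once()
            operators.append((tok, prec))
        else:
            operands.append(int(tok))         # leaf node
    while operators:
        reduce_once()
    return operands[0]

print(build_tree("3 * ( 4 + 5 )".split()))    # ('*', 3, ('+', 4, 5))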
A: Polymorphic Tree Walking, Python version
#!/usr/bin/python
class Node:
"""base class, you should not process one of these"""
def process(self):
        raise NotImplementedError('you should not be processing a node')
class BinaryNode(Node):
"""base class for binary nodes"""
def __init__(self, _left, _right):
self.left = _left
self.right = _right
def process(self):
        raise NotImplementedError('you should not be processing a binarynode')
class Plus(BinaryNode):
def process(self):
return self.left.process() + self.right.process()
class Minus(BinaryNode):
def process(self):
return self.left.process() - self.right.process()
class Mul(BinaryNode):
def process(self):
return self.left.process() * self.right.process()
class Div(BinaryNode):
def process(self):
return self.left.process() / self.right.process()
class Num(Node):
def __init__(self, _value):
self.value = _value
def process(self):
return self.value
def demo(n):
print n.process()
demo(Num(2)) # 2
demo(Plus(Num(2),Num(5))) # 2 + 5
demo(Plus(Mul(Num(2),Num(3)),Div(Num(10),Num(5)))) # (2 * 3) + (10 / 5)
The tests are just building up the binary trees by using constructors.
program structure:
abstract base class: Node
*
*all Nodes inherit from this class
abstract base class: BinaryNode
*
*all binary operators inherit from this class
*process method does the work of evaluating the expression and returning the result
binary operator classes: Plus,Minus,Mul,Div
*
*two child nodes, one each for left side and right side subexpressions
number class: Num
*
*holds a leaf-node numeric value, e.g. 17 or 42
A: Re: Justin
I think the tree would look something like this:
+
/ \
2 ( )
|
2
Basically, you'd have an "eval" node, that just evaluates the tree below it. That would then be optimized out to just being:
+
/ \
2 2
In this case the parens aren't required and don't add anything. They don't add anything logically, so they'd just go away.
A: I think the question is about how to write a parser, not the evaluator. Or rather, how to create the expression tree from a string.
Case statements that return a base class don't exactly count.
The basic structure of a "polymorphic" solution (which is another way of saying, I don't care what you build this with, I just want to extend it with rewriting the least amount of code possible) is deserializing an object hierarchy from a stream with a (dynamic) set of known types.
The crux of the implementation of the polymorphic solution is to have a way to create an expression object from a pattern matcher, likely recursive. I.e., map a BNF or similar syntax to an object factory.
A: should use a functional language imo. Trees are harder to represent and manipulate in OO languages.
A:
Or maybe this is the real question:
how can you represent (2) as a BST?
That is the part that is tripping me
up.
Recursion.
A: @Justin:
Look at my note on representing the nodes. If you use that scheme, then
2 + (2)
can be represented as
.
/ \
2 ( )
|
2
A: As people have been mentioning previously, when you use expression trees parens are not necessary. The order of operations becomes trivial and obvious when you're looking at an expression tree. The parens are hints to the parser.
While the accepted answer is the solution to one half of the problem, the other half - actually parsing the expression - is still unsolved. Typically, these sorts of problems can be solved using a recursive descent parser. Writing such a parser is often a fun exercise, but most modern tools for language parsing will abstract that away for you.
The parser is also significantly harder if you allow floating point numbers in your string. I had to create a DFA to accept floating point numbers in C -- it was a very painstaking and detailed task. Remember, valid floating points include: 10, 10., 10.123, 9.876e-5, 1.0f, .025, etc. I assume some dispensation from this (in favor of simplicity and brevity) was made in the interview.
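For illustration, here's a minimal recursive-descent sketch in Python (integers only, no error handling; the grammar and helper names are mine, not anything from the actual interview) that goes from a string to the kind of tree discussed above:
import re

def tokenize(s):
    return re.findall(r'\d+|[+*/()-]', s)

def parse(tokens):
    pos = 0

    def peek():
        return tokens[pos] if pos < len(tokens) else None

    def eat():
        nonlocal pos
        tok = tokens[pos]
        pos += 1
        return tok

    def factor():                      # factor -> NUMBER | '(' expr ')'
        if peek() == '(':
            eat()                      # consume '('
            node = expr()
            eat()                      # consume ')'
            return node
        return int(eat())              # leaf node

    def term():                        # term -> factor (('*'|'/') factor)*
        node = factor()
        while peek() in ('*', '/'):
            node = (eat(), node, factor())   # internal node: (op, left, right)
        return node

    def expr():                        # expr -> term (('+'|'-') term)*
        node = term()
        while peek() in ('+', '-'):
            node = (eat(), node, term())
        return node

    return expr()

print(parse(tokenize("2 + (2)")))      # ('+', 2, 2)
print(parse(tokenize("(1 + 2) * 3")))  # ('*', ('+', 1, 2), 3)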
A: I've written such a parser with some basic techniques like
Infix -> RPN and
Shunting Yard and
Tree Traversals.
Here is the implementation I came up with.
It's written in C++ and compiles on both Linux and Windows.
Any suggestions/questions are welcome.
So, let's try to tackle the problem all three ways. How do you go from an arithmetic expression (e.g. in a string) such as "2 + (2)" to an expression tree using cascaded-if's, a table of function pointers, and/or polymorphism?
This is interesting, but I don't think this belongs to the realm of object-oriented programming... I think it has more to do with parsing techniques.
A: I've kind of chucked this C# console app together as a bit of a proof of concept. I have a feeling it could be a lot better (that switch statement in GetNode is kind of clunky; it's there because I hit a blank trying to map a class name to an operator). Any suggestions on how it could be improved are very welcome.
using System;
class Program
{
static void Main(string[] args)
{
string expression = "(((3.5 * 4.5) / (1 + 2)) + 5)";
Console.WriteLine(string.Format("{0} = {1}", expression, new Expression.ExpressionTree(expression).Value));
Console.WriteLine("\nShow's over folks, press a key to exit");
Console.ReadKey(false);
}
}
namespace Expression
{
// -------------------------------------------------------
abstract class NodeBase
{
public abstract double Value { get; }
}
// -------------------------------------------------------
class ValueNode : NodeBase
{
public ValueNode(double value)
{
_double = value;
}
private double _double;
public override double Value
{
get
{
return _double;
}
}
}
// -------------------------------------------------------
abstract class ExpressionNodeBase : NodeBase
{
protected NodeBase GetNode(string expression)
{
// Remove parenthesis
expression = RemoveParenthesis(expression);
// Is expression just a number?
double value = 0;
if (double.TryParse(expression, out value))
{
return new ValueNode(value);
}
else
{
int pos = ParseExpression(expression);
if (pos > 0)
{
string leftExpression = expression.Substring(0, pos - 1).Trim();
string rightExpression = expression.Substring(pos).Trim();
switch (expression.Substring(pos - 1, 1))
{
case "+":
return new Add(leftExpression, rightExpression);
case "-":
return new Subtract(leftExpression, rightExpression);
case "*":
return new Multiply(leftExpression, rightExpression);
case "/":
return new Divide(leftExpression, rightExpression);
default:
throw new Exception("Unknown operator");
}
}
else
{
throw new Exception("Unable to parse expression");
}
}
}
private string RemoveParenthesis(string expression)
{
if (expression.Contains("("))
{
expression = expression.Trim();
int level = 0;
int pos = 0;
foreach (char token in expression.ToCharArray())
{
pos++;
switch (token)
{
case '(':
level++;
break;
case ')':
level--;
break;
}
if (level == 0)
{
break;
}
}
if (level == 0 && pos == expression.Length)
{
expression = expression.Substring(1, expression.Length - 2);
expression = RemoveParenthesis(expression);
}
}
return expression;
}
private int ParseExpression(string expression)
{
int winningLevel = 0;
byte winningTokenWeight = 0;
int winningPos = 0;
int level = 0;
int pos = 0;
foreach (char token in expression.ToCharArray())
{
pos++;
switch (token)
{
case '(':
level++;
break;
case ')':
level--;
break;
}
if (level <= winningLevel)
{
if (OperatorWeight(token) > winningTokenWeight)
{
winningLevel = level;
winningTokenWeight = OperatorWeight(token);
winningPos = pos;
}
}
}
return winningPos;
}
private byte OperatorWeight(char value)
{
switch (value)
{
case '+':
case '-':
return 3;
case '*':
return 2;
case '/':
return 1;
default:
return 0;
}
}
}
// -------------------------------------------------------
class ExpressionTree : ExpressionNodeBase
{
protected NodeBase _rootNode;
public ExpressionTree(string expression)
{
_rootNode = GetNode(expression);
}
public override double Value
{
get
{
return _rootNode.Value;
}
}
}
// -------------------------------------------------------
abstract class OperatorNodeBase : ExpressionNodeBase
{
protected NodeBase _leftNode;
protected NodeBase _rightNode;
public OperatorNodeBase(string leftExpression, string rightExpression)
{
_leftNode = GetNode(leftExpression);
_rightNode = GetNode(rightExpression);
}
}
// -------------------------------------------------------
class Add : OperatorNodeBase
{
public Add(string leftExpression, string rightExpression)
: base(leftExpression, rightExpression)
{
}
public override double Value
{
get
{
return _leftNode.Value + _rightNode.Value;
}
}
}
// -------------------------------------------------------
class Subtract : OperatorNodeBase
{
public Subtract(string leftExpression, string rightExpression)
: base(leftExpression, rightExpression)
{
}
public override double Value
{
get
{
return _leftNode.Value - _rightNode.Value;
}
}
}
// -------------------------------------------------------
class Divide : OperatorNodeBase
{
public Divide(string leftExpression, string rightExpression)
: base(leftExpression, rightExpression)
{
}
public override double Value
{
get
{
return _leftNode.Value / _rightNode.Value;
}
}
}
// -------------------------------------------------------
class Multiply : OperatorNodeBase
{
public Multiply(string leftExpression, string rightExpression)
: base(leftExpression, rightExpression)
{
}
public override double Value
{
get
{
return _leftNode.Value * _rightNode.Value;
}
}
}
}
A: Ok, here is my naive implementation. Sorry, I did not feel like using objects for that one, but it is easy to convert. I feel a bit like evil Willie (from Steve's story).
#!/usr/bin/env python
#tree structure [left argument, operator, right argument, priority level]
tree_root = [None, None, None, None]
#count of parethesis nesting
parenthesis_level = 0
#current node with empty right argument
current_node = tree_root
#indices in tree_root nodes Left, Operator, Right, PRiority
L, O, R, PR = 0, 1, 2, 3
#functions that realise operators
def sum(a, b):
return a + b
def diff(a, b):
return a - b
def mul(a, b):
return a * b
def div(a, b):
return a / b
#tree evaluator
def process_node(n):
try:
len(n)
except TypeError:
return n
left = process_node(n[L])
right = process_node(n[R])
return n[O](left, right)
#mapping operators to relevant functions
o2f = {'+': sum, '-': diff, '*': mul, '/': div, '(': None, ')': None}
#converts token to a node in tree
def convert_token(t):
global current_node, tree_root, parenthesis_level
if t == '(':
parenthesis_level += 2
return
if t == ')':
parenthesis_level -= 2
return
try: #assumption that we have just an integer
l = int(t)
except (ValueError, TypeError):
pass #if not, no problem
else:
if tree_root[L] is None: #if it is first number, put it on the left of root node
tree_root[L] = l
else: #put on the right of current_node
current_node[R] = l
return
priority = (1 if t in '+-' else 2) + parenthesis_level
#if tree_root does not have operator put it there
if tree_root[O] is None and t in o2f:
tree_root[O] = o2f[t]
tree_root[PR] = priority
return
#if new node has less or equals priority, put it on the top of tree
if tree_root[PR] >= priority:
temp = [tree_root, o2f[t], None, priority]
tree_root = current_node = temp
return
#starting from root search for a place with higher priority in hierarchy
current_node = tree_root
while type(current_node[R]) != type(1) and priority > current_node[R][PR]:
current_node = current_node[R]
#insert new node
temp = [current_node[R], o2f[t], None, priority]
current_node[R] = temp
current_node = temp
def parse(e):
token = ''
for c in e:
if c <= '9' and c >='0':
token += c
continue
if c == ' ':
if token != '':
convert_token(token)
token = ''
continue
if c in o2f:
if token != '':
convert_token(token)
convert_token(c)
token = ''
continue
print "Unrecognized character:", c
if token != '':
convert_token(token)
def main():
parse('(((3 * 4) / (1 + 2)) + 5)')
print tree_root
print process_node(tree_root)
if __name__ == '__main__':
main()
| {
"language": "en",
"url": "https://stackoverflow.com/questions/12516",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "27"
} |
Q: Tools for automating mouse and keyboard events sent to a windows application What tools are useful for automating clicking through a windows form application? Is this even useful? I see the testers at my company doing this a great deal and it seems like a waste of time.
A: Check out https://github.com/TestStack/White and http://nunitforms.sourceforge.net/. We've used the White project with success.
A: Though they're mostly targeted at automating administration tasks or shortcuts for users, Autohotkey and AutoIT let you automate nearly anything you want as far as mouse/keyboard interaction.
Some of the mouse stuff can get tricky when the only way to really tell it what you want to click is an X,Y coordinate, but for automating entirely arbitrary tasks on a Windows machine, it does the trick.
Like I said, they're not necessarily intended for testing purposes, so they're not instrumented for unit test conventions. However, I use them all of the time to automate stuff that isn't testing related.
A: You can do it programmatically via the Microsoft UI Automation API. There's an MSDN Magazine article about it.
Integrates well with unit test frameworks. A better option than the coordinate-based script runners because you don't have to rewrite scripts when layouts change.
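To give a feel for it, a bare-bones sketch against System.Windows.Automation might look like the following (the window and button names are made up; you need references to UIAutomationClient and UIAutomationTypes):
using System.Windows.Automation;

class ClickDemo
{
    static void Main()
    {
        // Find a top-level window by its title.
        AutomationElement window = AutomationElement.RootElement.FindFirst(
            TreeScope.Children,
            new PropertyCondition(AutomationElement.NameProperty, "My App"));

        // Find a button inside it and invoke (click) it without touching the mouse.
        AutomationElement button = window.FindFirst(
            TreeScope.Descendants,
            new PropertyCondition(AutomationElement.NameProperty, "OK"));

        var invoke = (InvokePattern)button.GetCurrentPattern(InvokePattern.Pattern);
        invoke.Invoke();
    }
}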
A: There's a couple out there. They all hook into the windows API to log item clicks, and then reproduce them to test.
We're now mostly web based (using WatiN), but we used to use Mercury Quicktest.
Don't use Quicktest, it's awful for a tremendously long list of reasons.
A: This is what I was looking for.
Check out http://www.codeplex.com/white and http://nunitforms.sourceforge.net/. We've used the White project with success.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/12523",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10"
} |
Q: How do you convert the number you get from datepart to the name of the day? Is there a quick one-liner to call datepart in Sql Server and get back the name of the day instead of just the number?
select datepart(dw, getdate());
This will return 1-7, with Sunday being 1. I would like 'Sunday' instead of 1.
A: select datename(weekday, getdate());
A: It actually took me more searching than I thought it would to find this answer. It's funny how you can use a technology for ages and never know about simple functions like this.
select datename(dw, getdate())
I'm not sure how localization would work with this function. Getting the name client-side is probably the answer, but it would be nice to do it on the database. Would Sql Server use the collation setting to determine the output for this?
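If I remember correctly it's the session language (SET LANGUAGE, or the login's default language) rather than the collation that controls the names datename returns, so something like this should come back localized (illustrative only):
SET LANGUAGE French;
SELECT DATENAME(weekday, GETDATE());  -- e.g. 'mardi' instead of 'Tuesday'
SET LANGUAGE us_english;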
A: This is not possible without using the result to select the day yourself. For one thing the textual representation of the day is locale-dependent. For another the returned value depends upon the 'datefirst' setting.
A: If you want a localizable solution, just join the result against a table with the names and numbers.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/12533",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
} |
Q: What is your experience using the TIBCO General Interface? It looks interesting and I've played around with it some --- but the development IDE in a web browser seems to be nightmare eventually.
Does anyone have experience using it and what are your thoughts?
A: From a coworker who used to work at TIBCO:
TIBCO is a complicated, hard to use system because it's used for complicated, hard to solve problems.
A: We evaluated GI a few months ago for a project but didn't end up selecting it.
The IDE-in-a-browser (which is itself built with GI) actually works surprisingly well, though there are some features you normally expect from an editor that it lacks, most notably (and irritatingly) an Undo command. It's also impossible to do things like subdocument includes (practically a necessity for team development) from the IDE, though you can do them manually in the underlying XML and the IDE will respect them.
In the end the main reason we didn't go with it was that it was difficult to make the resulting web application look as good as the designers really wanted. It was relatively easy to build functionality, but the components were very restrictive in look and feel. The way GI renders its own document model to HTML involves a lot of style attributes which makes skinning in CSS all but impossible. It seems to prefer making web applications that look like applications, instead of web applications that look like websites.
So it would probably be great for building intranet type applications where look and feel isn't a huge issue, but I probably wouldn't use it to make a public facing site.
By the way for those that don't know, TIBCO GI is a completely separate product from the rest of TIBCO's SOA business integration stuff - General Interface was a separate company that was acquired by TIBCO a couple of years ago.
A: Kieron does a good job of summarizing GI. It's really for enterprise web applications, not consumer-y widgets. The overhead of loading the entire GI framework and waiting a second or two for it to load doesn't seem like much if you're firing up a call center or an employee provisioning application you're going to use for the next few hours. But it seems like forever if you're waiting for a widget to load into an existing web page. And even though GI supports some nice functional and performance QA tools, they really are overkill unless you're working on something important and complex. So, if all you want is to toss a sexy looking datepicker on screen, use something else for sure.
A: Yup, couldn't agree more. I have developed a few applications with TIBCO GI and integrated it with TIBCO CIM. I work for TIBCO and GI is something I have been working with quite heavily doing some complicated stuff. Whilst doing it, I came across the odd sides of GI, things you sometimes can't explain but are just the way they are, working with JavaScript and dealing with multithreading issues can be a nightmare etc. It's good to create something quick without being too fussy about the sexiness of the application hence good for internal apps but not for consumers unless you want to get lost in a jungle of crazy CSS styling. The XML Mapping utility is a great feature saving you lots of time to implement SOA applications. The other good part is that deployment is really easy - GI apps use a combination of XML, XSLT, X-Path and JavaScript. In GI 3.8 there are also a couple of testing tools. Unfortunately, development inside GI's editor is slow and painful, so I recommend using an external editor like Notepad++.
A: You don't need to run TIBCO GI from a web browser; instead, run the program file GI_Builder.exe, which is an ActiveX application. Just double-click on it and run it.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/12556",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
} |
Q: What do the different brackets in Ruby mean? In Ruby, what's the difference between {} and []?
{} seems to be used for both code blocks and hashes.
Are [] only for arrays?
The documentation isn't very clear.
A: It depends on the context:
*
*When on their own, or assigning to a variable, [] creates arrays, and {} creates hashes. e.g.
a = [1,2,3] # an array
b = {1 => 2} # a hash
*[] can be overridden as a custom method, and is generally used to fetch things from hashes (the standard library sets up [] as a method on hashes which is the same as fetch)
There is also a convention that it is used as a class method in the same way you might use a static Create method in C# or Java. e.g.
a = {1 => 2} # create a hash for example
puts a[1] # same as a.fetch(1), will print 2
Hash[1,2,3,4] # this is a custom class method which creates a new hash
See the Ruby Hash docs for that last example.
*This is probably the most tricky one -
{} is also syntax for blocks, but only when passed to a method OUTSIDE the arguments parens.
When you invoke methods without parens, Ruby looks at where you put the commas to figure out where the arguments end (where the parens would have been, had you typed them)
1.upto(2) { puts 'hello' } # it's a block
1.upto 2 { puts 'hello' } # syntax error, ruby can't figure out where the function args end
1.upto 2, { puts 'hello' } # the comma means "argument", so ruby sees it as a hash - this won't work because puts 'hello' isn't a valid hash
A: The square brackets [ ] are used to initialize arrays.
The documentation for initializer case of [ ] is in
ri Array::[]
The curly brackets { } are used to initialize hashes.
The documentation for initializer case of { } is in
ri Hash::[]
The square brackets are also commonly used as a method in many core ruby classes, like Array, Hash, String, and others.
You can access a list of all classes that have method "[ ]" defined with
ri []
most methods also have a "[ ]=" method that allows to assign things, for example:
s = "hello world"
s[2] # => 108 is ascii for l
s[2]=109 # 109 is ascii for m
s # => "hemlo world"
Curly brackets can also be used instead of "do ... end" on blocks, as "{ ... }".
Another case where you can see square brackets or curly brackets used is in the special percent literals, where almost any delimiter can be used, like:
%w{ hello world } # => ["hello","world"]
%w[ hello world ] # => ["hello","world"]
%r{ hello world } # => / hello world /
%r[ hello world ] # => / hello world /
%q{ hello world } # => "hello world"
%q[ hello world ] # => "hello world"
%q| hello world | # => "hello world"
A: Another, not so obvious, usage of [] is as a synonym for Proc#call and Method#call. This might be a little confusing the first time you encounter it. I guess the rationale behind it is that it makes it look more like a normal function call.
E.g.
proc = Proc.new { |what| puts "Hello, #{what}!" }
meth = method(:print)
proc["World"]
meth["Hello",","," ", "World!", "\n"]
A: a few examples:
[1, 2, 3].class
# => Array
[1, 2, 3][1]
# => 2
{ 1 => 2, 3 => 4 }.class
# => Hash
{ 1 => 2, 3 => 4 }[3]
# => 4
{ 1 + 2 }.class
# SyntaxError: compile error, odd number list for Hash
lambda { 1 + 2 }.class
# => Proc
lambda { 1 + 2 }.call
# => 3
A: Note that you can define the [] method for your own classes:
class A
  def [](position)
    # do something and return the element at the given position
  end

  def []=(position, value)
    # handle instance[position] = value assignments
  end
end
A: Broadly speaking, you're correct. As well as hashes, the general style is that curly braces {} are often used for blocks that can fit all onto one line, instead of using do/end across several lines.
Square brackets [] are used as class methods in lots of Ruby classes, including String, BigNum, Dir and confusingly enough, Hash. So:
Hash["key" => "value"]
is just as valid as:
{ "key" => "value" }
| {
"language": "en",
"url": "https://stackoverflow.com/questions/12565",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "87"
} |
Q: Rigor in capturing test cases for unit testing Let's say we have a simple function defined in a pseudo language.
List<Numbers> SortNumbers(List<Numbers> unsorted, bool ascending);
We pass in an unsorted list of numbers and a boolean specifying ascending or descending sort order. In return, we get a sorted list of numbers.
In my experience, some people are better at capturing boundary conditions than others. The question is, "How do you know when you are 'done' capturing test cases"?
We can start listing cases now and some clever person will undoubtedly think of 'one more' case that isn't covered by any of the previous.
A:
How do you know when you are 'done' capturing test cases?
You don't. You can't get to 100% except for the most trivial cases. Also, 100% coverage (of lines, paths, conditions...) doesn't tell you you've hit all boundary conditions.
Most importantly, the test cases are not write-and-forget. Each time you find a bug, write an additional test. Check it fails with the original program, check it passes with the corrected program and add it to your test set.
An excerpt from The Art of Software Testing by Glenford J. Myers:
*
*If an input condition specifies a range of values, write test cases for the ends of the range, and invalid-input test cases for situations just beyond the ends.
*If an input condition specifies a number of values, write test cases for the minimum and maximum number of values and one beneath and beyond these values.
*Use guideline 1 for each output condition.
*Use guideline 2 for each output condition.
*If the input or output of a program is an ordered set focus attention on the first and last elements of the set.
*In addition, use your ingenuity to search for other boundary conditions
(I've only pasted the bare minimum for copyright reasons.)
Points 3. and 4. above are very important. People tend to forget boundary conditions for the outputs. 5. is OK. 6. really doesn't help :-)
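Applied to the SortNumbers example from the question, those guidelines suggest at least the cases below. A minimal sketch (Python's built-in sorted stands in for the pseudo-code function, so treat it as illustrative rather than as the real thing):
import unittest

def sort_numbers(unsorted, ascending=True):
    # stand-in for the SortNumbers pseudo-code in the question
    return sorted(unsorted, reverse=not ascending)

class SortNumbersBoundaries(unittest.TestCase):
    def test_empty_list(self):                 # minimum number of values
        self.assertEqual(sort_numbers([], True), [])

    def test_single_element(self):
        self.assertEqual(sort_numbers([5], False), [5])

    def test_already_sorted(self):
        self.assertEqual(sort_numbers([1, 2, 3], True), [1, 2, 3])

    def test_reverse_order(self):
        self.assertEqual(sort_numbers([3, 2, 1], True), [1, 2, 3])

    def test_duplicates(self):
        self.assertEqual(sort_numbers([2, 1, 2], True), [1, 2, 2])

    def test_descending(self):
        self.assertEqual(sort_numbers([1, 3, 2], False), [3, 2, 1])

if __name__ == '__main__':
    unittest.main()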
Short exam
This is more difficult than it looks. Myers offers this test:
The program reads three integer values from an input dialog. The three values represent the lengths of the sides of a triangle. The program displays a message that states whether the triangle is scalene, isosceles, or equilateral.
Remember that a scalene triangle is one where no two sides are equal, whereas an isosceles triangle has two equal sides, and an equilateral triangle has three sides of equal length. Moreover, the angles opposite the equal sides in an isosceles triangle also are equal (it also follows that the sides opposite equal angles in a triangle are equal), and all angles in an equilateral triangle are equal.
Write your test cases. How many do you have? Myers asks 14 questions about your test set and reports that highly qualified professional programmers average 7.8 out of a possible 14.
A: Don't waste too much time trying to think of every boundary condition. Your tests won't be able to catch every bug the first time around. The idea is to have tests that are pretty good, and then each time a bug does surface, write a new test specifically for that bug so that you never hear from it again.
Another note I want to make about code coverage tools. In a language like C# or Java where you have many get/set and similar methods, you should not be shooting for 100% coverage. That means you are wasting too much time writing tests for trivial code. You only want 100% coverage on your complex business logic. If your full codebase is closer to 70-80% coverage, you are doing a good job. If your code coverage tool allows multiple coverage metrics, the best one is 'block coverage', which measures coverage of 'basic blocks'. Other types are class and method coverage (which don't give you as much information) and line coverage (which is too fine-grained).
A: From a practical standpoint, I create a list of tests that I believe must pass prior to acceptance. I test these and automate where possible. Based on how much time I've estimated for the task or how much time I've been given, I extend my test coverage to include items that should pass prior to acceptance. Of course, the line between must and should is subjective. After that, I update automated tests as bugs are discovered.
A: A good code coverage tool really helps.
100% coverage doesn't mean that it definitely is adequately tested, but it's a good indicator.
For .Net NCover's quite good, but is no longer open source.
@Mike Stone -
Yeah, perhaps that should have been "high coverage" - we aim for 80% minimum, past about 95% it's usually diminishing returns, especially if you have belt 'n' braces code.
A: @Keith
I think you nailed it, code coverage is important to look at if you want to see how "done" you are, but I think 100% is a bit unrealistic a goal. Striving for 75-90% will give you pretty good coverage without going overboard... don't test for the pure sake of hitting 100%, because at that point you are just wasting your time.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/12569",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
} |