Q: Static Methods in an Interface/Abstract Class First off, I understand the reasons why an interface or abstract class (in the .NET/C# terminology) cannot have abstract static methods. My question is then more focused on the best design solution.
What I want is a set of "helper" classes that all have their own static methods such that if I get objects A, B, and C from a third party vendor, I can have helper classes with methods such as
AHelper.RetrieveByID(string id);
AHelper.RetrieveByName(string name);
AHelper.DumpToDatabase();
Since my AHelper, BHelper, and CHelper classes will all basically have the same methods, it seems to make sense to move these methods to an interface that these classes then derive from. However, wanting these methods to be static precludes me from having a generic interface or abstract class for all of them to derive from.
I could always make these methods non-static and then instantiate the objects first such as
AHelper a = new AHelper();
a.DumpToDatabase();
However, this code doesn't seem as intuitive to me. What are your suggestions? Should I abandon using an interface or abstract class altogether (the situation I'm in now) or can this possibly be refactored to accomplish the design I'm looking for?
A: If I were you I would try to avoid any statics. IMHO I have always ended up with some sort of synchronization issues down the road with statics. That being said, you are presenting a classic example of generic programming using templates. I will adopt the template-based solution of Rob Copper presented in one of the posts above.
A: Looking at your response I am thinking along the following lines:
* You could just have a static method that takes a type parameter and performs the expected logic based on the type.
* You could create a virtual method in your abstract base, where you specify the SQL in the concrete class, so that the base contains all the common code required by both (e.g. executing the command and returning the object) while encapsulating the "specialist" bits (e.g. the SQL) in the subclasses.
I prefer the second option, although it's of course down to you. If you need me to go into further detail, please let me know and I will be happy to edit/update :)
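For illustration, a rough sketch of that second option might look like the following. This is only a sketch: AObject, QueryResult, and the webservice call are assumptions borrowed from the code samples later in this thread. The base class holds the shared execution logic while each subclass supplies only its SQL.
public abstract class HelperBase<T>
{
    // The "specialist" bit each concrete helper supplies: its SQL.
    protected abstract string GetRetrieveByIdSql(string id);

    // The common code: execute the command and return the object.
    public T RetrieveByID(string id)
    {
        QueryResult qr = webservice.query(GetRetrieveByIdSql(id));
        return (T)qr.records[0];
    }
}

public class AHelper : HelperBase<AObject>
{
    protected override string GetRetrieveByIdSql(string id)
    {
        return "SELECT Id,Name FROM AObject WHERE Id = '" + id + "'";
    }
}
The trade-off versus statics is that callers write new AHelper().RetrieveByID(id) rather than AHelper.RetrieveByID(id), which is exactly the instantiation the question was hoping to avoid.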
A: For a generic solution to your example, you can do this:
public static T RetrieveByID<T>(string ID)
{
    // getFieldNamesBasedOnType and webservice are assumed to be defined elsewhere.
    var fieldNames = getFieldNamesBasedOnType(typeof(T));
    QueryResult qr = webservice.query("SELECT " + fieldNames + " FROM "
        + typeof(T).Name
        + " WHERE Id = '" + ID + "'");
    return (T)qr.records[0];
}
A: I personally would perhaps question why each of the types needs to have a static method before even thinking further...
Why not create a utility class with the static methods that they need to share? (e.g. ClassHelper.RetrieveByID(string id) or ClassHelper<ClassA>.RetrieveByID(string id))
In my experience with these sorts of "roadblocks", the problem is not the limitations of the language, but the limitations of my design...
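To make that concrete, a rough sketch of the generic utility-class idea might look like this. It reuses the getFieldNamesBasedOnType and webservice helpers assumed in the generic answer above, here presumed static and in scope:
public static class ClassHelper<T>
{
    public static T RetrieveByID(string id)
    {
        string fieldNames = getFieldNamesBasedOnType(typeof(T));
        QueryResult qr = webservice.query(
            "SELECT " + fieldNames + " FROM " + typeof(T).Name +
            " WHERE Id = '" + id + "'");
        return (T)qr.records[0];
    }
}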
A: How are ObjectA and AHelper related? Is AHelper.RetrieveByID() the same logic as BHelper.RetrieveByID()?
If yes, how about a utility-class-based approach (a class with public static methods only and no state)?
static [return type] Helper.RetrieveByID(ObjectX x)
A: You can't overload methods by varying just the return type.
You can use different names:
static AObject GetAObject(string id);
static BObject GetBObject(string id);
Or you can create a class with casting operators:
class AOrBObject
{
    string id;

    AOrBObject(string id) { this.id = id; }

    public static AOrBObject RetrieveByID(string id)
    {
        return new AOrBObject(id);
    }

    // C# conversion-operator syntax: the target type follows the operator keyword.
    public static explicit operator AObject(AOrBObject ab)
    {
        return AObjectQuery(ab.id);
    }

    public static explicit operator BObject(AOrBObject ab)
    {
        return BObjectQuery(ab.id);
    }
}
Then you can call it like so:
var a = (AObject)AOrBObject.RetrieveByID("5");
var b = (BObject)AOrBObject.RetrieveByID("5");
A: In C# 3.0, static methods can be used on interfaces as if they were a part of them by using extension methods, as with DumpToDatabase() below:
static class HelperMethods
{
    // IHelper h = new HelperA();
    // h.DumpToDatabase();
    public static void DumpToDatabase(this IHelper helper) { /* ... */ }

    // IHelper h = a.RetrieveByID(5);
    public static IHelper RetrieveByID(this ObjectA a, int id)
    {
        return new HelperA(a.GetByID(id));
    }

    // IHelper h = b.RetrieveByID(5);
    public static IHelper RetrieveByID(this ObjectB b, int id)
    {
        return new HelperB(b.GetById(id.ToString()));
    }
}
A: How do I post feedback on Stack Overflow? Edit my original post or post an "answer"? Anyway, I thought it might help to give an example of what is going on in AHelper.RetrieveByID() and BHelper.RetrieveByID().
Basically, both of these methods go against a third-party webservice that returns a generic (castable) object, using a Query method that takes a pseudo-SQL string as its only parameter.
So, AHelper.RetrieveByID(string ID) might look like
public static AObject RetrieveByID(string ID)
{
    QueryResult qr = webservice.query("SELECT Id,Name FROM AObject WHERE Id = '" + ID + "'");
    return (AObject)qr.records[0];
}
while BHelper.RetrieveByID(string ID) might look like
public static BObject RetrieveByID(string ID)
{
    QueryResult qr = webservice.query("SELECT Id,Name,Company FROM BObject WHERE Id = '" + ID + "'");
    return (BObject)qr.records[0];
}
Hopefully that helps. As you can see, the two methods are similar, but the query can be quite a bit different based on the different object type being returned.
Oh, and Rob, I completely agree -- this is more than likely a limitation of my design and not the language. :)
A: Are you looking for polymorphic behavior? Then you'll want the interface and a normal constructor. What is unintuitive about calling a constructor? If you don't need polymorphism (it sounds like you don't use it now), then you can stick with your static methods. If these are all wrappers around a vendor component, then you might try using a factory method to create them, like VendorBuilder.GetVendorThing("A"), which could return an object of type IVendorWrapper.
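A hedged sketch of that factory idea might look like this: IVendorWrapper's members and the concrete wrapper classes are invented here purely for illustration.
using System;

public interface IVendorWrapper
{
    void DumpToDatabase();
}

// Hypothetical wrappers around the vendor's A and B objects.
public class AWrapper : IVendorWrapper
{
    public void DumpToDatabase() { /* dump vendor object A */ }
}

public class BWrapper : IVendorWrapper
{
    public void DumpToDatabase() { /* dump vendor object B */ }
}

public static class VendorBuilder
{
    public static IVendorWrapper GetVendorThing(string vendorType)
    {
        switch (vendorType)
        {
            case "A": return new AWrapper();
            case "B": return new BWrapper();
            default: throw new ArgumentException("Unknown vendor type: " + vendorType);
        }
    }
}
Callers then work only against IVendorWrapper, so swapping vendor objects never touches the calling code.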
A: marxidad: just a quick point to note. Justin has already said that the SQL varies a lot dependent on the type, so I have worked on the basis that it could be something completely different depending on the type, hence delegating it to the subclasses in question. Your solution, by contrast, couples the SQL very tightly to the type (i.e. it is the SQL).
rptony: good point on the possible sync issues with statics, one I failed to mention, so thank you :) Also, it's Rob Cooper (not Copper) BTW ;) :D (EDIT: Just thought I would mention that in case it wasn't a typo; I expect it is, so no problem!)
| {
"language": "en",
"url": "https://stackoverflow.com/questions/14614",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10"
} |
Q: How to retrieve a file from a server via SFTP? I'm trying to retrieve a file from a server using SFTP (as opposed to FTPS) using Java. How can I do this?
A: hierynomus/sshj has a complete implementation of SFTP version 3 (what OpenSSH implements).
Example code from SFTPUpload.java:
package net.schmizz.sshj.examples;

import net.schmizz.sshj.SSHClient;
import net.schmizz.sshj.sftp.SFTPClient;
import net.schmizz.sshj.xfer.FileSystemFile;

import java.io.File;
import java.io.IOException;

/** This example demonstrates uploading of a file over SFTP to the SSH server. */
public class SFTPUpload {
    public static void main(String[] args) throws IOException {
        final SSHClient ssh = new SSHClient();
        ssh.loadKnownHosts();
        ssh.connect("localhost");
        try {
            ssh.authPublickey(System.getProperty("user.name"));
            final String src = System.getProperty("user.home") + File.separator + "test_file";
            final SFTPClient sftp = ssh.newSFTPClient();
            try {
                sftp.put(new FileSystemFile(src), "/tmp");
            } finally {
                sftp.close();
            }
        } finally {
            ssh.disconnect();
        }
    }
}
A: JSch is a powerful library that can be used to read a file from an SFTP server. Below is tested code to read a file from an SFTP location line by line:
JSch jsch = new JSch();
Session session = null;
try {
    session = jsch.getSession("user", "127.0.0.1", 22);
    session.setConfig("StrictHostKeyChecking", "no");
    session.setPassword("password");
    session.connect();

    Channel channel = session.openChannel("sftp");
    channel.connect();
    ChannelSftp sftpChannel = (ChannelSftp) channel;

    InputStream stream = sftpChannel.get("/usr/home/testfile.txt");
    try {
        BufferedReader br = new BufferedReader(new InputStreamReader(stream));
        String line;
        while ((line = br.readLine()) != null) {
            System.out.println(line);
        }
    } catch (IOException io) {
        System.out.println("Exception occurred while reading file from SFTP server: " + io.getMessage());
    } catch (Exception e) {
        System.out.println("Exception occurred while reading file from SFTP server: " + e.getMessage());
    }

    sftpChannel.exit();
    session.disconnect();
} catch (JSchException e) {
    e.printStackTrace();
} catch (SftpException e) {
    e.printStackTrace();
}
Please refer to the blog for the whole program.
A: Below is an example using Apache Common VFS:
FileSystemOptions fsOptions = new FileSystemOptions();
SftpFileSystemConfigBuilder.getInstance().setStrictHostKeyChecking(fsOptions, "no");
FileSystemManager fsManager = VFS.getManager();
String uri = "sftp://user:password@host:port/absolute-path";
FileObject fo = fsManager.resolveFile(uri, fsOptions);
A: Andy, to delete a file on the remote system you need to use JSch's exec channel (ChannelExec) and pass Unix/Linux commands to delete it.
A: A nice abstraction on top of JSch is Apache Commons VFS, which offers a virtual filesystem API that makes accessing and writing SFTP files almost transparent. Worked well for us.
A: This was the solution I came up with, using http://sourceforge.net/projects/sshtools/ (most error handling omitted for clarity). This is an excerpt from my blog:
SshClient ssh = new SshClient();
ssh.connect(host, port);

// Authenticate
PasswordAuthenticationClient passwordAuthenticationClient = new PasswordAuthenticationClient();
passwordAuthenticationClient.setUsername(userName);
passwordAuthenticationClient.setPassword(password);
int result = ssh.authenticate(passwordAuthenticationClient);
if (result != AuthenticationProtocolState.COMPLETE) {
    throw new SFTPException("Login to " + host + ":" + port + " " + userName + "/" + password + " failed");
}

// Open the SFTP channel
SftpClient client = ssh.openSftpClient();

// Send the file
client.put(filePath);

// Disconnect
client.quit();
ssh.disconnect();
A: There is a nice comparison of the three mature Java libraries for SFTP: Commons VFS, SSHJ and JSch.
To sum up, SSHJ has the clearest API, and it's the best of them if you don't need the other storage support provided by Commons VFS.
Here is an edited SSHJ example from GitHub:
final SSHClient ssh = new SSHClient();
ssh.loadKnownHosts(); // or, to skip host verification: ssh.addHostKeyVerifier(new PromiscuousVerifier())
ssh.connect("localhost");
try {
    ssh.authPassword("user", "password"); // or ssh.authPublickey(System.getProperty("user.name"))
    final SFTPClient sftp = ssh.newSFTPClient();
    try {
        sftp.get("test_file", "/tmp/test.tmp");
    } finally {
        sftp.close();
    }
} finally {
    ssh.disconnect();
}
A: Another option is to consider looking at the JSch library. JSch seems to be the preferred library for a few large open source projects, including Eclipse, Ant and Apache Commons HttpClient, amongst others.
It supports both user/pass and certificate-based logins nicely, as well as a whole host of other yummy SSH2 features.
Here's a simple remote file retrieve over SFTP. Error handling is left as an exercise for the reader :-)
JSch jsch = new JSch();

String knownHostsFilename = "/home/username/.ssh/known_hosts";
jsch.setKnownHosts(knownHostsFilename);

Session session = jsch.getSession("remote-username", "remote-host");

{
    // "interactive" version
    // can selectively update specified known_hosts file
    // need to implement UserInfo interface
    // MyUserInfo is a swing implementation provided in
    // examples/Sftp.java in the JSch dist
    UserInfo ui = new MyUserInfo();
    session.setUserInfo(ui);

    // OR non-interactive version. Relies on host key being in known-hosts file
    session.setPassword("remote-password");
}

session.connect();

Channel channel = session.openChannel("sftp");
channel.connect();
ChannelSftp sftpChannel = (ChannelSftp) channel;

sftpChannel.get("remote-file", "local-file");
// OR
InputStream in = sftpChannel.get("remote-file");
// process inputstream as needed

sftpChannel.exit();
session.disconnect();
A: Try edtFTPj/PRO, a mature, robust SFTP client library that supports connection pools and asynchronous operations. Also supports FTP and FTPS so all bases for secure file transfer are covered.
A: I found a complete working example for SFTP in Java using the JSch API:
http://kodehelp.com/java-program-for-uploading-file-to-sftp-server/
A: Though the answers above were very helpful, I've spent a day making them work, facing various exceptions like "broken channel", "rsa key unknown" and "packet corrupt".
Below is a working reusable class for SFTP FILES UPLOAD/DOWNLOAD using JSch library.
Upload usage:
SFTPFileCopy1 upload = new SFTPFileCopy1(true, "/path/to/sourcefile.png", "/path/to/destinationfile.png");
Download usage:
SFTPFileCopy1 download = new SFTPFileCopy1(false, "/path/to/sourcefile.png", "/path/to/destinationfile.png");
The class code:
import com.jcraft.jsch.Channel;
import com.jcraft.jsch.ChannelSftp;
import com.jcraft.jsch.JSch;
import com.jcraft.jsch.Session;
import com.jcraft.jsch.UIKeyboardInteractive;
import com.jcraft.jsch.UserInfo;
import java.io.BufferedInputStream;
import java.io.BufferedOutputStream;
import java.io.ByteArrayInputStream;
import java.io.File;
import java.io.FileInputStream;
import java.io.FileNotFoundException;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import javax.swing.JOptionPane;

public class SFTPFileCopy1 {
    public SFTPFileCopy1(boolean upload, String sourcePath, String destPath) throws FileNotFoundException, IOException {
        Session session = null;
        Channel channel = null;
        ChannelSftp sftpChannel = null;
        try {
            JSch jsch = new JSch();
            //jsch.setKnownHosts("/home/user/.putty/sshhostkeys");
            session = jsch.getSession("login", "mysite.com", 22);
            session.setPassword("password");
            UserInfo ui = new MyUserInfo() {
                public void showMessage(String message) {
                    JOptionPane.showMessageDialog(null, message);
                }

                public boolean promptYesNo(String message) {
                    Object[] options = {"yes", "no"};
                    int foo = JOptionPane.showOptionDialog(null,
                            message,
                            "Warning",
                            JOptionPane.DEFAULT_OPTION,
                            JOptionPane.WARNING_MESSAGE,
                            null, options, options[0]);
                    return foo == 0;
                }
            };
            session.setUserInfo(ui);
            session.setConfig("StrictHostKeyChecking", "no");
            session.connect();

            channel = session.openChannel("sftp");
            channel.setInputStream(System.in);
            channel.setOutputStream(System.out);
            channel.connect();
            sftpChannel = (ChannelSftp) channel;

            if (upload) { // File upload.
                byte[] bufr = new byte[(int) new File(sourcePath).length()];
                FileInputStream fis = new FileInputStream(new File(sourcePath));
                fis.read(bufr);
                ByteArrayInputStream fileStream = new ByteArrayInputStream(bufr);
                sftpChannel.put(fileStream, destPath);
                fileStream.close();
            } else { // File download.
                byte[] buffer = new byte[1024];
                BufferedInputStream bis = new BufferedInputStream(sftpChannel.get(sourcePath));
                OutputStream os = new FileOutputStream(new File(destPath));
                BufferedOutputStream bos = new BufferedOutputStream(os);
                int readCount;
                while ((readCount = bis.read(buffer)) > 0) {
                    bos.write(buffer, 0, readCount);
                }
                bis.close();
                bos.close();
            }
        } catch (Exception e) {
            System.out.println(e);
        } finally {
            if (sftpChannel != null) {
                sftpChannel.exit();
            }
            if (channel != null) {
                channel.disconnect();
            }
            if (session != null) {
                session.disconnect();
            }
        }
    }

    public static abstract class MyUserInfo implements UserInfo, UIKeyboardInteractive {
        public String getPassword() {
            return null;
        }

        public boolean promptYesNo(String str) {
            return false;
        }

        public String getPassphrase() {
            return null;
        }

        public boolean promptPassphrase(String message) {
            return false;
        }

        public boolean promptPassword(String message) {
            return false;
        }

        public void showMessage(String message) {
        }

        public String[] promptKeyboardInteractive(String destination,
                                                  String name,
                                                  String instruction,
                                                  String[] prompt,
                                                  boolean[] echo) {
            return null;
        }
    }
}
A: See http://www.mysamplecode.com/2013/06/sftp-apache-commons-file-download.html
Apache Commons SFTP library
A common Java properties file is used for all the examples:
serverAddress=111.222.333.444
userId=myUserId
password=myPassword
remoteDirectory=products/
localDirectory=import/
Upload file to remote server using SFTP
import java.io.File;
import java.io.FileInputStream;
import java.util.Properties;
import org.apache.commons.vfs2.FileObject;
import org.apache.commons.vfs2.FileSystemOptions;
import org.apache.commons.vfs2.Selectors;
import org.apache.commons.vfs2.impl.StandardFileSystemManager;
import org.apache.commons.vfs2.provider.sftp.SftpFileSystemConfigBuilder;

public class SendMyFiles {

    static Properties props;

    public static void main(String[] args) {
        SendMyFiles sendMyFiles = new SendMyFiles();
        if (args.length < 1) {
            System.err.println("Usage: java " + sendMyFiles.getClass().getName()
                    + " Properties_file File_To_FTP ");
            System.exit(1);
        }
        String propertiesFile = args[0].trim();
        String fileToFTP = args[1].trim();
        sendMyFiles.startFTP(propertiesFile, fileToFTP);
    }

    public boolean startFTP(String propertiesFilename, String fileToFTP) {
        props = new Properties();
        StandardFileSystemManager manager = new StandardFileSystemManager();
        try {
            props.load(new FileInputStream("properties/" + propertiesFilename));
            String serverAddress = props.getProperty("serverAddress").trim();
            String userId = props.getProperty("userId").trim();
            String password = props.getProperty("password").trim();
            String remoteDirectory = props.getProperty("remoteDirectory").trim();
            String localDirectory = props.getProperty("localDirectory").trim();

            // Check if the file exists
            String filepath = localDirectory + fileToFTP;
            File file = new File(filepath);
            if (!file.exists())
                throw new RuntimeException("Error. Local file not found");

            // Initializes the file manager
            manager.init();

            // Set up our SFTP configuration
            FileSystemOptions opts = new FileSystemOptions();
            SftpFileSystemConfigBuilder.getInstance().setStrictHostKeyChecking(opts, "no");
            SftpFileSystemConfigBuilder.getInstance().setUserDirIsRoot(opts, true);
            SftpFileSystemConfigBuilder.getInstance().setTimeout(opts, 10000);

            // Create the SFTP URI using the host name, userid, password, remote path and file name
            String sftpUri = "sftp://" + userId + ":" + password + "@" + serverAddress + "/"
                    + remoteDirectory + fileToFTP;

            // Create local file object
            FileObject localFile = manager.resolveFile(file.getAbsolutePath());

            // Create remote file object
            FileObject remoteFile = manager.resolveFile(sftpUri, opts);

            // Copy local file to sftp server
            remoteFile.copyFrom(localFile, Selectors.SELECT_SELF);
            System.out.println("File upload successful");
        } catch (Exception ex) {
            ex.printStackTrace();
            return false;
        } finally {
            manager.close();
        }
        return true;
    }
}
Download file from remote server using SFTP
import java.io.File;
import java.io.FileInputStream;
import java.util.Properties;
import org.apache.commons.vfs2.FileObject;
import org.apache.commons.vfs2.FileSystemOptions;
import org.apache.commons.vfs2.Selectors;
import org.apache.commons.vfs2.impl.StandardFileSystemManager;
import org.apache.commons.vfs2.provider.sftp.SftpFileSystemConfigBuilder;

public class GetMyFiles {

    static Properties props;

    public static void main(String[] args) {
        GetMyFiles getMyFiles = new GetMyFiles();
        if (args.length < 1) {
            System.err.println("Usage: java " + getMyFiles.getClass().getName()
                    + " Properties_filename File_To_Download ");
            System.exit(1);
        }
        String propertiesFilename = args[0].trim();
        String fileToDownload = args[1].trim();
        getMyFiles.startFTP(propertiesFilename, fileToDownload);
    }

    public boolean startFTP(String propertiesFilename, String fileToDownload) {
        props = new Properties();
        StandardFileSystemManager manager = new StandardFileSystemManager();
        try {
            props.load(new FileInputStream("properties/" + propertiesFilename));
            String serverAddress = props.getProperty("serverAddress").trim();
            String userId = props.getProperty("userId").trim();
            String password = props.getProperty("password").trim();
            String remoteDirectory = props.getProperty("remoteDirectory").trim();
            String localDirectory = props.getProperty("localDirectory").trim();

            // Initializes the file manager
            manager.init();

            // Set up our SFTP configuration
            FileSystemOptions opts = new FileSystemOptions();
            SftpFileSystemConfigBuilder.getInstance().setStrictHostKeyChecking(opts, "no");
            SftpFileSystemConfigBuilder.getInstance().setUserDirIsRoot(opts, true);
            SftpFileSystemConfigBuilder.getInstance().setTimeout(opts, 10000);

            // Create the SFTP URI using the host name, userid, password, remote path and file name
            String sftpUri = "sftp://" + userId + ":" + password + "@" + serverAddress + "/"
                    + remoteDirectory + fileToDownload;

            // Create local file object
            String filepath = localDirectory + fileToDownload;
            File file = new File(filepath);
            FileObject localFile = manager.resolveFile(file.getAbsolutePath());

            // Create remote file object
            FileObject remoteFile = manager.resolveFile(sftpUri, opts);

            // Copy remote file to the local filesystem
            localFile.copyFrom(remoteFile, Selectors.SELECT_SELF);
            System.out.println("File download successful");
        } catch (Exception ex) {
            ex.printStackTrace();
            return false;
        } finally {
            manager.close();
        }
        return true;
    }
}
Delete a file on remote server using SFTP
import java.io.FileInputStream;
import java.util.Properties;
import org.apache.commons.vfs2.FileObject;
import org.apache.commons.vfs2.FileSystemOptions;
import org.apache.commons.vfs2.impl.StandardFileSystemManager;
import org.apache.commons.vfs2.provider.sftp.SftpFileSystemConfigBuilder;

public class DeleteRemoteFile {

    static Properties props;

    public static void main(String[] args) {
        DeleteRemoteFile getMyFiles = new DeleteRemoteFile();
        if (args.length < 1) {
            System.err.println("Usage: java " + getMyFiles.getClass().getName()
                    + " Properties_filename File_To_Delete ");
            System.exit(1);
        }
        String propertiesFilename = args[0].trim();
        String fileToDelete = args[1].trim();
        getMyFiles.startFTP(propertiesFilename, fileToDelete);
    }

    public boolean startFTP(String propertiesFilename, String fileToDelete) {
        props = new Properties();
        StandardFileSystemManager manager = new StandardFileSystemManager();
        try {
            props.load(new FileInputStream("properties/" + propertiesFilename));
            String serverAddress = props.getProperty("serverAddress").trim();
            String userId = props.getProperty("userId").trim();
            String password = props.getProperty("password").trim();
            String remoteDirectory = props.getProperty("remoteDirectory").trim();

            // Initializes the file manager
            manager.init();

            // Set up our SFTP configuration
            FileSystemOptions opts = new FileSystemOptions();
            SftpFileSystemConfigBuilder.getInstance().setStrictHostKeyChecking(opts, "no");
            SftpFileSystemConfigBuilder.getInstance().setUserDirIsRoot(opts, true);
            SftpFileSystemConfigBuilder.getInstance().setTimeout(opts, 10000);

            // Create the SFTP URI using the host name, userid, password, remote path and file name
            String sftpUri = "sftp://" + userId + ":" + password + "@" + serverAddress + "/"
                    + remoteDirectory + fileToDelete;

            // Create remote file object
            FileObject remoteFile = manager.resolveFile(sftpUri, opts);

            // Check if the file exists
            if (remoteFile.exists()) {
                remoteFile.delete();
                System.out.println("File delete successful");
            }
        } catch (Exception ex) {
            ex.printStackTrace();
            return false;
        } finally {
            manager.close();
        }
        return true;
    }
}
A: Here is the complete source code of an example using JSch, without having to worry about SSH key checking.
import com.jcraft.jsch.*;

public class TestJSch {
    public static void main(String args[]) {
        JSch jsch = new JSch();
        Session session = null;
        try {
            session = jsch.getSession("username", "127.0.0.1", 22);
            session.setConfig("StrictHostKeyChecking", "no");
            session.setPassword("password");
            session.connect();

            Channel channel = session.openChannel("sftp");
            channel.connect();
            ChannelSftp sftpChannel = (ChannelSftp) channel;
            sftpChannel.get("remotefile.txt", "localfile.txt");
            sftpChannel.exit();
            session.disconnect();
        } catch (JSchException e) {
            e.printStackTrace();
        } catch (SftpException e) {
            e.printStackTrace();
        }
    }
}
A: You also have JFileUpload with SFTP add-on (Java too):
http://www.jfileupload.com/products/sftp/index.html
A: I use this SFTP API called Zehon. It's great: so easy to use, with a lot of sample code. Here is the site: http://www.zehon.com
A: The best solution I've found is Paramiko. There's a Java version.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/14617",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "241"
} |
Q: Multiple choice on WinForms What's the best way of implementing a multiple choice option in Windows Forms? I want to enforce a single selection from a list, starting with a default value.
It seems like a ComboBox would be a good choice, but is there a way to specify a non-blank default value?
I could just set it in the code at some appropriate initialisation point, but I feel like I'm missing something.
A: If you only want one answer from the group, then a RadioButton control would be your best fit, or you could use a ComboBox if you will have a lot of options. To set a default value, just add the item to the ComboBox's collection and set the SelectedIndex or SelectedItem to that item.
Depending on how many options you are looking at, you can use a ListBox with the SelectionMode property set to MultiSimple if it will be multiple choice, or you could use the CheckBox control.
A: You should be able to just set the ComboBox.SelectedIndex property with what you want the default value to be.
http://msdn.microsoft.com/en-us/library/system.windows.forms.combobox.selectedindex.aspx
A: Use the ComboBox.SelectedItem or SelectedIndex property after the items have been inserted to select the default item.
You could also consider using RadioButton control to enforce selection of a single option.
A: You can use a ComboBox with the DropDownStyle property set to DropDownList and SelectedIndex to 0 (or whatever the default item is). This will force always having an item from the list selected. If you forget to do that, the user could just type something else into the edit box part - which would be bad :)
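For instance, a minimal sketch of that setup (the control and item names here are illustrative):
comboBox1.DropDownStyle = ComboBoxStyle.DropDownList; // no free typing allowed
comboBox1.Items.AddRange(new object[] { "Small", "Medium", "Large" });
comboBox1.SelectedIndex = 1; // default to "Medium"
With DropDownList the user can only pick from the list, so the selection is always one of your items.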
A: If you are giving the user a small list of choices then stick with the radio buttons. However, you will want to use the combo box for dynamic or long lists. Set the style to DropDownList.
Private Sub populateList(items As List(Of UserChoices))
    Dim choice As UserChoices
    Dim defaultChoice As UserChoices

    For Each choice In items
        cboList.Items.Add(choice)
        '-- you could do a user-specific check, or base it on some other
        '-- setting, to find the default choice here
        If choice.state = _user.State Or choice.state = _settings.defaultState Then
            defaultChoice = choice
        End If
    Next

    '-- you could select the first one
    If cboList.Items.Count > 0 Then
        cboList.SelectedItem = cboList.Items(0)
    End If

    '-- or override it with the default choice found above
    cboList.SelectedItem = defaultChoice
End Sub
| {
"language": "en",
"url": "https://stackoverflow.com/questions/14618",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: Is it possible to automatically make check-outs from any VCS? Let's take a web development environment, where developers checkout a project onto their local machines, work on it, and check in changes to development.
These changes are further tested on development and moved live on a regular schedule (eg weekly, monthly, etc.).
Is it possible to have an automatic move-up of the latest tagged version (and not the latest check-in, as that might not be 100% stable), for example at 8 AM on Monday mornings, either using a script or a built-in feature of the VCS?
A: Yes, it is possible. This is usually a feature provided by continuous integration tools. Typically they will get the latest source from version control, build the project, test it (running unit tests) and possibly deploy it on a (test) server.
If you don't require all those steps, you can easily do the same thing with some shell scripting or similar (i.e. checkout from version control and copy to the production folder on the server).
A: Certainly, but the exact product may be dependent upon the VCS you are using.
What you might want to do, is have a a few different branches, and migrate up as you progress. E.g., Development -> Stable-Dev -> Beta -> Production. You can then simply auto-update to the latest version of Stable-Dev and Beta for your testers, and always be able to deploy a new Production version at the drop of a hat.
A: Anything you can do with CVS can be done with the command line, and I am pretty sure SVN is the same. Just work out the functionality you want and stick it in a shell script or a command file.
A: The only two I have experience with are SVN and Mercurial. For Mercurial, you specify which branch you want it to update from (let's say default) and then whenever you merge a branch into default, you can just have the server run:
hg update
Which updates your repository to the latest version of the branch you set it to.
SVN is the same concept, you only check out which branch you want initially
svn co http://host/repository/branchname/
then you have your server update that with a cron job, ala
svn up
In theory though, any VCS that supports branching (all the good ones do : git, mercurial, SVN, etc...), should be able to do something similar to this.
A: I doubt many VCSs provide this ability directly; however, it should be very simple to script either a date- or branch-based checkout.
A: As a follow up,
I'm of the opinion that an app should do one job and do it well. Often if you start combining tools into one product, none of them will shine, and most of them will be "'alright, sort-of".
If I were doing something like this, I would get myself something like SVN, Ant, and the Subversion Ant Library (http://ant.apache.org/antlibs/svn/index.html) - your mileage may vary though.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/14634",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: How to add "Project Description" in FogBugz? When I create a new project (or even when I edit the Sample Project) there is no way to add Description to the project.
Or am I blind to the obvious?
A: You are not crazy. It is used internally and not even stored in the database. I wondered the same thing when I first started using FogBugz, but found a forum entry to answer my question. As of today, I still don't think they have implemented it. Jump over to FogCreek and submit a request, if you would like to make it editable.
* "Description" missing from Project?
* How to Edit a Project Description
A: There's no such thing as a project description, really. There's a column in the Projects page which is used so you can see which project is the default, built-in inbox, and we couldn't think of anything better to put as the column header for that column.
A: The description is mostly for system projects, like e-mail inbox.
You might be able to set one in the underlying DB table.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/14646",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
} |
Q: Can a proxy server cache SSL GETs? If not, would response body encryption suffice? Can a (||any) proxy server cache content that is requested by a client over https? As the proxy server can't see the querystring, or the http headers, I reckon they can't.
I'm considering a desktop application, run by a number of people behind their companies proxy. This application may access services across the internet and I'd like to take advantage of the in-built internet caching infrastructure for 'reads'. If the caching proxy servers can't cache SSL delivered content, would simply encrypting the content of a response be a viable option?
I am considering having all GET requests that we wish to be cacheable made over HTTP, with the body encrypted using asymmetric encryption, where each client has the decryption key. Any time we wish to perform a GET that is not cacheable, or a POST operation, it will be performed over SSL.
A: The comment by Rory that the proxy would have to use a self-signed cert is not strictly true.
The proxy could be implemented to generate a new cert for each new SSL host it is asked to deal with and sign it with a common root cert. In the OP's scenario of a corporate environment, the common signing cert can rather easily be installed as a trusted CA on the client machines, and they will gladly accept these "faked" SSL certs for the traffic being proxied, as there will be no hostname mismatch.
In fact this is exactly how software such as the Charles Web Debugging Proxy allow for inspection of SSL traffic without causing security errors in the browser, etc.
A: No, it's not possible to cache https directly. The whole communication between the client and the server is encrypted. A proxy sits between the server and the client; in order to cache it, you need to be able to read it, i.e. decrypt it.
You can do something to cache it. You basically do the SSL on your proxy, intercepting the SSL sent to the client. Basically the data is encrypted between the client and your proxy, where it's decrypted, read and cached, and then the data is encrypted and sent on to the server. The reply from the server is likewise decrypted, read and encrypted. I'm not sure how you do this on major proxy software (like Squid), but it is possible.
The only problem with this approach is that the proxy will have to use a self signed cert to encrypt it to the client. The client will be able to tell that a proxy in the middle has read the data, since the certificate will not be from the original site.
A: I think you should just use SSL and rely on an HTTP client library that does caching (Ex: WinInet on windows). It's hard to imagine that the benefits of enterprise wide caching is worth the pain of writing a custom security encryption scheme or certificate fun on the proxy. Worse, on the encryption scheme you mention, doing asymmetric ciphers on the entity body sounds like a huge perf hit on the server side of your application; there is a reason that SSL uses symmetric ciphers for the actual payload of the connection.
A: How about setting up a server cache on the application server behind the component that encrypts https responses? This can be useful if you have a reverse-proxy setup.
I am thinking of something like this:
application server <---> Squid or Varnish (cache) <---> Apache (performs SSL encryption)
| {
"language": "en",
"url": "https://stackoverflow.com/questions/14656",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "32"
} |
Q: Different solutions/project files for Local vs Build environments As part of improvements to our build process, we are currently debating whether we should have separate project/solution files on our CI production environment from our local development environments.
The reason this has come about is because of reference problems we experienced in our previous project. On a frequent basis people would mistakenly add a reference to an assembly in the wrong location, which would mean it would work okay on their local environment, but might break on someone else's or on the build machine.
Also, the reference paths are in the csproj.user files which means these must be committed to source control, so everyone has to share these same settings.
So we are thinking about having separate projects and solutions on our CI server, so that when we do a build it uses these projects rather than local development ones.
It has obvious drawbacks such as an overhead to maintaining these separate files and the associated process that would need to be defined and followed, but it has benefits in that we would be in more control over EXACTLY what happens in the production environment.
What I haven't been able to find is anything on this subject - can't believe we are the only people to think about this - so all thoughts are welcome.
A: In our largest project (a system comprising of many applications) we have the following structure
/3rdPartyAssemblies /App1 /App2 /App3 /.....
All external assemblies are added to 3rdPartyAssemblies/Vendor/Version/...
We have a CoreBuild.sln file which acts as an MSBuild script for all of the assemblies that are shared, to ensure building in dependency order (i.e. make sure App1.Interfaces is built before App2, as App2 has a reference to App1.Interfaces).
All inter-application references target the /bin folder (we don't use bin/debug and bin/release, just bin; this way the references remain the same and we just change the release configuration depending on the build target).
Cruise Control builds the core solution for any dependencies before building any other app, and because the 3rdPartyAssemblies folder is present on the server we ensure developer machines and the build server have the same development layout.
A: I know it's anachronistic. But the single best way I've found to handle the references issue is to have a folder mapped to a drive letter such as R: and then all projects build into or copy output into that folder also. Then all references are R:\SomeFile.dll etc. This gets you around the problem that sometimes references are added by absolute path and sometimes they are added relatively. (there's something to do with "HintPath" which I can't really remember)
The nice thing then, is that you can still use the same solution files on your build server. Which to be honest is an absolute must as you lose the certainty that what is being built on the dev machine is the same as on the build server otherwise.
A: Usually, you would be creating Build projects/scripts in some form or another for your Production, and so putting together another Solution file doesn't come in the picture.
It would be easier to train everyone to use project references, and create a directory under the project file structure for external assembly references. This way everyone follows the same environment.
A: I would strongly recommend against this.
* Reference paths aren't only stored in the .user file. A hint path is stored in the project file itself. You should never have to check a .user file into source control.
* Let there be one set of (okay, possibly versioned) solution/project files which all developers use, and the Release configurations of which are what you're ultimately building in production. Having separate project files is going to cause confusion down the road, when some project setting is tweaked, not carried across, and slipped into production.
You might also check this out:
http://www.objectsharp.com/cs/blogs/barry/archive/2004/10/29/988.aspx
http://bytes.com/forum/thread268546.html
A: We have changed our project structure (making use of SVN Externals) where each project is now completely self-contained. That is, any references never go outwith the project directory (for example, if Project A references ASM X, then ASM X exists within a subfolder of ProjectA)
I suspect that this should go some way towards helping solve some of our problems, but I can still see some advantages of having more control over the build projects.
A: @David - believe it or not this is what we actually have just now, and yet it's still causing us problems!
We're making some changes though, which are forced upon us due to moving to TeamCity and multiple build agents - so we can't have references to directories outwith the current project, as I've mentioned in my previous answer.
Look at the Externals section of this link to see what I mean - http://www.dummzeuch.de/delphi/subversion/english.html
| {
"language": "en",
"url": "https://stackoverflow.com/questions/14674",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: What Url rewriter do you use for ASP.Net? I've looked at several URL rewriters for ASP.Net and IIS and was wondering what everyone else uses, and why.
Here are the ones that I have used or looked at:
* ThunderMain URLRewriter: used in a previous project; didn't quite have the flexibility/performance we were looking for
* Ewal UrlMapper: used in a current project, but the source seems to be abandoned
* UrlRewritingNet.UrlRewrite: seems like a decent library, but the documentation's poor grammar leaves me feeling uneasy
* UrlRewriter.NET: this is my current fav; it has great flexibility, although the extra functions pumped into the replacement regexes change the standard .NET regex syntax a bit
* Managed Fusion URL Rewriter: I found this one in a previous question on Stack Overflow but haven't tried it out yet; from the example syntax, it doesn't seem to be editable via web.config
A: There's System.Web.Routing that was just released with .NET 3.5.
You can just use Request.RewritePath() in a custom HttpModule
I prefer using an IHttpHandlerFactory implementation and have full control over all incoming URLs and where they're mapped to.
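For illustration, here is a minimal sketch of the Request.RewritePath() approach mentioned above. The module name and the single hard-coded rule are hypothetical; a real module would read its rules from configuration:
using System;
using System.Web;

public class SimpleRewriteModule : IHttpModule
{
    public void Init(HttpApplication context)
    {
        context.BeginRequest += delegate(object sender, EventArgs e)
        {
            HttpApplication app = (HttpApplication)sender;
            // Map a friendly URL onto the physical page that serves it.
            if (app.Context.Request.Path.Equals("/Products/Beverages",
                    StringComparison.OrdinalIgnoreCase))
            {
                app.Context.RewritePath("/Products.aspx?category=Beverages");
            }
        };
    }

    public void Dispose() { }
}
The module would then be registered under <httpModules> in web.config so that it runs on every request.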
A: If I were starting a new web project now I'd be looking at using MVC from scratch. That uses re-written URLs as standard.
A: +1 UrlRewritingNET.URLRewrite -- used in several hundred services/portals/sites on a single box without issue for years! (@Jason -- that is the one you're talking about, right?)
and I've also used the URLRewriter.NET on a personal site, and found it, ah, interesting. @travis, you're right about the changed syntax, but once you get used to it, it's good.
A: IIS 7 has an URL Rewrite Module that is fairly capable and integrates well with IIS.
A: I've used UrlRewriting.NET before on a very high-traffic site - it worked great for us. I believe the developers are German, so the English documentation is probably not as good as it could be. I'd highly recommend it.
A: I've had a good experience with Ionic's ISAPI Rewrite Filter which is very similar to ISAPI_Rewrite, except free. Both are modeled after mod_rewrite and are ISAPI filters, so you can't manage them in code as you have to set them up in IIS.
A: I would not recommend UrlRewritingNet if you are in an IIS7 Windows 2008 environment.
Reason:
UrlRewritingNet requires that your app pool mode be Classic and NOT Integrated.
This is not optimal.
Also, the project has seemed very dead for the last 2 years.
A: I just installed Helicon's ISAPI Rewrite 3. Works exactly like htaccess. I'm diggin it so far.
A: I used .NET URL Rewriter and Reverse Proxy with great success. It's almost on par with mod_rewrite and uses almost all of the same syntax's. The owner of the project is extremely helpful and friendly and the product works great. This gem provides both Rewriting and Proxy functionality, which many solutions don't offer. IMO, worth a look.
A: +1 for UrlRewritingNet.UrlRewrite too, but why do I always need to end my URL with .aspx? I think it could be improved with a better regular expression pattern.
Why do I always have to end the virtual URL with .aspx, as in "localhost/Products/Beverages.aspx" or "localhost/Products/Condiments.aspx"? I just want to type "localhost/Products/Beverages" or "localhost/Products/Condiments", which looks like an MVC route.
This one looks good, but it is not working for my site. I still can't figure it out.
A: ASP.NET routing serves the requirement of URL rewriting as well, and even much more. With ASP.NET routing you don't just "rewrite the URL"; you can create custom handlers for various requests.
ASP.NET routing, however, requires at least .NET 3.5 SP1.
The basic thing you do to get simple routing to work is add a few route handlers in the Application_Start event inside the Global.asax.cs file.
protected void Application_Start(object sender, EventArgs e)
{
    RegisterRoutes(RouteTable.Routes);
}

private static void RegisterRoutes(RouteCollection routes)
{
    // Note: Route's second argument must be an IRouteHandler, not a target URL.
    // BlogRouteHandler here is a stand-in for a custom handler that would serve /Blog.aspx.
    routes.Add("Routing1", new Route("Blog/{id}", new BlogRouteHandler()));
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/14697",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14"
} |
Q: How to make Pro*C cope with #warning directives? When I try to precompile a *.pc file that contains a #warning directive I receive the following error:
PCC-S-02014, Encountered the symbol "warning" when expecting one of the following: (bla bla bla).
Can I somehow convince Pro*C to ignore the thing if it doesn't know what to do with it? I can't remove the #warning directive as it's used in a header file that I can't change and must include.
A: According to the Pro*C/C++ Programmer's Guide (chapter 5 "Advanced Topics"), Pro*C silently ignores a number of preprocessor directives including #error and #pragma, but sadly not #warning. Since your warning directives are included in a header file, you might be able to use the ORA_PROC macro:
#ifndef ORA_PROC
#include <irrelevant.h>
#endif
For some reason, Pro*C errors out if you try to hide a straight #warning that way, however.
A: Use the option parse=none with proc.
A: You can't. Pro*C only knows #if and #include. My best advice would be to preprocess the file as part of your build process to remove stuff Pro*C won't like. Something like
grep -v -E '^#(warning|pragma|define)' unchangeable.h >unchangeable.pc.h
My other advice would be to avoid the abomination which is Pro*C, but I'm guessing you're stuck with it...
A: Jon Ericson's answer is correct.
There is a second circumstance where you may need to use that trick.
Some versions of Pro*C can't deal with include files that don't have a file extension.
The ORA_PROC constant is one workable solution to that problem as well.
A: /bin/make -f /css/hwmig/pcprg/proc9i32.mk PROCFLAGS="sqlcheck=SEMANTICS userid=cssd/india09" PCCSRC=bic I_SYM=include= pc1
proc sqlcheck=SEMANTICS userid=cssd/india09 iname=bic include=. include=/oracle/Ora92/precomp/public include=/oracle/Ora92/rdbms/public include=/oracle/Ora92/rdbms/demo include=/oracle/Ora92/plsql/public include=/oracle/Ora92/network/public
Pro*C/C++: Release 9.2.0.6.0 - Production on Tue Dec 2 14:05:38 2008
Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.
System default option values taken from: /oracle/Ora92/precomp/admin/pcscfg.cfg
Syntax error at line 135, column 2, file /usr/include/standards.h:
Error at line 135, column 2 in file /usr/include/standards.h
warning The -qdfp option is required to process DFP code in headers.
.1
PCC-S-02014, Encountered the symbol "warning" when expecting one of the following:
a numeric constant, newline, define, elif, else, endif,
error, if, ifdef, ifndef, include, line, pragma, undef,
an immediate preprocessor command, a C token,
The symbol "newline," was substituted for "warning" to continue.
Syntax error at line 30, column 7, file bic.pc:
Error at line 30, column 7 in file bic.pc
FILE fp;
......1
PCC-S-02201, Encountered the symbol "" when expecting one of the following:
; , = ( [
The symbol ";" was substituted for "*" to continue.
Error at line 0, column 0 in file bic.pc
PCC-F-02102, Fatal error while doing C preprocessing
A: Modify /usr/include/standards.h.
Delete the line "#warning The -qdfp option is required to process DFP code in headers." Pro*C does not support #warning, just #else, #if, etc.
A: Remove the two lines below from /usr/include/standards.h:
#warning The -qdfp option is required to process DFP code in headers.
#else
| {
"language": "en",
"url": "https://stackoverflow.com/questions/14698",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: PowerShell FINDSTR eqivalent? What's the DOS FINDSTR equivalent for PowerShell? I need to search a bunch of log files for "ERROR".
A: For example, find all instances of "#include" in the c files in this directory and all sub-directories.
gci -r -i *.c | select-string "#include"
gci is an alias for get-childitem
A: Here's the quick answer
Get-ChildItem -Recurse -Include *.log | select-string ERROR
I found it here, which has a great in-depth answer!
A: Just to expand on Monroecheeseman's answer. gci is an alias for Get-ChildItem (the equivalent of dir or ls); the -r switch does a recursive search, and -i means include.
Piping the result of that query to select-string has it read each file and look for lines matching a regular expression (the provided one in this case is ERROR, but it can be any .NET regular expression).
The result will be a collection of match objects, showing the line matching, the file, and other related information.
A: if ($entry.EntryType -eq "Error")
Being Object Oriented, you want to test the property in question with one of the standard comparison operators you can find here.
I have a PS script watching logs remotely for me right now - some simple modification should make it work for you.
edit: I suppose I should also add that there is a cmdlet built for this already if you don't want to unroll the way I did. Check out:
man Get-EventLog
Get-EventLog -newest 5 -logname System -EntryType Error
A: On a related note, here's a search that will list all the files containing a particular regex search or string. It could use some improvement so feel free to work on it. Also if someone wanted to encapsulate it in a function that would be welcome.
I'm new here, so if this should go in its own topic just let me know. I figured I'd put it here since this looks mostly related.
# Search in Files Script
# ---- Set these before you begin ----
$FolderToSearch="C:\" # UNC paths are ok, but remember you're mass reading file contents over the network
$Search="Looking For This" # accepts regex format
$IncludeSubfolders=$True #BUG: if this is set $False then $FileIncludeFilter must be "*" or you will always get 0 results
$AllMatches=$False
$FileIncludeFilter="*".split(",") # Restricting to specific file types is faster than excluding everything else
$FileExcludeFilter="*.exe,*.dll,*.wav,*.mp3,*.gif,*.jpg,*.png,*.ghs,*.rar,*.iso,*.zip,*.vmdk,*.dat,*.pst,*.gho".split(",")
# ---- Initialize ----
if ($AllMatches -eq $True) {$SelectParam=@{AllMatches=$True}}
else {$SelectParam=@{List=$True}}
if ($IncludeSubfolders -eq $True) {$RecurseParam=@{Recurse=$True}}
else {$RecurseParam=@{Recurse=$False}}
# ---- Build File List ----
#$Files=Get-Content -Path="$env:userprofile\Desktop\FileList.txt" # For searching a manual list of files
Write-Host "Building file list..." -NoNewline
$Files=Get-ChildItem -Include $FileIncludeFilter -Exclude $FileExcludeFilter -Path $FolderToSearch -ErrorAction silentlycontinue @RecurseParam|Where-Object{-not $_.psIsContainer} # @RecurseParam is basically -Recurse=[$True|$False]
#$Files=$Files|Out-GridView -PassThru -Title 'Select the Files to Search' # Manually choose files to search, requires powershell 3.0
Write-Host "Done"
# ---- Begin Search ----
Write-Host "Searching Files..."
$Files|
Select-String $Search @SelectParam| #The @ instead of $ lets me pass the hashtable as a list of parameters. @SelectParam is either -List or -AllMatches
Tee-Object -Variable Results|
Select-Object Path
Write-Host "Search Complete"
#$Results|Group-Object path|ForEach-Object{$path=$_.name; $matches=$_.group|%{[string]::join("`t", $_.Matches)}; "$path`t$matches"} # Show results including the matches separated by tabs (useful if using regex search)
<# Other Stuff
#-- Saving and restoring results
$Results|Export-Csv "$env:appdata\SearchResults.txt" # $env:appdata can be replaced with any UNC path, this just seemed like a logical place to default to
$Results=Import-Csv "$env:appdata\SearchResults.txt"
#-- alternate search patterns
$Search="(\d[-|]{0,}){15,19}" #Rough CC Match
#>
A: This is not the best way to do this:
gci <the_directory_path> -filter *.csv | where { $_.OpenText().ReadToEnd().Contains("|") -eq $true }
This helped me find all csv files which had the | character in them.
A: PowerShell has basically precluded the need for findstr.exe as the previous answers demonstrate. Any of these answers should work fine.
However, if you actually need to use findstr.exe (as was my case) here is a PowerShell wrapper for it:
Use the -Verbose option to output the findstr command line.
function Find-String
{
    [CmdletBinding(DefaultParameterSetName='Path')]
    param
    (
        [Parameter(Mandatory=$true, Position=0)]
        [string]
        $Pattern,

        [Parameter(ParameterSetName='Path', Mandatory=$false, Position=1, ValueFromPipeline=$true)]
        [string[]]
        $Path,

        [Parameter(ParameterSetName='LiteralPath', Mandatory=$true, ValueFromPipelineByPropertyName=$true)]
        [Alias('PSPath')]
        [string[]]
        $LiteralPath,

        [Parameter(Mandatory=$false)]
        [switch]
        $IgnoreCase,

        [Parameter(Mandatory=$false)]
        [switch]
        $UseLiteral,

        [Parameter(Mandatory=$false)]
        [switch]
        $Recurse,

        [Parameter(Mandatory=$false)]
        [switch]
        $Force,

        [Parameter(Mandatory=$false)]
        [switch]
        $AsCustomObject
    )

    begin
    {
        $value = $Pattern.Replace('\', '\\\\').Replace('"', '\"')

        $findStrArgs = @(
            '/N'
            '/O'
            @('/R', '/L')[[bool]$UseLiteral]
            "/c:$value"
        )

        if ($IgnoreCase)
        {
            $findStrArgs += '/I'
        }

        function GetCmdLine([array]$argList)
        {
            ($argList | foreach { @($_, "`"$_`"")[($_.Trim() -match '\s')] }) -join ' '
        }
    }

    process
    {
        $PSBoundParameters[$PSCmdlet.ParameterSetName] | foreach {
            try
            {
                $_ | Get-ChildItem -Recurse:$Recurse -Force:$Force -ErrorAction Stop | foreach {
                    try
                    {
                        $file = $_
                        $argList = $findStrArgs + $file.FullName
                        Write-Verbose "findstr.exe $(GetCmdLine $argList)"
                        findstr.exe $argList | foreach {
                            if (-not $AsCustomObject)
                            {
                                return "${file}:$_"
                            }

                            $split = $_.Split(':', 3)

                            [pscustomobject] @{
                                File   = $file
                                Line   = $split[0]
                                Column = $split[1]
                                Value  = $split[2]
                            }
                        }
                    }
                    catch
                    {
                        Write-Error -ErrorRecord $_
                    }
                }
            }
            catch
            {
                Write-Error -ErrorRecord $_
            }
        }
    }
}
A: FYI:
If you update to PowerShell version 7 you can use grep...
I know egrep is in PowerShell on Azure CLI...
But Select-String is there!
An old article here: https://devblogs.microsoft.com/powershell/select-string-and-grep/
| {
"language": "en",
"url": "https://stackoverflow.com/questions/14708",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "30"
} |
Q: TortoiseSVN & Putty very slow Does anyone have a clue why the TortoiseSVN Windows client (on Win32 XP and Vista) is so incredibly slow when used with Putty and PAM? It seems to connect anew for each request, since data transfers (checkouts) are not slow at all.
Any ideas how to change it?
Update: I had no problems with SSH before, but I have to use key-based authentication.
A: Do you have a problem with standard SSH connections to the server as well? If it's generally slow to connect to your server via SSH, this could be a problem with reverse DNS lookups.
Andrew
A: I don't use Putty for my ssh+svn connections. I use TortoisePlink, which is a wrapper around Putty, I think. It is provided by TortoiseSVN in the install directory under bin.
Basically go to the Settings dialog, by right clicking in a windows explorer window -> TortoiseSVN -> Settings
Click on the Network item in the tree. Setup your SSH client by providing the path to TortoisePlink:
C:\Program Files\TortoiseSVN\bin\TortoisePlink.exe -l username -pw password
I have not ran into any troubles running my svn server this way and connecting to it with TortoiseSVN over SSH. The speed is great and so is the security.
A: What type of system are you connecting to? If you connect to OpenSUSE, for example, default DNS Reverse Lookup settings generally cause SSH connections to be very slow. If you can, put your client side IP address into the /etc/hosts table on the server. If Reverse DNS is your issue, this will resolve (remember to restart nscd daemon - "/etc/init.d/nscd restart" so that the change will take effect.)
SSH will be slower than the native SVN protocol, but not by an order of magnitude and not likely to a level that you will notice. But reverse DNS timeouts could be several seconds per request.
A: Ensure that there is a matching PuTTY session.
If your URL is
svn+ssh://xxx.yy/path/to/svn/trunk/foobar
then you need a session in PuTTY named
xxx.yy
which must be fully configured, especially in terms of:
* username
* private key file
I am using this together with Pageant.
A: Try using Cygwin's version of ssh.exe.
I found this advice here, and it sped up my downloads by 3-5x
| {
"language": "en",
"url": "https://stackoverflow.com/questions/14712",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Identifying SQL Server Performance Problems We're having sporadic, random query timeouts on our SQL Server 2005 cluster. I own a few apps that use it, so I'm helping out in the investigation. When watching the % CPU time in regular ol' Perfmon, you can certainly see it pegging out. However, SQL activity monitor only gives cumulative CPU and IO time used by a process, not what it's using right then, or over a specific timeframe. Perhaps I could use the profiler and run a trace, but this cluster is very heavily used and I'm afraid I'd be looking for a needle in a haystack. Am I barking up the wrong tree?
Does anyone have some good methods for tracking down expensive queries/processes in this environment?
A: I've found the Performance Dashboard Reports to be very helpful. They are a set of custom RS reports supplied by Microsoft. You just have to run the installer on your client PC and then run the setup.sql on the SQL Server instance.
After that, right-click on a database (it does not matter which one) in SSMS and go to Reports -> Custom Reports. Navigate to and select performance_dashboard_main.rdl, which is located in the \Program Files\Microsoft SQL Server\90\Tools\PerformanceDashboard folder by default. You only need to do this once; after the first time, it will show up in the reports list.
The main dashboard view will show CPU utilization over time, among other things. You can refresh it occasionally. When you see a spike, just click on the bar in the graph to get the detail data behind it.
A: We use Quest's Spotlight product. Obviously it's an investment in time and money, so it might not help you out in the short term, but if you have a large SQL environment it's pretty useful.
A: As Yaakov says, run profiler for a few minutes under typical load and save the results to a table which will allow you to run queries against the results making it much easier to spot any resource hogging queries.
A: Profiler may seem like a "needle in a haystack" approach, but it may turn up something useful. Try running it for a couple of minutes while the databases are under typical load, and see if any queries stand out as taking way too much time or hogging resources in some way. While a situation like this could point to some general issue, it could also be related to some specific issue with one or two sites, which mess things up enough in certain circumstances to cause very poor performance across the board.
A: Run Profiler and filter for queries that take more than a certain number of reads. For the application I worked on, any non-reporting query that took more than 5000 reads deserved a second look. Your app may have a different threshold, but the idea is the same.
A: This utility by Erland Sommarskog is awesomely useful.
It's a stored procedure you add to your database. Run it whenever you want to see what queries are active and get a good picture of locks, blocks, etc. I use it regularly when things seem gummed up.
A: This will give you the top 50 statements by average CPU time, check here for other scripts: http://www.microsoft.com/technet/scriptcenter/scripts/sql/sql2005/default.mspx?mfr=true
SELECT TOP 50
    qs.total_worker_time / qs.execution_count AS [Avg CPU Time],
    SUBSTRING(qt.text, qs.statement_start_offset / 2 + 1,
        (CASE WHEN qs.statement_end_offset = -1
              THEN LEN(CONVERT(nvarchar(max), qt.text)) * 2
              ELSE qs.statement_end_offset
         END - qs.statement_start_offset) / 2)
        AS query_text,
    qt.dbid, dbname = DB_NAME(qt.dbid),
    qt.objectid
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS qt
ORDER BY
    [Avg CPU Time] DESC
| {
"language": "en",
"url": "https://stackoverflow.com/questions/14717",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9"
} |
Q: UrlEncode through a console application? Normally I would just use:
HttpContext.Current.Server.UrlEncode("url");
But since this is a console application, HttpContext.Current is always going to be null.
Is there another method that does the same thing that I could use?
A: Try this!
Uri.EscapeUriString(url);
Or
Uri.EscapeDataString(data)
No need to reference System.Web.
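A quick sketch of the difference between the two (the sample string is arbitrary):
using System;

class EscapeDemo
{
    static void Main()
    {
        string data = "a+b c&d";
        // EscapeDataString escapes everything outside the RFC 3986 unreserved set,
        // so it is the safer choice for query-string values.
        Console.WriteLine(Uri.EscapeDataString(data)); // a%2Bb%20c%26d
        // EscapeUriString leaves reserved characters such as '+' and '&' untouched.
        Console.WriteLine(Uri.EscapeUriString(data));  // a+b%20c&d
    }
}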
Edit: Please see another SO answer for more...
A: I'm not a .NET guy, but, can't you use:
HttpUtility.UrlEncode Method (String)
Which is described here:
HttpUtility.UrlEncode Method (String) on MSDN
A: Add using System.Net; then use: WebUtility.UrlEncode(string)
or fully qualify: System.Net.WebUtility.UrlEncode(string)
No need to add any additional references.
The WebUtility class is included by default in System.dll (under the "References" project folder).
A: You'll want to use
System.Web.HttpUtility.UrlEncode("url")
Make sure you have System.Web as one of the references in your project. I don't think it's included as a reference by default in console applications.
A: Try using the UrlEncode method in the HttpUtility class.
*
*http://msdn.microsoft.com/en-us/library/system.web.httputility.urlencode.aspx
A: I ran into this problem myself, and rather than add the System.Web assembly to my project, I wrote a class for encoding/decoding URLs (it's pretty simple, and I've done some testing, but not a lot). I've included the source code below. Please: leave the comment at the top if you reuse this, don't blame me if it breaks, learn from the code.
Imports System
Imports System.Text.RegularExpressions

''' <summary>
''' URL encoding class. Note: use at your own risk.
''' Written by: Ian Hopkins (http://www.lucidhelix.com)
''' Date: 2008-Dec-23
''' </summary>
Public Class UrlHelper
Public Shared Function Encode(ByVal str As String) As String
Dim charClass = String.Format("0-9a-zA-Z{0}", Regex.Escape("-_.!~*'()"))
Dim pattern = String.Format("[^{0}]", charClass)
Dim evaluator As New MatchEvaluator(AddressOf EncodeEvaluator)
' replace the encoded characters
Return Regex.Replace(str, pattern, evaluator)
End Function
Private Shared Function EncodeEvaluator(ByVal match As Match) As String
' Replace the " "s with "+"s
If (match.Value = " ") Then
Return "+"
End If
Return String.Format("%{0:X2}", Convert.ToInt32(match.Value.Chars(0)))
End Function
Public Shared Function Decode(ByVal str As String) As String
Dim evaluator As New MatchEvaluator(AddressOf DecodeEvaluator)
' Replace the "+"s with " "s
str = str.Replace("+"c, " "c)
' Replace the encoded characters
Return Regex.Replace(str, "%[0-9a-fA-F][0-9a-fA-F]", evaluator) ' match only valid hex digits
End Function
Private Shared Function DecodeEvaluator(ByVal match As Match) As String
Return "" + Convert.ToChar(Integer.Parse(match.Value.Substring(1), System.Globalization.NumberStyles.HexNumber))
End Function
End Class
A: Kibbee offers the real answer. Yes, HttpUtility.UrlEncode is the right method to use, but it will not be available by default for a console application. You must add a reference to System.Web. To do that,
*
*In your solution explorer, right click on references
*Choose "add reference"
*In the "Add Reference" dialog box, use the .NET tab
*Scroll down to System.Web, select that, and hit ok
NOW you can use the UrlEncode method. You'll still want to add,
using System.Web
at the top of your console app or use the full namespace when calling the method,
System.Web.HttpUtility.UrlEncode(someString)
A: The code from Ian Hopkins does the trick for me without having to add a reference to System.Web. Here is a port to C# for those who are not using VB.NET:
using System;
using System.Text.RegularExpressions;

/// <summary>
/// URL encoding class. Note: use at your own risk.
/// Written by: Ian Hopkins (http://www.lucidhelix.com)
/// Date: 2008-Dec-23
/// (Ported to C# by t3rse (http://www.t3rse.com))
/// </summary>
public class UrlHelper
{
public static string Encode(string str) {
var charClass = String.Format("0-9a-zA-Z{0}", Regex.Escape("-_.!~*'()"));
return Regex.Replace(str,
String.Format("[^{0}]", charClass),
new MatchEvaluator(EncodeEvaluator));
}
public static string EncodeEvaluator(Match match)
{
return (match.Value == " ")?"+" : String.Format("%{0:X2}", Convert.ToInt32(match.Value[0]));
}
public static string DecodeEvaluator(Match match) {
return Convert.ToChar(int.Parse(match.Value.Substring(1), System.Globalization.NumberStyles.HexNumber)).ToString();
}
public static string Decode(string str)
{
return Regex.Replace(str.Replace('+', ' '), "%[0-9a-fA-F][0-9a-fA-F]", new MatchEvaluator(DecodeEvaluator)); // match only valid hex digits
}
}
A: HttpUtility.UrlEncode("url") in System.Web.
A: use the static HttpUtility.UrlEncode method.
A: The best thing is to add a reference to System.Web.dll
and use
var EncodedUrl = System.Web.HttpUtility.UrlEncode("URL_TEXT");
A: Uri.EscapeUriString should not be used for escaping a string to be passed in a URL as it does not encode all characters as you might expect. The '+' is a good example which is not escaped. This then gets converted to a space in the URL since this is what it means in a simple URI. Obviously that causes massive issues the minute you try and pass something like a base 64 encoded string in the URL and spaces appear all over your string at the receiving end.
You can use HttpUtility.UrlEncode and add the required references to your project (and if you're communicating with a web application then I see no reason why you shouldn't do this).
Alternatively use Uri.EscapeDataString over Uri.EscapeUriString as explained very well here: https://stackoverflow.com/a/34189188/7391
| {
"language": "en",
"url": "https://stackoverflow.com/questions/14731",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "62"
} |
Q: Printers not available unless shared We are using classic asp to call a C# dll and in the C# dll we are using System.Drawing.Printing.PrinterSettings.InstalledPrinters to get a list of availabe printers. If the printers are not shared they will not show up when a user trys to print. The Local System account can see and print to them from a VB6 dll and Administrators can print just fine from the C# dll as you might expect. Is there some sort of permissions we need to grant the user so these printers will be available?
A: As I recall, running a website uses the Network Service account, which may not have permission to view local printers.
There was a page on MSDN that said how you can impersonate another user that might have access to the printers, but I've not been able to find it.
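The gist of it is a web.config entry along these lines (a sketch; the account name and password are placeholders for a user that can actually see the printers):
<system.web>
  <identity impersonate="true" userName="DOMAIN\PrintUser" password="placeholder" />
</system.web>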
Edit: I posted too soon. Here's the page.
HTH
A: I'm fairly certain that impersonating a user or using their credentials does not constitute the ability to see the printers for that user. I believe explorer.exe reconnects all the network resources (shares/printers) upon logon.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/14756",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
} |
Q: Is it possible to disable command input in the toolbar search box? In the Visual Studio toolbar, you can enter commands into the search box by prefixing them with a > symbol. Is there any way to disable this? I've never used the feature, and it's slightly annoying when trying to actually search for something that you know is prefixed by greater-than in the code. It's particularly annoying when you accidentally search for "> exit" and the IDE quits (I knew there was a line in the code that was something like if(counter > exitCount) so entered that search without thinking).
At the very least, can you escape the > symbol so that you can search for it? Prefixing with ^ doesn't seem to work.
A: This is a really cool feature. I've poked through the feature documentation, and the accompanying command list, and not a heck of a lot is showing up in terms of turning it off.
If you want to search for >exit, you could always type >Edit.Find >exit in the search box; that seems to do the trick. A bit verbose, though, but it really is an edge case.
A:
you can enter commands into the search box by prefixing them with a > symbol.
Wow, I didn't know that. Where do I find the list of possible commands?
I never actually use the search box, I've remapped ctrl+F to incremental search, which is usually ctrl+I
I find this much cooler than the normal search - give it a go, you might end up not caring about the search box anymore.
A:
Wow, I didn't know that. Where do I
find the list of possible commands?
The commands are the same as those you can enter in the command window, so you can pretty much drive the entire IDE and debugger using it. There are a load of predefined aliases for common commands. Open up the command window and enter alias for a list, to get you started.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/14760",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: How best to use File Version and Assembly Version? In .NET there are two version numbers available when building a project, File Version and Assembly Version. How are you using these numbers? Keeping them the same? Auto-incrementing one, but manually changing the other?
Also what about the AssemblyInformationalVersion attribute?
I'd found this support Microsoft Knowledge Base (KB) article that provided some help: How to use Assembly Version and Assembly File Version.
A: In solutions with multiple projects, one thing I've found very helpful is to have all the AssemblyInfo files point to a single project that governs the versioning. So my AssemblyInfos have a line:
[assembly: AssemblyVersion(Foo.StaticVersion.Bar)]
I have a project with a single file that declares the string:
namespace Foo
{
public static class StaticVersion
{
public const string Bar= "3.0.216.0"; // 08/01/2008 17:28:35
}
}
My automated build process then just changes that string by pulling the most recent version from the database and incrementing the second-to-last number.
I only change the Major build number when the featureset changes dramatically.
I don't change the file version at all.
A: @Adam: Are you changing the file version with each build? Are you using version control (SYN or VSS) and using that information to link source back to the binaries?
Seems to make sense that the Assembly version stays the same. i.e. "2.0.0.0". That corresponds to the deployment of the product.
The file version changes to match the revision from the source control. "2.0.??.revision" This would provide a link from a specific dll (or exe) to the source that built it.
A: The KB article mentions the most important distinction: File versions are only used for display purposes, whereas the assembly version plays an important part in the .NET loading behaviour.
If you change the assembly version number, then the identity of your assembly as a whole has changed. Developers will need to rebuild to reference your new version (unless you put some auto-versioning "policy" in place) and at runtime only assemblies with matching version numbers will be loaded.
This is important in my environment, where we need an incrementing, highly visible version number for audit purposes, but we don't want to force developers to rebuild or have many versions concurrently in production. In this case for backwardly-compatible minor changes we update the file version, but not the assembly version.
A: In a scenario where I have multiple file assemblies (i.e. 1 exe and 5 dlls) I will use a different file version for each, but the same assembly version for all of them, allowing you to know which exe each of the dlls go with.
A:
File versions are only used for display purposes, whereas the assembly version plays an important part in the .NET loading behaviour.
Not quite. The file version is also important for Windows Installer when you upgrade an existing version over a previous one.
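To see the two numbers side by side at runtime, here is a minimal sketch:
using System;
using System.Diagnostics;
using System.Reflection;

class VersionDemo
{
    static void Main()
    {
        Assembly asm = Assembly.GetExecutingAssembly();
        // Assembly version: part of the assembly's identity, used by the .NET loader.
        Console.WriteLine("AssemblyVersion: {0}", asm.GetName().Version);
        // File version: informational, read from the Win32 version resource.
        FileVersionInfo fvi = FileVersionInfo.GetVersionInfo(asm.Location);
        Console.WriteLine("FileVersion:     {0}", fvi.FileVersion);
    }
}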
A: With my current application, each VS project has a link to an "AssemblyBuildInfo" source file which has the following attributes:
[assembly: AssemblyVersion("1.0.*")]
[assembly: AssemblyCompany("Acme Corporation")]
[assembly: AssemblyCopyright("Copyright © 2009 Acme Corporation")]
This way, all the assemblies in my solution share same version and company information (meaning if I have to change it, I change it only one time). By excluding the FileVersion, it is automatically set to the AssemblyVersion.
A: I keep them the same. But then, I don't have multifile assemblies, which is when the AssemblyVersion number becomes important. I use Microsoft-style date encoding for my build numbers, rather than auto-incrementing (I don't find the number of times that something has been built to be all that important).
| {
"language": "en",
"url": "https://stackoverflow.com/questions/14770",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "55"
} |
Q: Interview question on C# and VB.net similarities/differences I have been a VB.net developer for a few years now but I am currently applying to a few companies that use C#. I have even been told that at least one of the companies doesn't want VB.net developers.
I have been looking online trying to find real differences between the two and have asked on crackoverflow. The only major differences are a few syntax difference which are trivial to me because I am also a Java developer.
What would be a good response to an interviewer when they tell me they are looking for a C# developer - or similar questions?
A: I've had to interview people for a few C# positions and this is my general advice for VB.Net developers interviewing for a C# position:
*
*Make sure you are clear that you have been working VB.Net. This seems obvious but is something that apparently isn't (in my experience).
*Try to give a code sample, if possible. I've seen some horrible VB.Net (and C#) written by VB programmers who didn't seem to learn much in the transition to .Net.
*Be able to write in C# during the interview, if asked. I know there aren't many real differences between the two, but I don't want to pay you to learn the new syntax.
For your specific question: I've asked that type of question before, and what I wanted to hear about was how the underlying system and framework were the same. If possible, talk about garbage collection, IDisposable, finalizers, the dangers of unsafe code blocks, stack vs heap, etc. All the kind of stuff that shows you really understand the intricacies of the .NET framework. Right or wrong, the heritage of VB brings with it an expectation of a lack of understanding of lower-level programming and Windows in general (which, ironically enough, a C++ developer would have of a C# developer... and so on).
Lastly, how you frame your experience can make a world of difference. If you position yourself as a .Net developer, rather than VB.Net or C#, the stupid, pseudo-religious, banter may not enter the conversation. This of course requires that you actually know both VB.Net and C# at the time of the interview, but that's a good policy regardless.
The truth of the matter is that if you find that the person interviewing you writes you off simply because you've previously been developing in VB.Net, it's likely not going to be a place you want to work at anyway.
A: Some differences (that are more substantial than syntactical) that suitably catch me out sometimes:
*
*VB.NET does not have anonymous delegates
*Unsafe code blocks aren't in VB.NET
A: I love C# to death, but I envy VB.NET's optional parameters. Office automation in C# is so very, very painful.
A: I think the truth will-out on this:
I'm a software developer, the syntax of the language is the final part of the puzzle. By employing me, you're getting someone with demonstrable experience of problem solving and logic. I'm experienced with the .NET environment, the CLR and the associated Windows stack, including SQL and Windows server. I don't know the C# syntax, but, I am used to object-oriented approach, I will have no problem getting totally up to speed on the finer points of the syntax within a couple of weeks. You can see from these examples of my code that I'm an experienced developer, I assure you the transition will be seamless. I have already made a start learning the language, I haven't had any problems so far.
A: @DAC - VB now (in .net3.5) has support for lambdas:
Function(x) x.ToString()
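The C# equivalent, for comparison:
x => x.ToString()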
In an interview I'd say that understanding the CLR and what the code's really doing is what's important, and that you're happy to code in either (assuming that you are).
A: I'm also a C# feller to the death but the differences are not just syntax. There are places where VB wins HANDS down
*
*Optional parameters
*Late binding
Working with COM from C# can be a very frustrating affair. (Which is not to say impossible!)
A: Truthfully? If you're a Java developer too, I'd lead with that. A Java developer with some experience in .NET will generally make a better impression than a VB developer with experience with Java. If you can answer questions on the difference between .NET and Java you should be in good shape.
I'd also recommend you learn the C# syntax ASAP. If you're experienced with VB and Java it won't take long.
A lot of developers have issues with VB and the people that use it. Valid or not you'll have to deal with that reality.
A: VB and C# differ only in syntax; the way you apply logic is the same.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/14775",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
} |
Q: Is "include file" in shtml the best method to keep non-database changing data We have a website that uses #include file command to roll info into some web pages. The authors can access the text files to update things like the occasional class or contact information for the department.
My question is this: I don't see anyone else using this method, and I wonder if it is a good idea to keep using it. If not, what method should I transition to instead?
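(For reference, the directive in question looks something like this in an .shtml page; the file name is illustrative:)
<!--#include file="department-contacts.html" -->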
A: If you are using ASP.NET then you could bundle that code into a nice little UserControl that will display all of the important information.
Other platforms should allow you to bundle the logic into a class object, and display it using that.
It really depends on the platform that you are using to deploy the application in. The include file could be your best solution if you are deploying in a more limited platform.
A: I don't think there is anything wrong with it, and I've done some similar things with PHP. If the people that make the changes are comfortable with how they do it and they can do what they need to, I think you should stick with it. There could be some retraining involved and/or a fair amount of work involved for changing the system.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/14791",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: How can I override an EJB 3 session bean method with a generic argument - if possible at all? Suppose you have the following EJB 3 interfaces/classes:
public interface Repository<E>
{
public void delete(E entity);
}
public abstract class AbstractRepository<E> implements Repository<E>
{
public void delete(E entity){
//...
}
}
public interface FooRepository<Foo>
{
//other methods
}
@Local(FooRepository.class)
@Stateless
public class FooRepositoryImpl extends
AbstractRepository<Foo> implements FooRepository
{
@Override
public void delete(Foo entity){
//do something before deleting the entity
super.delete(entity);
}
//other methods
}
And then another bean that accesses the FooRepository bean :
//...
@EJB
private FooRepository fooRepository;
public void someMethod(Foo foo)
{
fooRepository.delete(foo);
}
//...
However, the overriding method is never executed when the delete method of the FooRepository bean is called. Instead, only the implementation of the delete method that is defined in AbstractRepository is executed.
What am I doing wrong or is it simply a limitation of Java/EJB 3 that generics and inheritance don't play well together yet ?
A: I tried it with a pojo and it seems to work. I had to modify your code a bit.
I think your interfaces were a bit off: note that public interface FooRepository<Foo> declares a new type parameter that merely happens to be named Foo (shadowing your Foo entity class), and it never extends Repository, so I changed it to extend Repository<Foo>.
I assumed "Foo" was a concrete type, but if not I can do some more testing for you.
I just wrote a main method to test this.
I hope this helps!
public static void main(String[] args){
FooRepository fooRepository = new FooRepositoryImpl();
fooRepository.delete(new Foo("Bar"));
}
public class Foo
{
private String value;
public Foo(String inValue){
super();
value = inValue;
}
public String toString(){
return value;
}
}
public interface Repository<E>
{
public void delete(E entity);
}
public interface FooRepository extends Repository<Foo>
{
//other methods
}
public class AbstractRespository<E> implements Repository<E>
{
public void delete(E entity){
System.out.println("Delete-" + entity.toString());
}
}
public class FooRepositoryImpl extends AbstractRespository<Foo> implements FooRepository
{
@Override
public void delete(Foo entity){
//do something before deleting the entity
System.out.println("something before");
super.delete(entity);
}
}
A: Can you write a unit test against your FooRepository class just using it as a POJO. If that works as expected then I'm not familiar with any reason why it would function differently inside a container.
I suspect there is something else going on and it will probably be easier to debug if you test it as a POJO.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/14801",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
} |
Q: Issues with DB after publishing via Database Publishing Wizard from MSFT I work on quite a few DotNetNuke sites, and occasionally (I haven't figured out the common factor yet), when I use the Database Publishing Wizard from Microsoft to create scripts for the site I've created on my Dev server, after running the scripts at the host (usually GoDaddy.com), and uploading the site files, I get an error... I'm 99.9% sure that it's not file related, so not sure where to begin in the DB. Unfortunately with DotNetNuke you don't get the YSOD, but a generic error, with no real way to find the actual exception that has occured.
I'm just curious if anyone has had similar deployment issues using the Database Publishing Wizard, and if so, how they overcame them? I own the RedGate toolset, but some hosts like GoDaddy don't allow you to direct connect to their servers...
A: The Database Publishing Wizard's generated scripts usually need to be tweaked, since it sometimes gets the order of table/procedure creation wrong when dealing with constraints. What I do is first back up the database, then run the script, and if I get an error, I move that query to the end of the script. Continue restoring the database and running the script until it works.
A: There are two areas that I would look at -
*
*Are you running in the dbo schema, and was your scripted database using dbo?
*Are you using an objectQualifier in either your dev or your production environment? (look at your SqlDataProvider configuration settings)
A: You should be able to expose the underlying error message by setting the following in the web.config (inside <system.web>):
<customErrors mode="Off" />
Could you elaborate on "and uploading the site files"? New instance of DNN? updating an existing site? upgrading DNN version? If upgrade or update -- what files are you adding/overwriting?
Also, when using GoDaddy, can you check to verify that the web site's identity (network service or asp.net machine account depending on your IIS version) has sufficient permissions to the website's file system? It should have modify permissions and these may need to be reapplied if you are overwriting files.
*
*IIS6 (XP, Server 2000, 2003) = ASP.Net Machine Account
*IIS7 (Vista, Server 2008) = Network Service
A: Test your generated scripts on a new local database (using the free SQL Express product or the full meal deal). If it runs fine locally, then you can be confident that it will run elsewhere, all things being equal.
If it bombs when you run it locally, use the process of elimination and work your way through the script execution to find the offending code.
My hunch is that the order of scripts could be off. I think I've had that happen before with the database publishing wizard.
A: Just read your follow up. In every case that I've had your problem, it was always something to do with the connection string in web.config. Even after hours of staring at it, it was always a connection string issue in web.config. Get up, take a walk and then come back.
A: If you are getting one of DNN's error pages, there is a chance it may have logged the error to the eventlog table.
A: Depending on exactly what is happening and what DNN is showing you you might be able to manually look inside the EventLog table, pull out the XML data stored there, and parse it to find the stack trace and detailed information regarding the specific error at hand.
I have found however though that I get MUCH better overall experiences with deployments using backups and restores of my database, that way I am 100% sure that all objects moved correctly, and honestly it works better in my experience.
With GoDaddy I know another MAJOR common issue is incorrect file permissions, preventing DNN from modifying the web.config and other files that it needs to do.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/14828",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: How to get started with speech-to-text? I'm really interested in speech-to-text algorithms, but I'm not sure where to start studying up on them. A bunch of searching around led me to this, but it's from 1996 and I'm fairly certain that there have been improvements since then.
Does anyone who has any experience with this sort of stuff have any recommendations for reading / source code to examine? Or just general advice on what I should be trying to learn about if I want to get into the world of writing speech recognition programs (sometimes it's hard to know what to search for if you don't have much knowledge about the domain).
Edit: I'd like to do something cross-platform, but for the moment I'd be targeting linux.
Edit 2: Thanks csmba for the well-thought out reply. At this point in time, I'm mainly interested in being able to create applications that allow automation, or execution of different commands through voice. So, a limited amount of recognizable commands being able to be strung together. An example would be a music player that took commands like "Play the album Hello Everything by Squarepusher", or an application launcher that allowed the user to create voice-shortcuts to launch specific apps.
I realize that it's a pretty giant problem, and that I have nowhere near the level of knowledge required right now to tackle implementing an entire recognition engine, although the techniques involved with doing so fascinate me, and it is something I'd like to work myself up to doing. In all likelihood, I'll probably end up picking up a book or two on the subject and studying up / playing with "simple" implementations in my free time.
A: This is a HUGE question; I wouldn't know how to begin... So let me just try giving you the right "terms" so you can refine your quest:
First, understand that Speech Recognition is a diverse and complicated subject, and it has many different applications. People tend to map this domain to the first thing that comes to their head (usually, that would be computers understanding what you are saying, as in IVR systems). So first let's separate the concept into the main categories:
Human-to-Machine: Applications that deal with understanding what a human is saying, but the human knows he is talking to a machine and the grammar is very limited. Examples are
*
*Computer automation
*Specialized: Pilots automating some controls, for example (noise is a huge problem)
*IVR (Interactive Voice Response) systems like Google-411 or when you call the bank and the computer on the other side says "say 'service' to get customer service"
Human-to-Human (spontaneous speech): This is a bigger, more complex problem. Here we can also break it down into different applications:
*
*Call Center: conversation between Agent-Customer, phone quality, compressed
*Intelligence: radio/phone/live conversations between 2 or more individuals
Now, Speech-To-Text is not what you should be saying that you care about. What you care about is solving a problem. Different technologies are used to solve different problems. See an overview here of some of them. to summarize, other approaches are Phonetic transcription, LVCSR and direct based.
Also, are you interested in being the PhD behind the technology? You would need a Master's equivalent involving signal processing, and probably a PhD, to be cutting edge. In which case, you will work for a company that develops the actual speech engine. Companies like Nuance and IBM are the big ones, but Philips and other startups also exist.
On the other hand, if you want to be the one implementing applications, you will not be working on the engine, but on building applications that USE the engine. A good analogy, I think, is from the gaming industry:
Are you developing the graphic engine (like the Cry engine), or working on one of several hundred games, all use the same graphic engine?
Don't get me wrong, there is plenty of work to do on recognition quality outside the IBMs/Nuances of the world. The engine is usually very open, and there is a lot of algorithmic tweaking to be done that can dramatically affect performance. Each business application has different constraints and a different cost/benefit function, so you can spend many years experimenting and building better voice-recognition-based applications.
One more thing: in general, the lower in the stack you want to work, the better your statistics background needs to be.
At this point in time, I'm mainly interested in being able to create applications that allow automation
Good, we are converging here... Then you have no interest in "Speech-to-Text". That buzzword takes you to the world of full transcription, a place you do not need to go to. You should be focusing on some of the more Human-to-Machine technologies like Voice XML and the ones used in IVR systems (Nuance is the biggest player there)
A: I would definitely recommend picking up a book or two if you are new to the field. I've got no experience in the field, so I can't make a recommendation. If you are still in college (or still have close ties), you should find out if any of your professors can make a recommendation.
The survey you linked is probably an excellent resource, too. I'm sure there have been advancements since 1996, but the basics are unlikely to have fundamentally changed. If the survey is well-written, then it would be well worth your time to read it.
A: For OS X check out this: OS X Speech Technologies
For Windows check out this: Microsoft Speech API
A: I have worked with IBMs ViaVoice product. It has a good ASR (automated speech recognition) engine, and a nice text-to-speech engine.
The website's not very good, but this is a link for the Embedded version: http://www-01.ibm.com/software/voice/support/
It is platform agnostic, though, and everything works through an MVC architecture using VXML, a variant of XML for voice purposes.
A: What platform are you targeting? There are Microsoft Speech APIs that you can use if it's for Windows.
A: There is also the Speech Recognition Service for Android.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/14837",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10"
} |
Q: Mixed C++/CLI TypeLoadException Internal limitation: too many fields On a quest to migrate some new UI into Managed/C# land, I have recently turned on Common Language Runtime Support (/clr) on a large legacy project, which uses MFC in a Shared DLL and relies on about a dozen other projects within our overall solution. This project is the core of our application, and would drive any managed UI code that is produced (hence the need to turn on clr support for interop).
After fixing a ton of little niggly errors and warnings, I finally managed to get the application to compile..
However, running the application causes an EETypeLoadException and leaves me unable to debug...
Doing some digging, I found the cause to be "System.TypeLoadException: Internal limitation: too many fields." which occurs right at the end of compilation. I then found this link which suggests to break the assembly down into two or more dlls. However, this is not possible in my case, as a limitation I have is that the legacy code basically remains untouched.
Can anyone suggest any other possible solutions? I'm really at a dead end here.
A: I have done this with very large mixed-mode (C#/C++) applications three times (3x) and once putting the above fix into place have never seen the error again.
And no, if anything this should result in slightly faster run-time execution (nothing you could ever measure, however.)
But I agree it's somewhat of a stopgap. The internal limit on symbols didn't use to be an issue, or if it was, that limit was much higher. Then MS changed some of the loader code. I got onto MSDN and ranted about it and was told in no uncertain terms, "only an idiot would put that many symbols in a single assembly".
(Which is one of the reasons I no longer participate on MSDN.)
Well, color me stupid, but I don't think I should have to change the physical structure of my application, breaking things out into satellite DLLs, merely to get around the fact that the loader has decided 10,001 symbols is 1 too many.
And as you pointed out, we often don't have control over how assemblies/satellite DLLs are structure, and the sort of dependencies they contain.
But I don't think you'll see this error again, in any case.
A: Do you need to turn /clr on for the entire project? Could you instead turn it on only for a small select number of files and be very careful how you include managed code? I work with a large C++/MFC application and we have found it very difficult to use managed C++. I love C# and .NET but managed C++ has been nothing but a headache. Most of our problems happened with .NET 1.0/1.1 ... maybe things are better now.
A: Make sure the Enable String Pooling option under C/C++ Code Generation is turned on.
That usually fixes this issue, which is one of those "huh?" MS limitations like the 64k limit on Excel spreadsheets. Only this one affects the number of symbols that may appear in an assembly.
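If you build from the command line, that option corresponds (as far as I know) to the /GF compiler switch, e.g. cl /clr /GF /c SomeFile.cpp (the file name is just a placeholder).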
| {
"language": "en",
"url": "https://stackoverflow.com/questions/14843",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10"
} |
Q: Why does VS 2005 keep giving me the "'x' is ambiguous in the namespace 'y'" error? Bounty: I will send $5 via paypal for an answer that fixes this problem for me.
I'm not sure what VS setting I've changed or if it's a web.config setting or what, but I keep getting this error in the error list and yet all solutions build fine. Here are some examples:
Error 5 'CompilerGlobalScopeAttribute' is ambiguous in the namespace 'System.Runtime.CompilerServices'. C:\projects\MyProject\Web\Controls\EmailStory.ascx 609 184 C:\...\Web\
Error 6 'ArrayList' is ambiguous in the namespace 'System.Collections'. C:\projects\MyProject\Web\Controls\EmailStory.ascx.vb 13 28 C:\...\Web\
Error 7 'Exception' is ambiguous in the namespace 'System'. C:\projects\MyProject\Web\Controls\EmailStory.ascx.vb 37 21 C:\...\Web\
Error 8 'EventArgs' is ambiguous in the namespace 'System'. C:\projects\MyProject\Web\Controls\EmailStory.ascx.vb 47 64 C:\...\Web\
Error 9 'EventArgs' is ambiguous in the namespace 'System'. C:\projects\MyProject\Web\Controls\EmailStory.ascx.vb 140 72 C:\...\Web\
Error 10 'Array' is ambiguous in the namespace 'System'. C:\projects\MyProject\Web\Controls\EmailStory.ascx.vb 147 35 C:\...\Web\
[...etc...]
Error 90 'DateTime' is ambiguous in the namespace 'System'. C:\projects\MyProject\Web\App_Code\XsltHelperFunctions.vb 13 8 C:\...\Web\
As you can imagine, it's really annoying since there are blue squiggly underlines everywhere in the code, and filtering out relevant errors in the Error List pane is near impossible. I've checked the default ASP.Net web.config and machine.config but nothing seemed to stand out there.
Edit: Here's some of the source where the errors are occurring:
'Error #5: whole line is blue underlined'
<%= addEmailToList.ToolTip %>
'Error #6: ArrayList is blue underlined'
Private _emails As New ArrayList()
'Error #7: Exception is blue underlined'
Catch ex As Exception
'Error #8: System.EventArgs is blue underlined'
Protected Sub Page_Load(ByVal sender As Object, ByVal e As System.EventArgs) Handles Me.Load
'Error #9: System.EventArgs is blue underlined'
Protected Sub sendMessage_Click(ByVal sender As Object, ByVal e As System.EventArgs) Handles sendMessage.Click
'Error #10: Array is blue underlined'
Me.emailSentTo.Text = Array.Join(";", mailToAddresses)
'Error #90: DateTime is blue underlined'
If DateTime.TryParse(data, dateValue) Then
Edit: GacUtil results
C:\WINDOWS\Microsoft.NET\Framework\v1.1.4322\gacutil -l mscorlib
Microsoft (R) .NET Global Assembly Cache Utility. Version 1.1.4318.0
Copyright (C) Microsoft Corporation 1998-2002. All rights reserved.
The Global Assembly Cache contains the following assemblies:
The cache of ngen files contains the following entries:
mscorlib, Version=1.0.5000.0, Culture=neutral, PublicKeyToken=b77a5c561934e089, Custom=5a00410050002d004e0035002e0031002d003800460053002d00370043003900440037004500430036000000
mscorlib, Version=1.0.5000.0, Culture=neutral, PublicKeyToken=b77a5c561934e089, Custom=5a00410050002d004e0035002e0031002d0038004600440053002d00370043003900450036003100370035000000
Number of items = 2
"C:\Program Files\Microsoft Visual Studio 8\SDK\v2.0\Bin\gacutil" -l mscorlib
Microsoft (R) .NET Global Assembly Cache Utility. Version 2.0.50727.42
Copyright (c) Microsoft Corporation. All rights reserved.
The Global Assembly Cache contains the following assemblies:
Number of items = 0
Edit: interesting results from ngen:
C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\ngen display mscorlib /verbose
Microsoft (R) CLR Native Image Generator - Version 2.0.50727.832
Copyright (C) Microsoft Corporation 1998-2002. All rights reserved.
NGEN Roots:
mscorlib, Version=1.0.5000.0, Culture=neutral, PublicKeyToken=b77a5c561934e089, Custom=5a00410050002d004e0035002e0031002d003800460053002d00330037004200430043003300430035000000
ScenarioDefault
mscorlib, Version=1.0.5000.0, Culture=neutral, PublicKeyToken=b77a5c561934e089, Custom=5a00410050002d004e0035002e0031002d003800460053002d00330037004200430043003300430035000000
DisplayName = mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089
Native image = {7681CE0F-F0E7-F03A-2B56-96345589D82B}
Hard Dependencies:
Soft Dependencies:
mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089
ScenarioNoDependencies
mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089
DisplayName = mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089
Native image = {7681CE0F-F0E7-F03A-2B56-96345589D82B}
Hard Dependencies:
Soft Dependencies:
NGEN Roots that depend on "mscorlib":
[...a bunch of stuff...]
Native Images:
mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089
Source MVID: {D34102CF-2ABF-4004-8B42-2859D8FF27F3}
Source HASH: bbf5cfc19bea4e13889e39eb1fb72479a45ad0ec
NGen GUID sign: {7681CE0F-F0E7-F03A-2B56-96345589D82B}
OS: WinNT
Processor: x86(Pentium 4) (features: 00008001)
Runtime: 2.0.50727.832
mscorwks.dll: TimeStamp=461F2E2A, CheckSum=00566DC9
Flags:
Scenarios: <no debug info> <no debugger> <no profiler> <no instrumentation>
Granted set: <PermissionSet class="System.Security.PermissionSet" version="1" Unrestricted="true"/>
File:
C:\WINDOWS\assembly\NativeImages_v2.0.50727_32\mscorlib\0fce8176e7f03af02b5696345589d82b\mscorlib.ni.dll
Dependencies:
mscorlib, Version=2.0.0.0, PublicKeyToken=b77a5c561934e089:
Guid:{D34102CF-2ABF-4004-8B42-2859D8FF27F3}
Sign:bbf5cfc19bea4e13889e39eb1fb72479a45ad0ec
[...the same mscorlib 2.0.0.0 native image entry repeated ten more times...]
There should only be one mscorlib in the native images, correct? How can I get rid of the others?
A: I had the same error recently.
Here's how I fixed it (I hope it works for you too):
- Open your project properties and go to the References section.
- Remove the reference to System in the upper section.
I think it's referencing System twice but it's only showing once. Hence the ambiguous references.
A: Based on the results of your gacutil output (thanks for doing that; I think it helps), I would say you need to try and run a repair on the .NET Framework install and Visual Studio 2005. I'm not sure if that will fix it, but as you can see from the output of the gacutil, you have none for 2.0.
From my VS2005 Command Prompt, I get:
Microsoft (R) .NET Global Assembly Cache Utility. Version 2.0.50727.42
Copyright (c) Microsoft Corporation. All rights reserved.
The Global Assembly Cache contains the following assemblies:
mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089, processorArchitecture=x86
Number of items = 1
From my VS2003 Command Prompt, I get:
Microsoft (R) .NET Global Assembly Cache Utility. Version 1.1.4322.573
Copyright (C) Microsoft Corporation 1998-2002. All rights reserved.
The Global Assembly Cache contains the following assemblies:
The cache of ngen files contains the following entries:
mscorlib, Version=1.0.5000.0, Culture=neutral, PublicKeyToken=b77a5c561934e089, Custom=5a00410050002d004e0035002e0031002d003800460053002d00330037004200430043003300430035000000
mscorlib, Version=1.0.5000.0, Culture=neutral, PublicKeyToken=b77a5c561934e089, Custom=5a00410050002d004e0035002e0031002d0038004600440053002d00330037004200440036004600430034000000
Number of items = 2
A: When asking for help diagnosing compilation problems, it often helps to post the offending source code :)
These errors really mean that the specified name conflicts with another and the compiler cannot resolve this. It does look a little odd, though...
A: I've been hit by this as well, specifically System.Data.SqlClient. Try unchecking namespaces in the Project manager and manually including them in the .vb file, like you would with C#:
Imports System.Data.SqlClient
A: Take one error (like ArrayList) and replace the type with the fully-qualified name (System.Collections.ArrayList). If the error vanishes, you really have a resolving conflict. If not, it's something else.
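For example, the ArrayList declaration from the question would become:
Private _emails As New System.Collections.ArrayList()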
If all solutions build "fine" with these errors, I suggest cleaning your projects. Delete all compiled stuff (dll, pdb, whatsoever), also shadow cached ones. Maybe it compiles because it uses an old version of something.
A: I know this sounds odd, but do you use "Build" or "Rebuild" to build the solution? If I have funny problems like that, a "Rebuild All" to the solution helps.
A: Yesterday I got the same in a VS2005 ASP.NET web site project: suddenly, without any previous significant code change, loads of 'x' is ambiguous in the namespace 'y' appeared, all of them originating from very fundamental symbols, like EventArgs, Type, DBNull, etc.
The immediate reason is a double-referenced mscorlib, as I can see in VS's Class View. The true reason, I believe, is the automatic Windows Update which had forced me to restart the machine minutes before.
After trying stunts such as creating a brand new ASP.NET web site project and copy-pasting the source into it (on the same machine - doesn't help), or moving the project to a second machine with the same VS2005 installation (that helps, the project works normally), I'm nearly sure there is nothing wrong with my code, but rather with my VS/.NET configuration. And I desperately don't know how to cure it, as there is no trace on the Internet describing similar troubles, apart from this one.
A: Reinstall .Net Framework 2.0.
That should fix it. Afterwards, gacutil (from v2.0) would show 1 mscorlib and not 0.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/14857",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: [ADO.NET error]: CREATE DATABASE permission denied in database 'master'. An attempt to attach an auto-named database for file HelloWorld.mdf failed
CREATE DATABASE permission denied in database 'master'.
An attempt to attach an auto-named database for file
C:\Documents and Settings\..\App_Data\HelloWorld.mdf failed.
A database with the same name exists, or specified file cannot be
opened, or it is located on UNC share.
I've found these links:
*
*http://blog.benhall.me.uk/2008/03/sql-server-and-vista-create-database.html
*http://forums.microsoft.com/MSDN/ShowPost.aspx?PostID=702726&SiteID=1
A: Generally the user that you are using to run the SQL Server service will not have access to your personal user folders; that is why you're getting the error. You either need to change the credentials used for the service, or move the database to another folder, which did the trick in your case.
A: For me, it helped a lot to set this tag under the system.web tag in the web.config file:
<system.web>
<identity impersonate="true" userName="admin_user" password="admin_password" />
...
Hope this can help somebody
A: I was stuck on this today with a compound issue in MVC3 and Entity Framework Code First.
My SqlExpress install is messed up (permissions issues) so I switched to SqlCE.
My ConnectionString.Name attribute didn't match my "ProjectNameContext" class name.
When the connection string isn't found, it falls back to default conventions. Default conventions meant my SqlExpress service with a database name like "ProjectNameContext". The permissions are messed up on that, so I got a permissions error from SqlExpress when I thought I was using SqlCE.
A: I was also experiencing the same problem, and finally found a solution:
SOLUTION -- It is simple: move (or cut) your database out of the App_Data folder to anywhere else (e.g., the Desktop), then move it back into the App_Data folder.
That is it.
Hope it works!
A: Yes, a previous attachment wasn't detached properly, or it was attached manually. Go into Management Studio, connect to the server, and detach the database whose long name looks like a file path. Then try again; it should work.
A: My friend fixed it by just using another directory, namely C:\TEMP. I think it was just a permissions thing.
A: for some guys like me,add "User Instance=true" to your connection string
| {
"language": "en",
"url": "https://stackoverflow.com/questions/14872",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
} |
Q: How do I display database query statistics on Wordpress site? I've noticed that a few Wordpress blogs have query statistics present in their footer that simply state the number of queries and the total time required to process them for the particular page, reading something like:
23 queries. 0.448 seconds
I was wondering how this is accomplished. Is it through the use of a particular Wordpress plug-in or perhaps from using some particular php function in the page's code?
A: To explain pix0r's code:
*
*$wpdb->num_queries is the number of database queries.
*_e is for localization: http://faq.wordpress.net/view.php?p=50
*timer_stop() returns the amount of time taken to render the page: http://codex.wordpress.org/Function_Reference/timer_stop
A: Try adding this to the bottom of the footer in your template:
<?php echo $wpdb->num_queries; ?> <?php _e('queries'); ?>. <?php timer_stop(1); ?> <?php _e('seconds'); ?>
A: WordPress has an easy solution now.
There is a function called get_num_queries() and as specified in other answers timer_stop().
<?php echo get_num_queries(); _e(' queries'); ?> in <?php timer_stop(1); _e(' seconds'); ?>
get_num_queries() - returns the number of database queries during the WordPress execution and accepts no parameters
| {
"language": "en",
"url": "https://stackoverflow.com/questions/14873",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "15"
} |
Q: Drawing a view hierarchy into a specific context in Cocoa For part of my application I have a need to create an image of a certain view and all of its subviews.
To do this I'm creating a context that wraps a bitmap the same size as the view, but I'm unsure how to draw the view hierarchy into it. I can draw a single view just by setting the context and explicitly calling drawRect, but this does not deal with all of the subviews.
I can't see anything in the NSView interface that could help with this so I suspect the solution may lie at a higher level.
A: You can use -[NSView dataWithPDFInsideRect:] to render the entire hierarchy of the view you send it to into a PDF, returned as an NSData object. You can then do whatever you wish with that, including render it into a bitmap.
Are you sure you want a bitmap representation though? After all, that PDF could be (at least in theory) resolution-independent.
A: I found that writing the drawing code myself was the best way to:
*
*deal with potential transparency issues (some of the other options do add a white background to the whole image)
*performance was much better
The code below is not perfect, because it does not deal with scaling issues when going from bounds to frames, but it does take into account the isFlipped state, and works very well for what I used it for. Note that it only draws the subviews (and the sub-subviews, recursively), but getting it to also draw the view itself is very easy: just add [self drawRect:[self bounds]] in the implementation of imageWithSubviews.
- (void)drawSubviews
{
BOOL flipped = [self isFlipped];
for ( NSView *subview in [self subviews] ) {
// changes the coordinate system so that the local coordinates of the subview (bounds) become the coordinates of the superview (frame)
// the transform assumes bounds and frame have the same size, and bounds origin is (0,0)
// handling of 'isFlipped' also probably unreliable
NSAffineTransform *transform = [NSAffineTransform transform];
if ( flipped ) {
[transform translateXBy:subview.frame.origin.x yBy:NSMaxY(subview.frame)];
[transform scaleXBy:+1.0 yBy:-1.0];
} else
[transform translateXBy:subview.frame.origin.x yBy:subview.frame.origin.y];
[transform concat];
// recursively draw the subview and sub-subviews
[subview drawRect:[subview bounds]];
[subview drawSubviews];
// reset the transform to get back a clean graphics context for the rest of the drawing
[transform invert];
[transform concat];
}
}
- (NSImage *)imageWithSubviews
{
NSImage *image = [[[NSImage alloc] initWithSize:[self bounds].size] autorelease];
[image lockFocus];
// it seems NSImage cannot use flipped coordinates the way NSView does (the method 'setFlipped:' does not seem to help)
// Use instead an NSAffineTransform
if ( [self isFlipped] ) {
NSAffineTransform *transform = [NSAffineTransform transform];
[transform translateXBy:0 yBy:NSMaxY(self.bounds)];
[transform scaleXBy:+1.0 yBy:-1.0];
[transform concat];
}
[self drawSubviews];
[image unlockFocus];
return image;
}
A: You can use -[NSBitmapImageRep initWithFocusedViewRect:] after locking focus on a view to have the view render itself (and its subviews) into the given rectangle.
A: What you want to do is available explicitly already. See the section "NSView Drawing Redirection API" in the 10.4 AppKit release notes.
Make an NSBitmapImageRep for caching and clear it:
NSGraphicsContext *bitmapGraphicsContext = [NSGraphicsContext graphicsContextWithBitmapImageRep:cacheBitmapImageRep];
[NSGraphicsContext saveGraphicsState];
[NSGraphicsContext setCurrentContext:bitmapGraphicsContext];
[[NSColor clearColor] set];
NSRectFill(NSMakeRect(0, 0, [cacheBitmapImageRep size].width, [cacheBitmapImageRep size].height));
[NSGraphicsContext restoreGraphicsState];
Cache to it:
-[NSView cacheDisplayInRect:toBitmapImageRep:]
If you want to more generally draw into a specified context handling view recursion and transparency correctly,
-[NSView displayRectIgnoringOpacity:inContext:]
| {
"language": "en",
"url": "https://stackoverflow.com/questions/14874",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: Find the best combination from a given set of multiple sets Say you have a shipment. It needs to go from point A to point B, point B to point C and finally point C to point D. You need it to get there in five days for the least amount of money possible. There are three possible shippers for each leg, each with their own different time and cost for each leg:
Array
(
[leg0] => Array
(
[UPS] => Array
(
[days] => 1
[cost] => 5000
)
[FedEx] => Array
(
[days] => 2
[cost] => 3000
)
[Conway] => Array
(
[days] => 5
[cost] => 1000
)
)
[leg1] => Array
(
[UPS] => Array
(
[days] => 1
[cost] => 3000
)
[FedEx] => Array
(
[days] => 2
[cost] => 3000
)
[Conway] => Array
(
[days] => 3
[cost] => 1000
)
)
[leg2] => Array
(
[UPS] => Array
(
[days] => 1
[cost] => 4000
)
[FedEx] => Array
(
[days] => 1
[cost] => 3000
)
[Conway] => Array
(
[days] => 2
[cost] => 5000
)
)
)
How would you go about finding the best combination programmatically?
My best attempt so far (third or fourth algorithm) is:
*
*Find the longest shipper for each leg
*Eliminate the most "expensive" one
*Find the cheapest shipper for each leg
*Calculate the total cost & days
*If days are acceptable, finish, else, goto 1
Quickly mocked-up in PHP (note that the test array below works swimmingly, but if you try it with the test array from above, it does not find the correct combination):
$shippers["leg1"] = array(
"UPS" => array("days" => 1, "cost" => 4000),
"Conway" => array("days" => 3, "cost" => 3200),
"FedEx" => array("days" => 8, "cost" => 1000)
);
$shippers["leg2"] = array(
"UPS" => array("days" => 1, "cost" => 3500),
"Conway" => array("days" => 2, "cost" => 2800),
"FedEx" => array("days" => 4, "cost" => 900)
);
$shippers["leg3"] = array(
"UPS" => array("days" => 1, "cost" => 3500),
"Conway" => array("days" => 2, "cost" => 2800),
"FedEx" => array("days" => 4, "cost" => 900)
);
$times = 0;
$totalDays = 9999999;
print "<h1>Shippers to Choose From:</h1><pre>";
print_r($shippers);
print "</pre><br />";
while($totalDays > $maxDays && $times < 500){
$totalDays = 0;
$times++;
$worstShipper = null;
$longestShippers = null;
$cheapestShippers = null;
foreach($shippers as $legName => $leg){
//find longest shipment for each leg (in terms of days)
unset($longestShippers[$legName]);
$longestDays = null;
if(count($leg) > 1){
foreach($leg as $shipperName => $shipper){
if(empty($longestDays) || $shipper["days"] > $longestDays){
$longestShippers[$legName]["days"] = $shipper["days"];
$longestShippers[$legName]["cost"] = $shipper["cost"];
$longestShippers[$legName]["name"] = $shipperName;
$longestDays = $shipper["days"];
}
}
}
}
foreach($longestShippers as $leg => $shipper){
$shipper["totalCost"] = $shipper["days"] * $shipper["cost"];
//print $shipper["totalCost"] . " <?> " . $worstShipper["totalCost"] . ";";
if(empty($worstShipper) || $shipper["totalCost"] > $worstShipper["totalCost"]){
$worstShipper = $shipper;
$worstShipperLeg = $leg;
}
}
//print "worst shipper is: shippers[$worstShipperLeg][{$worstShipper['name']}]" . $shippers[$worstShipperLeg][$worstShipper["name"]]["days"];
unset($shippers[$worstShipperLeg][$worstShipper["name"]]);
print "<h1>Next:</h1><pre>";
print_r($shippers);
print "</pre><br />";
foreach($shippers as $legName => $leg){
//find cheapest shipment for each leg (in terms of cost)
unset($cheapestShippers[$legName]);
$lowestCost = null;
foreach($leg as $shipperName => $shipper){
if(empty($lowestCost) || $shipper["cost"] < $lowestCost){
$cheapestShippers[$legName]["days"] = $shipper["days"];
$cheapestShippers[$legName]["cost"] = $shipper["cost"];
$cheapestShippers[$legName]["name"] = $shipperName;
$lowestCost = $shipper["cost"];
}
}
//recalculate days and see if we are under max days...
$totalDays += $cheapestShippers[$legName]['days'];
}
//print "<h2>totalDays: $totalDays</h2>";
}
print "<h1>Chosen Shippers:</h1><pre>";
print_r($cheapestShippers);
print "</pre>";
I think I may have to actually do some sort of thing where I literally make each combination one by one (with a series of loops) and add up the total "score" of each, and find the best one....
EDIT:
To clarify, this isn't a "homework" assignment (I'm not in school). It is part of my current project at work.
The requirements (as always) have been constantly changing. If I were given the current constraints at the time I began working on this problem, I would be using some variant of the A* algorithm (or Dijkstra's or shortest path or simplex or something). But everything has been morphing and changing, and that brings me to where I'm at right now.
So I guess that means I need to forget about all the crap I've done to this point and just go with what I know I should go with, which is a path finding algorithm.
A: You could alter one of the shortest-path algorithms, like Dijkstra's, to weight each path by cost but also keep track of time, and stop going along a certain path if the time exceeds your threshold. That should find the cheapest option that gets you in under your threshold.
A: Sounds like what you have is called a "linear programming problem". It also sounds like a homework problem, no offense.
The classical solution to a LP problem is called the "Simplex Method". Google it.
However, to use that method, you must have the problem correctly formulated to describe your requirements.
Still, it may be possible to enumerate all possible paths, since you have such a small set. Such a thing won't scale, though.
A: Sounds like a job for Dijkstra's algorithm:
Dijkstra's algorithm, conceived by Dutch computer scientist Edsger Dijkstra in 1959, is a graph search algorithm that solves the single-source shortest path problem for a graph with non-negative edge path costs, outputting a shortest path tree. This algorithm is often used in routing.
There are also implementation details in the Wikipedia article.
A: If I knew I only had to deal with 5 cities, in a predetermined order, and that there were only 3 routes between adjacent cities, I'd brute force it. No point in being elegant.
If, on the other hand, this were a homework assignment and I were supposed to produce an algorithm that could actually scale, I'd probably take a different approach.
A: This is a knapsack problem. The weights are the days in transit, and the profit should be $5000 - cost of leg. Eliminate all negative costs and go from there!
A: As Baltimark said, this is basically a linear programming problem. If only the coefficients for the shippers (1 for included, 0 for not included) were not (binary) integers for each leg, this would be more easily solvable. Now you need to find some (binary) integer linear programming (ILP) heuristics, as the problem is NP-hard.
See Wikipedia on integer linear programming for links; on my linear programming course we used at least Branch and bound.
Actually, now that I think of it, this special case is solvable without actual ILP, as the number of days does not matter as long as it is <= 5. Start by choosing the cheapest carrier for the first leg (Conway 5:1000). Next you again choose the cheapest (Conway 3:1000), resulting in 8 days and 2000 currency units, which takes too long, so we abort that. Trying the other options too, we see that they all result in more than 5 days, so we go back to the first leg, try the second cheapest (FedEx 2:3000), then UPS for the second leg and FedEx for the last. This gives us a total of 4 days and 9000 currency units.
We could then use this cost to prune other searches in the tree: any branch whose partial cost already exceeds the best we've found can be left unsearched from that point on.
This only works as long as we know that searching deeper in the subtree cannot produce a better result, which holds here because costs cannot be negative.
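To make that concrete, here is a minimal sketch of the brute-force search with exactly this pruning, written in C# for brevity (the data mirrors the arrays in the question; the names and structure are illustrative, and the same shape ports directly to PHP):

using System;

class ShippingSearch
{
    // (carrier, days, cost) options per leg, taken from the question's data
    static readonly (string Name, int Days, int Cost)[][] Legs =
    {
        new[] { ("UPS", 1, 5000), ("FedEx", 2, 3000), ("Conway", 5, 1000) },
        new[] { ("UPS", 1, 3000), ("FedEx", 2, 3000), ("Conway", 3, 1000) },
        new[] { ("UPS", 1, 4000), ("FedEx", 1, 3000), ("Conway", 2, 5000) },
    };

    const int MaxDays = 5;
    static int bestCost = int.MaxValue;
    static string[] bestChoice;

    static void Search(int leg, int days, int cost, string[] chosen)
    {
        if (days > MaxDays || cost >= bestCost) return; // prune hopeless branches
        if (leg == Legs.Length)
        {
            bestCost = cost;
            bestChoice = (string[])chosen.Clone();
            return;
        }
        foreach (var option in Legs[leg])
        {
            chosen[leg] = option.Name;
            Search(leg + 1, days + option.Days, cost + option.Cost, chosen);
        }
    }

    static void Main()
    {
        Search(0, 0, 0, new string[Legs.Length]);
        if (bestChoice == null)
            Console.WriteLine("No combination meets the deadline.");
        else
            Console.WriteLine(string.Join(" > ", bestChoice) + " = " + bestCost);
    }
}

With three legs and three carriers this is only 27 combinations, so the pruning is overkill here, but the structure is the branch-and-bound idea described above.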
Hope this rambling helped a bit :).
A: I think that Dijkstra's algorithm is for finding a shortest path.
cmcculloh is looking for the minimal cost subject to the constraint that he gets it there in 5 days.
So merely finding the quickest way won't get him there cheapest, and getting there cheapest won't get it there in the required amount of time.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/14884",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10"
} |
Q: Improving Your Build Process Or, actually establishing a build process when there isn't much of one in place to begin with.
Currently, that's pretty much the situation my group faces. We do web-app development primarily (but no desktop development at this time). Software deployments are ugly and unwieldy even with our modest apps, and we've had far too many issues crop up in the two years I have been a part of this team (and company). It's past time to do something about that, and the upshot is that we'll be able to kill two Joel Test birds with one stone (daily builds and one-step builds, neither of which exists in any form whatsoever).
What I'm after here is some general insight on the kinds of things I need to be doing or thinking about, from people who have been in software development for longer than I have and also have bigger brains. I'm confident that will be most of the people currently posting in the beta.
Relevant Tools:
Visual Build
Source Safe 6.0 (I know, but I can't do anything about whether or not we use Source Safe at this time. That might be the next battle I fight.)
Tentatively, I've got a Visual Build project that does this:
*
*Get source and place in local directory, including necessary DLLs needed for project.
*Get config files and rename as needed (we're storing them in a special sub directory that isn't part of the actual application, and they are named according to use).
*Build using Visual Studio
*Precompile using command line, copying into what will be a "build" directory
*Copy to destination.
*Get any necessary additional resources - mostly things like documents, images, and reports that are associated with the project (and put into directory from step 5). There's a lot of this stuff, and I didn't want to include it previously. However, I'm going to only copy changed items, so maybe it's irrelevant. I wasn't sure whether I really wanted to include this stuff in earlier steps.
I still need to coax some logging out of Visual Build for all of this, but I'm not at a point where I need to do that yet.
Does anyone have any advice or suggestions to make? We're not currently using a Deployment Project, I'll note. It would remove some of the steps necessary in this build I presume (like web.config swapping).
A: I have a set of Powershell scripts that do all of this for me.
Script 1: Build - this one is simple, it is mostly handled by a call to msbuild, and also it creates my database scripts.
Script 2: Package - This one takes various arguments to package a release for various environments, such as test, and subsets of the production environment, which consists of many machines.
Script 3: Deploy - This is run on each individual machine from within the folder created by the Package script (the Deploy script is copied in as a part of packaging)
From the deploy script, I do sanity checks on things like the machine name so things don't accidentally get deployed to the wrong place.
For web.config files, I use the
<appSettings file="Local.config">
feature to have overrides that are already on the production machines, and they are read-only so they don't accidentally get written over. The Local.config files are not checked in, and I don't have to do any file switching at build time.
[Edit] The equivalent of appSettings file= for a config section is configSource="Local.config"
A: We switched from using a perl script to MSBuild two years ago and haven't looked back.
Building visual studio solutions can be done by just specifying them in the main xml file.
For anything more complicated (getting your source code, executing unit tests, building install packages, deploying web sites) you can just create a new class in .NET deriving from Task that overrides the Execute function, and then reference this from your build XML file.
There is a pretty good introduction here:
introduction
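As a rough illustration of such a custom task (the class name and its Name property are made up for this sketch):

using Microsoft.Build.Framework;
using Microsoft.Build.Utilities;

// Hypothetical task: logs a greeting during the build.
public class HelloTask : Task
{
    [Required]
    public string Name { get; set; }

    public override bool Execute()
    {
        Log.LogMessage(MessageImportance.High, "Hello, " + Name + "!");
        return true; // returning false fails the build
    }
}

You would then point a <UsingTask> element in your build file at the compiled assembly and invoke HelloTask like any built-in task.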
A: When taking on a project that has never had an automated build process, it is easier to take it in steps. Do not try to swallow too much at one time; otherwise it can feel overwhelming.
*
*First get your code compiling with one step using an automated build program (i.e. nant/msbuild). I am not going to debate which one is better. Find one that feels comfortable to you and use it. Have the build scripts live with the project in source control.
*Figure out how you want your automated build to be triggered, whether it is hooking it up to CruiseControl or running a nightly build task using Scheduled Tasks. CruiseControl or TeamCity is probably the best choice for this, because they include a lot of tools you can use to make this step easier. CruiseControl is free, and TeamCity is free up to a point, beyond which you might have to pay for it depending on how big the project is.
*Ok, by this point you will be pretty comfortable with the tools. Now you are ready to add more tasks based on what you want to do for testing, deployment, and etc...
Hope this helps.
A: I've only worked on a couple of .Net projects (I've done mostly Java) but one thing I would recommend is using a tool like NAnt. I have a real problem with coupling my build to the IDE, it ends up making it a real pain to set up build servers down the road since you have to go do a full VS install on any box that you want to build from in the future.
That being said, any automated build is better than no automated build.
A: Our build process is a bunch of homegrown Perl scripts that have evolved over a decade or so, nothing fancy but it gets the job done. One script gets the latest source code, another builds it, a third stages it to a network location. We do desktop application development so our staging process also builds install packages for testing and eventually shipping to customers.
I suggest you break it down to individual steps because there will be times when you want to rebuild but not get latest, or maybe just need to re-stage. Our scripts can also handle building from different branches so consider that also with whatever solution you develop.
Finally we have a dedicated build machine that rebuilds the trunk and maintenance branches every night and sends out an email with any problems or if it completed successfully.
A: One thing I would suggest: ensure your build script (and installer project, if relevant in your case) is in source control. I tend to have a very simple script that just checks out / gets latest on the "main" build script and then launches it.
I say this b/c I see teams just running the latest version of the build script on the server, but either never putting it in source control or, when they do, only checking it in on a random basis. If you make the build process "get" from source control, it will force you to keep the latest and greatest build script in there.
A: Our build system is a makefile (or two). It has been rather fun getting it working, as it needs to run on both Windows (as a build task under VS) and under Linux (as a normal "make bla" task). The really fun thing is that the build gets the actual file list from a .csproj file, builds (another) makefile from that, and runs that. In the process, the makefile actually calls itself.
If that thought doesn't scare the reader, then (either they are crazy or) they can probably get make + "your favorite string mangler" to work for them.
A: We use UppercuT.
UppercuT uses NAnt to build and it is extremely easy to use.
http://code.google.com/p/uppercut/
Some good explanations here: UppercuT
| {
"language": "en",
"url": "https://stackoverflow.com/questions/14893",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "27"
} |
Q: Tree-Based (vs. HTML-Based) Web Framework? Anyone who writes client-side JavaScript is familiar with the DOM - the tree structure that your browser references in memory, generated from the HTML it got from the server. JavaScript can add, remove and modify nodes on the DOM tree to make changes to the page. I find it very nice to work with (browser bugs aside), and very different from the way my server-side code has to generate the page in the first place.
My question is: what server-side frameworks/languages build a page by treating it as a DOM tree from the beginning - inserting nodes instead of echoing strings? I think it would be very helpful if the client-side and server-side code both saw the page the same way. You could certainly hack something like this together in any web server language, but a framework dedicated to creating a page this way could make some very nice optimizations.
Open source, being widely deployed and having been around a while would all be pluses.
A: You're describing Rhino on Rails, which is not out but will be soon.
Similarly, Aptana Jaxer, however RnR will include an actual framework (Rails) whereas Jaxer is just the server technology.
A: Aptana's Jaxer AJAX server might be something for you to check out, as it uses JS server-side, as well.
That being said, I would argue that you're better off not generating your markup with print statements or echoes, but rather using templates and hooking in your dynamic content.
A: Jaxer is server-side JavaScript + the DOM. You can integrate Jaxer with other languages by post-processing their output.
Also, in Java, PHP, etc., you can use XPath to manipulate the DOM.
A: I see where you're coming from, but it's all a bit moot, isn't it? You can't send anything but rendered content to the browser, and you have to do it all in one go (AJAX aside). There's no value in what you are suggesting (from what I can see), as even if you build it tree-like, you're still only building a page which is sent wholesale to the client.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/14909",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: PDF generation from XHTML in a LAMP environment Can anyone recommend a good server-side PDF generation tool that would work in a Linux environment. I want easy as possible, pass it a XHTML file (with images) and have it generate a PDF from the rendered source.
I don't have a massive budget, but anything under $1000 should be alright.
Andrew
A: It sounds like FPDF might be of help...
Also, the creation of PDF documents is called "PDF printing". I believe that might help you find other resources.
A: You might want to take a look at FOP, which stands for Formatting Objects Processor. It can generate PDF files on Linux since it is Java-based. From their site:
Apache FOP (Formatting Objects Processor) is a print formatter driven
by XSL formatting objects (XSL-FO) and an output independent formatter.
It is a Java application that reads a formatting object (FO) tree
and renders the resulting pages to a specified output. Output formats
currently supported include PDF, PS, PCL, AFP, XML (area tree
representation), Print, AWT and PNG, and to a lesser extent, RTF and
TXT. The primary output target is PDF.
You can find it here
A: I used HTMLDoc about 8 years ago and it did a good job of turning HTML tables with some basic formatting into a decent PDF report. There also seems to be an open source version.
A: I did some searching, what about tbookdtd?
It's downloadable here, but it hasn't been active since 2005. It appears to convert the XML to LaTeX, and then to PDF.
A: Have you investigated PHP's documentation? There's also PHP FAQ with a few different links. PHP primarily supports PDFlib.
A: I recently came across dompdf, which I have used to convert pages created in HTML into PDF documents. It uses PHP5 (assuming using PHP does not bother you). This also assumes that you don't want to statically create HTML files on the file system and then convert them using some kind of command-line tool.
One problem I found with dompdf is that you don't get a whole lot of configuration options natively, but it is open-source and doesn't seem to be too large, so you could probably jury-rig something up pretty easily.
A: If you do have a budget, take a look at OpenEdge. I know that they did exactly what you want for us: a Linux-based PDF generation system.
I'd ask what they can do for you. Val Cassidy is the person's name.
BTW: I'm not getting anything for this, and I don't even work for the bespoke company anymore, nor for OpenEdge ...
A: You could take a look at using OpenOffice via the OpenOffice API to load your XHTML document and export a PDF version. There is a bit of a learning curve to using the OpenOffice API but it is very powerful and can be run in server mode on systems without any graphical interface. It performs well - we've used it on some internal projects.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/14911",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: Can I generate ASP.NET MVC routes from a Sitemap? I'm thinking of learning the ASP.NET MVC framework for an upcoming project. Can I use the advanced routing to create long URLs based on the sitemap hierarchy?
Example navigation path:
Home > Shop > Products > Household > Kitchen > Cookware > Cooksets > Nonstick
Typical (I think) MVC URL:
http://example.com/products/category/NonstickCooksets
Desired URL:
http://example.com/shop/products/household/kitchen/cookware/cooksets/nonstick
Can I do this?
A: The MVC routing lets you define pretty much any structure you want; you just need to define what each of the pieces means semantically. You can have bits that are "hard-coded", like "shop/products", and then define the rest as variables, "{category}/{subcategory}/{speciality}", etc.
You can also define several routes that all map to the same end point if you like. Basically, when a URL comes into your MVC app, it goes through the routing table until it finds a pattern that matches, fills in the variables and passes the request off to the appropriate controller for processing.
While the default route is a simple Controller, Action, Id kind of setup, that's certainly not the extent of what you can do.
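For example, a fixed-depth route for the URL above could be registered something like this (using the MapRoute helper from shipped versions of ASP.NET MVC; the route, controller, and action names here are illustrative):

routes.MapRoute(
    "ProductBrowse",                                         // route name
    "shop/products/{category}/{subcategory}/{speciality}",   // URL pattern
    new { controller = "Shop", action = "Products" }         // defaults
);

The answer below shows the catch-all variant for unlimited depth.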
A: Zack, if I understand right, you want unlimited depth for the subcategories. No biggie; this has been solved since MVC Preview 3 (I think 3 or 4).
Just define a route like
"{controller}/{action}/{*categoryPath}"
for a URL such as:
http://example.com/shop/products/household/kitchen/cookware/cooksets/nonstick
you should have a ShopController with a Products action :
public class ShopController : Controller
{
...
public ActionResult Products(string categoryPath)
{
// the categoryPath value would be
// "household/kitchen/cookware/cooksets/nonstick". Process it (for ex. split it)
// and then decide what you do..
return View();
    }
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/14923",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: Parameter Binding: What happens under the hood? .NET, Java and other high level database API's in various language often provide techniques known as prepared statements and parameter binding as opposed to sending plain text commands to the Database server. What I would like to know is what happens when you execute a statement like this:
SqlCommand cmd = new SqlCommand("GetMemberByID");
cmd.CommandType = CommandType.StoredProcedure;
SqlParameter param = new SqlParameter("@ID", memberID);
param.DbType = DbType.Int32;
cmd.Parameters.Add(param);
I know this is a best practice. SQL injection attacks are minimized this way. But what exactly happens under the hood when you execute these statements? Is the end result still a SQL safe string? If not, what is the end result? And is this enough to prevent SQL injection attacks?
A: The MySQL manual page on prepared statements provides lots of information (which should apply to any other RDBMS).
Basically, your statement is parsed and processed ahead of time, and the parameters are sent separately instead of being handled along with the SQL code. This eliminates SQL-injection attacks because the SQL is parsed before the parameters are even set.
A: If you're using MS SQL, load up the profiler and you'll see what SQL statements are generated when you use parameterised queries. Here's an example (I'm using Enterprise Library 3.1, but the results are the same using SqlParameters directly) against SQL Server 2005:
string sql = "SELECT * FROM tblDomains WHERE DomainName = @DomName AND DomainID = @Did";
Database db = DatabaseFactory.CreateDatabase();
using(DbCommand cmd = db.GetSqlStringCommand(sql))
{
db.AddInParameter(cmd, "DomName", DbType.String, "xxxxx.net");
db.AddInParameter(cmd, "Did", DbType.Int32, 500204);
DataSet ds = db.ExecuteDataSet(cmd);
}
This generates:
exec sp_executesql N'SELECT * FROM tblDomains WHERE DomainName = @DomName AND DomainID = @Did',
N'@DomName nvarchar(9),
@Did int',
@DomName=N'xxxxx.net',
@Did=500204
You can also see here, if quotation characters were passed as parameters, they are escaped accordingly:
db.AddInParameter(cmd, "DomName", DbType.String, "'xxxxx.net");
exec sp_executesql N'SELECT * FROM tblDomains WHERE DomainName = @DomName AND DomainID = @Did',
N'@DomName nvarchar(10),
@Did int',
@DomName=N'''xxxxx.net',
@Did=500204
A: In layman's terms: if a prepared statement is sent, then the DB will use a cached plan if one is available; it does not have to recreate a plan every time the query comes over, when only the values of the parameters have changed. This is very similar to how procs work; the additional benefit with procs is that you can grant permission through the procs only, and not to the underlying tables at all.
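As a rough client-side sketch of that reuse (the table and column names here are hypothetical):

using System;
using System.Data;
using System.Data.SqlClient;

class PreparedStatementDemo
{
    static void Main()
    {
        string connectionString = "..."; // supply your own
        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand(
            "SELECT Name FROM Members WHERE ID = @ID", conn))
        {
            conn.Open();
            cmd.Parameters.Add("@ID", SqlDbType.Int);
            cmd.Prepare(); // compile once; the plan is reused below

            foreach (int id in new[] { 1, 2, 3 })
            {
                cmd.Parameters["@ID"].Value = id; // only the value changes
                using (var reader = cmd.ExecuteReader())
                {
                    while (reader.Read())
                        Console.WriteLine(reader.GetString(0));
                }
            }
        }
    }
}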
| {
"language": "en",
"url": "https://stackoverflow.com/questions/14934",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
} |
Q: How to Disable Alt + F4 closing form? What is the best way to disable Alt + F4 in a c# win form to prevent the user from closing the form?
I am using a form as a popup dialog to display a progress bar and I do not want the user to be able to close it.
A: This is a hack to disable Alt + F4.
private void test_FormClosing(object sender, FormClosingEventArgs e)
{
if ((this.ModifierKeys & Keys.Alt) == Keys.Alt) // Keys.F4 is not a modifier key, so only the Alt check can ever match
{
e.Cancel = true;
}
}
A: If you look at the value of FormClosingEventArgs e.CloseReason, it will tell you why the form is being closed. You can then decide what to do, the possible values are:
Member name - Description
None - The cause of the closure was not defined or could not be determined.
WindowsShutDown - The operating system is closing all applications before shutting down.
MdiFormClosing - The parent form of this multiple document interface (MDI) form is closing.
UserClosing - The user is closing the form through the user interface (UI), for example by clicking the Close button on the form window, selecting Close from the window's control menu, or pressing ALT+F4.
TaskManagerClosing - The Microsoft Windows Task Manager is closing the application.
FormOwnerClosing - The owner form is closing.
ApplicationExitCall - The Exit method of the Application class was invoked.
A: Subscribe FormClosing event
private void Form1_FormClosing(object sender, FormClosingEventArgs e)
{
e.Cancel = e.CloseReason == CloseReason.UserClosing;
}
Only one line in the method body.
A: This does the job:
bool myButtonWasClicked = false;
private void Exit_Click(object sender, EventArgs e)
{
myButtonWasClicked = true;
Application.Exit();
}
private void Form1_FormClosing(object sender, FormClosingEventArgs e)
{
if (myButtonWasClicked)
{
e.Cancel = false;
}
else
{
e.Cancel = true;
}
}
A: I believe this is the right way to do it:
protected override void OnFormClosing(FormClosingEventArgs e)
{
switch (e.CloseReason)
{
case CloseReason.UserClosing:
e.Cancel = true;
break;
}
base.OnFormClosing(e);
}
A: Would FormClosing be called even when you're programmatically closing the window? If so, you'd probably want to add some code to allow the form to be closed when you're finished with it (instead of always canceling the operation).
A: Note that it is considered bad form for an application to completely prevent itself from closing. You should check the event arguments for the Closing event to determine how and why your application was asked to close. If it is because of a Windows shutdown, you should not prevent the close from happening.
A: You could handle the FormClosing event and set FormClosingEventArgs.Cancel to true.
A:
I am using a form as a popup dialog to display a progress bar and I do not want the user to be able to close it.
If the user is determined to close your app (and knowledgeable) enough to press alt+f4, they'll most likely also be knowledgeable enough to run task manager and kill your application instead.
At least with alt+f4 your app can do a graceful shutdown, rather than just making people kill it. From experience, people killing your app means corrupt config files, broken databases, half-finished tasks that you can't resume, and many other painful things.
At least prompt them with 'are you sure' rather than flat out preventing it.
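A minimal sketch of that "are you sure" approach, assuming you only want to intercept interactive closes (the message text is illustrative):

private void Form1_FormClosing(object sender, FormClosingEventArgs e)
{
    // Only question the user; let Windows shutdown etc. proceed untouched.
    if (e.CloseReason == CloseReason.UserClosing &&
        MessageBox.Show("A task is still running. Really close?",
                        "Please confirm",
                        MessageBoxButtons.YesNo) == DialogResult.No)
    {
        e.Cancel = true;
    }
}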
A: This does the job:
private void Form1_FormClosing(object sender, FormClosingEventArgs e)
{
e.Cancel = true;
}
Edit: In response to pix0r's concern - yes, you are correct that you will not be able to programmatically close the app. However, you can simply remove the event handler for the form_closing event before closing the form:
this.FormClosing -= new System.Windows.Forms.FormClosingEventHandler(this.Form1_FormClosing);
this.Close();
A: Hide close button on form by using the following in constructor of the form:
this.ControlBox = false;
| {
"language": "en",
"url": "https://stackoverflow.com/questions/14943",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "83"
} |
Q: .NET 3.5 Service Pack 1 causes 404 pages on ASP.NET Web App I have a problem with IIS 6.0 ceasing to work for an ASP.NET application after installing Service Pack 1 for .NET 3.5.
I have 2 identical virtual dedicated servers. Installing SP1 on the first had no adverse effect. Installing it on the second caused ASP.NET pages to start returning 404 page not found.
Static .html pages work okay on both servers.
Has anybody else experienced this?
A: This is a broad problem, so let's start by asking some troubleshooting questions:
*
*Based on your description, the ASP.NET runtime is not catching your request and processing the aspx files. You may need to register the asp.net pipeline with IIS again using ASPNET_REGIIS -i.
*Have you made sure that the app_offline.htm file has been removed
from the directory of the application?
I have had this happen before after an
update.
*Have you set up Fiddler, for instance, to follow the request to see what
exactly is being requested?
*Make sure ASP.NET is enabled in the IIS Administration Console under "Web
Service Extensions." Make sure everything is set to allowed for your different versions of the framework.
Well, let's start with those and hopefully we can guide you to the problem.
A: I've seen various people with this problem recently. This link might help.
And this one.
And a few others.
A: Is CustomErrors in your web.config set to On or RemoteOnly? If so, what do you get when you change it to Off?
A: I have not had this exact error with .NET 3.5 SP1, but have seen similar occur in the past. Typically it can be resolved by opening a command prompt, going to the appropriate .NET folder and running ASPNET_REGIIS -i. In the case of .NET 3.5 there wasn't an update to the main bits of the framework, so you'd actually go to the .NET 2.0 folder, which on my machine can be found at:
\Windows\Microsoft.Net\framework\v2.0.50727
Running the ASPNET_REGIIS -i will re-register all the ASP.NET libraries with IIS, and should be the equivalent of a re-install of the framework on a given machine (as far as IIS is concerned)
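For reference, on a typical machine that boils down to something like this (the path varies with the framework version installed):

cd %WINDIR%\Microsoft.NET\Framework\v2.0.50727
aspnet_regiis.exe -i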
A: Just to clarify: the last (4th) point given by Dale was the problem. During the installation of SP1, the Status for ASP.NET and WebDAV became set to Prohibited under Web Service Extensions.
Why the installation of SP1 changed this setting on one server and not the other is a mystery that I wouldn't mind (but not expect) an answer to...
The second link provided by CodingTheWheel also had the answer so I'm also going to mark this as an answer.
A: No one has suggested it yet, so I'll point out the trivial solution:
Have you already uninstalled the Service Pack and re-installed it (or the whole framework)?
Edit: @Kev:
Easy explanation: he said the update works on one machine, but not on the other. I had similar problems in the past, and re-installing helped solve some of them. And it is trivial to do.
That's my approach:
1. trivial
2. easy
3. headache
You are right that on production systems you must be careful, but that's his decision. And because it is a virtual server, maybe it is easy for him to copy it and try it as a test environment first.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/14963",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: Are there any suggestions for developing a C# coding standards / best practices document? I'm a recent AI graduate (circa 2 years) working for a modest operation. It has fallen to me (primarily as I'm the first 'adopter' in the department) to create a basic (read useful?) C# coding standards document.
I think I should explain that I'm probably the most junior software engineer going, but I'm looking forward to this task as hopefully I might actually be able to produce something half usable. I've done a pretty extensive search of the Internet and read articles on what a coding standards document should / should not contain. This seems like a good as place as any to ask for some suggestions.
I realise that I am potentially opening a door to a whole world of disagreement about 'the best way to do things'. I both understand and respect the undeniable fact that each programmer has a preferred method of solving each individual task; as a result, I'm not looking to write anything so draconianly proscriptive as to stifle personal flair, but to try to establish a general methodology and agreed standards (e.g. naming conventions) to help make individuals' code more readable.
So here goes .... any suggestions? Any at all?
A: The other posters have pointed you at the baseline; all I would add is: make your document short, sweet, and to the point, employing a heavy dose of Strunk and White to distinguish the "must haves" from the "it would be nice ifs".
The problem with coding standards documents is that nobody really reads them like they should, and when they do read them, they don't follow them. The likelihood of such a document being read and followed varies inversely with its length.
I agree FxCop is a good tool but too much of this can take all the fun right out of programming, so be careful.
A: Never write your own coding standards; use the MS ones (or the Sun ones, or ..., as appropriate for your language). The clue is in the word standard: the world would be a much easier place to code in if each organization hadn't decided to write its own. Who really thinks learning a new set of 'standards' each time you change teams/projects/roles is a good use of anyone's time?
The most you should ever do is summarize the critical points but I'd advise against doing even that because what is critical varies from person to person.
Two other points I'd like to make on coding standards
*
*Close is good enough - Changing code to follow coding standards to the letter is a waste of time as long as the code is close enough.
*If you're changing code you didn't write follow the 'local coding standards', i.e. make your new code look like the surrounding code.
These two points are the reality to my wish that everybody would write code that looked the same.
A: I found the following documentation very helpful and concise. It comes from the idesign.net site and it is authored by Juval Lowy
C# Coding Standard
NB: the above link is now dead. To get the .zip file you need to give them your email address (but they won't use it for marketing... honestly) Try here
A: I would add Code Complete 2 to the list (I know Jeff is kind of a fan here)... If you are a junior developer, the book comes in handy to set up your mind in a way that lays the foundation for the best code-writing and software-building practices there are.
I have to say that I came to it a bit late in my career, but it rules a lot of the ways I think about coding and framework development in my professional life.
It's worth checking out ;)
A: I've just started at a place where the coding standards mandate the use of m_ for member variables, p_ for parameters and prefixes for types, such as 'str' for strings.
So, you might have something like this in the body of a method:
m_strName = p_strName;
Horrible. Really horrible.
A: Microsoft's own rules are an excellent starting point. You can enforce them with FxCop.
A: I would be tempted to enforce Microsoft's StyleCop as the standard. It can be enforced at build time, but if you have legacy code, then just enforce StyleCop on new code.
http://code.msdn.microsoft.com/sourceanalysis
Eventually it will have a refactor option to cleanup code.
http://blogs.msdn.com/sourceanalysis/
A: Personally I like the one that IDesign has put together. But that's not why I'm posting...
The tricky bit at my company was taking all the different languages into account. And I know my company isn't alone on this. We use C#, C, assembly (we make devices), SQL, XAML, etc. Although there will be some similarities in standards, each is usually handled differently.
Also, I believe that higher level standards have a greater impact on the quality of the final product. For example: how and when to use comments, when exceptions are mandatory (e.g. user initiated events), whether (or when) to use exceptions vs. return values, what is the objective way to determine what should be controller code vs presentation code, etc. Don't get me wrong, low level standards are also needed (formatting is important to readability!) I just have a bias towards overall structure.
Another piece to keep in mind is buy-in and enforcement. Coding standards are great. But if nobody agrees with them and (probably more importantly) no one enforces them then it's all for naught.
A: As I wrote both the one published for Philips Medical Systems and the one on http://csharpguidelines.codeplex.com, I might be a bit biased, but I have more than 10 years of writing, maintaining and promoting coding standards. I've tried to write the one on CodePlex with differences in opinion in mind, and spent the majority of the introduction on how to deal with that in your particular organisation. Read it and provide me with feedback...
A: IDesign has a C# coding standards document that is commonly used. Also see the Framework Design Guidelines 2nd Ed.
A: Ironically, setting the actual standards is likely to be the easy part.
My first suggestion would be to elicit suggestions from the other engineers about what they feel should be covered, and what guidelines they feel are important. Enforcing any kind of guidelines requires a degree of buy-in from people. If you suddenly drop a document on them that specifies how to write code you'll encounter resistance, whether you're the most junior or senior guy.
After you have a set of proposals then send them out to the team for feedback and review. Again, get people to all buy into them.
There may already be informal coding practices that are adopted (e.g. prefixing member variables, camel-case function names). If these exist, and most code conforms to them, then it will pay to formalize their use. Adopting a contrary standard is going to cause more grief than it's worth, even if it's something generally recommended.
It's also worth considering refactoring existing code to meet the new coding-standards. This can seem like a waste of time, but having code that does not meet the standards can be counter-productive as you will have a mish-mash of different styles. It also leaves people in a dilemma whether code in a certain module should conform to the new standard or follow the existing code style.
A: SSW Rules
It includes some C# standards + a whole lot more... primarily focused on Microsoft developers.
A: I have always used Juval Lowy's pdf as a reference when doing coding standards / best practices internally. It follows FxCop/Source Analysis very closely, which is another invaluable tool to make sure that the standard is being followed. Between these tools and references, you should be able to come up with a nice standard that all your developers won't mind following, and be able to enforce it.
A: We start with
*
*Microsoft's .NET guidelines: http://msdn.microsoft.com/en-us/library/ms229042.aspx (link updated for .NET 4.5)
*Microsoft's C# guidelines: http://blogs.msdn.com/brada/articles/361363.aspx.
and then document the differences from and additions to that baseline.
A:
You are most likely being set up to fail. Welcome to the industry.
I disagree - so long as he creates the document, the worst that can happen is that it gets forgotten by everyone.
If other people have issues with the content, then you can ask them to update it to show what they'd prefer. That way it's off your plate, and the others have the responsibility to justify their changes.
A: I have recently found Encodo C# Handbook, which includes ideas from many other sources (IDesign, Philips, MSDN).
Another source may be Professional C#/VB .NET Coding Guidelines.
A: I'm a big fan of the Francesco Balena book "Practical Guidelines and Best Practices for VB and C# Developers".
It's very detailed and covers all the essential topics. It doesn't just give you the rule, but also explains the reason behind it, and even provides an anti-rule where there could be two opposing best practices. The only downside is that it was written for .NET 1.1 developers.
A: See this:
http://www.noesispedia.com/post/2008/11/28/C-Coding-Guidelines-and-Best-Practices.aspx.
A: Our entire coding standard reads roughly, "Use StyleCop."
A: I have to suggest the dotnetspider.com document.
It is a great and detailed document that is useful anywhere.
A: I've used Juval's before, and it's thorough if not overkill, but I'm lazy and now just conform to the will of ReSharper.
A: You can check out this: Top 7 Coding Standards & Guideline Documents For C#/.NET Developers, http://www.amazedsaint.com/2010/11/top-6-coding-standards-guideline.html. Hope this helps.
A: I think I echo the other comments here that the MS guidelines already linked are an excellent starting point. I model my code largely on those.
Which is interesting because my manager has told me in the past that he is not too keen on them :D
You have a fun task ahead of you my friend. Best of luck, and please ask if you need anything more :)
A: The standard from Philips Medical Systems is well written, and mostly follows Microsoft guidelines: www.tiobe.com/content/paperinfo/gemrcsharpcs.pdf
My standards are based on this with a few tweaks, and some updates for .NET 2.0 (the Philips standard is written for .NET 1.x so is a bit dated).
A: I also follow ReSharper.
Also the guidelines mentioned on Scott Guthrie's blog:
http://weblogs.asp.net/scottgu/archive/2007/10/08/october-8th-links-asp-net-asp-net-ajax-silverlight-and-net.aspx
And
http://csharpguidelines.codeplex.com/releases/view/46280
A: In the code I write I usually follow .NET Framework Design Guidelines for publicly exposed APIs and Mono Coding Guidelines for private member casing and indentation. Mono is an open source implementation of .NET, and I think those guys know their business.
I hate how Microsoft code wastes space:
try
{
if (condition)
{
Something(new delegate
{
SomeCall(a, b);
});
}
else
{
SomethingElse();
Foobar(foo, bar);
}
}
catch (Exception ex)
{
Console.WriteLine("Okay, you got me");
}
What you might find strange in the Mono guidelines is that they use 8-space tabs. However, after some practice, I found that it actually helps me write less tangled code by enforcing a kind of indentation limit.
I also love how they put a space before the opening parenthesis.
try {
if (condition) {
Something (new delegate {
SomeCall (a, b);
});
} else {
SomethingElse ();
Foobar (foo, bar);
}
} catch (Exception ex) {
Console.WriteLine ("Okay, you got me");
}
But please, don't enforce anything like that if your coworkers dislike it (unless you are willing to contribute to Mono ;-)
| {
"language": "en",
"url": "https://stackoverflow.com/questions/14967",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "159"
} |
Q: Executing JavaScript to Render HTML for Server-Side Caching There are lots of widgets provided by sites that are effectively bits of JavaScript that generate HTML through DOM manipulation or document.write(). Rather than slow the browser down even more with additional requests and trust yet another provider to be fast, reliable and not change the widget output, I want to execute* the JavaScript to generate the rendered HTML, and then save that HTML source.
Things I've looked into that seem unworkable or way too difficult:
*
*The Links Browser (not lynx!)
*Headless use of Xvfb plus Firefox plus Greasemonkey (yikes)
*The all-Java browser toolkit Cobra (the best bet!)
Any ideas?
** Obviously you can't really execute the JavaScript completely, as it doesn't necessarily have an exit path, but you get the idea.
A: Wikipedia's "Server-side JavaScript" article lists numerous implementations, many of which are based on Mozilla's Rhino, a JavaScript engine written in Java, or its cousin SpiderMonkey (the same engine as found in Firefox and other Gecko-based browsers). In particular, something simple like mod_js for Apache may suit your needs.
A: If you're just using plain JS, Rhino should do the trick. But if the JS code is actually calling DOM methods and so on, you're going to need a full-blown browser. Crowbar might help you.
Is this really going to make things faster for users without causing compatibility issues?
A: There's John Resig's project Bringing the Browser to the Server: "browser/DOM environment, written in JavaScript, that runs on top of Rhino; capable of running jQuery, Prototype, and MochiKit (at the very least)."
| {
"language": "en",
"url": "https://stackoverflow.com/questions/15007",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "15"
} |
Q: Web Application Testing for .Net (WatiN Test Recorder) I've been using WatiN as a testing tool for my current project. Besides the minor bugs with the Test Recorder, I've been able to use it and automate a lot of my tests in conjunction with NUnit. Anyone else out there with experience with different tools they might suggest?
A: I have used Selenium before and hooked it into CruiseControl.NET, and while it has its quirks, it worked quite well.
Here are some useful links.
http://selenium-ide.openqa.org/
http://wiki.openqa.org/display/SIDE/Automating+Selenium+IDE+tests
http://agiletesting.blogspot.com/2006/03/remote-web-app-testing-with-selenium.html
http://www.nofluffjuststuff.com/blog_detail.jsp?rssItemId=97932
http://www.testearly.com/2006/10/04/selenium-using-selenium-ide-selenium-remote-control-and-ant/
Cheers
John
A: I have used:
*
*WatiN
*AutomatedQA TestComplete
All of them have had their purpose and are very good tools.
A: I just wrote a blog article comparing Selenium and Visual Studio Automation Testing (Coded UI) :
A: WatiN is excellent.
I inherited Mercury Quicktest for functional testing a while back. £30k for the licences and it was truly awful. We never got the same results twice (running on the exact same application). Their support was terrible. It stored tests as collections of encrypted binaries in folders called useful things like Action1 and Action2, so we couldn't source control it properly.
No idea whether HP have improved it since they bought out Mercury, but why bother when WatiN is so good?
A: I can also recommend WatiN. I've been using it exclusively for my web testing. I've even got it to play nice with VB.NET and HP/Mercury Quality Center (TestDirector).
A: The best open source automation tools I have used are Selenium IDE and Selenium Remote Control. You can then run the scripts on IE and Firefox, on both Mac and Windows.
If you prefer record-play, then download the Firefox add-on Selenium IDE and then record your scripts and run them. You can very easily look at the scripts and figure out how to make minor edits.
If you want the power and flexibility of a full programming language, then consider Selenium Remote Control, where I use Java and JUnit to drive the automation scripts. An easy way to get started with RC is to use the IDE to record your scripts, save them as RC scripts, and use the JUnit framework to drive your test suite.
For more information, check out:
http://selenium-ide.openqa.org/
http://selenium-rc.openqa.org/
| {
"language": "en",
"url": "https://stackoverflow.com/questions/15015",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
} |
Q: How to get list of installed BitmapEncoders/Decoders (the WPF world)? In WindowsForms world you can get a list of available image encoders/decoders with
System.Drawing.ImageCodecInfo.GetImageDecoders() / GetImageEncoders()
My question is, is there a way to do something analogous for the WPF world that would allow me to get a list of available
System.Windows.Media.Imaging.BitmapDecoder / BitmapEncoder
A: You've got to love .NET reflection. I worked on the WPF team and can't quite think of anything better off the top of my head. The following code produces this list on my machine:
Bitmap Encoders:
System.Windows.Media.Imaging.BmpBitmapEncoder
System.Windows.Media.Imaging.GifBitmapEncoder
System.Windows.Media.Imaging.JpegBitmapEncoder
System.Windows.Media.Imaging.PngBitmapEncoder
System.Windows.Media.Imaging.TiffBitmapEncoder
System.Windows.Media.Imaging.WmpBitmapEncoder
Bitmap Decoders:
System.Windows.Media.Imaging.BmpBitmapDecoder
System.Windows.Media.Imaging.GifBitmapDecoder
System.Windows.Media.Imaging.IconBitmapDecoder
System.Windows.Media.Imaging.LateBoundBitmapDecoder
System.Windows.Media.Imaging.JpegBitmapDecoder
System.Windows.Media.Imaging.PngBitmapDecoder
System.Windows.Media.Imaging.TiffBitmapDecoder
System.Windows.Media.Imaging.WmpBitmapDecoder
There is a comment in the code where to add additional assemblies (if you support plugins for example). Also, you will want to filter the decoder list to remove:
System.Windows.Media.Imaging.LateBoundBitmapDecoder
More sophisticated filtering using constructor pattern matching is possible, but I don't feel like writing it. :-)
All you need to do now is instantiate the encoders and decoders to use them. Also, you can get better names by retrieving the CodecInfo property of the encoders and decoders. This class will give you human readable names among other factoids.
using System;
using System.Linq;
using System.Collections.Generic;
using System.Reflection;
using System.Windows.Media.Imaging;
namespace Codecs {
class Program {
static void Main(string[] args) {
Console.WriteLine("Bitmap Encoders:");
AllEncoderTypes.ToList().ForEach(t => Console.WriteLine(t.FullName));
Console.WriteLine("\nBitmap Decoders:");
AllDecoderTypes.ToList().ForEach(t => Console.WriteLine(t.FullName));
Console.ReadKey();
}
static IEnumerable<Type> AllEncoderTypes {
get {
return AllSubclassesOf(typeof(BitmapEncoder));
}
}
static IEnumerable<Type> AllDecoderTypes {
get {
return AllSubclassesOf(typeof(BitmapDecoder));
}
}
static IEnumerable<Type> AllSubclassesOf(Type type) {
var r = new Reflector();
// Add additional assemblies here
return r.AllSubclassesOf(type);
}
}
class Reflector {
List<Assembly> assemblies = new List<Assembly> {
typeof(BitmapDecoder).Assembly
};
public IEnumerable<Type> AllSubclassesOf(Type super) {
foreach (var a in assemblies) {
foreach (var t in a.GetExportedTypes()) {
if (t.IsSubclassOf(super)) {
yield return t;
}
}
}
}
}
}
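As a follow-up to the CodecInfo remark above, a small addition to Main will print the human-readable names. A sketch, relying on the standard encoders having parameterless constructors:
foreach (var t in AllEncoderTypes) {
    var encoder = (BitmapEncoder)Activator.CreateInstance(t);
    // CodecInfo exposes FriendlyName, MimeTypes, FileExtensions, etc.
    Console.WriteLine(encoder.CodecInfo.FriendlyName);
}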
A: Hopefully someone will correct me if I'm wrong, but I don't think there's anything like that in WPF. But hopefully this is one of the many cases where advances in the technology have rendered obsolete the way we're used to doing things. Like "how do I wind my digital watch?"
To my understanding, the reason why ImageCodecInfo.GetImageDecoders() is necessary in System.Drawing has to do with the kludgy nature of System.Drawing itself: System.Drawing is a managed wrapper around GDI+, which is an unmanaged wrapper around a portion of the Win32 API. So there might be a reason why a new codec would be installed in Windows without .NET inherently knowing about it. And what's returned from GetImageDecoders() is just a bunch of strings that are typically passed back into System.Drawing/GDI+, and used to find and configure the appropriate DLL for reading/saving your image.
On the other hand, in WPF, the standard encoders and decoders are built into the framework, and, if I'm not mistaken, don't depend on anything that isn't guaranteed to be installed as part of the framework. The following classes inherit from BitmapEncoder and are available out-of-the-box with WPF: BmpBitmapEncoder, GifBitmapEncoder, JpegBitmapEncoder, PngBitmapEncoder, TiffBitmapEncoder, WmpBitmapEncoder. There are BitmapDecoders for all the same formats, plus IconBitmapDecoder and LateBoundBitmapDecoder.
You may be dealing with a case I'm not imagining, but it seems to me that if you're having to use a class that inherits from BitmapEncoder but wasn't included with WPF, it's probably your own custom class that you would install with your application.
Hope this helps. If I'm missing a necessary part of the picture, please let me know.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/15023",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11"
} |
Q: Tools to help a small shop score higher on the "Joel Test" Questions #1 through #4 on the Joel Test in my opinion are all about the development tools being used and the support system in place for developers:
*
*Do you use source control?
*Can you make a build in one step?
*Do you make daily builds?
*Do you have a bug database?
I'm just curious what free/cheap (but good) tools exist for the small development shops that don't have large bank accounts to use to achieve a positive answer on these questions.
For source control I know Subversion is a great solution, and if you are a one man shop you could even use SourceGear's Vault.
I use NAnt for my larger projects, but have yet to set up a script to build my installers as well as running the obfusication tools all as a single step. Any other suggestions?
If you can answer yes to the building in a single step, I think creating daily builds would be easy, but what tools would you recommend for automating those daily builds?
For a one or two man team, it's already been discussed on SO that you can use FogBugz On Demand, but what other bug tracking solutions exist for small teams?
A: 1) Subversion
2) Ant / Maven
3) Continuum
4) Bugzilla / Trac
A: My preferred stack:
2) Subversion. I'm intrigued by distributed source control but haven't had the chance to try any in anger yet. For a centralized solution, SVN is rock solid.
2) Ant. Maven is a joy to use when it's working but as an old ant hacker I find maven to be hard to follow once things go wrong.
3) Hudson. Not been mentioned so far but definitely worth investigating. An incredibly usable and actively maintained tool. Previously we paid for Anthill Pro, which seemed flaky and was painful to fix each time it screwed up.
4) We pay for jira. Not cheap but much more usable than the open source options we looked at and very flexible too.
A: My engineering stack:
*
*Git (I love GitHub, but Git doesn't require a hosted solution)
*Rake
*CruiseControl.rb
*FogBugz
No doubt these choices are influenced by my development stack, which most often includes Ruby, Rails, SQLite, Firefox, and OSX.
A: You may want to look at an existing question of mine for finding an alternative to Team System. There are plenty of recommendations in there also.
A: *
*Git
*Make
*Cron
*Trac
I'm a man of few syllables ;-)
Be sure to use some kind of version control where developers can easily create private branches willy-nilly, then take their private branch and squeeze it into a single commit on the main branch. That way, individual developers---as opposed to the organization---can get the benefits of version control without polluting anyone else's code (and slowing down their work) with broken commits.
This feature is what I like about git. I think it's only really present in distributed version control systems; using a DVCS doesn't mean you actually have to do distributed development, though.
Regarding one-step building, make is the default build tool and it works quite well for most tasks. I'd go with that unless you have a good reason not to.
You want daily builds, put the build command in your cron.daily. Set up a procmail hook to handle the mail from cron if need be.
For bug tracking, use $(apt-cache search bug tracking). Basically, as long as it says "bug tracker" on the box and you know other people are using it, it's probably going to work fine. Among the regulars are bugzilla, mantis and trac.
A: *
*source control: Subversion or Mercurial or Git
*build automation: NAnt, MSBuild, Rake, Maven
*continuous integration: CruiseControl.NET or Continuum or Jenkins
*issue tracking: Trac, Bugzilla, Gemini (if it must be .NET and free-ish)
Don't forget automated testing with NUnit, Fit, and WatiN.
A: I don't have any tools to suggest, but I do have a suggestion about the daily builds. I always answer yes to that question, even though we don't have daily builds. Instead, we do a build every time someone does a commit. We thereby catch any problems almost immediately. If any of our projects ever has enough LOC that building takes more than trivial time, doing this will also gracefully degrade in the direction of a daily build.
A: A good issue tracker that was relatively inexpensive was axoSoft OnTime. I used it for years before getting MS TFS.
Nant and CruiseControl are staples of my environment.
A: I don't think you really need obfuscation on .Net any more (see another response)
I wouldn't consider Vault; SVN is really the market leader at the moment (and free). Git is looking pretty promising but is currently command line only, with a steep learning curve.
MSBuild beats NAnt for .Net 2 or 3.5
CC.Net is excellent.
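On the one-step build point: with MSBuild the whole solution builds from a single command, which also makes automating daily builds trivial. For example (solution name assumed):
msbuild MySolution.sln /t:Rebuild /p:Configuration=Release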
A: 4) Redmine
I recommend Bitnami for testing out different stacks. It's got Trac, Redmine, and Subversion, as well as several other unrelated ones.
A: Check out these articles on Continuous Integration using MSBuild, CruiseControl.NET, FxCop, NUnit, NCover and Subversion...
From the software development trenches
A: I'm currently using SVN but I've generally had a lot of problems with checkouts to a network drive on a dev server. There tend to be locking issues that require a lot of fishing around to fix. It may be that using the WebDAV access method would ease some of these problems, but I haven't experimented yet.
Any of Bugzilla, Trac or FogBugz will help you with your bug tracking, and each offers an export feature, so you can always change your mind later on. Also, if you can get your team to fully buy in, time management software can be handy for post-mortems, etc. (if everyone is motivated to fully participate).
A: For build automation and continuous integration take a look at TeamCity from Jetbrains.
It has a lot of features and is really a breeze to set up and use.
If you use Visual Studio 2005/2008 it will build your solution directly without the need for extra scripts (if a build is all you want.)
It will also execute your unit tests and gather stats on build success, unit test execution times, etc, etc.
Best of all: The Pro edition is free for teams with up to 20 users and 3 build agents.
A: *
*source control: cvs
*build gnu make
*cron job that calls bash scripts
*bugzilla
| {
"language": "en",
"url": "https://stackoverflow.com/questions/15024",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "17"
} |
Q: Visual Studio Error: The "GenerateResource" task failed unexpectedly When building a VS 2008 solution with 19 projects I sometimes get:
The "GenerateResource" task failed unexpectedly.
System.OutOfMemoryException: Exception of type 'System.OutOfMemoryException' was thrown.
at System.IO.MemoryStream.set_Capacity(Int32 value)
at System.IO.MemoryStream.EnsureCapacity(Int32 value)
at System.IO.MemoryStream.WriteByte(Byte value)
at System.IO.BinaryWriter.Write(Byte value)
at System.Resources.ResourceWriter.Write7BitEncodedInt(BinaryWriter store, Int32 value)
at System.Resources.ResourceWriter.Generate()
at System.Resources.ResourceWriter.Dispose(Boolean disposing)
at System.Resources.ResourceWriter.Close()
at Microsoft.Build.Tasks.ProcessResourceFiles.WriteResources(IResourceWriter writer)
at Microsoft.Build.Tasks.ProcessResourceFiles.WriteResources(String filename)
at Microsoft.Build.Tasks.ProcessResourceFiles.ProcessFile(String inFile, String outFile)
at Microsoft.Build.Tasks.ProcessResourceFiles.Run(TaskLoggingHelper log, ITaskItem[] assemblyFilesList, ArrayList inputs, ArrayList outputs, Boolean sourcePath, String language, String namespacename, String resourcesNamespace, String filename, String classname, Boolean publicClass)
at Microsoft.Build.Tasks.GenerateResource.Execute()
at Microsoft.Build.BuildEngine.TaskEngine.ExecuteInstantiatedTask(EngineProxy engineProxy, ItemBucket bucket, TaskExecutionMode howToExecuteTask, ITask task, Boolean& taskResult)
C:\Windows\Microsoft.NET\Framework\v3.5
Usually happens after VS has been running for about 4 hours; the only way to get VS to compile properly is to close out VS, and start it again.
I'm on a machine with 3GB Ram. TaskManager shows the devenv.exe working set to be 578060K, and the entire memory allocation for the machine is 1.78GB. It should have more than enough ram to generate the resources.
A: From https://social.msdn.microsoft.com/Forums/vstudio/en-US/5154ef26-ccfe-44d5-a322-6804b61ac774/systemoutofmemoryexception?forum=clr:
Try deleting the .suo file and re-opening the solution.
A: Sounds like a bug.
http://www.codeprof.com/dev-archive/66/6-27-664019.shtm
Toward the bottom, someone suggests adding:
<GenerateResourceNeverLockTypeAssemblies>true</GenerateResourceNeverLockTypeAssemblies>
to your project file. Seems kind of dubious, but worth a shot.
A: In case someone else is looking in the future...
In my case, it turned out I had a corrupted resx file.
I had increased my GDI handles and the compile error went away.
But then I tried to run the app (with the debugger).
We have a login screen that loads the main screen. The login screen called the main screen's "show" event... and the main object never got instantiated - with no errors being raised.
I reverted the resx file to a previous one and everything is fine now.
Visual Studio 2008, VB.Net, Windows 7
A: Can you please try adding this property under the first PropertyGroup in your project file?
<GenerateResourceNeverLockTypeAssemblies>true</GenerateResourceNeverLockTypeAssemblies>
Let me know if that works.
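For context, the first PropertyGroup of the project file would then look something like this (the surrounding properties vary by project; only the added line matters):
<PropertyGroup>
  <!-- other properties elided; the line below is the one being added -->
  <Configuration Condition=" '$(Configuration)' == '' ">Debug</Configuration>
  <GenerateResourceNeverLockTypeAssemblies>true</GenerateResourceNeverLockTypeAssemblies>
</PropertyGroup>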
A: I used to hit this now and again with larger solutions. My tactic was to break the larger solution down into smaller solutions.
You could also try:
http://stevenharman.net/blog/archive/2008/04/29/hacking-visual-studio-to-use-more-than-2gigabytes-of-memory.aspx
A: I have run into this error a few times. All you need to do is delete all the files in the obj path. After that, clean and rebuild your solution and it's done.
A: "Clean solution" works fine. Top Menu Build ->Clean , then build, debug and
publish all work fine again. Also antivirus like AVAST best disabled to publish and install trouble free. Re-enable after.
A: TFS likes to mark files as Read Only.
Delete the contents of obj/x86.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/15034",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
} |
Q: How do you set your LAMP testing server? I am using xampp on Windows, but I would like to use something closer to my server setup.
Federico Cargnelutti's tutorial explains how to set up a LAMP VMware appliance; it is a great introduction to VMware appliances, but one of the commands was not working and it doesn't describe how to change the keyboard layout and the timezone.
ps: the commands are easy to find but I don't want to look for them each time I reinstall the server. I am using this question as a reminder.
A: Assuming you have VMware workstation, VMware player or anything that can run vmware appliance, you just need to:
*
*Download, unzip Ubuntu 8.04 Server and start the virtual machine.
*Update ubuntu and set the layout and the timezone:
sudo apt-get update
sudo apt-get upgrade
sudo dpkg-reconfigure console-setup
sudo dpkg-reconfigure tzdata
sudo vim /etc/network/interfaces
*set a fixed IP (Optional).
*install apache+mysql+php:
sudo tasksel install lamp-server
A: This is my install script. I use it on Debian servers, but it will work in Ubuntu (Ubuntu is built on Debian):
apt-get -yq update
apt-get -yq upgrade
apt-get -yq install sudo
apt-get -yq install gcc
apt-get -yq install g++
apt-get -yq install make
apt-get -yq install apache2
apt-get -yq install php5
apt-get -yq install php5-curl
apt-get -yq install php5-mysql
apt-get -yq install php5-gd
apt-get -yq install mysql-common
apt-get -yq install mysql-client
apt-get -yq install mysql-server
apt-get -yq install phpmyadmin
apt-get -yq install samba
echo '[global]
workgroup = workgroup
server string = %h server
dns proxy = no
log file = /var/log/samba/log.%m
max log size = 1000
syslog = 0
panic action = /usr/share/samba/panic-action %d
encrypt passwords = true
passdb backend = tdbsam
obey pam restrictions = yes
;invalid users = root
unix password sync = no
passwd program = /usr/bin/passwd %u
passwd chat = *Enter\snew\sUNIX\spassword:* %n\n *Retype\snew\sUNIX\spassword:* %n\n *password\supdated\ssuccessfully* .
socket options = TCP_NODELAY
[homes]
comment = Home Directories
browseable = no
writable = no
create mask = 0700
directory mask = 0700
valid users = %S
[www]
comment = WWW
writable = yes
locking = no
path = /var/www
public = yes' > /etc/samba/smb.conf
(echo SAMBAPASSWORD; echo SAMBAPASSWORD) | smbpasswd -sa root
echo 'NameVirtualHost *
<VirtualHost *>
ServerAdmin webmaster@localhost
DocumentRoot /var/www/
<Directory />
Options FollowSymLinks
AllowOverride None
</Directory>
<Directory /var/www/>
Options Indexes FollowSymLinks MultiViews
AllowOverride None
Order allow,deny
allow from all
</Directory>
ErrorLog /var/log/apache2/error.log
LogLevel warn
CustomLog /var/log/apache2/access.log combined
ServerSignature On
</VirtualHost>' > /etc/apache2/sites-enabled/000-default
/etc/init.d/apache2 stop
/etc/init.d/samba stop
/etc/init.d/apache2 start
/etc/init.d/samba start
edit: add this to set your MySQL password
/etc/init.d/mysql stop
echo "UPDATE mysql.user SET Password=PASSWORD('MySQLPasswrod') WHERE User='root'; FLUSH PRIVILEGES;" > /root/MySQLPassword
mysqld_safe --init-file=/root/MySQLPassword &
sleep 1
/etc/init.d/mysql stop
sleep 1
/etc/init.d/mysql start
end edit
This is a bit specialised, but you get the idea. If you save this to a file ('install' for example), all you have to do is:
chmod +x install
./install
Some of my apt-get commands are not necessary, because apt will automatically get the dependencies but I prefer to be specific, for my installs.
A: Provided this question is properly tagged, you can select LAMP server option during installation of Ubuntu. This will install and configure all required components automatically. A detailed instruction on how to do this can be found, for example, there: http://www.ubuntugeek.com/ubuntu-804-hardy-heron-lamp-server-setup.html
A: You can rapidly customize LAMP, RoR, Python Django, Java Stack, Spring, etc. servers for Ubuntu-based VM images at http://www.elasticserver.com - Ubuntu 8.04 LTS is now supported.
A: I don't really understand your question because I really didn't see one. But I'll do my best to infer two: to change your keyboard layout, check this forum post on the Ubuntu forums, and to change the timezone, check this forum post.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/15040",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
} |
Q: Looking for example of Command pattern for UI I'm working on a WinForm .Net application with the basic UI that includes toolbar buttons, menu items and keystrokes that all initiate the same underlying code. Right now the event handlers for each of these call a common method to perform the function.
From what I've read this type of action could be handled by the Command design pattern with the additional benefit of automatically enabling/disabling or checking/unchecking the UI elements.
I've been searching the net for a good example project, but really haven't found one. Does anyone have a good example that can be shared?
A: Let's first make sure we know what the Command pattern is:
Command pattern encapsulates a request
as an object and gives it a known
public interface. Command Pattern
ensures that every object receives its
own commands and provides a decoupling
between sender and receiver. A sender
is an object that invokes an
operation, and a receiver is an object
that receives the request and acts on
it.
Here's an example for you. There are many ways you can do this, but I am going to take an interface base approach to make the code more testable for you. I am not sure what language you prefer, but I am writing this in C#.
First, create an interface that describes a Command.
public interface ICommand
{
void Execute();
}
Second, create command objects that will implement the command interface.
public class CutCommand : ICommand
{
public void Execute()
{
// Put code you like to execute when the CutCommand.Execute method is called.
}
}
Third, we need to setup our invoker or sender object.
public class TextOperations
{
public void Invoke(ICommand command)
{
command.Execute();
}
}
Fourth, create the client object that will use the invoker/sender object.
public class Client
{
static void Main()
{
TextOperations textOperations = new TextOperations();
textOperations.Invoke(new CutCommand());
}
}
I hope you can take this example and put it into use for the application you are working on. If you would like more clarification, just let me know.
A: You're on the right track. Basically you will have a model that represents the document. You will use this model in the CutCommand. You will want to change the CutCommand's constructor to accept the information you want to cut. Then every time, say, the Cut button is clicked, you invoke a new CutCommand, passing the arguments in the constructor. Then use those arguments in the class when the Execute method is called.
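To make that concrete, here is a sketch of a stateful command; the Document receiver and its CutToClipboard method are hypothetical, stand-ins for whatever your model exposes:
public class CutCommand : ICommand
{
    private readonly Document document; // hypothetical receiver holding the text
    private readonly int start;
    private readonly int length;
    public CutCommand(Document document, int start, int length)
    {
        this.document = document;
        this.start = start;
        this.length = length;
    }
    public void Execute()
    {
        // acts on the receiver using the state captured at construction time
        document.CutToClipboard(start, length); // hypothetical receiver method
    }
}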
A: Try open source, .NET editors like SharpDevelop or Notepad++.
There is (naturally) some discussion of the Command Pattern at http://c2.com/cgi/wiki?CommandPattern that might be helpful.
A: Qt uses Command Pattern for Menubar/Toolbar items.
QActions are created seperately from QMenuItem and QToolbar, and the Actions can be assigned to QMenuItem and QToolbar with setAction() and addAction() method respectively.
http://web.archive.org/web/20100801023349/http://cartan.cas.suffolk.edu/oopdocbook/html/menus.html
http://web.archive.org/web/20100729211835/http://cartan.cas.suffolk.edu/oopdocbook/html/actions.html
A: I can't help you with an example link, but I can provide an example myself.
1) Define ICommand interface:
public interface ICommand {
void Do();
void Undo();
}
2) Do your ICommand implementations for concrete commands, but also define abstract base class for them:
public abstract class WinFormCommand : ICommand {
    public abstract void Do();
    public abstract void Undo();
}
3) Create command invoker:
public interface ICommandInvoker {
void Invoke(ICommand command);
void ReDo();
void UnDo();
}
public interface ICommandDirector {
void Enable(ICommand command);
void Disable(ICommand command);
}
public class WinFormsCommandInvoker : ICommandInvoker, ICommandDirector {
private readonly Dictionary<ICommand, bool> _commands;
private readonly Queue<ICommand> _commandsQueue;
private readonly IButtonDirector _buttonDirector;
// you can define additional queue for support of ReDo operation
public WinFormsCommandInvoker(ICommandsBuilder builder, IButtonDirector buttonDirector) {
_commands = builder.Build();
_buttonDirector = buttonDirector;
_commandsQueue = new Queue<ICommand>();
}
public void Invoke(ICommand command) {
command.Do();
_commandsQueue.Enqueue(command);
}
public void ReDo() {
//you can implement this using additional queue
}
public void UnDo() {
var command = _commandsQueue.Dequeue();
command.Undo();
}
public void Enable(ICommand command) {
_commands[command] = true;
_buttonDirector.Enable(command);
}
public void Disable(ICommand command) {
_commands[command] = false;
_buttonDirector.Disable(command);
}
}
4) Now you can implement your ICommandsBuilder, IButtonDirector and add other interfaces such as ICheckBoxDirector to WinFormsCommandInvoker.
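For completeness, ICommandsBuilder only needs to produce the command/enabled map the invoker consumes; a minimal sketch:
public interface ICommandsBuilder {
    // returns each command paired with its initial enabled state
    Dictionary<ICommand, bool> Build();
}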
| {
"language": "en",
"url": "https://stackoverflow.com/questions/15047",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "18"
} |
Q: How to get out parameters working in SharePoint workflows I'm trying to create a custom workflow action with an output parameter for error handling. Working from various examples, I can't get Parameter Direction="Out" to work. Everything seems right, but when I try to assign the output to the "error" variable in SharePoint Designer, it places asterisks around it and flags it as a workflow error. Here is what the action XML looks like:
<Action Name="Create Folder"
ClassName="ActivityLibrary.CreateFolderActivityTest"
Assembly="ActivityLibrary, Version=1.0.0.0, Culture=neutral, PublicKeyToken=xxxxxxxxxxx"
AppliesTo="all"
CreatesInList="ListId"
Category="Custom">
<RuleDesigner Sentence="Create a folder %1 in the %2 base folder. If an error occurs it will be output to %3.">
<FieldBind Field="FolderName" Text="folder name" Id="1" />
<FieldBind Field="BaseFolderPath" Text="folder path" Id="2"/>
<FieldBind Field="OutError" DesignerType="ParameterNames" Text="out error" Id="3"/>
</RuleDesigner>
<Parameters>
<Parameter Name="FolderName" Type="System.String, mscorlib" Direction="In" />
<Parameter Name="BaseFolderPath" Type="System.String, mscorlib" Direction="In" />
<Parameter Name="OutError" Type="System.String, mscorlib" Direction="Out" />
</Parameters>
</Action>
A: I think you may want Direction="InOut" from the looks of the binding
A: Are you sure the issue is with the parameters and not maybe the variable in SPD? Certainly nothing looks wrong with your XML.
I always hated the way SPD and workflows make you create a variable within the workflow and another within the page to assign to the same value as the workflow variable.
A: Did you get anywhere with this? I suspect the problem was more likely in your logic code rather than this xml (.actions) file. It looks perfectly acceptable to me.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/15053",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: Do you have any recommended macros for Microsoft Visual Studio? What are some macros that you have found useful in Visual Studio for code manipulation and automation?
A: This is one of the handy ones I use on HTML and XML files:
''''replaceunicodechars.vb
Option Strict Off
Option Explicit Off
Imports EnvDTE
Imports System.Diagnostics
Public Module ReplaceUnicodeChars
Sub ReplaceUnicodeChars()
DTE.ExecuteCommand("Edit.Find")
ReplaceAllChar(ChrW(8230), "&hellip;") ' ellipses
ReplaceAllChar(ChrW(8220), "&ldquo;") ' left double quote
ReplaceAllChar(ChrW(8221), "&rdquo;") ' right double quote
ReplaceAllChar(ChrW(8216), "&lsquo;") ' left single quote
ReplaceAllChar(ChrW(8217), "&rsquo;") ' right single quote
ReplaceAllChar(ChrW(8211), "&ndash;") ' en dash
ReplaceAllChar(ChrW(8212), "&mdash;") ' em dash
ReplaceAllChar(ChrW(176), "&deg;") ' °
ReplaceAllChar(ChrW(188), "&frac14;") ' ¼
ReplaceAllChar(ChrW(189), "&frac12;") ' ½
ReplaceAllChar(ChrW(169), "&copy;") ' ©
ReplaceAllChar(ChrW(174), "&reg;") ' ®
ReplaceAllChar(ChrW(8224), "&dagger;") ' dagger
ReplaceAllChar(ChrW(8225), "&Dagger;") ' double-dagger
ReplaceAllChar(ChrW(185), "&sup1;") ' ¹
ReplaceAllChar(ChrW(178), "&sup2;") ' ²
ReplaceAllChar(ChrW(179), "&sup3;") ' ³
ReplaceAllChar(ChrW(153), "&trade;") ' ™
DTE.Windows.Item(Constants.vsWindowKindFindReplace).Close()
End Sub
Sub ReplaceAllChar(ByVal findWhat, ByVal replaceWith)
DTE.Find.FindWhat = findWhat
DTE.Find.ReplaceWith = replaceWith
DTE.Find.Target = vsFindTarget.vsFindTargetCurrentDocument
DTE.Find.MatchCase = False
DTE.Find.MatchWholeWord = False
DTE.Find.MatchInHiddenText = True
DTE.Find.PatternSyntax = vsFindPatternSyntax.vsFindPatternSyntaxLiteral
DTE.Find.ResultsLocation = vsFindResultsLocation.vsFindResultsNone
DTE.Find.Action = vsFindAction.vsFindActionReplaceAll
DTE.Find.Execute()
End Sub
End Module
It's useful when you have to do any kind of data entry and want to escape everything at once.
A: This is my macro to close the solution, delete the intellisense file, and reopen the solution. Essential if you're working in native C++.
Sub UpdateIntellisense()
Dim solution As Solution = DTE.Solution
Dim filename As String = solution.FullName
Dim ncbFile As System.Text.StringBuilder = New System.Text.StringBuilder
ncbFile.Append(System.IO.Path.GetDirectoryName(filename) + "\")
ncbFile.Append(System.IO.Path.GetFileNameWithoutExtension(filename))
ncbFile.Append(".ncb")
solution.Close(True)
System.IO.File.Delete(ncbFile.ToString())
solution.Open(filename)
End Sub
A: This is one I created which allows you to easily change the Target Framework Version of all projects in a solution: http://geekswithblogs.net/sdorman/archive/2008/07/18/visual-studio-2008-and-targetframeworkversion.aspx
A: I'm using Jean-Paul Boodhoo's BDD macro. It replaces whitespace characters with underscores within the header line of a method signature. This way I can type the names of a test case, for example, as a normal sentence, hit a keyboard shortcut and I have valid method signature.
A: You might want to add in code snippets as well, they help to speed up the development time and increase productivity.
The standard VB code snippets come with the default installation. The C# code snippets must be downloaded and added seperately. (Link below for those)
As far as macros go, I generally have not used any, but the Working with Visual Studio 2005 book has some pretty good ones in there.
C# Code snippets Link:
http://www.codinghorror.com/blog/files/ms-csharp-snippets.7z.zip
(Jeff Atwood provided the link)
HIH
| {
"language": "en",
"url": "https://stackoverflow.com/questions/15056",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9"
} |
Q: find orphaned methods in codebase I'm sure we've all seen it before...A code base that has been around for a year or two, and as features have been added and bugs fixed, we end up with pieces of code that aren't actually in use anymore. I'm wondering if there is a way (especially in VS.NET with or without a third-party tool) to search the codebase and show me which methods are NEVER used anywhere else in the code?
The one challenge I can think of in regard to this type of utility would be the inability to map back when implicit type conversions are occurring. But assuming that wasn't a problem, what are my options?
A: As it turns out, one of the things that FxCop does is identify unused bits of code, but it sometimes misses stuff. However, your best bet would likely be ReSharper.
A: Remember though that any public-facing method, property, or field can be accessed via reflection or in a derived type in a separate assembly.
FxCop is the right answer here, but you also need to limit accessibility to your code. I.e. decorate things with private/protected/internal where appropriate.
A: FxCop will warn you of methods where nothing calls them.
A: The following tool can find orphan/unused code:
MZ-Tools
| {
"language": "en",
"url": "https://stackoverflow.com/questions/15057",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11"
} |
Q: How do I do a string replacement in a PowerShell function? How do I convert function input parameters to the right type?
I want to return a string that has part of the URL passed into it removed.
This works, but it uses a hard-coded string:
function CleanUrl($input)
{
$x = "http://google.com".Replace("http://", "")
return $x
}
$SiteName = CleanUrl($HostHeader)
echo $SiteName
This fails:
function CleanUrl($input)
{
$x = $input.Replace("http://", "")
return $x
}
Method invocation failed because [System.Array+SZArrayEnumerator] doesn't contain a method named 'Replace'.
At M:\PowerShell\test.ps1:13 char:21
+ $x = $input.Replace( <<<< "http://", "")
A: function CleanUrl([string] $url)
{
return $url.Replace("http://", "")
}
A: Steve's answer works. The problem with your attempt to reproduce ESV's script is that you're using $input, which is a reserved variable (it automatically collects multiple piped input into a single variable).
You should, however, use .Replace() unless you need the extra feature(s) of -replace (it handles regular expressions, etc).
function CleanUrl([string]$url)
{
$url.Replace("http://","")
}
That will work, but so would:
function CleanUrl([string]$url)
{
$url -replace "http://",""
}
Also, when you invoke a PowerShell function, don't use parentheses:
$HostHeader = "http://google.com"
$SiteName = CleanUrl $HostHeader
Write-Host $SiteName
Hope that helps. By the way, to demonstrate $input:
function CleanUrls
{
$input -replace "http://",""
}
# Notice these are arrays ...
$HostHeaders = @("http://google.com","http://stackoverflow.com")
$SiteNames = $HostHeaders | CleanUrls
Write-Output $SiteNames
A: The concept here is correct.
The problem is with the variable name you have chosen. $input is a reserved variable used by PowerShell to represent an array of pipeline input. If you change your variable name, you should not have any problem.
PowerShell does have a replace operator, so you could make your function into
function CleanUrl($url)
{
return $url -replace 'http://'
}
A: This worked for me:
function CleanUrl($url)
{
    return $url.Replace("http://", "")
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/15062",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "17"
} |
Q: Cycle Button Background Images in C# I have a form in C# that has a button that, when clicked, I want the background image to cycle through a set of images (which I have as resources to the project). The images are named '_1', '_2', etc. and each time I click the button I want its background image to increment to the next one and go back to "_1" when it gets to the highest. Is there a way to do this?
I tried getting button1.BackgroundImage.ToString() but that yields System.Drawing.Bitmap instead of Resources._1 like I was thinking it would (in which case I could just get the last character and switch on that to change the background to the appropriate new image).
Thanks for your help.
A: Why don't you just put the images in an array?
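For example, a minimal sketch of the array approach, assuming the resources are named _1 through _3 as in the question:
// resource names per the question (_1, _2, ...); adjust count to taste
private readonly Image[] images =
{
    Properties.Resources._1,
    Properties.Resources._2,
    Properties.Resources._3
};
private int index;
private void button1_Click(object sender, EventArgs e)
{
    // advance, wrapping back to the first image after the last one
    index = (index + 1) % images.Length;
    button1.BackgroundImage = images[index];
}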
A: You could subclass Button and override the BackgroundImage property so you can better keep track of the current resource that represents the image. You might also override the onclick method to internally handle cycling to the next image, though that might be a little weird if the resources are handled outside of your derived button class.
A: // requires using System.Collections.Generic, System.Drawing and System.Linq
class YourClass
{
private readonly IEnumerator<Image> enumerator;
public YourClass(IEnumerable<Image> images)
{
// repeat the image sequence (practically) forever, stepping one image per click
enumerator = (from i in Enumerable.Range(0, int.MaxValue)
from image in images
select image).GetEnumerator();
enumerator.MoveNext();
}
public Image CurrentImage { get { return enumerator.Current; } }
public void OnButtonClick() { enumerator.MoveNext(); }
}
You can use this code as a backing class for your control, under the assumption that the user won't click the button more than two billion times.
Just note that once this class is created you cannot modify the given image list from outside. If you want to do such things you need to implement the disposable pattern and dispose the enumerator accordingly.
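A usage sketch for wiring this up to the button (the form plumbing here is assumed, not part of the class above):
// Properties.Resources._1 etc. assumed, as in the question
var cycler = new YourClass(new Image[] {
    Properties.Resources._1, Properties.Resources._2, Properties.Resources._3 });
button1.BackgroundImage = cycler.CurrentImage;
button1.Click += delegate {
    cycler.OnButtonClick();
    button1.BackgroundImage = cycler.CurrentImage;
};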
| {
"language": "en",
"url": "https://stackoverflow.com/questions/15066",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: Access 2000 connecting to SQL Server 2005 The company I work for has an old Access 2000 application that was using a SQL Server 2000 back-end. We were tasked with moving the back-end to a SQL Server 2005 database on a new server. Unfortunately, the application was not functioning correctly while trying to do any inserts or updates. My research has found many forum posts that Access 2000 -> SQL 2005 is not supported by Microsoft, but I cannot find any Microsoft documentation to verify that.
Can anyone either link me to some official documentation, or has anyone used this setup and can confirm that this should be working and our problems lie somewhere else?
Not sure if it matters, but the app is an ADP compiled into an ADE.
A: I've had a similar problem before when using ODBC linked tables to connect to an Sql Server. The solution was to relink the tables and specify the primary key to the table. If Access doesn't know the primary key it cannot perform inserts or updates.
I haven't any experience with ADPs but it could be a similar thing, theres a knowledge base article about it here http://support.microsoft.com/?scid=kb%3Ben-us%3B235267&x=15&y=13
A: I'd say check the VBA in the macros to see how it is doing it. It is probably using some form of VB connection to the database in the back. I love the fact that a database is contacting a database for its data... :)
A: All I've read about Access 2000 -> SQL Server 2005 is that the upsizing wizard isn't supported.
If only the inserts and updates aren't functioning, it sounds like a permissions issue. Make sure the sql server login you are using in your connection string has read/write permission on your database.
Please avoid using the "sa" account for this purpose!
A: I'm not sure about that particular combination being supported, but have you tried setting the compatibility mode for the database to SQL Server 2000? Maybe that will resolve your issues.
Edit: To do this run the following SQL:
EXEC sp_dbcmptlevel Name_of_your_database, 80;
More details here: http://blog.sqlauthority.com/2007/05/29/sql-server-2005-change-database-compatible-level-backward-compatibility/
A:
If only the inserts and updates aren't
functioning, it sounds like a
permissions issue. Make sure the sql
server login you are using in your
connection string has read/write
permission on your database.
Please avoid using the "sa" account
for this purpose!
We wanted to use a generic apps account but that login "could not find" any of the stored procedures even though they existed and the login has explicit permissions to run them (and was also tested successfully, as that user, in SQL Management Studio). It wasn't until we granted that login "sa" privileges that we could actually access the database at all through the application.
but have you tried setting the
compatibility mode for the database to
SQL Server 2000.
I'm not really sure how this is done. Could you explain?
Also of note, if we upgrade the app to Access 2003, everything works fine. Unfortunately, our IT dept does not want to upgrade everyone from Office 2000 to 2003, so this is not an option.
Thanks for your help.
A:
but have you tried setting the
compatibility mode for the database to
SQL Server 2000.
I just checked the 2005 database, I selected the database, and clicked Properties->Options, and it says the db is already in 2000 compatibility mode.
A: Access ADPs are very closely tied to SQL Server versions, and MS has done a really poor job of fixing and breaking ADPs in the 3 major versions that have been released (2000, 2002 and 2003).
If you are trying to use the compiled ADE, I'd suggest that first you find the original ADP and see if you can get it to work. You may need to do some work there before creating your ADE.
Caveat: I don't do ADPs, and am glad I made the decision not to, as Microsoft is now deprecating them in favor of MDB=>ODBC=>SQL Server.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/15087",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Office VSTO Word 2003 project keeps trying to autoconvert to 2007 I am working on an Office Word add-in for Word 2003. When I reopen the project, the VS2008 auto convert dialog box opens and tries to convert it to the Word 2007 format.
How can I reopen this file and keep it in the Word 2003 format?
A: Got a answer over at MSDN Forums
This is the default behavior when you have Office 2007 installed on your
development computer. You can modify
this behavior under Tools->Options.
For more informaiton, see the
following threads:
http://forums.microsoft.com/Forums/ShowPost.aspx?PostID=3762143&SiteID=1
http://forums.microsoft.com/Forums/ShowPost.aspx?PostID=3742203&SiteID=1&mode=1
I hope this helps,
McLean Schofield
| {
"language": "en",
"url": "https://stackoverflow.com/questions/15102",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
} |
Q: Visual Studio 2005 Setup project install crashes over Terminal Server I have a setup project created by Visual Studio 2005, and consists of both a C# .NET 2.0 project and C++ MFC project, and the C++ run time. It works properly when run from the main console, but when run over a Terminal Server session on a Windows XP target, the install fails in the following way -
When the Setup.exe is invoked, it immediately crashes before the first welcome screen is displayed. When invoked over a physical console, the setup runs normally.
I figured I could go back to a lab machine to debug, but it runs fine on a lab machine over Terminal Server.
I see other descriptions of setup problems over Terminal Server sessions, but I don't see a definite solution. Both machines have a nearly identical configuration except that the one that is failing also has the GoToMyPC Host installed.
Has anyone else seen these problems, and how can I troubleshoot this?
Thanks,
A: I had LOTS of issues with developing installers (and software in general) for terminal server. I hate that damn thing.
Anyway, VS Setup Projects are just .msi files, and run using the Windows installer framework.
This will drop a log file when it errors out; they're called MSIc183.LOG (swap the c183 for some random numbers and letters), and they go in your logged-in-user account's temp directory.
The easiest way to find that is to type %TEMP% into the windows explorer address bar - once you're there have a look for these log files, they might give you a clue.
*
*Note - Under terminal server, sometimes the logs don't go directly into %TEMP%, but under numbered subdirectories. If you can't find any MSIXYZ.LOG files in there, look for directories called 1, 2, and so on, and look in those.
If you find a log file, but can't get any clues from it, post it here. I've looked at more than I care to think about, so I may be able to help.
A: Before installing, drop to a command prompt and type
CHANGE USER /INSTALL
Then install your software. Once the install has completed, drop back to the command prompt and type:
CHANGE USER /EXECUTE
Alternatively, don't start the installation by a double click but instead go to Add/Remove Programs and select "install software" from there.
Good luck!
| {
"language": "en",
"url": "https://stackoverflow.com/questions/15109",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: .net Job Interview I have a job interview tomorrow for a .NET shop. For the past few years I have been developing in languages other than .NET and figure it is probably a good idea to brush up on what is cool and new in the world of .NET. I've been reading about LINQ and WPF but these are more technologies than trends. What else should I look at?
Been reading things like:
*
*http://msdn.microsoft.com/en-us/library/bb332048.aspx
*http://msdn.microsoft.com/en-us/library/ms754130.aspx
Edit
As it turns out this interview was high level and we didn't really get into much which was more .NET specific than generics.
A: This is completely language agnostic so you may want to skip over it, but I've based a lot of my practice and preparation for job interviews around Steve Yegge's getting a job at google post.
I use a lot of the topics there not only as an interview preparedness guide, but also as a list of things that I SHOULD know about. Admittedly I am still working my way through some of the books and exercises, but every little bit helps.
EDIT: I'm not sure it is necessarily a good thing to focus specifically on the latest trends in web development for job interviews. When I am interviewing someone, I am more impressed if they can write a recursive function to solve some problem or write a really cool algorithm than if they know all the details about some latest thing that is going to fix everything but is really just a buzzword.
A: Take this with a grain of salt, but in my experience, LINQ and WPF are still in the realm of "yeah we'd like to get into that someday".
Most shops are still on VS2005 and .NET 2.0, so I'd want to make sure I was up to speed on core facilities:
*
*generics
*ADO.NET
*WinForms / WebForms depending
And so forth.
A: It's probably a bit late to be looking tonight at code trends for an interview tomorrow.
Microsoft is currently busy doing what it has always done: me-too functionality, only better. New dynamically typed languages with a new language runtime and MVC are looking really promising.
With WPF and Expression they're creating different interfaces for UI developers and business logic developers to use. I'm not sure about that - I'd rather see Expression Blend as part of VS.
They're pushing open source more than they ever have - http://www.codeplex.com is getting busier. VS Express editions are an excellent route in to the technologies.
With their Team System they're pushing Agile methods more and more - they've even resolved them with more structured processes like CMMI.
-1? serves me right for starting with a sarcastic comment ;-(
How about: how to hack an interview?
A: As a student of many languages/frameworks, I can't stress enough that you shouldn't be concentrating on the whizz-bang latest and greatest stuff. It's a solid understanding of the tried and true programming principles (see design patterns, DRY principle, OOP conventions, etc.) and general familiarity with the framework that employers (and fellow developers) are looking for.
A: If you're doing web development, ASP.NET MVC and Silverlight (née WPF/e) come to mind as relatively recent trends.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/15124",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
} |
Q: Tool in Visual Studio 2008 for helping with Localization Does anyone have any recommendations of tools that can be of assistance with moving literal values into resource files for localization?
I've used a resharper plugin called RGreatX but was wondering if there is anything else out there.
It's one heck of a long manual process to move the strings across, and I think there must be a better way! RGreatX is OK but could be a bit slicker, I feel.
A: Here's one:
http://www.codeplex.com/ResourceRefactoring
It's actually a Microsoft "open source" Visual Studio (2005 and up) tool that integrates with the IDE. You can easily replace every occurrence of a string with a resource reference in a few clicks.
A: You may find Zeta Resource Editor useful too.
A: ReSharper itself (5.0+) now has support for localization which includes moving strings to resource files and highlighting localizabile strings.
A: Try Visual Localizer - you can batch-process whole code, select which strings may be localized and the tool will add them to a resource file and create a reference instead. Many other features easing localization are included.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/15133",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "15"
} |
Q: Building Standalone Applications in JavaScript With the increased power of JavaScript frameworks like YUI, JQuery, and Prototype, and debugging tools like Firebug, doing an application entirely in browser-side JavaScript looks like a great way to make simple applications like puzzle games and specialized calculators.
Is there any downside to this other than exposing your source code? How should you handle data storage for this kind of program?
Edit: yes, Gears and cookies can be used for local storage, but you can't easily get access to files and other objects the user already has around. You also can't save data to a file for a user without having them invoke some browser feature like printing to PDF or saving page as a file.
A: Another option for developing simple desktop like applications or games in JavaScript is Adobe AIR. You can build your app code in either HTML + JavaScript or using Flash/Flex or a combination of both. It has the advantage of being cross-platform (actually cross-platform, Linux, OS X, and Windows. Not just Windows and OS X).
Heck, it may be the only time in your career as a developer that you can write a web page and ONLY target ONE browser.
A: SproutCore is a wholly JavaScript-hosted application framework, borrowing concepts particularly from Cocoa (such as KVO) and Ruby on Rails (such as using a CLI generator for your models, views and controllers). It includes Prototype, but builds plenty of stuff such as sophisticated controls on top of that. Its Photos demo is arguably impressive (especially in Safari 3.1).
Greg already pointed you to Gears; in addition, HTML 5 will come with a standardized means of local storage. Safari 3.1 ships with an implementation where you have a per-site SQLite database with user-settable size maximums, as well as a built-in database browser with SQL querying. Unfortunately, it will be a long time until we can expect broad browser support. Until then, Gears is indeed an alternative (but not for Safari… yet!). For simpler storage, there is of course always cookies.
A: The downside to this would be that you are at the mercy of them having js enabled. I'm not sure that this is a big deal now. Virtually every browser supports js and has it enabled by default.
Of course the other downside would be performance. You are again at the mercy of the client handling all the intensive work. This also may not be that big of a deal, and would be dependent on the type of app you are building.
I've never used Gears, but it looks like it is worth a shot. The backup plan would be to run some server side script through ajax that dumps your data somewhere.
Not completely client side, but oh well.
A: Nihilogic (not my site) does a lot of stuff with Javascript. They even have several games that they've made in Javascript.
I've also seen a neat roguelike game made in Javascript. Unfortunately, I can't remember what it was called...
A: If you want to write a standalone JavaScript application, look at XULrunner. It's what Firefox is built on, but it is also built so that you can distribute it as an application runtime. You will write some of the interface in JavaScript and use JavaScript for your code.
A: I've written several application in JS including a spreadsheet.
Upside:
*
*great language
*short code-run-review cycle
*DOM manipulation is great for UI design
*clients on every computer (and phone)
Downside:
*
*differences between browsers (especially IE)
*code base scalability (with no intrinsic support for namespaces and classes)
*no good debuggers (especially, again, for IE)
*performance (even though great progress has been made with FireFox and Safari)
*You need to write some server code as well.
Bottom line: Go for it. I did.
A: Gears might provide the client-side persistent data storage you need. There isn't a terribly good way of not exposing your source code, though. You could obfuscate it but that only helps somewhat.
I've done simple apps like this for stuff like a Sudoku solver.
A: You might run into performance issues given that you're completely at the mercy of the client's Javascript interpreter. Gears would be a nice way of data storage, but I don't think it has penetrated the market that much. You could just use cookies if you're not fussy about that kind of thing.
A: Standalone games in GWT:
*
*http://gpokr.com/
*http://kdice.com/
A: I'm with ScottKoon here, Adobe AIR is great. I've really only made one really nice (imho) widget thus far, but I did so using jQuery and Prototype.js, which floored me in such wonderful ways because I didn't have to learn a whole new event model. Adobe AIR is really sweet, the memory footprint isn't too bad, upgrading to a new version is built into AIR so it's almost automatic, and best of all it's cross-platform...they even have an alpha version for Linux, but it works pretty well already on my Eee.
A: In regard to saving files from a javascript application:
I am really excited about the possibilities of client-side applications. Flash 10 introduced the ability to create files for save right in the browser. I thought it was super cool, so I built a javascript+flash component to wrap the saving feature. Right now it only works for creating text based files (vcard, ical, xml, html, css, etc.)
*
*Downloadify Home Page
*Source Code & Documentation on Github
*See It In Use at Starter for jQuery
I am looking to add support for non-text files soon, but this is a start.
A: My RSS feeds have served me well- I found that Javascript roguelike!
It's called The Tombs of Asciiroth.
A: Given that you're going to be writing some server code anyway, it makes sense to keep storage on the server for a lot of domains (address books, poker scores, gui configuration, etc.,.) For anything the size of what you'll get in Webkit or Gears, you can probably also keep it on your server.
The advantage of keeping it on your server is two-fold:
*
*You can integrate it fairly simply as a Model layer in a typical MVC framework, and,
*Users get a consistent view without being tied to their browser/PC, or in a less-than-ideal environment (Internet Cafés).
The server code for handling this can also be fairly trivial, particularly if it's written with this task in mind, so it's not a huge cognitive burden.
A: Go with qooxdoo. They recently released 1.0, although most users of it say it was ripe for 1.0 at least two versions ago.
I compared qooxdoo with YUI and Ext, and I think qooxdoo is the way to go for programmers - YUI isn't as polished as qooxdoo from a programmer's point of view, and Ext has a not-so-friendly licensing model.
A few of the strong points (for me) of qooxdoo are:
*
*extremely clean code
*the nicest OO programming model I've seen among Javascript frameworks
*an extremely rich UI widget library
It also features a test runner for unit tests, an API doc generator and reader, a logging facility, and several useful features for debugging, grouped under something called Inspector.
The only downside is that there aren't readymade themes (something like skins) for qooxdoo. But creating your own theme is quite easy.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/15139",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "23"
} |
Q: What are the pros and cons to keeping SQL in Stored Procs versus Code What are the advantages/disadvantages of keeping SQL in your C# source code or in Stored Procs? I've been discussing this with a friend on an open source project that we're working on (C# ASP.NET Forum). At the moment, most of the database access is done by building the SQL inline in C# and calling to the SQL Server DB. So I'm trying to establish which, for this particular project, would be best.
So far I have:
Advantages for in Code:
*
*Easier to maintain - don't need to run a SQL script to update queries
*Easier to port to another DB - no procs to port
Advantages for Stored Procs:
*
*Performance
*Security
A: This is being discussed on a few other threads here currently. I'm a consistent proponent of stored procedures, although some good arguments for Linq to Sql are being presented.
Embedding queries in your code couples you tightly to your data model. Stored procedures are a good form of contractual programming, meaning that a DBA has the freedom to alter the data model and the code in the procedure, so long as the contract represented by the stored procedure's inputs and outputs is maintained.
Tuning production databases can be extremely difficult when the queries are buried in the code and not in one central, easy to manage location.
[Edit] Here is another current discussion
A: I like stored procs. I don't know how many times I was able to make a change to an application via a stored procedure without producing any downtime for the application.
I'm a big fan of Transact-SQL; tuning large queries has proven very useful for me. I haven't written any inline SQL in about 6 years!
A: You list 2 pro-points for sprocs:
Performance - not really. In Sql 2000 or greater the query plan optimisations are pretty good, and cached. I'm sure that Oracle etc do similar things. I don't think there's a case for sprocs for performance any more.
Security? Why would sprocs be more secure? Unless you have a pretty unsecured database anyway all the access is going to be from your DBAs or via your application. Always parametrise all queries - never inline something from user input and you'll be fine.
That's best practice for performance anyway.
Linq is definitely the way I'd go on a new project right now. See this similar post.
A: @Keith
Security? Why would sprocs be more secure?
As suggested by Komradekatz, you can disallow access to tables (for the username/password combo that connects to the DB) and allow SP access only. That way if someone gets the username and password to your database they can execute SP's but can't access the tables or any other part of the DB.
(Of course executing sprocs may give them all the data they need but that would depend on the sprocs that were available. Giving them access to the tables gives them access to everything.)
A: Think of it this way
You have 4 webservers and a bunch of windows apps which use the same SQL code
Now you realize there is a small problem with the SQL code
so would you rather...
change the proc in 1 place
or
push the code to all the webservers, reinstall all the desktop apps (ClickOnce might help) on all the windows boxes
I prefer stored procs
It is also easier to do performance testing against a proc, put it in query analyzer
set statistics io/time on
set showplan_text on and voila
no need to run profiler to see exactly what is being called
just my 2 cents
A: I prefer keeping in them in code (using an ORM, not inline or ad-hoc) so they're covered by source control without having to deal with saving out .sql files.
Also, stored procedures aren't inherently more secure. You can write a bad query with a sproc just as easily as inline. Parameterized inline queries can be just as secure as a sproc.
A: Use your app code as what it does best: handle logic.
User your database for what it does best: store data.
You can debug stored procedures, but you will find it easier to debug and maintain logic in code.
Usually you will end up recompiling your code every time you change the database model anyway.
Also, stored procedures with optional search parameters are very inefficient, because you have to specify all the possible parameters in advance, and complex searches are sometimes not possible because you can't predict how many times a parameter is going to be repeated in the search.
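To make that concrete, here is a minimal sketch of the dynamic-but-parametrised alternative (table and column names are hypothetical); only the filters the caller actually supplied end up in the statement:

using System.Data.SqlClient;
using System.Text;

SqlCommand BuildSearch(SqlConnection conn, string name, int? minAge)
{
    var sql = new StringBuilder("SELECT * FROM People WHERE 1 = 1");
    var cmd = new SqlCommand { Connection = conn };
    if (name != null)
    {
        // Each optional condition is appended as a parametrised clause,
        // so the optimiser never sees unused branches.
        sql.Append(" AND Name = @name");
        cmd.Parameters.AddWithValue("@name", name);
    }
    if (minAge.HasValue)
    {
        sql.Append(" AND Age >= @minAge");
        cmd.Parameters.AddWithValue("@minAge", minAge.Value);
    }
    cmd.CommandText = sql.ToString();
    return cmd;
}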
A: When it comes to security, stored procedures are much more secure. Some have argued that all access will be through the application anyway. The thing that many people are forgetting is that most security breaches come from inside a company. Think about how many developers know the "hidden" user name and password for your application?
Also, as MatthieuF pointed out, performance can be much improved due to fewer round trips between the application (whether it's on a desktop or web server) and the database server.
In my experience the abstraction of the data model through stored procedures also vastly improves maintainability. As someone who has had to maintain many databases in the past, it's such a relief when confronted with a required model change to be able to simply change a stored procedure or two and have the change be completely transparent to ALL outside applications. Many times your application isn't the only one pointed at a database - there are other applications, reporting solutions, etc. so tracking down all of those affected points can be a hassle with open access to the tables.
I'll also put checks in the plus column for putting the SQL programming in the hands of those who specialize in it, and for SPs making it much easier to isolate and test/optimize code.
The one downside that I see is that many languages don't allow the passing of table parameters, so passing an unknown number of data values can be annoying, and some languages still can't handle retrieving multiple resultsets from a single stored procedure (although the latter doesn't make SPs any worse than inline SQL in that respect).
A: In my opinion you can't vote for yes or no on this question. It totally depends on the design of your application.
I totally vote against the use of SPs in a 3-tier environment, where you have an application server in front. In this kind of environment your application server is there to run your business logic. If you additionally use SPs, you start distributing your implementation of business logic all over your system, and it will become very unclear who is responsible for what. Eventually you will end up with an application server that will basically do nothing but the following:
(Pseudocode)
Function createOrder(Order yourOrder)
Begin
Call SP_createOrder(yourOrder)
End
So in the end you have your middle tier running on this very cool 4-server cluster, each of them equipped with 16 CPUs, and it will actually do nothing at all! What a waste!
If you have a fat gui client that directly connects to your DB or maybe even more applications it's a different story. In this situation SPs can serve as some sort of pseudo middle tier that decouples your application from the data model and offers a controllable access.
A:
Advantages for in Code:
*
*Easier to maintain - don't need to run a SQL script to update queries
*Easier to port to another DB - no procs to port
Actually, I think you have that backwards. IMHO, SQL in code is a pain to maintain because:
*
*you end up repeating yourself in related code blocks
*SQL isn't supported as a language in many IDEs, so you have just a series of un-error-checked strings performing tasks for you
*changes in a data type, table name or constraint are far more prevalent than swapping out an entire database for a new one
*your level of difficulty increases as your query grows in complexity
*and testing an inline query requires building the project
Think of Stored Procs as methods you call from the database object - they are much easier to reuse, there is only one place to edit and in the event that you do change DB providers, the changes happen in your Stored Procs and not in your code.
That said, the performance gains of stored procs are minimal, as Stu said before me, and you can't put a break point in a stored procedure (yet).
A: One of the suggestions from a Microsoft TechEd session on security which I attended was to make all calls through stored procs and deny access directly to the tables. This approach was billed as providing additional security. I'm not sure if it's worth it just for security, but if you're already using stored procs, it couldn't hurt.
A: Definitely easier to maintain if you put it in a stored procedure. If there's difficult logic involved that will potentially change in the future it is definitely a good idea to put it in the database when you have multiple clients connecting. For example I'm working on an application right now that has an end user web interface and an administrative desktop application, both of which share a database (obviously) and I'm trying to keep as much logic on the database as possible. This is a perfect example of the DRY principle.
A: I'm firmly on the side of stored procs, assuming you don't cheat and use dynamic SQL in the stored proc. First, using stored procs allows the dba to set permissions at the stored proc level and not the table level. This is critical not only to combating SQL injection attacks but towards preventing insiders from directly accessing the database and changing things. This is a way to help prevent fraud. No database that contains personal information (SSNs, credit card numbers, etc) or that in any way creates financial transactions should ever be accessed except through stored procedures. If you use any other method you are leaving your database wide open for individuals in the company to create fake financial transactions or steal data that can be used for identity theft.
Stored procs are also far easier to maintain and performance tune than SQL sent from the app. They also allow the dba a way to see what the impact of a database structural change will have on the way the data is accessed. I've never met a good dba who would allow dynamic access to the database.
A: We use stored procedures with Oracle DB's where I work now. We also use Subversion. All the stored procedures are created as .pkb & .pks files and saved in Subversion. I've done in-line SQL before and it is a pain! I much prefer the way we do it here. Creating and testing new stored procedures is much easier than doing it in your code.
Theresa
A: CON
I find that doing lots of processing inside stored procedures would make your DB server a single point of inflexibility when it comes to scaling your act.
However, doing all that crunching in your program as opposed to the sql-server might allow you to scale more if you have multiple servers that run your code. Of course this does not apply to stored procs that only do the normal fetch or update, but to ones that perform more processing like looping over datasets.
PROS
*
*Performance for what it may be worth (avoids query parsing by DB driver / plan recreation etc)
*Data manipulation is not embedded in the C/C++/C# code which means I have less low level code to look through. SQL is less verbose and easier to look through when listed separately.
*Due to the separation folks are able to find and reuse SQL code much easier.
*It's easier to change things when the schema changes - you just have to give the same output to the code and it will work just fine
*Easier to port to a different database.
*I can list individual permissions on my stored procedures and control access at that level too.
*I can profile my data query/persistence code separately from my data transformation code.
*I can implement changeable conditions in my stored procedure and it would be easy to customize at a customer site.
*It becomes easier to use some automated tools to convert my schema and statements together rather than when it is embedded inside my code where I would have to hunt them down.
*Ensuring best practices for data access is easier when you have all your data access code inside a single file - I can check for queries that access a non-performant table, that use a higher level of serialization, that do select *'s, etc.
*It becomes easier to find schema changes / data manipulation logic changes when all of it is listed in one file.
*It becomes easier to do search and replace edits on SQL when they are in the same place e.g. change / add transaction isolation statements for all stored procs.
*I and the DBA guy find that having a separate SQL file is easier / convenient when the DBA has to review my SQL stuff.
*Lastly, you don't have to worry about SQL injection attacks because some lazy member of your team did not use parametrized queries when using embedded SQL.
A: Smaller logs
Another minor pro for stored procedures that has not been mentioned: when it comes to SQL traffic, sp-based data access generates much less traffic. This becomes important when you monitor traffic for analysis and profiling - the logs will be much smaller and readable.
A: I'm not a big fan of stored procedures, but I use them in one condition:
When the query is pretty huge, it's better to store it in the database as a stored procedure instead of sending it from the code. That way, instead of sending huge amounts of string characters from the application server to the database, only the "EXEC SPNAME" command will be sent.
Sending the full query text is overkill when the database server and the web server are not on the same network (for example, internet communication). And even if that's not the case, all that extra traffic means a lot of wasted bandwidth.
But man, they're so terrible to manage. I avoid them as much as I can.
A: A SQL stored proc doesn't increase the performance of the query
A: Well obviously using stored procedures has several advantages over constructing SQL in code.
*
*Your code implementation and SQL become independent of each other.
*Code is easier to read.
*Write once use many times.
*Modify once
*No need to give internal details to the programmer about the database, etc., etc.
A: The performance advantage for stored procedures is often negligible.
More advantages for stored procedures:
*
*Prevent reverse engineering (if created With Encryption, of course)
*Better centralization of database access
*Ability to change data model transparently (without having to deploy new clients); especially handy if multiple programs access the same data model
A: Stored Procedures are MORE maintainable because:
*
*You don't have to recompile your C# app whenever you want to change some SQL
*You end up reusing SQL code.
Code repetition is the worst thing you can do when you're trying to build a maintainable application!
What happens when you find a logic error that needs to be corrected in multiple places? You're more apt to forget to change that last spot where you copy & pasted your code.
In my opinion, the performance & security gains are an added plus. You can still write insecure/inefficient SQL stored procedures.
Easier to port to another DB - no procs to port
It's not very hard to script out all your stored procedures for creation in another DB. In fact - it's easier than exporting your tables because there are no primary/foreign keys to worry about.
A: @Terrapin - sprocs are just as vulnerable to injection attacks. As I said:
Always parametrise all queries - never inline something from user input and you'll be fine.
That goes for sprocs and dynamic Sql.
I'm not sure not recompiling your app is an advantage. I mean, you would have run your unit tests against that code (both application and DB) before going live again anyway.
@Guy - yes you're right, sprocs do let you control application users so that they can only perform the sproc, not the underlying action.
My question would be: if all the access is through your app, using connections and users with limited rights to update/insert etc, does this extra level add security or extra administration?
My opinion is very much the latter. If they've compromised your application to the point where they can re-write it they have plenty of other attacks they can use.
Sql injections can still be performed against those sprocs if they dynamically inline code, so the golden rule still applies, all user input must always be parametrised.
A: Something that I haven't seen mentioned thus far: the people who know the database best aren't always the people that write the application code. Stored procedures give the database folks a way to interface with programmers that don't really want to learn that much about SQL. Large--and especially legacy--databases aren't the easiest things to completely understand, so programmers might just prefer a simple interface that gives them what they need: let the DBAs figure out how to join the 17 tables to make that happen.
That being said, the languages used to write stored procedures (PL/SQL being a notorious example) are pretty brutal. They typically don't offer any of the niceties you'd see in today's popular imperative, OOP, or functional languages. Think COBOL.
So, stick to stored procedures that merely abstract away the relational details rather than those that contain business logic.
A: I am a huge supporter of code over SPROC's. The number one reason is keeping the code tightly coupled, and a close second is the ease of source control without a lot of custom utilities to pull it in.
In our DAL if we have very complex SQL statements, we generally include them as resource files and update them as needed (this could be a separate assembly as well, and swapped out per db, etc...).
This keeps our code and our sql calls stored in the same version control, without "forgetting" to run some external applications for updating.
A: I generally write OO code. I suspect that most of you probably do, too. In that context, it seems obvious to me that all of the business logic - including SQL queries - belongs in the class definitions. Splitting up the logic such that part of it resides in the object model and part is in the database is no better than putting business logic into the user interface.
Much has been said in earlier answers about the security benefits of stored procs. These fall into two broad categories:
1) Restricting direct access to the data. This definitely is important in some cases and, when you encounter one, then stored procs are pretty much your only option. In my experience, such cases are the exception rather than the rule, however.
2) SQL injection/parametrized queries. This objection is a red herring. Inline SQL - even dynamically-generated inline SQL - can be just as fully parametrized as any stored proc and it can be done just as easily in any modern language worth its salt. There is no advantage either way here. ("Lazy developers might not bother with using parameters" is not a valid objection. If you have developers on your team who prefer to just concatenate user data into their SQL instead of using parameters, you first try to educate them, then you fire them if that doesn't work, just like you would with developers who have any other bad, demonstrably detrimental habit.)
A: I am not a fan of stored procedures
Stored Procedures are MORE maintainable because:
* You don't have to recompile your C# app whenever you want to change some SQL
You'll end up recompiling it anyway when datatypes change, or you want to return an extra column, or whatever. The number of times you can 'transparently' change the SQL out from underneath your app is pretty small on the whole
*
*You end up reusing SQL code.
Programming languages, C# included, have this amazing thing, called a function. It means you can invoke the same block of code from multiple places! Amazing! You can then put the re-usable SQL code inside one of these, or if you want to get really high tech, you can use a library which does it for you. I believe they're called Object Relational Mappers, and are pretty common these days.
Code repetition is the worst thing you can do when you're trying to build a maintainable application!
Agreed, which is why storedprocs are a bad thing. It's much easier to refactor and decompose (break into smaller parts) code into functions than SQL into... blocks of SQL?
You have 4 webservers and a bunch of windows apps which use the same SQL code Now you realized there is a small problem with the SQl code so do you rather...... change the proc in 1 place or push the code to all the webservers, reinstall all the desktop apps(clickonce might help) on all the windows boxes
Why are your windows apps connecting directly to a central database? That seems like a HUGE security hole right there, and bottleneck as it rules out server-side caching. Shouldn't they be connecting via a web service or similar to your web servers?
So, push 1 new sproc, or 4 new webservers?
In this case it is easier to push one new sproc, but in my experience, 95% of 'pushed changes' affect the code and not the database. If you're pushing 20 things to the webservers that month, and 1 to the database, you hardly lose much if you instead push 21 things to the webservers, and zero to the database.
More easily code reviewed.
Can you explain how? I don't get this. Particularly seeing as the sprocs probably aren't in source control, and therefore can't be accessed via web-based SCM browsers and so on.
More cons:
Storedprocs live in the database, which appears to the outside world as a black box. Simple things like wanting to put them in source control become a nightmare.
There's also the issue of sheer effort. It might make sense to break everything down into a million tiers if you're trying to justify to your CEO why it just cost them 7 million dollars to build some forums, but otherwise creating a storedproc for every little thing is just extra donkeywork for no benefit.
A: I fall on the code side. We build a data access layer that's used by all the apps (both web and client), so it's DRY from that perspective. It simplifies database deployment because we just have to make sure the table schemas are correct. It simplifies code maintenance because we don't have to look in both the source code and the database.
I don't have much problem with the tight coupling with the data model because I don't see where it's possible to really break that coupling. An application and its data are inherently coupled.
A: Stored procedures.
If an error slips or the logic changes a bit, you do not have to recompile the project. Plus, it allows access from different sources, not just the one place you coded the query in your project.
I don't think it is harder to maintain stored procedures, you should not code them directly in the database but in separate files first, then you can just run them on whatever DB you need to set-up.
A: Advantages for Stored procedures:
More easily code reviewed.
Less coupled, therefore more easily tested.
More easily tuned.
Performance is generally better, from the point of view of network traffic - if you have a cursor, or similar, then there aren't multiple trips to the database
You can protect access to the data more easily, remove direct access to the tables, enforce security through the procs - this also allows you to find relatively quickly any code that updates a table.
If there are other services involved (such as Reporting services), you may find it easier to store all of your logic in a stored procedure, rather than in code, and having to duplicate it
Disadvantages:
Harder to manage for the developers: version control of the scripts: does everyone have their own database, is the version control system integrated with the database and IDE?
A: In some circumstances, dynamically created sql in code can have better performance than a stored proc. If you have created a stored proc (let's say sp_customersearch) that gets extremely complicated with dozens of parameters because it must be very flexible, you can probably generate a much simpler sql statement in code at runtime.
One could argue that this simply moves some processing from SQL to the web server, but in general that would be a good thing.
The other great thing about this technique is that if you're looking in SQL profiler you can see the query you generated and debug it much easier than seeing a stored proc call with 20 parameters come in.
A: @Keith
Security? Why would sprocs be more secure?
Stored procedures offer inherent protection from SQL Injection attacks.
However, you're not completely protected because you can still write stored procedures that are vulnerable to such attacks (i.e. dynamic SQL in a stored proc).
A: SQL injection attacks are on the upswing. It's very easy for someone to find this code and run injection attacks on your website. You must always always parameterize your queries. It's best to never run exec(@x) on a dynamic SQL query. It's just not a great idea to use inline SQL ever, IMO.
Stored Procedures, as argued by some, are a hassle because they are another set of items to maintain separate from your code. But they are reusable, and if you end up finding a bug in your queries, you can fix them without recompiling.
A: I'd like to cast another vote for using stored procs (despite the hassle they can introduce when it comes to maintenance and versioning) as a way to restrict direct access to the underlying tables for better security.
A: Stored procedures are the worst when they are used to stand in-between applications and the database. Many of the reasons for their use stated above are better handled by views.
The security argument is spurious. It just moves the security problem from the application to the database. Code is code. I have seen stored procedures that take in SQL from the applications and use it to build queries that were subject to SQL injection attacks.
In general, they tend to create a rift between so-called database developers and so-called application developers. In reality, all of the code that is written is application code, it is only a difference of the execution context.
Using rich SQL generation libraries like LINQ, Rails ActiveRecord, or Hibernate/NHibernate makes development faster. Inserting stored procedures in the mix slows it down.
A: I prefer to use an O/R Mapper such as LLBLGen Pro.
It gives you relatively painless database portability, allows you to write your database access code in the same language as your application using strongly-typed objects and, in my opinion, allows you more flexibility in how you work with the data that you pull back.
Actually, being able to work with strongly-typed objects is reason enough to use an O/R Mapper.
A: My vote for stored procedures; as an abstraction layer close to the data, efficient with sets, reusable by many "clients" (client languages). The T-SQL language is a bit primitive (and I guess that's what most of the C# guys here at SO have been exposed to), but Oracle's PL/SQL is on par with any modern programming language.
As for version control, just put the stored procedure code in text files under version control, then run the scripts to create the procs in the database.
A: One point I did not find in the other answers is the following:
If in your environment the database and its schema are the heart of the architecture and applications have a more satellite role then it may make sense to make heavier use of stored procedures, which may help provide a level base for all the applications that need to access the DB, and thus induce less code repetition (e.g. are you sure that all your DB accessing applications will always be written in C# or other .NET languages?).
If, on the other hand, the application has a more central role and the DB acts more as a backing store for the application, then it may be sensible to make less use of stored procedures and achieve reduced code repetition by providing a common persistence layer, possibly based on an ORM tool/framework.
In both cases it's important that the DB is not considered a convenient repository for stored procedures. Keep them in source files within a version control system and try and automate their deployment as much as possible (this is actually valid for all schema related artifacts).
A: Nobody mentioned unit testing!
If you have a saveOrder method you can call several methods inside and create a unit test for each one of those, but if you are only calling a stored procedure there is no way to do that.
A: I have yet to find a good way of easily maintaining stored procs in source control that makes it as seamless as the code base. It just doesn't happen. This alone makes putting the SQL in your code worthwhile for me. Performance differences are negligible on modern systems.
A: I prefer stored procedures because:
- they enable fixing some data-related issues in production while the system is running (this is number one for me)
- clean contract definition between DB and program (clean separation of concerns)
- better portability to a different DB vendor (if written well, the code change is usually only on the SP side)
- better positioned for performance tuning
Cons: problematic in case the WHERE clause has great variation in the conditions used and high performance is needed.
A: Pros to stored procedures
1). Improved security, as the SQL in a stored procedure is static in nature (mostly). This will protect against SQL injection.
2). Reusability. If there is a need to return the same data for multiple applications/components, this may be a better choice instead of repeating the SQL statements.
3). Reduces calls between client and database server.
I am not sure about other databases, but you can create stored procedures in host languages in DB2 on the mainframe, which makes them very powerful.
A: Foot firmly in the "Stored Procs are bad for CRUD/business logic use" camp. I understand the need in reporting, data import, etc
Write up here...
A: Programmers want the code in their app. DBA's want it in the database.
If you have both, you can divide the work between the two by using stored procedures and the programmers don't have to worry about how all those tables join together etc. (Sorry, I know you want to be in control of everything.).
We have a 3rd party application that allows custom reports to be created on a View or Stored Procedure in the database. If I put all of my logic in the code in another application, I could not reuse it. If you are in a situation where you write all of the apps using the database, this isn't a problem.
A: Stored procedures can go out of sync between database and source control system more easily than code. The application code can too, but it's less likely when you have continuous integration.
Database being what it is, people inevitably make changes to production databases, just to get out of the woods for the moment. Then forget to sync it across the environments and source control system. Sooner or later, production db becomes the de facto record rather than the source control system - you get into a situation where you cannot remove any sprocs, because you don't know whether it's being used.
A good process should only allow changes to production through proper channels, so that you should be able to rebuild a database from scratch from the source control system (sans data). But I'm just saying it can be done and does get done - changes are made to the production database in the heat of the moment, between calls from yelling clients, managers breathing down your neck, etc.
Running ad-hoc queries is awkward with stored procedures - it's easier done with dynamic sql (or ORM), which may be the biggest drawback to using stored procedures for myself.
Stored procedures, on the other hand, are nice in situations where you can make a change that doesn't require re-deployment of app code. They also allow you to shape your data before sending it over the network, where sql in code might have to make multiple calls to retrieve and then shape it (although there are now ways to run multiple sql statements and return multiple result sets in a single "call", as in MARS in ADO.NET), resulting in probably more data travelling through your network.
I don't buy any other arguments regarding performance and security though. Either can be good or bad, and equally controlled.
A: Your programming language and application framework are likely:
*
*high-level, especially as compared with SQL
*easy to version and deploy via automated processes, especially as compared with SQL
If these two conditions are true, then skip the stored procedures.
A: The biggest advantage of sprocs in the place I work is that we have way less code to port to VB.NET (from VB6) when the time comes. And it's WAY less code because we use sprocs for all our queries.
It also helps a lot when we need to update the queries: we do that instead of updating the VB code, recompiling and reinstalling it on all computers.
A: For Microsoft SQL Server you should use stored procedures wherever possible to assist with execution plan caching and reuse. Why do you want to optimise plan re-use? Because the generation of execution plans is fairly expensive to do.
Although the caching and reuse of execution plans for ad-hoc queries has improved significantly in later editions of SQL server (especially 2005 and 2008) there are still far fewer issues with plan reuse when dealing with stored procedures than there are for ad-hoc queries. For example, SQL server will only re-use an execution plan if the plan text matches exactly - right down to comments and white space, for example, if each of the following lines of SQL were to be executed independently, none of them would use the same execution plan:
SELECT MyColumn FROM MyTable WHERE id = @id
select MyColumn from MyTable WHERE id = @id
SELECT  MyColumn  FROM  MyTable  WHERE  id = @id
SELECT MyColumn FROM MyTable WHERE id = @id -- "some comment"
SELECT MyColumn FROM MyTable WHERE id = @id -- "some other comment"
On top of this, if you don't explicitly specify the types of your parameters then there is a good chance that SQL Server might get it wrong. For example, if you executed the above query with the input 4, then SQL Server will parametrise the query with @id as a SMALLINT (or possibly a TINYINT), and so if you then execute the same query with an @id of say 4000, SQL Server will parametrise it as an INT and won't reuse the same cached plan.
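A minimal sketch of guarding against that from ADO.NET (assuming SQL Server; the query is the one above):

using System.Data;
using System.Data.SqlClient;

object GetMyColumn(SqlConnection conn, int id)
{
    using (var cmd = new SqlCommand(
        "SELECT MyColumn FROM MyTable WHERE id = @id", conn))
    {
        // Declaring the parameter type explicitly keeps the parametrised
        // statement stable, so the same cached plan is reused whether
        // the value is 4 or 4000.
        cmd.Parameters.Add("@id", SqlDbType.Int).Value = id;
        return cmd.ExecuteScalar();
    }
}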
I think there are also some other issues, and in honesty most of them can probably be worked around - especially with later editions of SQL Server, but stored procedures generally offer you more control.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/15142",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "274"
} |
Q: Prevent a TreeView from firing events in VB6? In some VB6 code, I have a handler for a TreeView's Collapse event:
Private Sub MyTree_Collapse(ByVal Node as MSComCtlLib.Node)
This is called whenever a node in the tree is collapsed, whether by the user or programmatically. As it turns out, through some roundabout execution, it may happen that this handler will wind up telling a node to collapse, leading to infinite recursion.
I can think of multiple ways to skin this cat, but what seems simplest to me is to tell the TreeView not to raise events for some period of time. I can't find a simple call to let me do this, though. Has anyone successfully done this, or do I need to keep track of state in some other manner so I can respond appropriately when recursive events come along?
A: @Phil - I came to the same conclusion. My implementation of MyTree_Collapse now looks something like this (where m_bHandlingCallback is a member variable):
Private Sub MyTree_Collapse(ByVal Node as MSComCtlLib.Node)
If m_bHandlingCallback Then Exit Sub
m_bHandlingCallback = True
DoSomeStuff
m_bHandlingCallback = False
End Sub
A: Another way in VB6 is to have an alternate WithEvents reference to the control:
Private WithEvents alt as TreeView
and in Form_Load:
Private Sub Form_Load()
Set alt = MyTree
End Sub
Now alt will receive lots of events like this:
Private Sub alt_Collapse(ByVal Node as MSComCtlLib.Node)
Set alt = Nothing
'DoSomeStuff'
Set alt = MyTree
End Sub
But, during the DoSomeStuff, the events are unhooked - which also applies to all other event Subs for alt without the need for them to have intrusive changes.
A: I think that like many events in VB, it can't be switched off.
Just set a boolean flag as you've suggested.
A: I would declare the flag variable as STATIC in the Sub. This avoids making the variable global and makes it keep its value between calls.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/15163",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: User script location linux (debian etch) In the linux file system, where should user scripts be placed?
I'm thinking specifically python scripts to be called by cron.
A: For whom it interests, the Filesystem Hierarchy Standard (FHS) is a standards document and still a very good read. It describes the foundation for almost any Linux distribution and is officially endorsed e.g. by Debian and the Linux Standards Base (LSB).
You won't find any definitive answer for that question, though, since ... it's not defined ;-). The only thing I can say: don't put it in /bin (nor in /usr/bin). /usr/local/scripts is unusual as well. $HOME/bin seems to be an acceptable place, iff the script is only used by this single user.
A: I'm a strong believer that if a file is made by a user, it goes in his user directory (/home/username); if he didn't make it, then it gets more complicated. I have in the past just put them in /usr/local/bin, /bin, or /usr/local/scripts. I'm not sure about etch, but you need to check to make sure that /usr/local/scripts is actually in cron's $PATH.
A: The Debian guide can be quite useful when it comes to Ubuntu:
Normally, programs install themselves in the /usr/local subdirectories. But, Debian packages must not use that directory, since it is reserved for system administrator's (or user's) private use
/usr/local/bin seems to be acceptable according to the guide.
Personally I put my scripts in $HOME/.scripts.
I wish that LSB would specifically address this question though.
A: The information I got:
/usr/local/sbin - custom scripts meant for root
/usr/local/bin - custom scripts meant for all users, including non-root
chatlog snips from irc.debian.org #debian:
(02:48:49) c33s: question: where is the _correct_ location, to put custom scripts
for the root user (like a script on a webserver for createing everything needed
for a new webuser)? is it /bin, /usr/local/bin,...? /usr/local/scripts is
mentioned in (*link to this page*)
(02:49:15) Hydroxide: c33s: typically /usr/local/sbin
(02:49:27) Hydroxide: c33s: no idea what /usr/local/scripts would be
(02:49:32) Hydroxide: it's nonstandard
(02:49:53) Hydroxide: if it's a custom script meant for all users including
non-root, then /usr/local/bin
(02:52:43) Hydroxide: c33s: Debian follows the Filesystem Hierarchy Standard,
with a very small number of exceptions, which is online in several formats at
http://www.pathname.com/fhs/ (also linked from http://www.debian.org/devel/ and
separately online at http://www.debian.org/doc/packaging-manuals/fhs/fhs-2.3.html)
(02:53:03) Hydroxide: c33s: if you have the debian-policy package installed, it's
also in several formats at /usr/share/doc/debian-policy/fhs/ on your system
(02:53:37) Hydroxide: c33s: most linux distributions follow that standard, though
usually less strictly and with more deviations than Debian.
thanks go out to Hydroxide
A: How about /home/username/bin?
Add ~/bin to $PATH and make the script executable with chmod +x filename.
A: Personally, I prefer
/home/username/.bin
This way the bin folder is hidden, but you can still add it to the PATH and execute all the scripts inside that have the x-bit set.
I like my home directory to be clean (at first glance) with very few folders.
A: If you're talking about scripts created by a user that will be run from that users crontab, I typically put those in either a bin or scripts folder in the home directory, or if they're intended to be shared between users, a /usr/local/scripts directory.
A: You can also add paths to your crontab file as shown in a previous cron-related question.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/15171",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "34"
} |
Q: How to gauge the quality of a software product I have a product, X, which we deliver to a client, C, every month (including bugfixes, enhancements, new development, etc.). Each month, I am asked to err "guarantee" the quality of the product.
For this we use a number of statistics garnered from the tests that we do, such as:
*
*reopen rate (number of bugs reopened/number of corrected bugs tested)
*new bug rate (number of new, including regressions, bugs found during testing/number of corrected bugs tested)
*for each new enhancement, the new bug rate (the number of bugs found for this enhancement/number of mandays)
and various other figures.
It is impossible, for reasons we shan't go into, to test everything every time.
So, my question is:
How do I estimate the number and type of bugs that remain in my software?
What testing strategies do I have to follow to make sure that the product is good?
I know this is a bit of an open question, but hey, I also know that there are no simple solutions.
Thanks.
A: The question is who requires you to provide the stats.
If it's non-technical people, fake the stats. By "fake", I mean "provide any inevitably meaningless, but real numbers" of the kind you mentioned.
If it's technical people without a CS background, they ought to be told about the halting problem, which is undecidable and is simpler than counting and classifying the remaining bugs.
There's a lot of metrics and tools regarding software quality (code coverage, cyclomatic complexity, coding guidelines and tools enforcing them, etc.). In practice, what works is automating as much tests as possible, having human testers do as many tests that weren't automated as possible, and then pray.
A: I don't think you can ever really estimate the number of bugs in your app. Unless you use a language and process that allows formal proofs, you can never really be sure. Your time is probably better spent setting up processes to minimize bugs than trying to estimate how many you have.
One of the most important things you can do is have a good QA team and good work item tracking. You may not be able to do full regression testing every time, but if you have a list of the changes you've made to the app since the last release, then your QA people (or person) can focus their testing on the parts of the app that are expected to be affected.
Another thing that would be helpful is unit tests. The more of your codebase you have covered, the more confident you can be that changes in one area didn't inadvertently affect another area. I've found this quite useful: sometimes I'll change something and forget that it would affect another part of the app, and the unit tests show the problem right away. Passed unit tests won't guarantee that you haven't broken anything, but they can help increase confidence that the changes you make are working.
Also, this is a bit redundant and obvious, but make sure you have good bug tracking software. :)
A: I think keeping it simple is the best way to go. Categorize your bugs by severity, and address them in order of decreasing severity.
This way you can hand over the highest-quality build possible (the number of significant bugs remaining is how I would gauge the quality of the product, as opposed to some complex statistics).
A: Most of the agile methodologies address this dilemma pretty clearly. You can't test everything, and neither can you test it an infinite number of times before you release. So the procedure is to rely on the risk and likelihood of the bug. Both risk and likelihood are numerical values. The product of the two gives you an RPN number. If the number is less than 15 you ship a beta. If you can bring it down to less than 10 you ship the product and push the bug to be fixed in a future release.
How to calculate risk?
If it's a crash then it's a 5
If it's a crash but you can provide a workaround then it's a number less than 5.
If the bug reduces the functionality then it's a 4
How to calculate likelihood?
If you can reproduce it every time you run, it's a 5.
If the workaround provided still causes it to crash then it's less than 5.
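As an illustrative sketch in C# of the triage rule described above (the thresholds are the ones given; everything else is hypothetical):

// Risk and likelihood are each scored on the 1-5 scale above;
// their product is the RPN used to gate the release.
static string ReleaseDecision(int risk, int likelihood)
{
    int rpn = risk * likelihood;  // e.g. risk 4 x likelihood 3 = RPN 12
    if (rpn < 10) return "ship the product; fix the bug in a future release";
    if (rpn < 15) return "ship a beta";
    return "fix before shipping";
}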
Well, I am curious to know whether anyone else is using this scheme, and eager to hear about their mileage with it.
A: How long is a piece of string? Ultimately, what makes a quality product? Bugs give some indication, yes, but many other factors are involved. Unit test coverage is a key factor IMO. But in my experience the main factor that affects whether a product can be deemed quality or not is a good understanding of the problem that is being solved. Often what happens is that the 'problem' the product is meant to solve is not understood correctly, and developers end up inventing the solution to a problem they have fleshed out in their heads, not the real problem; thus 'bugs' are made. I am a strong proponent of iterative Agile development; that way the product is constantly assessed against the 'problem' and the product does not stray too far from its goal.
A: The questions I heard were: how do I estimate the bugs in my software? And what techniques do I use to ensure the quality is good?
Rather than go through a full course, here are a couple approaches.
How do I estimate the bugs in my software?
Start with the history: you know how many bugs you found during testing (hopefully) and you know how many were found after the fact. You can use that to estimate how efficient you are at finding bugs (DDR - Defect Detection Rate is one name for this). If you can show that for some consistent time period your DDR is consistent (or improving), you can provide some insight into the quality of the release by estimating the number of post-release defects that will be found once the product is released.
What techniques do I use to ensure the quality is good?
Root cause analysis on your bugs will point you to specific components that are buggy, specific developers that create buggy code, the fact that lacking full requirements results in implementation not matching expectations, etc.
Project review meetings quickly identify what was good, so those things can be repeated, and what was bad, so you can find a way not to do those things again.
Hopefully, these give you a good start. Good Luck!
A: It seems the consensus is that the emphasis should be placed on unit testing. Bug tracking is a good indicator of product quality, but it is only as accurate as your test team. If you employ unit testing, it gives you a measurable metric of code coverage and provides regression testing, so you can be assured you didn't break anything since last month.
My company relies on system/integration level testing. I see a lot of defects being introduced because there is a lack of regression testing. I think "bugs" where the developer's implementation of the requirements deviates from the user's vision are sort of a separate problem that, as Dan and rptony stated, is best addressed by Agile methodologies.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/15176",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10"
} |
Q: Writing Emacs extensions in languages other than Lisp I'd like to take an existing application (written in OCaml) and create an Emacs "interface" for it (like, for example, the Emacs GDB mode). I would prefer to do this without writing a ton of Lisp code. In MVC terms, I'd like for the View to be Emacs, but for the Model and Controller to remain (primarily) OCaml.
Does anybody know of a way to write Emacs extensions in a language other than Lisp? This could either take the form of bindings to the Emacs extension API in some other language (e.g., making OCaml a first-class Emacs extension language) or an Emacs interaction mode where, for example, the extension has a pipe into which it can write Emacs Lisp expressions and read out result values.
A: I don't know if this will work for your particular problem, but I have been doing something similar using the shell-command-to-string function:
(shell-command-to-string
"bash -c \"script-to-exec args\"")
So for example, we have existing scripts written in python which will mangle a file, so the above lets me invoke the script via emacs lisp.
A quick google search found this page describing a system to write extensions in Python, so it seems feasible to do what you want... you will just have to see if anyone has written a similar framework for OCaml.
A: Try PyMacs, which allows extending Emacs in Python.
edit: updated link.
A: Some extension API is now possible with the incoming Emacs 25.1 and dynamic modules.
A library, emacs-ffi, offers a foreign function interface based on libffi.
Check out the complete documentation in the README.
A: From the statically typed languages side, there is something that looks quite performant and well featured for Haskell:
https://github.com/knupfer/haskell-emacs
there is also probably something useful for Scala to be reused from the Ensime project (has a bridge for both Emacs and Vim):
https://github.com/ensime/ensime-server
Furthermore, a quick google search revealed another potential candidate for extending Emacs with a classic FP language, OCaml; the project has a lot of .ml source files so there's got to be an Emacs-OCaml bridge somewhere:
https://github.com/the-lambda-church/merlin
A: http://www.emacswiki.org/cgi-bin/emacs-en?CategoryExtensionLanguage is a list of all non-Elisp extension languages you can use.
It does appear to be dynamic language centric.
http://common-lisp.net/project/slime/ is missing from that list, as it is not quite an extension language, but an Elisp-Common Lisp bridge. Its source code would show how to communicate back and forth over sockets.
A similar IDE for Erlang is Distel, at http://fresh.homeunix.net/~luke/distel/ (currently down) and https://github.com/massemanet/distel.
Good luck!
A: There is no "Extension API". Emacs Lisp is way in there, and it ain't moving.
You can run Emacs commands from your other process. Have a look at Gnuserv.
There are plenty of applications where Emacs is the View for a Model/Controller in a separate process. The Emacs GDB interface is a good example. I'm not sure of a simpler example, maybe sql-postgresql?
| {
"language": "en",
"url": "https://stackoverflow.com/questions/15187",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "28"
} |
Q: Changing CURRENT save/default directory in Delphi 2007 without using Save-As I have a project group that contains a project I'm working on as well as some related component and packages. I prefer to keep the database links active during design-time.
I know how to write the code which would change the database file location, but in this case, I'm just using ".\data" instead, for various design-time reasons. Unfortunately, using a relative folder means that Delphi needs to be "pointing" to the same folder that the project is in.
I'd like to be able to force the folder location to a different root, without using the "Save As" feature. Does anyone happen to know where the Delphi IDE retrieves that info?
A: I have some projects in Delphi 7, Delphi 2009, Delphi 2010 and Delphi XE4, and my projects are also not stored in My Documents.
I force the save/default like this:
Tools > Options > Environment Options > Default project
for Delphi 2010 => U:\Projects\Rad Studio 2010\Projects
A: I am not sure I completely understand your question.
*
*If you are referring to the folder the IDE has as the current folder, then you can just change the shortcut that launches Delphi to set the current directory where ever you want it to be.
A: You can change the location the project compiles and saves the dcu/unit/exe to in Project/Options under Directories/Conditionals - is that what you are looking for?
I believe there are also some settings for the BDE in the Tools menu, but I don't have it installed at the moment (or I can't find them).
A: I'd say you should just hard-code the full path to the database. Not pretty, but it will always work. There are tools, like GExperts, that will ensure your database components are set to inactive when you save them, and that way you can be sure to set them correctly on the live computer at runtime.
It would be nice to use relative paths, but that would be a function of the component and not the application really.
A: Do NOT use a path relative to .exe path. That will get you into trouble with Windows XP in limited access mode, as well as with Windows Vista.
Do you need one database for all users?
Use a path relative to All Users\Application Data directory.
Do you need separate databases per user?
Use a path relative to user's private User\Application Data directory.
Do you need multiple databases per user?
Use a path relative to user's My Documents directory.
Use any of the above as default, but add registry keys that allow you to override the settings.
A: I believe it gets that info from Tools -> Options -> Environment Options -> Environment Variables.
Try to play with those (Add override).
A: Tools -> Options -> Environment Options -> Environment Variables - BDSProjectsDir
| {
"language": "en",
"url": "https://stackoverflow.com/questions/15190",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Best way to deploy subversion (SVN) in a multisite windows environment We are planning on moving for MS Source Safe (ouch) to SVN. We are working mostly in a Microsoft environment (windows, Visual Studio, .NET) and we have developers in multiple sites. I heard about VisualSVN and integration with visual studio. On the other hand I can get someone to host SVN for me and use TortoiseSVN. Any recommendations? Any pitfalls I should avoid?
A:
I heard about VisualSVN and
integration with visual studio
Point to note: VisualSVN (the one that integrates with VStudio) is not a server technology at all; it is simply an integrated GUI front end to SVN, and in fact works through TortoiseSVN (which is required to be installed). However, VisualSVN is GREAT and definitely worth the $50 per developer to use it. I use it daily and it saves me SO much time.
There is also VisualSVN Server, which will take care of the server side of things and the setup is absolutely dead simple. As long as you have an internet facing server and copious amounts of bandwidth (though SVN is not much of a bandwidth hog) you should be fine to host it yourself. Oh yeah, and VisualSVN Server is completely FREE!
However, having your repository hosted off-site is definitely always an option.
I use dreamhost for this now and couldn't be happier.
A: Hosting Subversion is fantastically simple. At the risk of being labeled a brown nose (is there a badge for that?), Jeff Atwood put up an article on installing Subversion:
*
*http://blog.codinghorror.com/setting-up-subversion-on-windows/
So really you could save yourself some money by running your own subversion server, and you'll never have to worry about what happens to your code if your hosting company goes belly up.
I would start with tortoise because it is free and is really easy to use. If you find you really need integration with VS then by all means try out visual svn. In my experience source control <-> editor integration is most useful for automatically opening files when you edit them. Subversion doesn't require you to open files so that big advantage is gone.
A: Another SVN integration with Visual studio is AnkhSVN http://ankhsvn.open.collab.net/ It is free, and has a few quirks. Personally, I use it for basic diffing and the visual indicators for file status (changed, conflict, etc.) while I use Tortoise for the heavy lifting.
A: You can get hosting of secure svn repositories from a variety of sources: http://beanstalkapp.com/ and many others. Often free if the usage (users, data, etc.) is limited.
VisualSVN does integrate with Visual Studio, but not like SourceSafe does (and I mean this in a good way). It requires TortoiseSVN, so it's not an either/or. VisualSVN and Tortoise is a great combination.
A:
Best way to deploy subversion (SVN) in a multisite windows environment
As far as I understand, you have multiple development teams in different locations (even different continents, maybe) who have to access the same codebase. For such a case VisualSVN Server provides the Multisite Repository Replication feature.
The feature is based on VDFS (VisualSVN Distributed File System) technology, which allows automatic, transparent, bidirectional master/slave replication of your repositories between remote sites. What's more, it works out-of-the-box with minimal configuration steps done via the VisualSVN Server Manager MMC console.
Learn more at http://www.visualsvn.com/support/topic/00068/
| {
"language": "en",
"url": "https://stackoverflow.com/questions/15203",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: What is the best way to iterate through a strongly-typed generic List? What is the best way to iterate through a strongly-typed generic List in C#.NET and VB.NET?
A: C#
var myList = new List<string>();
myList.ForEach(
    delegate(string name)
    {
        Console.WriteLine(name);
    });
Anonymous delegates are not currently implemented in VB.Net, but both C# and VB.Net should be able to do lambdas:
C#
myList.ForEach(name => Console.WriteLine(name));
VB.Net
myList(Of String)().ForEach(Function(name) Console.WriteLine(name))
As Grauenwolf pointed out the above VB won't compile since the lambda doesn't return a value. A normal ForEach loop as others have suggested is probably the easiest for now, but as usual it takes a block of code to do what C# can do in one line.
Here's a trite example of why this might be useful: this gives you the ability to pass in the loop logic from another scope than where the IEnumerable exists, so you don't even have to expose it if you don't want to.
Say you have a list of relative url paths that you want to make absolute:
public IEnumerable<String> Paths(Func<String, String> formatter) {
    // List<T>.ForEach returns void, so use LINQ's Select to project
    // each relative path through the formatter (needs System.Linq).
    List<String> paths = new List<String>()
    {
        "/about", "/contact", "/services"
    };
    return paths.Select(formatter);
}
So then you could call the function this way:
var hostname = "myhost.com";
Func<String, String> formatter = f => String.Format("http://{0}{1}", hostname, f);
IEnumerable<String> absolutePaths = Paths(formatter);
Giving you "http://myhost.com/about", "http://myhost.com/contact" etc. Obviously there are better ways to accomplish this in this specfic example, I'm just trying to demonstrate the basic principle.
A: For VB.NET:
For Each tmpObject as ObjectType in ObjectTypeList
'Do some stuff '
Next
A: For C#:
foreach(ObjectType objectItem in objectTypeList)
{
// ...do some stuff
}
Answer for VB.NET from Purple Ant:
For Each objectItem as ObjectType in objectTypeList
'Do some stuff '
Next
A: With any generic implementation of IEnumerable the best way is:
//C#
foreach( var item in listVariable) {
//do stuff
}
There is an important exception, however. IEnumerable involves the overhead of Current() and MoveNext(), which is what the foreach loop is actually compiled into.
When you have a simple array of structs:
//C#
int[] valueTypeArray;
for(int i=0; i < valueTypeArray.Length; ++i) {
int item = valueTypeArray[i];
//do stuff
}
Is quicker.
Update
Following a discussion with @Steven Sudit (see comments) I think my original advice may be out of date or mistaken, so I ran some tests:
// create a list to test with
var theList = Enumerable.Range(0, 100000000).ToList();
// time foreach
var sw = Stopwatch.StartNew();
foreach (var item in theList)
{
int inLoop = item;
}
Console.WriteLine("list foreach: " + sw.Elapsed.ToString());
sw.Reset();
sw.Start();
// time for
int cnt = theList.Count;
for (int i = 0; i < cnt; i++)
{
int inLoop = theList[i];
}
Console.WriteLine("list for : " + sw.Elapsed.ToString());
// now run the same tests, but with an array
var theArray = theList.ToArray();
sw.Reset();
sw.Start();
foreach (var item in theArray)
{
int inLoop = item;
}
Console.WriteLine("array foreach: " + sw.Elapsed.ToString());
sw.Reset();
sw.Start();
// time for
cnt = theArray.Length;
for (int i = 0; i < cnt; i++)
{
int inLoop = theArray[i];
}
Console.WriteLine("array for : " + sw.Elapsed.ToString());
Console.ReadKey();
So, I ran this in release with all optimisations:
list foreach: 00:00:00.5137506
list for : 00:00:00.2417709
array foreach: 00:00:00.1085653
array for : 00:00:00.0954890
And then debug without optimisations:
list foreach: 00:00:01.1289015
list for : 00:00:00.9945345
array foreach: 00:00:00.6405422
array for : 00:00:00.4913245
So it appears fairly consistent: for is quicker than foreach, and arrays are quicker than generic lists.
However, this is across 100,000,000 iterations, and the difference is about 0.4 of a second between the fastest and slowest methods. Unless you're writing massive, performance-critical loops it just isn't worth worrying about.
A: Without knowing the internal implementation of a list, I think generally the best way to iterate over it would be a foreach loop. Because foreach uses an IEnumerator to walk over the list, it's up to the list itself to determine how to move from object to object.
If the internal implementation was, say, a linked list, then a simple for loop would be quite a bit slower than a foreach.
Does that make sense?
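As a concrete illustration using .NET's own LinkedList<T> (purely illustrative; LinkedList<T> deliberately has no indexer, so positional access has to walk the list from the head each time):
//C#
var linked = new LinkedList<int>(Enumerable.Range(0, 1000));

// O(n) total: the enumerator just follows the Next references.
foreach (int value in linked) { /* do stuff */ }

// O(n^2) total: ElementAt(i) re-enumerates from the head on every call,
// because LinkedList<T> does not implement IList<T>. (Requires System.Linq.)
for (int i = 0; i < linked.Count; i++)
{
    int value = linked.ElementAt(i);
}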
A: It depends on your application:
*
*for loop, if efficiency is a priority
*foreach loop or ForEach method, whichever communicates your intent more clearly
A: I may be missing something, but iterating through a generic list should be fairly simple if you use my examples below. The List<T> class implements the IList<T> and IEnumerable<T> interfaces, so you can easily iterate through it basically any way you want.
The most efficient way would be to use a for loop:
for(int i = 0; i < genericList.Count; ++i)
{
// Loop body
}
You may also choose to use a foreach loop:
foreach(<insertTypeHere> o in genericList)
{
// Loop body
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/15204",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "17"
} |
Q: Subversion Management Tools We have a lot of users working in different shared and solo-owned repositories in Subversion. As part of our work, we do project-shared code and individual work, and we need to control access, ideally on a group basis.
Currently, we use SVNManager to allow users to manage access and create repositories. However, in order to get that working we had to do quite a bit of hacking.
Does anyone know of a free, open-source, linux-compatible SVN management system?
Thanks for your help.
A: I would recommend SVN Access: http://www.jaj.com/projects/svnaccess/ or http://freshmeat.net/projects/svnaccess/
I have used it as is, and have modified it for an enterprise-wide solution at my day job.
A: There is an alternative called KDESVN which you might want to try. However, I have never used it, so I cannot vouch for it.
A: svn-access-manager seems to be a great open-source web administration GUI for SVN too (and currently active ...).
But I've finally adopted USVN!
This question is very similar to SVN admin management GUI tool by the way ...
A: I use KDESVN. Once it's set up it works great, but you only get one chance to set up your branch structure, so plan to create a test repository first.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/15211",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13"
} |
Q: UltraWebGrid: How to use a drop-down list in a column I'm using the Infragistics grid and I'm having a difficult time using a drop-down list as the value selector for one of my columns.
I tried reading the documentation but Infragistics' documentation is not so good. I've also taken a look at this discussion with no luck.
What I'm doing so far:
col.Type = ColumnType.DropDownList;
col.DataType = "System.String";
col.ValueList = myValueList;
where myValueList is:
ValueList myValueList = new ValueList();
myValueList.Prompt = "My text prompt";
myValueList.DisplayStyle = ValueListDisplayStyle.DisplayText;
foreach(MyObjectType item in MyObjectTypeCollection)
{
myValueList.ValueItems.Add(item.ID, item.Text); // Note that the ID is a string (not my design)
}
When I look at the page, I expect to see a drop-down list in the cells for this column, but my columns are empty.
A: Here's an example from one of my pages:
UltraWebGrid uwgMyGrid = new UltraWebGrid();
uwgMyGrid.Columns.Add("colTest", "Test Dropdown");
uwgMyGrid.Columns.FromKey("colTest").Type = ColumnType.DropDownList;
uwgMyGrid.Columns.FromKey("colTest").ValueList.ValueListItems.Insert(0, "ONE", "Choice 1");
uwgMyGrid.Columns.FromKey("colTest").ValueList.ValueListItems.Insert(1, "TWO", "Choice 2");
A: I've found what was wrong.
The column must allow updates.
uwgMyGrid.Columns.FromKey("colTest").AllowUpdate = AllowUpdate.Yes;
A: public void MakeCellValueListDropDownList(UltraWebGrid grid, string columnName, string valueListName, string[] listArray)
{
//Set the column to be a dropdownlist
UltraGridColumn Col = grid.Columns.FromKey(columnName);
Col.Type = ColumnType.DropDownList;
Col.DataType = "System.String";
try
{
ValueList ValList = grid.DisplayLayout.Bands[0].Columns.FromKey(columnName).ValueList;
ValList.DataSource = listArray;
foreach (string item in listArray)
{
ValList.ValueListItems.Add(item);
}
ValList.DataBind();
}
catch (ArgumentException)
{
}
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/15219",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: Setting an ASP.NET Master Page at runtime I'm working on a site which needs to be able to support two or more looks, changeable at runtime. I'd hoped to be able to handle the change with a CSS switch, but it looks like I'll need to use a different masterpage for each design.
So, what's the best way to set the masterpage at runtime? Page.MasterPageFile can only be set in the Page.OnPreInit event. It looks like the solutions are to make all my pages inherit from a common base which handles the PreInit event, or to use an HttpModule which does that.
Any advice?
A: I've done this once before, I did exactly what you described (Made all pages inherit from a custom page with an OnPreInit event). Also I had a custom Application_PreRequestHandlerExecute in my Global.asax.cs for setting Page.StyleSheetTheme for doing image/css changes that didn't require a different Master Page.
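For reference, the Global.asax wiring for that approach looks roughly like this (a rough sketch: GetThemeForCurrentUser() is a placeholder for whatever lookup you use, and it relies on PreRequestHandlerExecute firing before the page lifecycle begins, while StyleSheetTheme may still be set):
// In Global.asax.cs
protected void Application_PreRequestHandlerExecute(object sender, EventArgs e)
{
    Page page = HttpContext.Current.Handler as Page;
    if (page != null)
    {
        // GetThemeForCurrentUser() is a hypothetical lookup (cookie, profile, etc.)
        page.PreInit += (s, args) => ((Page)s).StyleSheetTheme = GetThemeForCurrentUser();
    }
}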
A: Rather than two different master pages how about having one master that dynamically loads different user controls and content HTML literals?
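A minimal sketch of that single-master idea (the cookie name, folder layout, and placeholder control are assumptions for illustration): choose a skin in the master page's Init and load the matching user controls at runtime with LoadControl.
// In the master page's code-behind
protected void Page_Init(object sender, EventArgs e)
{
    string skin = (Request.Cookies["skin"] != null) ? Request.Cookies["skin"].Value : "Default";
    Control header = LoadControl("~/Skins/" + skin + "/Header.ascx");
    HeaderPlaceHolder.Controls.Add(header); // an asp:PlaceHolder in the master markup
}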
A: I feel your pain. I searched for about an hour (if not more) for a solution to this, without success. It isn't just a cut-and-dried answer to say "just call it from PreInit on each page" when you have hundreds of pages. But then I realized that I was spending more time looking for a solution than it would have taken to just do it on each page.
However, I wanted to set the MasterPageFile based on a Profile property, so that would have been about 5 lines of code per page, a maintainability nightmare. And anyway, "don't repeat yourself", right?
So I created an Extension method in a module in the App_Code folder to make this easier and more maintainable.
Public Module WebFunctions
<System.Runtime.CompilerServices.Extension()> _
Public Sub SetMaster(ByVal page As Page)
Dim pb As ProfileCommon = DirectCast(HttpContext.Current.Profile, ProfileCommon)
If pb IsNot Nothing Then
page.MasterPageFile = pb.MasterPage
End If
End Sub
End Module
And then on each page's PreInit, I just call this:
Protected Sub Page_PreInit(ByVal sender As Object, ByVal e As System.EventArgs) Handles Me.PreInit
Me.SetMaster()
End Sub
A: It's easy enough to handle PreInit and insert the one line of code it takes to load the proper Master Page.
this.Page.MasterPageFile = "~/default.master";
In the absence of some compelling reason not to go this route, that's what I'd do, regardless of where you handle the PreInit.
A: I'm curious what decides how the page should look? Is it the user clicking a button to change the theme? Is it based on the URL that was used to get to the site?
Code behind is supported in Master Pages, so you could put some logic in your one Master Page to decide what should be displayed.
I've seen several sites set cookies based on user clicks (to change font size, or page width), and then have different CSS files applied based on the value of those cookies. If no cookie is present, display a default look and feel.
EDIT:
Another thought here, if you are simply trying to switch out CSS is to set your style tag to run at the server, and assign properties to it at run-time. Once again this would require the use of a single master page, and putting code the code-behind of the master page, probably in the PreInit event handler.
Since I've never implemented this solution I'm not sure if the whole <HEAD> tag has to run at the server or not.
<html>
<head id="Head" runat="server">
<style id="StylePlaceholder" runat="server" type="text/css"></style>
</head>
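To sketch the cookie-driven CSS idea mentioned above (the cookie name and stylesheet paths are made up): read the cookie in the master page's code-behind and inject a matching link into the server-side head, falling back to a default when no cookie is present.
// In the master page's code-behind; requires <head runat="server">
protected void Page_Load(object sender, EventArgs e)
{
    HttpCookie themeCookie = Request.Cookies["css"];
    string theme = (themeCookie != null) ? themeCookie.Value : "default";

    var css = new System.Web.UI.HtmlControls.HtmlLink();
    css.Href = "~/styles/" + theme + ".css";
    css.Attributes["rel"] = "stylesheet";
    css.Attributes["type"] = "text/css";
    Page.Header.Controls.Add(css);
}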
A: Inherit all your pages from a base class like this:
public class PageBase : System.Web.UI.Page
{
public PageBase()
{
this.PreInit += new EventHandler(PageBase_PreInit);
}
void PageBase_PreInit(object sender, EventArgs e)
{
this.MasterPageFile = "~/MyMasterPage.Master";
}
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/15228",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9"
} |
Q: How do you create a debug only function that takes a variable argument list? Like printf() I'd like to make a debug logging function with the same parameters as printf. But one that can be removed by the pre-processor during optimized builds.
For example:
Debug_Print("Warning: value %d > 3!\n", value);
I've looked at variadic macros but those aren't available on all platforms. gcc supports them, msvc does not.
A: Ah, vsprintf() was the thing I was missing. I can use this to pass the variable argument list directly to printf():
#include <stdarg.h>
#include <stdio.h>
void DBG_PrintImpl(char * format, ...)
{
char buffer[256];
va_list args;
va_start(args, format);
vsnprintf(buffer, sizeof(buffer), format, args); // bounded, unlike vsprintf, so the 256-byte buffer can't overflow
printf("%s", buffer);
va_end(args);
}
Then wrap the whole thing in a macro.
A: Another fun way to stub out variadic functions is:
#define function sizeof
A: @CodingTheWheel:
There is one slight problem with your approach. Consider a call such as
XTRACE("x=%d", x);
This works fine in the debug build, but in the release build it will expand to:
("x=%d", x);
Which is perfectly legitimate C and will compile and usually run without side-effects but generates unnecessary code. The approach I usually use to eliminate that problem is:
*
*Make the XTrace function return an int (just return 0, the return value doesn't matter)
*Change the #define in the #else clause to:
0 && XTrace
Now the release version will expand to:
0 && XTrace("x=%d", x);
and any decent optimizer will throw away the whole thing since short-circuit evaluation would have prevented anything after the && from ever being executed.
Of course, just as I wrote that last sentence, I realized that perhaps the original form might be optimized away too and in the case of side effects, such as function calls passed as parameters to XTrace, it might be a better solution since it will make sure that debug and release versions will behave the same.
A: I still do it the old way, by defining a macro (XTRACE, below) which correlates to either a no-op or a function call with a variable argument list. Internally, call vsnprintf so you can keep the printf syntax:
#include <stdio.h>
void XTrace0(LPCTSTR lpszText)
{
::OutputDebugString(lpszText);
}
void XTrace(LPCTSTR lpszFormat, ...)
{
va_list args;
va_start(args, lpszFormat);
int nBuf;
TCHAR szBuffer[512]; // get rid of this hard-coded buffer
nBuf = _vsnprintf(szBuffer, 511, lpszFormat, args);
::OutputDebugString(szBuffer);
va_end(args);
}
Then a typical #ifdef switch:
#ifdef _DEBUG
#define XTRACE XTrace
#else
#define XTRACE
#endif
Well that can be cleaned up quite a bit but it's the basic idea.
A: This is how I do debug printouts in C++. Define 'dout' (debug out) like this:
#ifdef DEBUG
#define dout cout
#else
#define dout 0 && cout
#endif
In the code I use 'dout' just like 'cout'.
dout << "in foobar with x= " << x << " and y= " << y << '\n';
If the preprocessor replaces 'dout' with '0 && cout' note that << has higher precedence than && and short-circuit evaluation of && makes the whole line evaluate to 0. Since the 0 is not used the compiler generates no code at all for that line.
A: In C++ you can use the streaming operator to simplify things:
#if defined _DEBUG
class Trace
{
public:
static Trace &GetTrace () { static Trace trace; return trace; }
Trace &operator << (int value) { /* output int */ return *this; }
Trace &operator << (short value) { /* output short */ return *this; }
Trace &operator << (Trace &(*function)(Trace &trace)) { return function (*this); }
static Trace &Endl (Trace &trace) { /* write newline and flush output */ return trace; }
// and so on
};
#define TRACE(message) Trace::GetTrace () << message << Trace::Endl
#else
#define TRACE(message)
#endif
and use it like:
void Function (int param1, short param2)
{
TRACE ("param1 = " << param1 << ", param2 = " << param2);
}
You can then implement customised trace output for classes in much the same way you would do it for outputting to std::cout.
A: Here's something that I do in C/C++. First off, you write a function that uses the varargs stuff (see the link in Stu's posting). Then do something like this:
int debug_printf( const char *fmt, ... );
#if defined( DEBUG )
#define DEBUG_PRINTF(x) debug_printf x
#else
#define DEBUG_PRINTF(x)
#endif
DEBUG_PRINTF(( "Format string that takes %s %s\n", "any number", "of args" ));
All you have to remember is to use double-parens when calling the debug function, and the whole line will get removed in non-DEBUG code.
A: What platforms are they not available on? stdarg is part of the standard library:
http://www.opengroup.org/onlinepubs/009695399/basedefs/stdarg.h.html
Any platform not providing it is not a standard C implementation (or very, very old). For those, you will have to use varargs:
http://opengroup.org/onlinepubs/007908775/xsh/varargs.h.html
A: Part of the problem with this kind of functionality is that it often requires variadic macros. These were standardized fairly recently (C99), and lots of old C compilers do not support the standard, or have their own special workarounds.
Below is a debug header I wrote that has several cool features:
*
*Supports C99 and C89 syntax for debug macros
*Enable/Disable output based on function argument
*Output to file descriptor(file io)
Note: For some reason I had some slight code formatting problems.
#ifndef _DEBUG_H_
#define _DEBUG_H_
#if HAVE_CONFIG_H
#include "config.h"
#endif
#include "stdarg.h"
#include "stdio.h"
#define ENABLE 1
#define DISABLE 0
extern FILE* debug_fd;
int debug_file_init(char *file);
int debug_file_close(void);
#if HAVE_C99
#define PRINT(x, format, ...) \
if ( x ) { \
if ( debug_fd != NULL ) { \
fprintf(debug_fd, format, ##__VA_ARGS__); \
} \
else { \
fprintf(stdout, format, ##__VA_ARGS__); \
} \
}
#else
void PRINT(int enable, char *fmt, ...);
#endif
#if _DEBUG
#if HAVE_C99
#define DEBUG(x, format, ...) \
if ( x ) { \
if ( debug_fd != NULL ) { \
fprintf(debug_fd, "%s : %d " format, __FILE__, __LINE__, ##__VA_ARGS__); \
} \
else { \
fprintf(stderr, "%s : %d " format, __FILE__, __LINE__, ##__VA_ARGS__); \
} \
}
#define DEBUGPRINT(x, format, ...) \
if ( x ) { \
if ( debug_fd != NULL ) { \
fprintf(debug_fd, format, ##__VA_ARGS__); \
} \
else { \
fprintf(stderr, format, ##__VA_ARGS__); \
} \
}
#else /* HAVE_C99 */
void DEBUG(int enable, char *fmt, ...);
void DEBUGPRINT(int enable, char *fmt, ...);
#endif /* HAVE_C99 */
#else /* _DEBUG */
#define DEBUG(x, format, ...)
#define DEBUGPRINT(x, format, ...)
#endif /* _DEBUG */
#endif /* _DEBUG_H_ */
A: Have a look at this thread:
*
*How to make a variadic macro (variable number of arguments)
It should answer your question.
A: Having come across the problem today, my solution is the following macro:
static TCHAR __DEBUG_BUF[1024];
#define DLog(fmt, ...) swprintf(__DEBUG_BUF, fmt, ##__VA_ARGS__); OutputDebugString(__DEBUG_BUF);
You can then call the function like this:
int value = 42;
DLog(L"The answer is: %d\n", value);
A: This is what I use:
inline void DPRINTF(int level, char *format, ...)
{
# ifdef _DEBUG_LOG
va_list args;
va_start(args, format);
if(debugPrint & level) {
vfprintf(stdout, format, args);
}
va_end(args);
# endif /* _DEBUG_LOG */
}
which costs absolutely nothing at run-time when the _DEBUG_LOG flag is turned off.
A: This is a TCHAR version of user's answer, so it will work as ASCII (normal), or Unicode mode (more or less).
#define DEBUG_OUT( fmt, ...) DEBUG_OUT_TCHAR( \
TEXT(##fmt), ##__VA_ARGS__ )
#define DEBUG_OUT_TCHAR( fmt, ...) \
Trace( TEXT("[DEBUG]") #fmt, \
##__VA_ARGS__ )
void Trace(LPCTSTR format, ...)
{
LPTSTR OutputBuf;
OutputBuf = (LPTSTR)LocalAlloc(LMEM_ZEROINIT, \
(size_t)(4096 * sizeof(TCHAR)));
va_list args;
va_start(args, format);
int nBuf;
_vstprintf_s(OutputBuf, 4095, format, args);
::OutputDebugString(OutputBuf);
va_end(args);
LocalFree(OutputBuf); // tyvm @sam shaw
}
I say, "more or less", because it won't automatically convert ASCII string arguments to WCHAR, but it should get you out of most Unicode scrapes without having to worry about wrapping the format string in TEXT() or preceding it with L.
Largely derived from MSDN: Retrieving the Last-Error Code
A: Not exactly what's asked in the question, but this code will be helpful for debugging purposes: it will print each variable's value along with its name. It is completely type-independent and supports a variable number of arguments.
It can even display STL values nicely, given that you overload the output operator for them.
#include <iostream>
#include <string>
using namespace std;

#define show(args...) describe(#args,args);
template<typename T>
void describe(string var_name,T value)
{
clog<<var_name<<" = "<<value<<" ";
}
template<typename T,typename... Args>
void describe(string var_names,T value,Args... args)
{
string::size_type pos = var_names.find(',');
string name = var_names.substr(0,pos);
var_names = var_names.substr(pos+1);
clog<<name<<" = "<<value<<" | ";
describe(var_names,args...);
}
Sample Use :
int main()
{
string a;
int b;
double c;
a="string here";
b = 7;
c= 3.14;
show(a,b,c);
}
Output :
a = string here | b = 7 | c = 3.14
| {
"language": "en",
"url": "https://stackoverflow.com/questions/15240",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "43"
} |
Q: Does anyone have any real-world experience of CSLA? The main web application of my company is crying out for a nifty set of libraries to make it in some way maintainable and scalable, and one of my colleagues has suggested CSLA. So I've bought the book, but as:
programmers don't read books anymore
I wanted to gauge the SOFlow community's opinion of it.
So here are my questions:
*
*How many people are using CSLA?
*What are the pros and cons?
*Does CSLA really not fit in with TDD?
*What are my alternatives?
*If you have stopped using it or decided against why?
A: In defence of the CSLA, although I do agree with many of the comments that have been made, particularly the unit testing one...
My company used it extensively for a Windows Forms data entry application, with a high degree of success.
*
*It provided out of the box functionality that we didn't have the time or expertise to write ourselves.
*It standardised all of our business objects making maintenance easy and reducing the learning curve for our new developers.
On the whole I would say that any issues it caused were more than outweighed by the benefits.
UPDATE: Further to this, we are still using it for our Windows Forms app, but experiments with using it for other applications such as web sites have shown that it is perhaps too cumbersome when you don't need much of its functionality, and we are now investigating lighter-weight options for these scenarios.
A: Before I specifically answer your question, I'd like to put a few thoughts down. Is CSLA right for your project? It depends. I would personally consider CSLA for desktop-based applications that do not place a high priority on unit testing. CSLA is great if you want to easily scale to an n-tier application. CSLA tends to get some flack because it does not allow pure unit testing. This is true; however, like anything in technology, I believe that there is No One True Way. Unit testing may not be something you are undertaking for a specific project. What works for one team and one project may not work for another team or other project.
There are also many misconceptions in regard to CSLA. It is not an ORM. It is not a competitor to NHibernate (in fact, using CSLA business objects and NHibernate for data access fit really well together). It formalises the concept of a Mobile Object.
1. How many people are using CSLA?
Based on the CSLA Forums, I would say there are quite a number of CSLA based projects out there. Honestly though, I have no idea how many people are actually using it. I have used it in the past on two projects.
2. What are the pros and cons?
While it is difficult to summarise in a short list, here are some of the pros and cons that come to mind.
Pros:
*
*It's easy to get new developers up to speed. The CSLA book and sample app are great resources.
*The Validation framework is truly world class - and has been "borrowed" for many other non-CSLA projects and technologies.
*n-Level Undo within your business objects
*Config-line change for n-Tier scalability (Note: not even a recompile is necessary)
*Key technologies are abstracted from the "real" code. When WCF was introduced, it had minimal impact on CSLA code.
*It is possible to share your business objects between windows and web projects.
*CSLA promotes the normalization of behaviour rather than the normalization of data (leaving the database for data normalization).
Cons:
*
*Difficulty in unit testing
*Lack of Separation of Concern (generally your business objects have data access code inside them).
*As CSLA promotes the normalization of behavior rather than the normalization of data, this can result in business objects that are named similarly but have different purposes. This can cause some confusion and a feeling that you are not reusing objects appropriately. That said, once the psychological leap is taken, it more than makes sense - it seems inappropriate to structure objects the "old" way.
*It's not "in fashion" to build applications this way. You may struggle to get developers who are passionate about the technology.
3. After reading this does CSLA really not fit in with TDD?
I haven't found an effective way to do TDD with CSLA. That said, I am sure there are many smarter people out there than me that may have tried this with greater success.
4. What are my alternatives?
Domain-Driven Design is getting a big push at the moment (and rightfully so - it's fantastic for some applications). There are also a number of interesting patterns developing from the introduction of LINQ (and LINQ to SQL, Entity Framework, etc.). Fowler's book PoEAA details many patterns that may be suitable for your application. Note that some patterns are competing (i.e. Active Record and Repository), and thus are meant to be used for specific scenarios. While CSLA doesn't exactly match any of the patterns described in that book, it most closely resembles Active Record (although I feel it is short-sighted to claim an exact match for this pattern).
5. If you have stopped using it or decided against why?
I didn't fully recommend CSLA for my last project, because I believe the scope of the application is too large for the benefits CSLA provides.
I would not use CSLA on a web project. I feel there are other technologies better suited to building applications in that environment.
In summary, while CSLA is anything but a silver bullet, it is appropriate for some scenarios.
Hope this helps!
A: I joined a team where CSLA is mandatory. We don't use the remote data portal, which is the only reason I could see for using this framework. I never bought into the idea of CSLA, so maybe that's why I have nothing but issues with it; sorry.
A couple of the issues:
I don't need a roadblock between my code and the .NET framework, which is what this framework felt like to me. I had a limited selection of list objects, and I had to ignore the rich list objects in the .NET framework.
It is totally ridiculous that we had these read-only lists and then non-read-only lists. So if I had to add an item to the list I had to recreate the entire list...are you serious?
Then CSLA wants to manage my object state, which is fine, but nothing is really exposed. Sometimes I want to change an object's state manually instead of fetching it again, which seems to be what CSLA wants me to do. I basically end up creating many properties to expose options CSLA didn't think I should have direct access to.
Why can't I just instantiate an object? We end up creating static methods which instantiate an object and pass it back...are you kidding me?
Check the framework source code and it looks too heavy on the reflection code to me.
Reasons to use CSLA:
*
*The straight .NET framework is too powerful for you.
*Your developers are not seasoned and can't grasp the concept of patterns, in which case CSLA will pretty much have everyone on the same page.
*
*I don't need a roadblock between my code and the .NET framework...I am stuck with these list objects.
A: We started using CSLA because we thought it would help with our model layer. It was sort of overkill, and mostly all we use now is the SmartDate class, just because we're already linked to the library.
We thought the validation interface would really help us enforce business rules but it didn't work well with WCF and serialization (we're still stuck on version 2.0.3.0, so things might have changed).
A: Our company practised CSLA in some of its projects, and some of the legacy projects remain CSLA-based. Other projects moved away from it because CSLA violated a plain and simple OOP rule: the Single Responsibility Principle.
CSLA objects are self-sustaining, e.g. they retrieve their own data, they manage their own behavior, they save themselves. Unfortunately this meant that your average CSLA object has at least three responsibilities -- representing the domain model, containing business rules, and containing data access definition (not the DAL, or data access implementation, as I previously stated/implied) all at the same time.
A: Not to take CSLA off the list, but before using it, research the benefits and make sure they really apply. Will your team be able to correctly/consistently implement it? Remoting and portal dance needed?
I think beyond all the theoretical ponder, it is all about clean/maintainable/extendable/testable code following basic proven patterns.
I counted the lines of code needed in a specific domain of a project converted from CSLA. Between all the different CSLA objects (readonly + editable + root + list combinations) and their stored procs, it took about 1700 lines, versus a Linq2SQL + Repository implementation that took 180 lines. The Linq2SQL version consisted mostly of generated classes that your team doesn't need to consume a book to understand. And yes, I used CodeSmith to generate the CSLA parts, but I now believe in DRY code with single-responsibility bits, and the CSLA implementation now looks to me like yesterday's hero.
As an alternative I would like to suggest looking into Linq2Sql/Entity Framework/NHibernate combined with Repository and UnitOfWork patterns. Have a look at http://www.codeplex.com/backgroundmotion
Cheers!
A: We use CSLA extensively. There are several benefits; first, I believe that every line-of-business developer should read Rocky Lhotka's book on business objects programming. I've personally found it to be in my top 3 best programming books ever. CSLA is a framework based on this book, and using it gives your project access to very high-level functionality like n-level undo, validation rules and scalability architecture, while providing the details for you. Notice I said "providing" and not "hiding". I've found that the best part of CSLA is that it makes you understand how all of these things are implemented, down to the source code, without making you reproduce them yourself. You can choose to use as many or as few features as you need, but I've found that by staying true to the design patterns of the framework, it really keeps you out of trouble.
--Byron
A: We've been using CSLA now for over five years, and we think it works great for constructing business applications. Coupled with code generation, you can create business objects in a relatively short amount of time and focus your effort on the meat of the application.
A: I've been using CSLA since VB5, when it was more of a collection of patterns than a framework. With the introduction of .NET, CSLA turned into a full-blown framework that came with a hefty learning curve. However, CSLA addresses many things that all business developers tend to write themselves at some point (depending on project scope): validation logic, authentication logic, undo functionality, dirty logic, etc. All of these things you get for free, out of the box, in one nice framework.
As others have stated, being a framework, it forces developers to write business logic in a similar fashion. It also forces you to provide a level of abstraction for your business logic, so that the choice of UI framework (MVC, MVP, MVVM) becomes less important.
In fact, I would argue that the reason why so many of these UI patterns are so hyped up today (in the Microsoft world) is that people have been doing stuff incredibly wrong for so long (ie., using DataGrids in your UI, sprinkling your business logic everywhere. tisk tisk). Design your middle tier (business logic) correctly from the start, you can reuse your middle tier in ANY UI. Win Form, ASP.NET/MVC, WCF Service, WPF, Silverlight**, Windows Service, ....
But aside from these, the huge payoff for me has been it's built-in ability to scale. The CSLA uses a proxy pattern that is configurable via your config file. This allows your business objects to make remote calls from server to server, without having to write one lick of code. Adding more users to your system? No problem, deploy your CSLA business objects to a new application server, make a config file entry change, and BAM!! Instant scalability needs met.
Compare this to using DTO's, storing your business logic on the client (whatever client that may be), and having to write each of your own CRUD methods as service methods. YIKES!!! Not saying this is a bad approach, but I wouldn't want to do it. Not when there's a framework out there to essentially do it for me.
I'm going to reiterate what other folks have said in that CSLA is NOT an ORM. CSLA forces you to supply your business objects with data. They do not care where you get your data. You can use an ORM to supply your business objects with data. You can also use raw ADO.NET, other services (RESTFUl, SOAP), excel spreadsheets, I can keep going here.
As for your support for TDD, I have never tried using that approach with CSLA either. I have taken the approach where I model my middle tier (ala business objects) using class and sequence diagrams, most often allowing use case, screen and/or process design to dictate. Perhaps a bit old school, but UML has always served me very well in my design and development efforts. I've successfully designed and developed very large and scalable applications still being used today. And until WCF RIA matures, I'll be continuing to use CSLA..
** with some work arounds
A: I'm new to CSLA but I understand the concepts and I already understand that it's not an ORM tool so quit beating that damn drum folks. There are features of CSLA I like but using them feels a bit like there is a magician behind the curtain. I guess if you don't mind not knowing about how it works then you can use the objects and they work fine.
There is a large learning curve for beginners, and I think it would benefit greatly from having 5-15 minute videos like Microsoft has for learning the fundamentals. Or how about releasing a companion book with the code instead of getting the code released and taking months to get the book out? Just sayin', Mr Lhotka... We started building our stuff before the book and I struggled the whole time. But like I said, I'm new to it.
We used CSLA. We made our objects fit their mold then used 10% of what the framework offered. Object level undo? Didn't use it. NTier flexibility? Didn't use it. We ended up writing enough business rule code that I thought the only thing we were getting out of CSLA was complexity. Some "long in the tooth" developers that know the framework used it as their hammer because they had a nail that needed hitting. CSLA was in their belt and my guess is a lot of proponents of the framework see things from that perspective too.
I guess our seasoned developers are happy because it all makes sense to them. I guess if your organization doesn't have newbie programmers and you guys get bored by writing efficient and simple POCO objects with well formed patterns, then go for it. Use CSLA.
A: I am using CSLA as the business object framework for a medium-size project. The framework has come a long way from the VB6 days and offers an extraordinary amount of flexibility and "out of the box" functionality. CSLA's mobile smart objects make UI development much easier. However, I agree with others that it isn't the right tool for every situation. There is definitely some overhead involved, but also a lot of power. Personally, I am looking forward to using CSLA Light with Silverlight.
Pros:
*
*Data technology agnostic [1]
*Large install base and it's FREE!!
*Stable and Logical framework
*Data Access code can be in your objects or in a separate assembly
*Property and Object Validation and Authorization
Cons
*
*The code can be a lot to maintain [2]
*Probably need a code generator to use effectively
*Learning curve. The structure of CSLA objects are easy to grasp, but the caveats can create headaches.
I'm not sure about test-driven design. I don't unit test or do test-driven design (shame on me), so I don't know if unit tests are different from TDD, but I know that the most recent version of the framework comes with unit tests.
[1] Good thing because data access technologies never stay the same for long.
[2] This has gotten better with recent versions of the framework.
A: A lot of people recommend using Code Generation with CSLA. I'd recommend checking out our set of supported templates as they will increase your ROI immensely.
Thanks
-Blake Niemyjski (Author of the CodeSmith CSLA Templates)
A: After reading all the answers, I've noticed that quite a few people have some misconceptions about CSLA.
First, CSLA is not an ORM. How can I say that so definitely? Because Rockford Lhotka has stated it himself many times in interviews on the .NET Rocks and Hanselminutes podcasts. Look for any episode where Rocky was interviewed and he'll state it in no uncertain terms. I think this is the most critical fact for people to understand, because almost all the misconceptions about CSLA flow from believing that it is an ORM or attempting to use it as one.
As Brad Leach alluded in his answer, CSLA objects model behavior, although it may be more accurate to say that they model the behavior of data, since data is integral to them. CSLA is not an ORM because it's completely agnostic about how you talk to your data store. You should use some kind of data access layer with CSLA, perhaps even an ORM. (I do. I now use Entity Framework, which works beautifully.)
Now, on to unit testing. I've never had any difficulty unit testing my CSLA objects, because I don't put my data access code directly into my business objects. Instead, I use some variation of the repository pattern. The repository is consumed by CSLA, not the other way around. By swapping in a fake repository for my unit tests and using the local data portal, BOOM! it's simple. (Once Entity Framework allows the use of POCOs, this will be even cleaner.)
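To give that shape some concreteness (every name here is made up, and the exact DataPortal_Fetch signature varies between CSLA versions, so treat this as a sketch rather than CSLA's prescribed API): the business object pulls its data through a small repository interface that a unit test can swap for a fake.
// Hypothetical repository abstraction -- not part of CSLA itself
public interface ICustomerRepository
{
    CustomerData Fetch(int id); // CustomerData is a plain DTO, also hypothetical
}

[Serializable]
public class Customer : Csla.BusinessBase<Customer>
{
    private void DataPortal_Fetch(int id)
    {
        // Resolve the repository however you prefer (factory, IoC container, ...)
        ICustomerRepository repo = RepositoryFactory.Current; // hypothetical
        CustomerData data = repo.Fetch(id);
        // ...copy fields from data into the object's properties...
    }
}
In a test you register a fake ICustomerRepository and run against the local data portal, so no database is needed.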
All of this comes from realizing that CSLA is not an ORM. It might consume an ORM, but it itself is not one.
Cheers.
UPDATE
I thought I'd make a few more comments.
Some people have said that CSLA is verbose compared to things like LINQ to SQL and so on. But here we're comparing apples to oranges. LINQ to SQL is an ORM. It offers some things that CSLA does not, and CSLA offers some things L2S does not, like integrated validation and n-tier persistence through the various remote data portals. In fact, I'd say that last thing, n-tier persistence, trumps them all for me. If I want to use Entity Framework or LINQ to SQL over the net, I have to put something like WCF in between, and that multiplies the work and complexity enormously, to the point where I think it is much more verbose than CSLA. (Now, I'm a fan of WCF, REST and SOA, but use it where you really need it, such as when you want to expose a service to third parties. For most line-of-business apps, it isn't really needed, and CSLA is a better choice.) In fact, with the latest version of CSLA, Rocky provides a WCFDataPortal, which I've used. It works great.
I'm a fan of SOLID, TDD, and other modern software development principles, and use them wherever practical. But I think the benefits of CSLA outweigh some of the objections of those orthodoxies, and in any case I've managed to make CSLA work quite well (and easily) with TDD, so that's not an issue.
A: I used it for a project a couple years ago. But when the project was done, I couldn't tell anyone what CSLA did for me. Sure, I inherited from its classes. But I was able to remove that inheritance from almost all classes with no restructuring. We had no use for the N-Tier stuff. The n-level undo was so slow that we couldn't use it. So I guess at the end it only helped us model our classes.
Having said that, other teams have started using it (after a horrid attempt by a team to create their own framework). So there has to be something worthwhile in there, because they're all smarter than me!
A: I'm a PHP guy. When we started building comparatively large-scale applications with PHP, I started researching lots of application frameworks and ORMs, first in the PHP world, then in Java and .NET. The reason I also looked at Java and .NET frameworks was not to blindly use any PHP framework, but to first try to understand what is really going on, and what kinds of enterprise-level architectures are out there.
Because I haven't used CSLA in a real-world application, I can't comment on its pros and cons, but what I can say is that Lhotka is one of the rare thinkers - I'm not saying just experts - in the software architecture field. Although the name Domain-Driven Design was coined by Eric Evans - by the way, his book is also great and I humbly advise reading it - Lhotka had been applying domain-driven design for years. Having said that, whatever you think about his framework, benefit from his profound ideas in the field.
You can find his talks on dotnetrocks.com/archives.aspx and videos from dnrtv.com/archives.aspx (search for Lhotka).
@Byron
What are the other two books you liked?
A: John,
We have teams working in CSLA from version 2 to 3.5 and have found it a great way to provide a consistent framework so all the developers are "doing it the same way". It is great that most of the low-value code is generated, and we know that when we run unit tests they work out of the box for all the CRUD stuff. We find that our TDD really comes in with the refactoring we do during design, and CSLA doesn't prevent us from doing any of that.
Chris
A: I last tried to use CSLA in the stone age days of VB6. In retrospect, it would have been more effective if I had used code generation. If you don't have effective code generation tools and a strategy for fitting them into your workflow, then you should avoid frameworks like CSLA; otherwise the features you get from CSLA won't make up for the amount of time you spend writing n lines of code per table, n lines of code per column, etc.
A: I've used CSLA.NET in a few projects now; it was most successful in a Windows Forms application, which has rich data-binding capabilities (which ASP.NET applications don't have).
Its main problem is TDD support, as people have been pointing out. This is because of the black-box-like behaviour of the DataPortal_XYZ functions and its inability to let us mock the data objects. There have been efforts to work around this issue, with this being the best approach
A: Yes, I (um, we) used it extensively to model our business process logic that was primarily databound forms in a windows forms application. The application was a trading system. CSLA is designed to be at that layer just below the UI.
If you think about your standard complex line-of-business application, you may have a form with many fields, many rules for those fields (including cross-field validation rules); you may invoke a modal dialog to edit some child object; and you may want to be able to cancel such dialogs and revert to a previous state. CSLA supports this.
Its cons are that it has a bit of a learning curve.
The key thing to remember is to use CSLA to model how a user interacts with forms in an application. The most efficient way for me was to design the UI and understand its flows, behaviour and validation rules before building the CSLA objects. Don't have your CSLA objects drive UI design.
We also found it very useful to be able to use CSLA business objects server side to validate objects sent from clients.
We also had built in mechanisms to perform validation asynchronously against web service (i.e. checking the credit limit range of a counterparty against a master).
CSLA enforces a strong separation between your UI, business logic and persistence, and we wrote a load of unit tests against them. It may not be strictly TDD because you are driving it from UI design, but that doesn't mean it isn't testable.
The only real alternative is creating your own model \ business objects, but pretty soon you end up implementing features that CSLA offers out of the box (INotifyPropertyChanged, IDataErrorInfo, PushState, PopState etc.)
A: I have used CSLA for one project and it worked great, making things much simpler and neater.
Instead of having your team write business objects in their own personal styles, we now have a common standard to work against.
//andy
A: I had experience with it several years ago. It is a brilliant architecture, but very complex, difficult to understand or change, and it's solving a problem that most of us developing web based applications don't necessarily have. It was developed more for windows based applications and handling multi-level undo, with a heavy emphasis on transactional logic. You will probably hear people say that since web applications are request-response at the page level, it is inappropriate, but with AJAX-style web apps maybe this argument doesn't hold so much water.
It has a very deep object model, and it can take a while to really wrap your brain around it. Of course, a lot can change in a few years. I would be interested to hear other recent opinions.
All things considered, it would not be my first choice of architecture.
A: I wanted to use it, but my then lead developer thought too much 'magic' was involved...
A: CSLA is the best application framework that exists. Rocky Lhotka is a very, very smart guy. He is writing the history of software development like Martin Fowler and David S. Platt, but my favourite writers are Rod Stephens, Mathew McDonalds, Jeff Levinson, Thearon Willis and Louis Davidson, alias Dr. SQL. :-)
Pros: All design patterns are applied.
Cons: Hard to learn, and few samples.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/15241",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "51"
} |
Q: Simplest way to make a Google Map mashup? Given a list of locations such as
<td>El Cerrito, CA</td>
<td>Corvallis, OR</td>
<td>Morganton, NC</td>
<td>New York, NY</td>
<td>San Diego, CA</td>
What's the easiest way to generate a Google Map with pushpins for each location?
A: Check out the Google Maps API Examples
They make it pretty simple and their API documentation is great.
Most of the examples are for doing all the code in JavaScript on the client side, but there are APIs for other languages available as well.
A: I'm assuming you have the basics for Maps in your code already with your API Key.
<head>
<script type="text/javascript"
src="http://maps.google.com/maps?file=api&v=2&key=xxxxx">
</script>
<script type="text/javascript">
function createMap() {
var map = new GMap2(document.getElementById("map"));
map.setCenter(new GLatLng(37.44, -122.14), 14);
}
</script>
</head>
<body onload="createMap()" onunload="GUnload()">
Everything in Google Maps is based off of latitude (lat) and longitude (lng).
So to create a simple marker you will just create a GMarker with the lat and lng.
var where = new GLatLng(37.925243,-122.307358); //Lat and Lng for El Cerrito, CA
var marker = new GMarker(where); // Create marker (Pinhead thingy)
map.setCenter(where); // Center map on marker
map.addOverlay(marker); // Add marker to map
However, if you don't want to look up the lat and lng for each city, you can use Google's geocoder. Here's an example:
var address = "El Cerrito, CA";
var geocoder = new GClientGeocoder;
geocoder.getLatLng(address, function(point) {
if (point) {
map.clearOverlays(); // Clear all markers
map.addOverlay(new GMarker(point)); // Add marker to map
map.setCenter(point, 10); // Center and zoom map on marker
}
});
So I would just create an array of GLatLng's of every city from the GeoCoder and then draw them on the map.
A: I guess more information would be needed to really give you an answer, but over at Django Pluggables there is a django-googlemap plugin that might be of help.
Edit: Adam has a much better answer. When in doubt, look at the API examples.
A: Try this: http://www.google.com/uds/solutions/wizards/mapsearch.html
It's a google maps wizard which will generate the code for you. Not the best for your application; but a good place to "get your feet wet" ;)
Edit: (found the link), here's a good Google Maps API stepwise tutorial.
Good luck!
/mp
A: Here are some links but as with most things i have not got round to trying them yet.
http://gathadams.com/2007/08/21/add-google-maps-to-your-net-site-in-10-minutes/
http://www.mapbuilder.net/
Cheers
John
| {
"language": "en",
"url": "https://stackoverflow.com/questions/15247",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9"
} |
Q: Can placement new for arrays be used in a portable way? Is it possible to actually make use of placement new in portable code when using it for arrays?
It appears that the pointer you get back from new[] is not always the same as the address you pass in (5.3.4, note 12 in the standard seems to confirm that this is correct), but I don't see how you can allocate a buffer for the array to go in if this is the case.
The following example shows the problem. Compiled with Visual Studio, this example results in memory corruption:
#include <new>
#include <stdio.h>
class A
{
public:
A() : data(0) {}
virtual ~A() {}
int data;
};
int main()
{
const int NUMELEMENTS=20;
char *pBuffer = new char[NUMELEMENTS*sizeof(A)];
A *pA = new(pBuffer) A[NUMELEMENTS];
// With VC++, pA will be four bytes higher than pBuffer
printf("Buffer address: %p, Array address: %p\n", (void*)pBuffer, (void*)pA);
// Debug runtime will assert here due to heap corruption
delete[] pBuffer;
return 0;
}
Looking at the memory, the compiler seems to be using the first four bytes of the buffer to store a count of the number of items in it. This means that because the buffer is only sizeof(A)*NUMELEMENTS big, the last element in the array is written into unallocated heap.
So the question is can you find out how much additional overhead your implementation wants in order to use placement new[] safely? Ideally, I need a technique that's portable between different compilers. Note that, at least in VC's case, the overhead seems to differ for different classes. For instance, if I remove the virtual destructor in the example, the address returned from new[] is the same as the address I pass in.
A: @Derek
5.3.4, section 12 talks about the array allocation overhead and, unless I'm misreading it, it seems to suggest to me that it is valid for the compiler to add it on placement new as well:
This overhead may be applied in all array new-expressions, including those referencing the library function operator new[](std::size_t, void*) and other placement allocation functions. The amount of overhead may vary from one invocation of new to another.
That said, I think VC was the only compiler that gave me trouble with this, out of VC, GCC, CodeWarrior and ProDG. I'd have to check again to be sure, though.
A: @James
I'm not even really clear why it needs the additional data, as you wouldn't call delete[] on the array anyway, so I don't entirely see why it needs to know how many items are in it.
After giving this some thought, I agree with you. There is no reason why placement new should need to store the number of elements, because there is no placement delete. Since there's no placement delete, there's no reason for placement new to store the number of elements.
I also tested this with gcc on my Mac, using a class with a destructor. On my system, placement new was not changing the pointer. This makes me wonder if this is a VC++ issue, and whether this might violate the standard (the standard doesn't specifically address this, so far as I can find).
A: Personally I'd go with the option of not using placement new on the array and instead use placement new on each item in the array individually. For example:
int main(int argc, char* argv[])
{
const int NUMELEMENTS=20;
char *pBuffer = new char[NUMELEMENTS*sizeof(A)];
A *pA = (A*)pBuffer;
for(int i = 0; i < NUMELEMENTS; ++i)
{
new (pA + i) A(); // construct in place; assigning the returned pointer to pA[i] wouldn't compile
}
printf("Buffer address: %x, Array address: %x\n", pBuffer, pA);
// dont forget to destroy!
for(int i = 0; i < NUMELEMENTS; ++i)
{
pA[i].~A();
}
delete[] pBuffer;
return 0;
}
Regardless of the method you use, make sure you manually destroy each of those items in the array before you delete pBuffer, as you could end up with leaks ;)
Note: I haven't compiled this, but I think it should work (I'm on a machine that doesn't have a C++ compiler installed). It still indicates the point :) Hope it helps in some way!
Edit:
The reason it needs to keep track of the number of elements is so that it can iterate through them when you call delete on the array and make sure the destructors are called on each of the objects. If it doesn't know how many there are it wouldn't be able to do this.
A: Thanks for the replies. Using placement new for each item in the array was the solution I ended up using when I ran into this (sorry, should have mentioned that in the question). I just felt that there must have been something I was missing about doing it with placement new[]. As it is, it seems like placement new[] is essentially unusable thanks to the standard allowing the compiler to add an additional unspecified overhead to the array. I don't see how you could ever use it safely and portably.
I'm not even really clear why it needs the additional data, as you wouldn't call delete[] on the array anyway, so I don't entirely see why it needs to know how many items are in it.
A: Placement new itself is portable, but the assumptions you make about what it does with a specified block of memory are not portable. Like what was said before, if you were a compiler and were given a chunk of memory, how would you know how to allocate an array and properly destruct each element if all you had was a pointer? (See the interface of operator delete[].)
Edit:
And there actually is a placement delete, only it is only called when a constructor throws an exception while allocating an array with placement new[].
Whether new[] actually needs to keep track of the number of elements somehow is something that is left up to the standard, which leaves it up to the compiler. Unfortunately, in this case.
A: Similar to how you would use a single element to calculate the size for one placement-new, use an array of those elements to calculate the size required for an array.
If you require the size for other calculations where the number of elements may not be known you can use sizeof(A[1]) and multiply by your required element count.
e.g
char *pBuffer = new char[ sizeof(A[NUMELEMENTS]) ];
A *pA = (A*)pBuffer;
for(int i = 0; i < NUMELEMENTS; ++i)
{
new (pA + i) A(); // construct in place (the original assignment to pA[i] wouldn't compile)
}
A: I think gcc does the same thing as MSVC, but of course this doesn't make it "portable".
I think you can work around the problem when NUMELEMENTS is indeed a compile time constant, like so:
typedef A Arr[NUMELEMENTS];
A* p = new (buffer) Arr;
This should use the scalar placement new.
A: C++17 (draft N4659) says in [expr.new], paragraph 15:
[O]verhead may be applied in all array new-expressions, including those referencing the library function operator new[](std::size_t, void*) and other placement allocation functions. The amount of overhead may vary from one invocation of new to another.
So it appears to be impossible to use (void*) placement new[] safely in C++17 (and earlier), and it's unclear to me why it's even specified to exist.
In C++20 (draft N4861) this was changed to
[O]verhead may be applied in all array new-expressions, including those referencing a placement allocation function, except when referencing the library function operator new[](std::size_t, void*). The amount of overhead may vary from one invocation of new to another.
So if you're sure that you're using C++20, you can safely use it—but only that one placement form, and only (it appears) if you don't override the standard definition.
Even the C++20 text seems ridiculous, because the only purpose of the extra space is to store array-size metadata, but there is no way to access it when using any custom placement form of new[]. It's in a private format that only delete[] knows how to read—and with custom allocation you can't use delete[], so at best it's just wasted space.
Actually, as far as I can tell, there is no safe way to use custom forms of operator new[] at all. There is no way to call the destructors correctly because the necessary information isn't passed to operator new[]. Even if you know that the objects are trivially destructible, the new expression may return a pointer to some arbitrary location in the middle of the memory block that your operator new[] returned (skipping over the pointless metadata), so you can't wrap an allocation library that only supplies malloc and free equivalents: it also needs a way to search for a block by a pointer to its middle, which even if it exists is likely to be a lot slower.
I don't understand how they (or just Stroustrup?) botched this so badly. The obviously correct way to do it is to pass the number of array elements and the size of each element to operator new[] as two arguments, and let each allocator choose how to store it. Perhaps I'm missing something.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/15254",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "47"
} |
Q: Objective-C/Cocoa: How do I accept a bad server certificate? Using NSURLRequest, I am trying to access a web site that has an expired certificate. When I send the request, my connection:didFailWithError delegate method is invoked with the following info:
-1203, NSURLErrorDomain, bad server certificate
My searches have only turned up one solution: a hidden class method in NSURLRequest:
[NSURLRequest setAllowsAnyHTTPSCertificate:YES forHost:myHost];
However, I don't want to use private APIs in a production app for obvious reasons.
Any suggestions on what to do? Do I need to use CFNetwork APIs, and if so, two questions:
*
*Any sample code I can use to get started? I haven't found any online.
*If I use CFNetwork for this, do I have to ditch NSURL entirely?
EDIT:
iPhone OS 3.0 introduced a supported method for doing this. More details here: How to use NSURLConnection to connect with SSL for an untrusted cert?
A: The supported way of doing this requires using CFNetwork. All you have to do is attach a kCFStreamPropertySSLSettings dictionary to the stream that specifies kCFStreamSSLValidatesCertificateChain == kCFBooleanFalse. Below is some quick code that does it, minus checking for valid results and cleaning up. Once you have done this you can use CFReadStreamRead() to get the data.
CFURLRef myURL = CFURLCreateWithString(kCFAllocatorDefault, CFSTR("https://www.apple.com"), NULL);
CFHTTPMessageRef myRequest = CFHTTPMessageCreateRequest(kCFAllocatorDefault, CFSTR("GET"), myURL, kCFHTTPVersion1_1);
CFReadStreamRef myStream = CFReadStreamCreateForHTTPRequest(kCFAllocatorDefault, myRequest);
CFMutableDictionaryRef myDict = CFDictionaryCreateMutable(kCFAllocatorDefault, 0, &kCFTypeDictionaryKeyCallBacks, &kCFTypeDictionaryValueCallBacks);
CFDictionarySetValue(myDict, kCFStreamSSLValidatesCertificateChain, kCFBooleanFalse);
CFReadStreamSetProperty(myStream, kCFStreamPropertySSLSettings, myDict);
CFReadStreamOpen(myStream);
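Once the stream is open, a simple synchronous read loop might look like this (again minus error checks and CFRelease cleanup):
UInt8 buf[1024];
CFIndex bytesRead;
CFMutableDataRef responseData = CFDataCreateMutable(kCFAllocatorDefault, 0);
while ((bytesRead = CFReadStreamRead(myStream, buf, sizeof(buf))) > 0)
{
    CFDataAppendBytes(responseData, buf, bytesRead);
}
CFReadStreamClose(myStream);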
A: If it's for an internal server for testing purposes, why not just import the test server's certificate into the KeyChain and set custom trust settings?
A: iPhone OS 3.0 introduced a supported way of doing this that doesn't require the lower-level CFNetwork APIs. More details here:
How to use NSURLConnection to connect with SSL for an untrusted cert?
A: I've hit the same issue - I was developing a SOAP client, and the dev server has a "homegrown" certificate. I wasn't able to solve the issue even using that method, since I wasn't using NSURL, but the (poorly documented and apparently abandoned) WS methods, and decided for the time being to (internally) just use a non-SSL connection.
Having said that, however, the question that springs to mind is, if you aren't willing to use a private API in a production app, should you be allowing access to a site with a dodgy certificate?
I'll quote Jens Alfke:
That's not just a theoretical security problem. Something
like 25% of public DNS servers have been compromised, according to
recent reports, and can direct users to phishing/malware/ad sites even
if they enter the domain name properly. The only thing protecting you
from that is SSL certificate checking.
A: Can you create a self signed certificate and add your custom certificate authority to the trusted CAs? I'm not quite sure how this would work on the iPhone, but I'd assume on Mac OS X you would add these to the Keychain.
You may also be interested in this post Re: How to handle bad certificate error in NSURLDownload
A: Another option would be to use an alternate connection library.
I am a huge fan of AsyncSocket and it has support for self signed certs
http://code.google.com/p/cocoaasyncsocket/
Take a look; I think it is way more robust than the standard NSURLRequest.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/15266",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11"
} |
Q: Aging Data Structure in C# I want a data structure that will allow querying how many items in last X minutes. An item may just be a simple identifier or a more complex data structure, preferably the timestamp of the item will be in the item, rather than stored outside (as a hash or similar, wouldn't want to have problems with multiple items having same timestamp).
So far it seems that with LINQ I could easily filter items with timestamp greater than a given time and aggregate a count. Though I'm hesitant to try to work .NET 3.5 specific stuff into my production environment yet. Are there any other suggestions for a similar data structure?
The other part that I'm interested in is aging old data out, If I'm only going to be asking for counts of items less than 6 hours ago I would like anything older than that to be removed from my data structure because this may be a long-running program.
A: A simple linked list can be used for this.
Basically you add new items to the end, and remove too old items from the start, it is a cheap data structure.
example-code:
list.AddLast(newItem); // assuming items expose a Timestamp property
while (list.First != null && list.First.Value.Timestamp < DateTime.UtcNow - maxAge)
    list.RemoveFirst();
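A slightly fuller C# sketch of the same idea, covering the counting query as well; ITimestamped and the member names are illustrative assumptions:
using System;
using System.Collections.Generic;

public interface ITimestamped { DateTime Timestamp { get; } }

public class AgingList<T> where T : ITimestamped
{
    private readonly LinkedList<T> items = new LinkedList<T>();
    private readonly TimeSpan maxAge;

    public AgingList(TimeSpan maxAge) { this.maxAge = maxAge; }

    public void Add(T item)
    {
        Prune();
        items.AddLast(item); // newest items live at the tail
    }

    // How many items arrived within the given window?
    public int CountSince(TimeSpan window)
    {
        Prune();
        DateTime cutoff = DateTime.UtcNow - window;
        int count = 0;
        // Walk back from the newest item; stop at the first one that is too old.
        for (LinkedListNode<T> node = items.Last;
             node != null && node.Value.Timestamp >= cutoff;
             node = node.Previous)
        {
            count++;
        }
        return count;
    }

    // Drop anything older than maxAge from the head.
    private void Prune()
    {
        DateTime cutoff = DateTime.UtcNow - maxAge;
        while (items.First != null && items.First.Value.Timestamp < cutoff)
        {
            items.RemoveFirst();
        }
    }
}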
If the list will be busy enough to warrant chopping off larger pieces than one at a time, then I agree with dmo, use a tree structure or something similar that allows pruning on a higher level.
A: I think that an important consideration will be the frequency of querying vs. adding/removing. If you will do frequent querying (especially if you'll have a large collection) a B-tree may be the way to go:
http://en.wikipedia.org/wiki/B-tree
You could have some thread go through and clean up this tree periodically or make it part of the search (again, depending on the usage). Basically, you'll do a tree search to find the spot "x minutes ago", then count the number of children on the nodes with newer times. If you keep the number of children under the nodes up to date, this sum can be done quickly.
A: A cache with sliding expiration will do the job....
Stuff your items in and the cache handles the aging....
http://www.sharedcache.com/cms/
| {
"language": "en",
"url": "https://stackoverflow.com/questions/15272",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: What do the getUTC* methods on the date object do? What does it mean when you get or create a date in UTC format in JavaScript?
A: A date represents a specific point in time. This point in time will be called differently in different places. As I write this, it's 00:27 on Tuesday in Germany, 23:27 on Monday in the UK and 18:27 on Monday in New York.
To take an example method: getDay returns the day of the week in the local timezone. Right now, for a user in Germany, it would return 2. For a user in the UK or US, it would return 1. In an hour's time, it will return 2 for the user in the UK (because it will then be 00:27 on Tuesday there).
The ..UTC.. methods deal with the representation of the time in UTC (also known as GMT). In winter, this is the same timezone as the UK, in summer it's an hour behind the time in the UK.
It's summer as I write this. getUTCDay will return 1 (Monday), getUTCHours will return 22, getUTCMinutes will return 27. So it's 22:27 on Monday in the UTC timezone. Whereas the plain get... functions will return different values depending on where the user is, the getUTC.. functions will return those same values no matter where the user is.
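A quick illustration, as of the moment described above (a user in Germany, 00:27 local, 22:27 UTC):
var d = new Date();
d.getHours();    // 0 -- the hour in the user's local timezone (Germany)
d.getUTCHours(); // 22 -- the hour in UTC, identical for every user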
A: getUTC is for converting times to Coordinated Universal Time (UTC, the acronym is ordered differently than what it stands for) which is the standard time based on the time in Greenwich, London.
The universal time is calculated using a time offset (in minutes, in JavaScript). This offset is based on the time zone configured in the client browser's operating system.
If you plan on storing dates for users in multiple time zones, this is what you should use.
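For example:
new Date().getTimezoneOffset(); // minutes to add to local time to get UTC, e.g. -120 for UTC+2, 300 for UTC-5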
A: Further to Dan's remark about the acronym being ordered differently from what it stands for, there is a good reason: UTC Abbreviation on Wikipedia
| {
"language": "en",
"url": "https://stackoverflow.com/questions/15302",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13"
} |
Q: JNI and Java: ant calling make or make calling ant? I'm just about to make my first trip into the world of JNI (Java Native Interface) to provide file system change notifications from platform specific C/C++ code to Java. That is unless someone suggest some brilliant library for doing this that I've missed.
Being new to JNI I've managed to find much documentation on the interface side of it and library generation side of it, but I've not found much on building the native library.
I've got an existing build based on ant for the pre-existing Java source, so I'm trying to work out if I should get ant to call make to create the library or if it's best to get make to call ant after creating the library?
Neither option jumps out as being terribly nice, but both seem better than trying to get ant to call a compiler to compile the code and generate the library directly.
A: As a simpler alternative to JNI, try JNA: https://jna.dev.java.net/. It may solve this hassle for you and be simpler (assuming it can do what you want).
A: I strongly dislike make because of its implicit ruleset and treatment of whitespace. Personally I would use cpp tasks (http://ant-contrib.sourceforge.net/cpptasks/index.html) to do my C compilation. They are not as flexible as make but they are also far less complex and it will mean you don't have to burden your developers with learning make.
A: I'm working on something similar right now. Be aware that using swig from swig.org is often easier as it generates the stubs to the native library for you.
The short answer to your question is that the ant file should run the make file after the java library has already been built, as the native library depends on the swig generated header, which is generated from the java class files.
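A sketch of that Ant-calls-make step (target names and paths are illustrative):
<target name="native" depends="compile, generate-header">
    <!-- run make in the directory that holds the native Makefile -->
    <exec executable="make" dir="native" failonerror="true"/>
</target>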
If you are super familiar with ant, and don't want to learn a new system, then http://ant-contrib.sourceforge.net/cpptasks/index.html, also linked by another poster, will let you build c++ in ant.
A: I'd skip JNI entirely, and use an external program which writes notifications on standard-output. Java can then simply read from the programs output stream and generate whatever event is necessary. JNI is way too much work if all you want is to send simple notifications.
Also, on Linux you can simply start "inotifywait" (with some suitable parameters, see "man inotifywait").
A: You could also try the terp C++ tasks at Codemesh. They are not free but they offer a high level of abstraction coupled with the ability to discover/specify the C++ compiler and the ability to iterate over more than one compiler/processor architecture/compiler configuration for multiplatform builds.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/15305",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
} |
Q: Optimizing the PDF Export of Huge Reports in Sql Reporting Services 2005 First off I understand that it is a horrible idea to run extremely large/long running reports. I am aware that Microsoft has a rule of thumb stating that a SSRS report should take no longer than 30 seconds to execute. However sometimes gargantuan reports are a preferred evil due to external forces such complying with state laws.
At my place of employment, we have an asp.net (2.0) app that we have migrated from Crystal Reports to SSRS. Due to the large user base and complex reporting UI requirements we have a set of screens that accepts user inputted parameters and creates schedules to be run over night. Since the application supports multiple reporting frameworks we do not use the scheduling/snapshot facilities of SSRS. All of the reports in the system are generated by a scheduled console app which takes user entered parameters and generates the reports with the corresponding reporting solutions the reports were created with. In the case of SSRS reports, the console app generates the SSRS reports and exports them as PDFs via the SSRS web service API.
So far SSRS has been much easier to deal with than Crystal with the exception of a certain 25,000 page report that we have recently converted from crystal reports to SSRS. The SSRS server is a 64bit 2003 server with 32 gigs of ram running SSRS 2005. All of our smaller reports work fantastically, but we are having trouble with our larger reports such as this one. Unfortunately, we can't seem to generate the aforemention report through the web service API. The following error occurs roughly 30-35 minutes into the generation/export:
Exception Message: The underlying connection was closed: An unexpected error occurred on a receive.
The web service call is something I'm sure you all have seen before:
data = rs.Render(this.ReportPath, this.ExportFormat, null, deviceInfo,
selectedParameters, null, null, out encoding, out mimeType, out usedParameters,
out warnings, out streamIds);
The odd thing is that this report will run/render/export if the report is run directly on the reporting server using the report manager. The proc that produces the data for the report runs for about 5 minutes. The report renders in SSRS native format in the browser/viewer after about 12 minutes. Exporting to pdf through the browser/viewer in the report manager takes an additional 55 minutes. This works reliably and it produces a whopping 1.03gb pdf.
Here are some of the more obvious things I've tried to get the report working via the web service API:
*
*set the HttpRuntime ExecutionTimeout value to 3 hours on the report server
*disabled http keep alives on the report server
*increased the script timeout on the report server
*set the report to never time out on the server
*set the report timeout to several hours on the client call
From the tweaks I have tried, I am fairly comfortable saying that any timeout issues have been eliminated.
Based off of my research of the error message, I believe that the web service API does not send chunked responses by default. This means that it tries to send all 1.3gb over the wire in one response. At a certain point, IIS throws in the towel. Unfortunately the API abstracts away web service configuration so I can't seem to find a way to enable response chunking.
*
*Does anyone know of any way to reduce/optimize the PDF export phase and/or the size of the PDF without lowering the total page count?
*Is there a way to turn on response chunking for SSRS?
*Does anyone else have any other theories as to why this runs on the server but not through the API?
EDIT: After reading kcrumley's post I began to take a look at the average page size by taking file size / page count. Interestingly enough on smaller reports the math works out so that each page is roughly 5K. Interestingly, when the report gets larger this "average" increases. An 8000 page report for example is averaging over 40K/page. Very odd. I will also add that the number of records per page is set except for the last page in each grouping, so it's not a case where some pages have more records than another.
A: We narrowed down the large PDF exports from SSRS and found 2 main culprits
1) Unless images are JPG or PNG colour type 3, they are expanded to BMPs. See here
2) Unless you configure SSRS to behave otherwise (not recommended), then SSRS will embed fonts or font subsets into the PDF, unless they are one of the 5 'standard' PDF fonts.
Although none of the standard fonts (other than Symbol I guess) are installed on most Windows OS's out of the box, we've found that if you use Times New Roman, Courier New, or Arial then forward and reverse font substitution will take place.
The easiest way to convert your RDL's is to view them as XML and search and replace the FontFamily tags.
If you have to use a non standard font, then, you can still minimize the damage:
*
*Use as few fonts as you can. Search through the RDL XML to make sure there aren't any redundant fonts.
*Use TTF fonts if you use different sizes of the font.
*Try not to mix normal, bold and italic variants of the font, else it will be embedded multiple times.
A:
*
*Does anyone know of any way to reduce/optimize the PDF export phase and/or the size of the PDF without lowering the total page count?
I have a few ideas and questions:
1. Is this a graphics-heavy report? If not, do you have tables that start out as text but are converted into a graphic by the SSRS PDF renderer (check if you can select the text in the PDF)? 41K per page might be more than it should be, or it might not, depending on how information-dense your report is. But we've had cases where we had minor issues with a report's layout, like having a table bleed into the page's margins, that resulted in the SSRS PDF renderer "throwing up its hands" and rendering the table as an image instead of as text. Obviously, the fewer graphics in your report, the smaller your file size will be.
2. Is there a way that you could easily break the report into pieces? E.g., if it's a 10-location report, where Location 1 is followed by Location 2, etc., on your final report, could you run the Location 1 portion independent of the Location 2 portion, etc.? If so, you could join the 10 sub-reports into one final PDF using PDFSharp after you've received them all. This leads to some difficulties with page numbering, but nothing insurmountable.
3. Does anyone else have any other theories as to why this runs on the server but not through the API?
My guess would be the sheer size of the report. I don't remember everything about what's an IIS setting and what's SSRS-specific, but there might be some overall IIS settings (maybe in Metabase.xml) that would have to be updated to even allow that much data to pass through.
You could isolate the question of whether the time is the problem by taking one of your working reports and building in a long wait time in your stored procedures with WAITFOR (assuming SQL Server for your DBMS).
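For instance, something like this in the proc (illustrative):
-- Simulate a long-running query in a known-good report's stored procedure.
WAITFOR DELAY '00:45:00'; -- pause 45 minutes before returning rows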
Not solutions, per se, but ideas. Hope it helps.
A: Obviously, it's a huge report; in fact it's closer to a 1.3 GB database than a report.
Have you thought of finding a way to split it into multiple pieces and then combine them together? (use one of several different ways to combine PDFs listed on this site.)
| {
"language": "en",
"url": "https://stackoverflow.com/questions/15310",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: Is there a method for handling errors from COM objects in RDML? Is there a method for handling errors from COM objects in RDML? For instance, when calling Word VBA methods like PasteSpecial, an error is returned and the LANSA application crashes. I cannot find anything in the documentation to allow handling of these errors.
Actually, error handling in general is a weak-point for LANSA and RDML, but that's another topic.
A: I know almost nothing about LANSA etc. A few minutes in Google convinced me that error handling is, as you say, not a strong point. Over on the lansa.us site there is this article about remote debugging which, at a stretch, might be going in the right direction.
One wonders if a DEF_BREAK would work. Here's a longish post about using DEF_BREAK. If DEF_BREAK hooks in with #COM_* functions, that might be a possibility. Please pardon my naivety in this regard.
I also found some code at the LANSA Tech Exchange. I had hoped that there'd be something obvious, but no. Being more LANSA-aware than me, you may find something.
A: At my company, we were able to handle communication APIs through the ActiveX part of LANSA. The supplier embedded his APIs in an ActiveX component. We used this component in our LANSA application. It works fine and is stable.
Maybe you could embed the Microsoft APIs in an ActiveX component too? I don't know off the top of my head if Microsoft Word can be addressed as an ActiveX component.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/15315",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: IE7 HTML/CSS margin-bottom bug Here is the scenario:
I have a table with a margin-bottom of 19px. Below that I have a form that contains some fieldsets. One of them is floated right. The problem is that the margin-bottom is not getting the full 19px in IE7. I've gone through all of the IE7 css/margin/float bugs that I can think of and have tried remedies but have been unsuccessful. I have been googling for a while now and cannot find anything that is helping out.
Here is what I have tried.
*
*Wrapping the form or fieldset in an unstyled div. No apparent change.
*Nixing the margin-bottom on the table and instead wrapping that with a div and giving it a padding-bottom of 19px. No apparent change.
*Nixing the margin-bottom on the table and adding a div with a fixed height of 19px. No apparent change.
*Putting a clear between the table and the fieldset.
I know there are some others that I am forgetting, but those are the things I have tried out recently. This happens to each fieldset.
I am using a reset style sheet and have a xhtml transitional doctype.
Edit: I also have the IE7 web developer toolbar and Firebug. The style information for both browsers says that it has a margin-bottom: 19px; but it clearly is not for IE7.
A: If you have floated and unfloated elements, the only surefire way to ensure vertical space between them cross-browser is padding-top on the subsequent element.
A: Replace the margin-bottom: 19px with a <div/> of height: 19px. That is, remove the CSS margin-bottom and insert a <div/> with height: 19px between the <table/> and the <form/>.
It solved this problem in my case.
<table id="mytable">
<tr>
<th>Col 1</th>
<th>Col 3</th>
<th>Col 2</th>
</tr>
<tr>
<td>Val 1</td>
<td>Val 2</td>
<td>Val 3</td>
</tr>
</table>
<div style="height:19px;"></div>
<form method="post" action="test.html" id="myform">
A: Have you got a valid doctype? Otherwise IE7 renders in quirks mode, which is basically IE5.5.
A: I put together what you described there, and it's rendering properly for me. It's likely you have another style somewhere that's having an effect on your form, or your table. If you aren't doing so already, using a reset.css file is extremely useful. If you want to see which styles are affecting a particular element, the Web Developer Toolbar for firefox has a handy Style Information command for seeing which styles (from which files/style blocks/inline styles) are being applied to it. You can activate it by pressing Ctrl+Shift+Y, or hitting CSS -> View Style Information
Here's the code that worked for me in IE7:
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
"http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
<title>Test</title>
<style>
#mytable {
margin-bottom: 19px;
border: solid green 1px;
}
#myform {
border: solid red 1px;
overflow: hidden;
}
#floaty {
float: right;
border: solid blue 1px;
}
</style>
</head>
<body>
<table id="mytable">
<tr>
<th>Col 1</th>
<th>Col 3</th>
<th>Col 2</th>
</tr>
<tr>
<td>Val 1</td>
<td>Val 2</td>
<td>Val 3</td>
</tr>
</table>
<form method="post" action="test.html" id="myform">
<fieldset id="floaty">
<label for="myinput">Caption:</label>
<input id="myinput" type="text" />
</fieldset>
<fieldset>
<p>Some example content</p>
<input type="checkbox" id="mycheckbox" />
<label for="mycheckbox">Click MEEEEE</label>
</fieldset>
</form>
</body>
</html>
A: If you remove the float from the element below the table, does the margin appear?
A: I wouldn't know for sure without testing but try placing this between the table and the fieldset:
<br style="clear:both;" />
| {
"language": "en",
"url": "https://stackoverflow.com/questions/15326",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11"
} |
Q: How to create short snippets in Vim? I have recently started using Vim as my text editor and am currently working on my own customizations.
I suppose keyboard mappings can do pretty much anything, but for the time being I'm using them as a sort of snippets facility almost exclusively.
So, for example, if I type def{TAB} (:imap def{TAB} def ():<ESC>3ha), it expands to:
def |(): # '|' represents the caret
This works as expected, but I find it annoying when Vim waits for a full command while I'm typing a word containing "def" and am not interested in expanding it.
*
*Is there a way to avoid this or use this function more effectively to this end?
*Is any other Vim feature better suited for this?
After taking a quick look at SnippetsEmu, it looks like it's the best option and much easier to customize than I first thought.
To continue with the previous example:
:Snippet def <{}>():
Once defined, you can expand your snippet by typing def{TAB}.
A: If SnippetsEmu is too heavy or ambitious for what you need (it was for me), I wrote a plugin that manages snippets based on filetype. It even has tab completion when picking the snippet! :)
Get it here: snippets.vim
A: I just installed UltiSnips. There’s a good article that explains why you might choose UltiSnips: Why UltiSnips?
I haven’t used any of the other snippet plugins; I decided to take the plunge with one that seemed full-featured and would be able to accommodate me as I gain more Vim skills and want to do more sophisticated things.
A: SnippetsEmu is a useful snippets plugin.
A: As noted by MDCore, SnippetsEmu is a popular Vim script that does just that and more. If you need only expanding (without moving back the caret), you can use the standard :ab[breviate] command.
:ab[breviate] [<expr>] {lhs} {rhs}
add abbreviation for {lhs} to {rhs}. If {lhs} already
existed it is replaced with the new {rhs}. {rhs} may
contain spaces.
See |:map-<expr>| for the optional <expr> argument.
A: Snipmate - like texmate :)
http://www.vim.org/scripts/script.php?script_id=2540
video:
http://vimeo.com/3535418
snippet def
	""" ${1:docstring} """
	def ${2:name}():
		return ${3:value}
A: As another suggestion (although slightly different) using vim's built in functionality:
:iabbrev def def(): #<LEFT><LEFT><LEFT><LEFT><LEFT>
Now whenever you type def followed by a space or other non-word character, it will expand to the same as what you've given as the output of SnippetsEmu (the space comes from the space you entered to trigger the completion).
This approach doesn't suffer the "lag" issue you encountered using :inoremap, and is built into vim. For more information on this feature, look at :help abbrev.
You may be concerned that being triggered by space not tab it will trigger unnecessarily, but in general vim is pretty smart about when to trigger it. The issue can be additionally mitigated by enabling the abbreviation only for certain file-types (eg, python):
au filetype python :iabbrev ... etc
Snip[ets] (Manager|Emu|Mate|.vim) is of course also a perfect solution, but it's nice to be aware of the alternatives (especially when they are built in).
| {
"language": "en",
"url": "https://stackoverflow.com/questions/15334",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "29"
} |
Q: Which scripting language to support in an existing codebase? I'm looking at adding scripting functionality to an existing codebase and am weighing up the pros/cons of various packages. Lua is probably the most obvious choice, but I was wondering if people have any other suggestions based on their experience.
Scripts will be triggered upon certain events and may stay resident for a period of time. For example upon startup a script may define several options which the program presents to the user as a number of buttons. Upon selecting one of these buttons the program will notify the script where further events may occur.
These are the only real requirements;
*
*Must be a cross-platform library that is compilable from source
*Scripts must be able to call registered code-side functions
*Code must be able to call script-side functions
*Be used within a C/C++ codebase.
A: Based on my own experience:
*
*Python. IMHO this is a good choice. We have a pretty big code base with a lot of users and they like it a lot.
*Ruby. There are some really nice apps such as Google Sketchup that use this. I wrote a Sketchup plugin and thought it was pretty nice.
*Tcl. This is the old-school embeddable scripting language of choice, but it doesn't have a lot of momentum these days. It's high quality though, they use it on the Hubble Space Telescope!
*Lua. I've only done baby stuff with it but IIRC it only has a floating point numeric type, so make sure that's not a problem for the data you will be working with.
We're lucky to be living in the golden age of scripting, so it's hard to make a bad choice if you choose from any of the popular ones.
A: I have played around a little bit with Spidermonkey. It seems like it would at least be worth a look at in your situation. I have heard good things about Lua as well. The big argument for using a javascript scripting language is that a lot of developers know it already and would probably be more comfortable from the get go, whereas Lua most likely would have a bit of a learning curve.
I'm not completely positive, but I think that SpiderMonkey meets your 4 requirements.
A: I've used Python extensively for this purpose and have never regretted it.
A: Lua has the most straightforward C API for binding into a code base that I've ever used. In fact, I usually quickly roll bindings for it by hand, whereas you often wouldn't consider doing so for the others without a generator like SWIG. Also, it's typically faster and more lightweight than the alternatives, and coroutines are a very useful feature that few other languages provide.
A: AngelScript
lets you call standard C functions and C++ methods with no need for proxy functions. The application simply registers the functions, objects, and methods that the scripts should be able to work with and nothing more has to be done with your code. The same functions used by the application internally can also be used by the scripting engine, which eliminates the need to duplicate functionality.
For the script writer the scripting language follows the widely known syntax of C/C++ (with minor changes), but without the need to worry about pointers and memory leaks.
A: The original question described Tcl to a "T".
Tcl was designed from the beginning to be an embedded scripting language. It has evolved to be a first-class dynamic language in its own right, but it is still used all over the world as an embedded language. It is available under the BSD license, so it is just about as free as it gets. It also compiles on pretty much any modern platform, and many not-so-modern ones. And not only does it work on desktop systems; there are variations available for mobile platforms.
Tcl excels as a "glue" language, where you can write performance-intensive functions in C while still benefiting from the advantages of a scripting language for less performance critical parts of the application.
Tcl also comes with a first class GUI toolkit (Tk) that is arguably one of the easiest cross platform GUI toolkits available. It also interfaces very nicely with SQLite and other databases, and has had built-in support for unicode for quite some time.
If the scripting interface will be made available to your customers (as opposed to simply enabling your own engineers to work at the scripting level), Tcl is extremely easy to learn as there are a total of only 12 rules that govern the entire language (as of tcl 8.6). In fact, Tcl shines as a way to invent domain specific languages which is often how it is used as an end-user scripting solution.
A: There were some excellent suggestions already, but I just wanted to mention that Perl can also call into, and be called from, C/C++.
A: You probably could use any modern scripting / bytecode language.
If you're willing to put up with the growing pains of a new product, you could use the Parrot VM, which has support for many, if not all, of the languages listed on this page. Unfortunately it's not done yet, but that hasn't stopped some people from using it in a production environment.
A: I think most people are probably mentioning the scripting language that they are most familiar with. From my perspective, Tcl was designed specifically to interface with C, so your problem domain is tailor-made for the language. However, I'm sure Python, Perl, or Lua would be fine. You should probably choose the language that is most familiar to your current team, since that will reduce the learning time.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/15349",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
} |
Q: Requirements, Specs, and Managing Up in an Agile Environment My company has tried to adopt the scrum methodology with mixed success. Theses are some areas where we've had issues. How do you handle these?
*
*Tracking requirements from Product Marketing through to product. We're trying out JIRA to track all requirements individually and assigning a release to each one as it is picked for implementation.
*Who creates stories? Product Management who doesn't know enough to create effectively small stories, developers who may not have domain knowledge, an analyst in between?
*Functional specs
*
*do you write them or just try to get them into a story definition?
*Do you write functional specs per story? Per feature?
*How do you see the relationship between functional specs and stories?
*answering the question from people with VP in their title "what are we going to get by [8 months from now]?"
A: Let's see if my take adds anything (not certain by any means...)
*
*I'm not sure about the "assigning a release to each one" thing. I thought the idea was to put a "price" on each story/function point/unit of development and pick what goes into the current sprint. Everything else is backlog - you can offer some indication of remaining effort (see evidence based scheduling in FogBugz) but I don't think you should be allocating to specific sprints - you don't know what'll be in the backlog by the time you get there, for one thing. All you know is that it's going to change, so why waste time on it?
*There should be a designated user representative. Or more than one, if domain knowledge can't be concentrated in one individual. But someone from the business domain should be in charge overall of deciding what goes into a sprint, subject to the effort available, of course. There can be a place for a Business Analyst type, but they need to be domain experts. If your user(s) can't write stories, even with your help (it's a co-operative thing, or should be) then you all need help. Consider getting a coach involved for a sprint or two.
*You won't be writing functional specs in an Agile environment. You'll be writing code. Your user will be on hand at all times (or you're already exposed to significant risk) and they're your spec. The story tells you "what", and is going to be a small enough unit of work that you should be able to decide on "how" fairly quickly. And refactor. Always refactor. It's not an overhead, it's part of the process and your design won't evolve satisfactorily without it.
*If you have VPs (hey, I'm a VP, we're not all bad!) who ask that sort of question, then parts of your company are not getting it yet. Choose someone (the person best able to deal with non-techies, perhaps, or maybe the person least able, since they clearly need the practice) to explain it to them. If what's built is important to them, perhaps their questions are an indication that someone's not as involved as they should be.
A: *
*You should translate your requirements into a Product Backlog. This backlog is what you use to decide what Sprint Backlog items are chosen for each Sprint iteration. Management decides what is on the Product Backlog, but the team needs to agree to what they can produce in the Sprint (this is a negotiation that occurs at every sprint).
*Your Product Owner (usually a product manager) drives the creation of the stories. The Stories are simple (as a system admin, I need to be able to add a user). If your product management does not understand your product, you are in trouble.
*Agile is about designing as required. The design is never in the story. The spec can be per story, or per feature. You could design all your CRUD inside of one spec, which covers multiple stories.
*The Product Owner gets a product demo at the end of every Sprint. So value is demonstrated at every cycle. So your VP would get reports on a monthly basis (usually 3 weeks of dev + 1 week to prepare for the Sprint demo).
A: If you are going to do anything in regard to writing or designing code, one of the things you should always do is write a spec, irrespective of whatever methodology you are using, whether it is Scrum, XP, Agile or SDLC. Many people say that writing specs is unagile and a monument to wasteful bureaucratic paperwork. The simple fact is that they are misguided when they say that the code is the spec.
The clear fact is that a spec allows you to formulate your ideas and designs beforehand, and it's much easier to change a spec than it is to change a program, especially if you are working outside the confines of a simple LOB application. Specs ensure you have a clearer understanding of what is required when you start coding.
It's been shown time and time again that teams that use specs design better software.
In my opinion, if you hear anybody say the code is the spec, that is dogma, plain and simple, and is storing up huge maintainability problems for the future.
As an aside, I don't have anything against the Agile Manifesto or light, management-process-centric methods like Scrum. I've used it in the past few years a number of times, and it delivers. I've also seen good software go down the drain, where an agile focus would have saved it. But it is no panacea or silver bullet.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/15365",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: Only accepting certain ajax requests from authenticated users What's the best practice for making sure that certain ajax calls to certain pages are only accepted from authenticated users?
For example:
Let's say that I have a main page called blog.php (I know, creativity abounds). Let's also say that there is a page called delete.php which looks for the parameter post_id and then deletes some entry from a database.
In this very contrived example, there's some mechanism on blog.php which sends a request via ajax to delete.php to delete an entry.
Now this mechanism is only going to be available to authenticated users on blog.php. But what's to stop someone from just calling delete.php with a bunch of random numbers and deleting everything in sight?
I did a quick test where I set a session variable in blog.php and then did an ajax call to delete.php to return if the session variable was set or not (it wasn't).
What's the accepted way to handle this sort of thing?
OK. I must have been crazy the first time I tried this.
I just did another test like the one I described above and it worked perfectly.
A: You were correct in trying to use session variables. Once your user authenticates, you should store that information in their session so that each subsequent page view will see that. Make sure you are calling session_start() on both pages (blog.php and delete.php) before accessing $_SESSION. Also make sure you have cookies enabled -- and if not, you should pass an additional parameter in the query string, usually PHPSESSID=<session_id()>.
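A minimal sketch of the check in delete.php, assuming blog.php sets $_SESSION['user_id'] on login (names are illustrative):
<?php
session_start();

if (!isset($_SESSION['user_id'])) {
    header('HTTP/1.1 403 Forbidden'); // not authenticated: refuse the request
    exit;
}

// Authenticated: now also verify this user may delete $_POST['post_id']
// (e.g. check ownership) before touching the database.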
A: It is not recommended that you rely on sessions for authentication without taking additional measures; read up on session fixation and session hijacking.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/15366",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9"
} |
Q: What's the best UML diagramming tool? I'm trying to choose a tool for creating UML diagrams of all flavours. Usability is a major criteria for me, but I'd still take more power with a steeper learning curve and be happy. Free (as in beer) would be nice, but I'd be willing to pay if the tool's worth it. What should I be using?
A: For sequence diagrams, only, try websequencediagrams.com. It's a freemium (free for the basic tasks, paid for advanced features) product, and lets you quickly bang out a diagram without any fussing around with lines and stencils.
Alice->Bob: Authentication Request
note left of Bob: Bob thinks about it
Bob->Alice: Authentication Response
A: You can also check out Lucid Chart for UML and other types of diagramming.
A: Don't forget yuml.me, I love it.
A: For me it's Enterprise Architect from Sparx Systems. A very rounded UML tool for a very reasonable price.
Very strong feature list including: integrated project management, baselining, export/import (including export to html), documentation generation from the model, various templates (Zachman, TOGAF, etc.), IDE plugins, code generation (with IDE plugins available for Visual Studio, Eclipse & others), automation API - the list goes on.
Oh yeah, don't forget support for source control directly from inside the tool (SVN, CVS, TFS & SCC).
I would also stay away from Visio - you only get diagrams, not a model. Rename a class in one place in a UML modelling tool and you rename in all places. This is not the case in Visio!
A: For my simple & short UML working,
I've used this tool:
StarUML - http://staruml.sourceforge.net/en/
Great free software for UML drawing.
Although the original Star UML is no longer maintained, there's now a fork called White Star UML, which is actively developed.
A: I like VisualParadigm, mentioned before in this thread. It's powerful and easy to use; I think it gives the most power compared to other tools.
If you need something simple, quick and easy (and free) there is a great tool called UMLet - I highly recommend it. I've tried many UML diagramming tools and this is the simplest one (and it still allows you to make great diagrams). This is my choice :)
A: http://plantuml.sourceforge.net/index.html
A: In my practice I use Sequence Diagram Editor. It is a really fast and helpful tool. The one thing I don't like about it is that it is a commercial product, not free.
A: Some context: Recently for graduate school I researched UML tools for usability and UML comprehension in general for an independent project. I also model/architect for a living.
The previous posts have too many answers and not enough questions. A common misunderstanding is that UML is about creating diagrams. Sure, diagrams are important, but really you are creating a model. Here are the questions that should be answered as each vendor product/solution does some things better than others. Note: The listed answers are my view as the best even if other products support a given feature or need.
*
*Are you modeling or drawing? (Drawing - ArgoUML, free implementations, and Visio)
*Will you be modeling in the future? (For basic modeling - Community editions of pay products)
*Do you want to formalize your modeling through profiles or meta-models? OCL? (Sparx, RSM, Visual Paradigm)
*Are you concerned about model portability, XMI support? (GenMyModel, Sparx, Visual Paradigm, Altova)
*Do you have an existing set of documents that you need to work with? (Depends on the documents)
*Would you want to generate code stubs or fully functioning code? (GenMyModel, Visual Paradigm, Sparx, Altova)
*Do you need more mature processes such as use case management, pattern creation, asset creation, RUP integration, etc? (RSA/RSM/IBM Rational Products)
Detailed Examples: IBM Rational Software Architect did not implement UML 2.0 all the way when it comes to realize-type relationships when creating a UML profile, but Visual Paradigm and Sparx got it right.
Ok, that was way too detailed, so a simpler example would be ArgoUML, which has no code generation features and focuses on drawing more than the modeling aspect of UML. Sparx and Visual Paradigm do UML really well and generate code well, however, hooking into project lifecycles and other process is where RSM/RSA is strong.
Watch out for closed or product specific code generation processes or frameworks as you could end up stuck with that product.
This is a straight brain dump so a couple details may not be perfect, however, this should provide a general map to the questions and solutions to looking into.
NEW - Found a good list of many UML tools with descriptions. Wiki UML Tool List
A: As I usually use UML more as a communication tool rather than a modeling tool I sometimes have the need to flex the language a bit, which makes the strict modeling tools quite unwieldy. Also, they tend to have a large overhead for the occasional drawing. This also means I don't give tools that handle round-trip modeling well any bonus points. With this in mind...
When using Visio, I tend to use these stencils for my UMLing needs (the built in kind of suck). It could be that I have grown used to it as it is the primary diagramming tool at my current assignment.
OmniGraffle also has some UML stencils built in and more are available at Graffletopia, but I wouldn't recommend that as a diagramming tool as it has too many quirks (quirks that are good for many things, but not UML). Free trial though, so by all means... :)
I've been trying out MagicDraw a bit, but while functional, I found the user interface distracting.
Otherwise i find the Topcased an interesting project (or group of projects). Last I used it it still had some bugs, but it worked, and seems to have evolved nicely since. Works great on any Eclipse-enabled platform. Free as in speech and beer :)
As for the diagramming tool Dia, it's quite ugly (interface and resulting drawings), but it does get the job done. An interesting modeling tool free alternative is Umbrello, but I haven't really used it much.
I definitely agree with mashi that whiteboards are great (together with a digital camera or cellphone).
Probably some of the nicest tools I've used belong to the Rational family of tools.
A: Obviously if you are serious about UML in the long run you need to use a software UML tool like the ones suggested in the other answers, but I've found that a whiteboard is one of the best tools for UML diagramming, especially during the design phase, or when you are exploring different alternatives. Nothing beats a whiteboard for speed/flexibility in my mind. They are also great for collaboration assuming you are collocated physically.
A: In my opinion StarUML is the best.
A: I can't believe no one has mentioned the NetBeans UML Editor; it's great and satisfied all of my Java-based UML requirements.
This after I tested JDeveloper UML, ArgoUML and StarUML.
A: I recently conducted a poll "What UML Tools do you use?" on my blog. NetBeans UML was the top open-source choice and Enterprise Architect was the top commercial choice.
A: You can create UML class, sequence, component, use case, and activity diagrams in Visual Studio 2010 Ultimate. You can link these diagrams to Team Foundation work items so you can plan and track development and test work. You can also create sequence, dependency graphs, and layer diagrams from code and use Architecture Explorer to browse and explore your solution.
I've posted more links on my profile for more info.
A: You may be looking for an automated tool that will automatically generate a lot of stuff for you. But here's a free, generally powerful diagramming tool useful not only for UML but for all kinds of diagramming tasks. It accepts as input and outputs to a wide variety of commonly used file formats. It's called yEd, and it's worth a look
A: Visual Paradigm for UML http://content.usa.visual-paradigm.com/websiteimages/images/products/vpuml60/vpumltitle.gif
I'm very fond of Visual Paradigm for UML It's very powerful and has a free Community Edition and cheap Personal Edition as well.
Agilian
For Agile modeling there's also Agilian which is a bit more flexible, adds extra features to support smartboards and knows mind-mapping as well.
The thing I like most about their products is the flexibility. I'm using Enterprise Architect at work nowadays but I think it's not smart enough. I want to be able to quick-brainstorm some sequence diagrams and have the application keep my model up-to-date in the background, something VPUML does a very good job at.
In my opinion it's way better than Enterprise Architect, though that is a great tool as well :)
A: You might want to take a look at MagicDraw or Visual Paradigm for UML. Both offer community editions that, of course, don't span the full feature range, but may well be sufficient if you want to create diagrams only and not generate code or do full round-trip engineering.
A: Rational and Together/J are best-of-breed products, but expensive.
In my experience, I've enjoyed Eclipse Omondo and Sparx Enterprise Architect. Omondo integrates nicely with Eclipse for code generation, and has a very intuitive feel. However, it is strongly tied to Java. Sparx is a good tool for the price point, but lacks the full range of UML 2.0 diagrams.
Do NOT bother with Poseidon. It is buggy, bloated, and unusable for all intents and purposes.
A: For sequence diagrams you can also try Trace Modeler. It's not free but it has a great interface, very friendly and productive. You can use it on any platform.
A: Take a look at BOUML: multiplatform (Qt), works pretty well and supports collaborative work.
BOUML is a free UML 2 tool box (under development) allowing you to specify and generate code in C++, Java, Idl, Php and Python.
BOUML runs under Unix/Linux/Solaris, MacOS X (PowerPC and Intel) and Windows.
From Wikipedia:
The releases prior to version 4.23 are free software licensed under GPL. BOUML 5 and later is proprietary software.
A: If you're looking to get out the door and working on UML without having to learn a complex new tool I would check out Violet UML. I've used it to some pretty great success in the past.
A: PlantUML is an open-source markup-language-to-UML-diagram tool in Java that deserves to be mentioned here. It ranks high on the usability scale because of its intuitive syntax for the various diagrams and diagram components.
A: Dia is a possible choice. It's definitely not the best tool, but it is functional.
A: Enterprise Architect from Sparx systems is the best tool I've used. A bit expensive at $199 (professional edition), but IMO it's worth it.
A: I will add UMLet, which I haven't tried yet, but which has been selected at my office to start doing diagrams.
Looks simple, diagrams aren't sexy, but it seems quite complete with regard to the kinds of diagrams you can do. Seems to have good export capabilities too (important!), is flexible (can support custom components) and can be used as an Eclipse plugin.
A: Astah UML (ex-JUDE) is pretty good.
A: I haven't been able to find a top-notch free UML diagramming tool, but if you're interested in pure diagramming, as opposed to round-trip-engineering, I'd go with Microsoft Visio. If you want full round-trip engineering, Rational Rose.
This list of UML tools on Wikipedia might also come in handy.
A: Pen and paper. If you can get the scan into a vector format, that may be useful when making minor amendments.
A: You should try Creately. Runs in your browser and can do team collaboration.
It supports sequence, class, ER, use case diagrams, etc. It works great and has a free version available.
Creately.com
A: Try Sparx Enterprise Architect.
But, it is NOT free.
It has amazing features. Check the screenshots here.
And they have MDG Integration for Visual Studio and for Eclipse too..
A: Visual Paradigm for UML or Dia are good options
A: I have been working with UML standards since 1999 and can tell you that Sparx Enterprise Architect should not be considered a UML tool, as it does not follow the UML 2 specification. Its diagrams look like UML, but the names of the properties and the way they are specified do not follow the UML standard. MagicDraw and IBM RSA are the true UML tools on the market so far.
A: You should try Modelio Free Edition.
It supports UML2, BPMN, SOA and XMI. It is simple to use and their forum is very active.
A: You might want to check out ArgoUML. It's not the best tool I've ever used, but it's one of the better free ones I've seen. It's a little slow because it's written in Java, but it lets you do some basic UML diagrams with relative ease.
A: As mentioned, ArgoUML is a decent tool for UML 1.4 and has recently (Autumn 2008) been receiving some much needed maintenance updates.
A: ArgoUML.
I used this for my thesis and it is well-designed: maybe it has too many features that are not very important, but I prefer to have some useless features rather than be missing a useful one.
A: I strongly recommend BOUML. It's a free UML modelling application, which:
*
*is extremely fast (fastest UML tool ever created, check out benchmarks),
*has rock solid C++, Java, PHP and others import support,
*is multiplatform (Linux, Windows, other OSes),
*has a great SVG export support, which is important, because viewing large graphs in vector format, which scales fast in e.g. Firefox, is very convenient (you can quickly switch between "birds eye" view and class detail view),
*is full featured, impressively intensively developed (look at development history, it's hard to believe that such fast progress is possible).
*supports plugins, has modular architecture (this allows user contributions, looks like BOUML community is forming up)
Believe me, there is no better tool. StarUML is a retarded turtle compared to BOUML. ArgoUML simply doesn't work. Dia is ergonomy^-1 software.
A: Just throwing in my two bits here, but I found ArgoUML to be very useful. It takes a little while to get used to it and its a bit buggy (last I checked it was in version .29 or so) but it works pretty well once you get used to it. It handles all types of UML diagrams, which is why I prefer it. Also, its made by tigris, the same people who made subclipse, an SVN repository plug-in for Eclipse.
A: If you want to model at the diagram level and also have a clean metamodel, the new Omondo build allows live synchronization between MOF and UML diagrams.
It is amazing to see my diagram and the XMI synchronized live: each time I change something in my diagram, the model is changed. What is most incredible is that the model is also the metamodel and the MOF, because everything is live-synchronized. A very powerful new concept, from my point of view.
I also like the Java code annotation and JPA support in the class diagram and in the model. I don't know of any other tool having these 2 incredible features!
A: Take a look at the Sybase PowerDesigner
http://www.sybase.com/products/modelingdevelopment/powerdesigner
Description:
http://en.wikipedia.org/wiki/PowerDesigner
It is a very powerful tool, but so is the price!
A: The TopCoder UML Tool is a very good free UML tool.
A: For sequence diagrams there is the free Java-based Quick Sequence Diagram Editor. The sequence is written in a text editor and then rendered by the QSDE engine. It exports to a variety of vector and bitmap file formats.
A: Violet
Free, and very easy to use.
A: I recommend Software Ideas Modeler. It has a lot of features and an intuitive GUI.
A: I have tried MagicDraw and it is very good, only the community edition though.
Also I have tried omondo, it looks fantastic but it is very expensive for commercial use.
A: +1 for TopCoder UML Tool after I had tried most of the other free tools.
My reasons are:
1) The tool can save UML diagrams in the human-readable format XMI, so the file can be fed to the version control system easily.
2) Support of Undo/Redo (this is the reason I've discarded ArgoUML).
3) The diagram is kept in one single file, and not linked tightly with "workspace" or "project".
StarUML is also good, though it is old. Unfortunately it is not developed/maintained any longer.
A: In my career I have often needed to draw UML diagrams and generate Java code. I found MagicDraw most appealing and I'm a happy user. I think their licensing model is fair because it allows you to pay for what you need. I prefer it to other products I used in my (distant) past: ArgoUML, Poseidon, Rational Rose, Dia. Be aware that my experience with those other products is dated; they may have significantly improved or changed their licensing models. Maybe you should start with an open-source tool and decide later whether to spend some bucks.
With MagicDraw you can document your code by generating diagrams from code. You can also model first, then generate the code. It also integrates well with several IDEs.
A: I use gmodeler.com. It just does class diagrams.
Good things
*
*Very simple feature set. Great UI. Very easy to use.
*Attractive UI.
*Don't have to login/create an account
*Can save diagrams
*Free
Bad things
*
*Hard to collaborate -- have to export to xml (I don't care)
*Can't access diagrams from any machine because it saves to your browser (I don't care)
*Can't export as image or pdf (I can take a screen shot)
*Can't generate code for most languages
*Very simple feature set. (I don't care)
*Each class has an 'Event' list which I don't need and I can't get rid of.
A: I advise using Pacestar UML Diagrammer. It helps you generate UML 2.0 diagrams quickly and easily, in flexible and commonly understood notation.
I used it in many projects and I'm very satisfied. It also uses little memory and disk space, just 6 MB of hard disk.
The feature I like most is that I can copy diagrams from the editor and paste them into MS Word; when I need to edit a specific diagram, I just click on it and it opens in the editor, and on closing it the updates appear in the MS Word document.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/15376",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "488"
} |
Q: Best practices for managing and deploying large JavaScript apps What are some standard practices for managing a medium-large JavaScript application? My concerns are both speed for browser download and ease and maintainability of development.
Our JavaScript code is roughly "namespaced" as:
var Client = {
var1: '',
var2: '',
accounts: {
/* 100's of functions and variables */
},
orders: {
/* 100's of functions and variables and subsections */
}
/* etc, etc for a couple hundred kb */
}
At the moment, we have one (unpacked, unstripped, highly readable) JavaScript file to handle all the business logic on the web application. In addition, there is jQuery and several jQuery extensions. The problem we face is that it takes forever to find anything in the JavaScript code and the browser still has a dozen files to download.
Is it common to have a handful of "source" JavaScript files that gets "compiled" into one final, compressed JavaScript file? Any other handy hints or best practices?
A: Also, I suggest you use Google's AJAX Libraries API to load external libraries.
It's a Google developer tool which bundles the major JavaScript libraries and makes them easier to deploy and upgrade, and lighter, by always serving compressed versions.
Also, it makes your project simpler and lighter because you don't need to download, copy and maintain these library files in your project.
Use it this way (after first including the loader script itself, <script src="http://www.google.com/jsapi"></script>):
google.load("jquery", "1.2.3");
google.load("jqueryui", "1.5.2");
google.load("prototype", "1.6");
google.load("scriptaculous", "1.8.1");
google.load("mootools", "1.11");
google.load("dojo", "1.1.1");
A: Just a sidenote - as Steve already pointed out, you should really "minify" your JS files. In JS, all that whitespace counts toward your download size. If you have a thousand lines of JS and you strip only the unneeded newlines, you have already saved about 1 KB. I think you get the point.
There are tools for this job. And you should never modify the "minified"/stripped/obfuscated JS by hand! Never!
A: In our big javascript applications, we write all our code in small separate files - one file per 'class' or functional group, using a kind-of-like-Java namespacing/directory structure. We then have:
*
*A compile-time step that takes all our code and minifies it (using a variant of JSMin) to reduce download size
*A compile-time step that takes the classes that are always or almost always needed and concatenates them into a large bundle to reduce round trips to the server
*A 'classloader' that loads the remaining classes at runtime on demand.
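As a rough illustration of the first two steps, the build can be as small as this (file names are placeholders, and I'm using the YUI Compressor here just as an example minifier; we actually use a JSMin variant):
# concatenate the always-needed classes into one bundle, then minify it
cat src/core.js src/accounts.js src/orders.js > build/client.js
java -jar yuicompressor.jar build/client.js -o build/client.min.js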
A: For server efficiency's sake, it is best to combine all of your javascript into one minified file.
Determine the order in which code is required and then place the minified code in the order it is required in a single file.
The key is to reduce the number of requests required to load your page, which is why you should have all javascript in a single file for production.
I'd recommend keeping files split up for development and then create a build script to combine/compile everything.
Also, as a good rule of thumb, make sure you include your JavaScript toward the end of your page. If JavaScript is included in the header (or anywhere early in the page), it will stop all other requests from being made until it is loaded, even if pipelining is turned on. If it is at the end of the page, you won't have this problem.
A: The approach that I've found works for me is having separate JS files for each class (just as you would in Java, C# and others). Alternatively you can group your JS into application functional areas if that's easier for you to navigate.
If you put all your JS files into one directory, you can have your server-side environment (PHP for instance) loop through each file in that directory and output a <script src='/path/to/js/$file.js' type='text/javascript'> in some header file that is included by all your UI pages. You'll find this auto-loading especially handy if you're regularly creating and removing JS files.
When deploying to production, you should have a script that combines them all into one JS file and "minifies" it to keep the size down.
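For what it's worth, a minimal sketch of that PHP loop might look like this (the /js directory name is just an assumption):
<?php
// emit one script tag per .js file found in the /js directory
foreach (glob($_SERVER['DOCUMENT_ROOT'] . '/js/*.js') as $path) {
    $file = basename($path);
    echo "<script src='/js/{$file}' type='text/javascript'></script>\n";
}
?>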
A: Read the code of other (good) JavaScript apps and see how they handle things. I start out with a file per class, but once it's ready for production, I combine the files into one large file and minify.
The only reason I would not combine the files is if I didn't need all the files on all the pages.
A: My strategy consists of two major techniques: AMD modules (to avoid dozens of script tags) and the module pattern (to avoid tight coupling between the parts of your application).
AMD modules: very straightforward, see here: http://requirejs.org/docs/api.html. It can also package all the parts of your app into one minified JS file: http://requirejs.org/docs/optimization.html
Module pattern: I used this library: https://github.com/flosse/scaleApp. You're asking what this is? More info here: http://www.youtube.com/watch?v=7BGvy-S-Iag
| {
"language": "en",
"url": "https://stackoverflow.com/questions/15390",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "30"
} |
Q: SQL Server 2008 vs 2005 Linq integration Linq To SQL or Entity framework both integrate nicely with SQL Server 2005.
The SQL Server 2008 spec sheet promises even better integration - but I can't see it.
What are some examples of what you can do Linq-wise when talking to a 2008 server that you can't when talking to SQL Server 2005?
A: There is a problem of paging over a joined set that SQL 2005 mis-interprets.
var orders = (
from c in Customers
from o in c.Orders
select new {c, o}
).Skip(10).Take(10).ToList();
LINQ generates a ROW_NUMBER against the joined set. SQL 2005 generates a bad plan from that code. Here's a link to the discussion.
Edit#2: I'd like to clarify that I don't know that SQL2008 solves this problem. I'm just hopeful.
A: This marketing link claims
"Write data access code directly against a Microsoft SQL Server database, using LINQ to SQL."
Which is basically untrue.
Linq To SQL is query comprehension translated into expression trees, translated into SQL, optimized by the query optimizer and then run against the SQL Server database. "Directly", feh.
A: It has full support for the new data types, lol. Beyond that you've got me, other than the possibility of optimised queries (like the MERGE command, etc).
A: I am guessing most of it happens on the server anyway. They probably optimized the query execution; as for differences, I don't know of any except for the new types.
A: Unless LINQ exposes the new MERGE statement, no.
There is little effective difference in the engines especially from an ORM/client view
| {
"language": "en",
"url": "https://stackoverflow.com/questions/15395",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: DLL plugin that creates a parented window doesn't handle messages correctly I'm creating a plugin framework, where my application loads a series of plugin DLL's, then creates a new window and pass this new window's handle to the plugin. The plugin can, then, use this handle to create their own GUI.
Everything seems to be working very well. The only problem is that when I press TAB on a plugin widget (an editbox, for example), it doesn't jump to another widget. I figured out that some Windows messages are passed, and some others aren't. WM_KEYDOWN is passed for other keys, because I can type in the editbox, but this message is not delivered for the TAB key.
Hope somebody has a hint.
I'm using Borland VCL with CBuilder, but I think I could use any framework under WIN32 to create these plugins, since they never know how their parent windows were created.
A: It's very complex matter indeed.
When you hit TAB, focus jumps to another control only when these controls belong to a modal dialog box. In fact there are some keys, like ESC, LEFT, RIGHT, DOWN, UP and TAB, which the modal dialog message function treats in a special way. If you want these keys to behave in a similar way with a modeless dialog box or any other window, you should change your message processing function and use IsDialogMessage inside. You'll find more information about the IsDialogMessage function in MSDN; to better understand this stuff you may check the Dialog Boxes section as well.
And, as was mentioned before, you should set WS_TABSTOP and WS_GROUP styles when needed.
Good luck!
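To illustrate, a typical message loop using it looks something like this (hDlg here stands for the handle of your plugin's top-level window; this is a sketch, not drop-in code):
MSG msg;
while (GetMessage(&msg, NULL, 0, 0) > 0)
{
    // give the dialog manager first crack at TAB, arrow keys, ESC, etc.
    if (!IsDialogMessage(hDlg, &msg))
    {
        TranslateMessage(&msg);
        DispatchMessage(&msg);
    }
}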
A: I believe you'll have to take the following steps:
*
*Subclass your edit controls (and other controls as needed).
*Capture the WM_KEYDOWN message in your edit control's WndProc.
*Check to see if the shift key is currently held down (using GetKeyState or similar).
*Call GetWindow, passing in a handle to your edit control and either GW_HWNDPREV or GW_HWNDNEXT depending on whether shift is held down. This will give you the handle to the window that should receive focus.
*Call SetFocus and pass in the window handle you got in step 4.
Make sure you handle the case where your edit controls are multiline, as you might want to have a real tab character appear instead of moving to the next control.
Hope that helps!
A: I believe you suffer from having a different instance of the VCL in each of your DLLs and in the EXE. Classes from the DLL are not the same as the ones from your EXE, even if they have the same name. Also, global variables (Application, Screen) are not shared between them. Neither is the memory, since they each have their own memory manager.
The solution is to have the dlls and the exe share the VCL library and the memory manager. I am not a BCB developer, but a Delphi developer. In Delphi we would just use the rtl and the vcl as runtime packages. Maybe you could do the BCB equivalent.
A: A DLL has its own TApplication object.
To provide uniform key handling, when the DLL loads, assign the EXE's TApplication to the DLL's TApplication.
Be sure to do the reverse on exit.
--
Michael
| {
"language": "en",
"url": "https://stackoverflow.com/questions/15398",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: SQL With A Safety Net My firm have a talented and smart operations staff who are working very hard. I'd like to give them a SQL-execution tool that helps them avoid common, easily-detected SQL mistakes that are easy to make when they are in a hurry. Can anyone suggest such a tool? Details follow.
Part of the operations team remit is writing very complex ad-hoc SQL queries. Not surprisingly, operators sometimes make mistakes in the queries they write because they are so busy.
Luckily, their queries are all SELECTs not data-changing SQL, and they are running on a copy of the database anyway. Still, we'd like to prevent errors in the SQL they run. For instance, sometimes the mistakes lead to long-running queries that slow down the duplicate system they're using and inconvenience others until we find the culprit query and kill it. Worse, occasionally the mistakes lead to apparently-correct answers that we don't catch until much later, with consequent embarrassment.
Our developers also make mistakes in complex code that they write, but they have Eclipse and various plugins (such as FindBugs) that catch errors as they type. I'd like to give operators something similar - ideally it would see
SELECT U.NAME, C.NAME FROM USER U, COMPANY C WHERE U.NAME = 'ibell';
and before you executed, it would say "Hey, did you realise that's a Cartesian product? Are you sure you want to do that?" It doesn't have to be very smart - finding obviously missing join conditions and similar evident errors would be fine.
It looks like TOAD should do this but I can't seem to find anything about such a feature. Are there other tools like TOAD that can provide this kind of semi-intelligent error correction?
Update: I forgot to mention that we're using MySQL.
A: If your people are using the mysql(1) program to run queries, you can use the safe-updates option (aka i-am-a-dummy) to get you part of what you need. Its name is somewhat misleading; it not only prevents UPDATE and DELETE without a WHERE (which you're not worried about), but also adds an implicit LIMIT 1000 to SELECT statements, and aborts SELECTs that have joins and are estimated to consider over 1,000,000 tuples --- perfect for discouraging Cartesian joins.
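For reference, you can switch it on from the command line (the long alias really is --i-am-a-dummy); under the hood the client just sets the session variables sql_safe_updates, sql_select_limit and max_join_size. User and database names below are made up:
mysql --safe-updates -u ops_user -p reporting_copy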
A: ..."writing very complex ad-hoc SQL queries.... they are so busy"
Danger Will Robinson!
Automate Automate Automate.
Ideally, the ops team should not be put into a position where they have to write queries on the fly in a high stress situation – it’s a recipe for disaster! Better for them to build up a library of pre-written scripts that have undergone the appropriate testing to make sure it a) does what you want b) provides an audit trail c) has a possible ‘undo’ type function.
Failing that, giving them a user ID that only has SELECT permissions might help :-)
A: You might find SQL Prompt from redgate useful. I'm not sure what database engine you're using, as it's only for MSSQL Server
A: I'm not expecting anything like this to exist. The tool would have to first implement everything that the SQL parser in your database implements, and then it would have to do a data model analysis to predict "bad" queries.
Your best bet might be to write a plugin for a text editor that did some basic checking for suspicious patterns and highlighted them differently than the standard .sql mode. But even that would be quite difficult.
I would be happy with a tool that set off alarm bells whenever I typed in an update statement without a where clause. And perhaps administered a mild electric shock, since it's usually about 1 in the morning after a long day when mistakes like that happen.
A: It would be pretty easy to build this by setting up a sample database with a extremely small amount of dummy data, which would receive the query first. A couple of things will happen:
*
*You might get a SQL syntax error, which would not load the database much since it's a small database.
*You might get back a response which could clearly be shown to contain every row in one or more tables, which is probably not what they want.
*Things which pass the above conditions are likely to be okay, so you can run them against the copy of the production database.
Assuming your schema doesn't change much and is not particularly weird, writing the above is likely the quickest solution to your problem.
A: I'd start with some coding standards - for instance, never use the type of join in your example; it often gives bad results (especially in SQL Server: if you try to do an outer join that way, you will get bad results). Require them to write explicit joins.
If you have complex relationships, you might consider putting them in views and then writing the adhoc queries from the views. Then at least they will never make the mistake of getting the joins wrong.
A: Can't you just limit the amount of time a query can run for? I'm not sure about MySQL, but for SQL Server, even just the default query analyzer can restrict how long queries will run before they time out. Couple that with limited rights so they can only run SELECT queries, and you should be pretty much covered.
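For the limited-rights part, in MySQL that's a one-liner, something along these lines (account and database names invented for the example):
GRANT SELECT ON reporting_copy.* TO 'ops'@'%' IDENTIFIED BY 'secret';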
| {
"language": "en",
"url": "https://stackoverflow.com/questions/15399",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Is Single Responsibility Principle a rule of OOP? An answer to a Stack Overflow question stated that a particular framework violated a plain and simple OOP rule: Single Responsibility Principle (SRP).
Is the Single Responsibility Principle really a rule of OOP?
My understanding of the definition of Object Orientated Programming is "a paradigm where objects and their behaviour are used to create software". This includes the following techniques: Encapsulation, Polymorphism & Inheritance.
Now don't get me wrong - I believe SRP to be the key to most good OO designs, but I feel there are cases where this principle can and should be broken (just like database normalization rules). I aggressively push the benefits of SRP, and the great majority of my code follows this principle.
But, is it a rule, and thus implies that it shouldn't be broken?
A: None of these rules are laws. They are more guidelines and best practices. There are times when it doesn't make sense to follow "the rules" and you need to do what is best for your situation.
Don't be afraid to do what you think is right. You might actually come up with newer and better rules.
A: To quote Captain Barbossa:
"..And secondly, you must be a pirate for the pirate's code to apply and you're not.
And thirdly, the code is more what you'd call "guidelines" than actual rules...."
To quote Jack Sparrow & Gibbs.
"I thought you were supposed to keep to the code."
Mr. Gibbs: "We figured they were more actual guidelines. "
So clearly Pirates understand this pretty well.
The "rules" could be understood via the patterns movement as "Forces"
So there is a force trying to make the class have a single responsibility. (cohesion)
But there is also a force trying to keep the coupling to other classes down.
As with all design (not just code), the answer is that it depends.
A: Very few rules, if any, in software development are without exception. Some people think there is no place for goto, but they're wrong.
As far as OOP goes, there isn't a single definition of object-orientedness so depending on who you ask you'll get a different set of hard and soft principles, patterns, and practices.
The classic idea of OOP is that messages are sent to otherwise opaque objects and the objects interpret the message with knowledge of their own innards and then perform a function of some sort.
SRP is a software engineering principle that can apply to the role of a class, or a function, or a module. It contributes to the cohesion of something so that it behaves well put together without unrelated bits hanging off of it or having multiple roles that intertwine and complicate things.
Even with just one responsibility, that can still range from a single function to a group of loosely related functions that are part of a common theme. As long as you're avoiding jury-rigging an element to take the responsibility of something it wasn't primarily designed for, or doing some other ad-hoc thing that dilutes the simplicity of an object, then violate whatever principle you want.
But I find that it's easier to get SRP correct than to do something more elaborate that is just as robust.
A: Ahh, I guess this pertains to an answer I gave. :)
As with most rules and laws, there are underlying motives by which these rules are relevant -- if the underlying motive is not present or applicable to your case, then you are free to bend/break the rules according to your own needs.
That being said, SRP is not a rule of OOP per se, but it is considered a best practice for creating OOP applications that are both easily extensible and unit-testable.
Both are characteristics that I consider as of utmost importance in enterprise application development, where maintenance of existing applications occupies more time than new development does.
A: As many of the other posters have said, all rules are made to be broken.
That being said, I do think that SRP is one of the more important rules for writing good code. It's not specific to Object Oriented programming, but the "encapsulation" part of OOP is very hard to do right if the class does not have a single responsibility.
After all, how do you correctly and simply encapsulate a class with multiple responsibilities? Usually the answer is multiple interfaces and in many languages that can help quite a bit, but it's still confusing to the users of your class that it may apply in completely different ways in different situations.
A: SRP is just another expression of ISP. :-)
And the "P" means "principle", not "rule". :D
| {
"language": "en",
"url": "https://stackoverflow.com/questions/15412",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "21"
} |
Q: Is it possible to coax Visual Studio 2008 into using italics for comments? I'm quite used to my IDE setup in Delphi 7, where I have my comments rendered in italics.
This doesn't appear to be possible in VS2008. There's only an option for bold font, not italics.
Is there some kind of registry hack or font magic I can perform to get it to work? The font I'm using is Consolas, if that makes a difference.
Edit: This is possible. See this answer for details.
Adam, as far as I can tell, you can't change the font name for just comments - only the colour, and boldness. If I'm wrong, please tell me!
A: If you have a font editor, you can change an italic font to pretend it's bold. Here's an example of it. (For VS 2005, but it should work all the same.)
A: I recommend Damien Guard's "Humane theme" for Visual Studio. It includes a custom font he developed, Envy R, which uses a clever hack - the bold version of the font is actually italic, so his theme italicizes comments by telling Visual Studio to bold them.
Even if you don't like the colors, just grab the theme (or the Envy R font) and tweak it in.
A: The pertinent registry key is
HKCU\Software\Microsoft\VisualStudio\9.0\FontAndColors\{A27B4E24-A735-4D1D-B8E7-9716E1E3D8E0}
Comment FontFlags
Default is 0. Putting in a few test values got me various combination of normal, bold, and strike-through text, but no italics. Strikethrough isn't an option in the dialog either, so maybe there is a magic value for italics.
@jon limjap:
The VS 2008 version of that theme doesn't italicize comments, just bold.
A: I dunno how he did it but Tomas Restrepo has a Visual Studio theme that is able to italicize comments and string literals.
This one is for Visual Studio 2005, but the theme editing in both versions appears unchanged, so it might provide you with some clues as to how to do it in your own theme.
Update: I didn't notice that he had a link to a Visual Studio 2008 version at the bottom of the post.
A: You can kind of fake it by changing the font to something like the Lucida Handwriting font, which looks sort of italic, or buy or find a free italic-only font.
Edit: I've actually gone through the built-in fonts on my VS 2008 on Vista, and chosen Monotype Corsiva, and bumped the size to 12 for my comments setting (getting old - eyes aren't what they used to be)
A: I successfully used FontForge to create a copy of Consolas (although this should work with any font) with the bold style actually being italics.
This other answer of mine has the details.
Basically, change the name and GUID, then open the italic variant and change its font info from saying italic to saying bold.
A: Unfortunately not...not sure why they don't let you do that.
You can, however, change the font for just comments. So you could make it something different which will make it stand out more.
You may even be able to make a custom version of the font you use that is by default italic and then set that as the comment font.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/15414",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: How to change the icon of .bat file programmatically? I'd like to know what's the way to actually set the icon of a .bat file to an arbitrary icon.
How would I go about doing that programmatically, independently of the language I may be using.
A: The icon displayed by the Shell (Explorer) for batch files is determined by the registry key
HKCR\batfile\DefaultIcon
which, on my computer is
%SystemRoot%\System32\imageres.dll,-68
You can set this to any icon you like.
This will however change the icons of all batch files (unless they have the extension .cmd).
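For example, a .reg file along these lines does it (the icon path is just an example); double-click it, or merge it silently with regedit /s file.reg:
Windows Registry Editor Version 5.00

[HKEY_CLASSES_ROOT\batfile\DefaultIcon]
@="C:\\Icons\\batch.ico,0"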
A: If you want an icon for a batch file, first create a link (shortcut) for it: right-click in the folder where you want the link, select New -> Shortcut, then specify where the .bat file is. This creates the .lnk file you wanted. Then you can specify an icon for the link on its properties page.
Some nice icons are available here:
%SystemRoot%\System32\SHELL32.dll
(Note: for me on Windows 10, %SystemRoot% == C:\Windows\.)
More icons are here:
C:\Windows\System32\imageres.dll
Also, you might want the first line in the batch file to be "cd .." if you stash your batch files in a bat subdirectory one level below where your shortcuts are supposed to execute.
A: Assuming you're referring to MS-DOS batch files: as it is simply a text file with a special extension, a .bat file doesn't store an icon of its own.
You can, however, create a shortcut in the .lnk format that stores an icon.
A: One way you can achieve this is:
*
*Create an executable Jar file.
*Create a batch file to run the above jar and launch the desktop Java application.
*Use a Batch2Exe converter to convert the batch file to an Exe.
*During the above conversion, you can change the icon to one of your choice (must be a valid .ico file).
*Place a shortcut for the above Exe on the desktop.
Now your Java program can be opened in a fancy way, just like any other MS Windows app! :)
A: Try BatToExe converter. It will convert your batch file to an executable, and allow you to set an icon for it.
A: You can just create a shortcut and then right click on it -> properties -> change icon, and just browse for your desired icon.
Hope this helps.
To set an icon of a shortcut programmatically, see this article using SetIconLocation:
How Can I Change the Icon for an Existing Shortcut?:
https://devblogs.microsoft.com/scripting/how-can-i-change-the-icon-for-an-existing-shortcut/
Const DESKTOP = &H10&
Set objShell = CreateObject("Shell.Application")
Set objFolder = objShell.NameSpace(DESKTOP)
Set objFolderItem = objFolder.ParseName("Test Shortcut.lnk")
Set objShortcut = objFolderItem.GetLink
objShortcut.SetIconLocation "C:\Windows\System32\SHELL32.dll", 13
objShortcut.Save
A: You could use a Bat to Exe converter from here:
https://web.archive.org/web/20190304134631/http://www.f2ko.de/en/b2e.php
This will convert your batch file to an executable, then you can set the icon for the converted file.
A: I'll assume you are talking about Windows, right? I don't believe you can change the icon of a batch file directly. Icons are embedded in .EXE and .DLL files, or pointed to by .LNK files.
You could try to change the file association, but that approach may vary based on the version of Windows you are using. This is done via the registry in XP, but I'm not sure about Vista.
A: Try shortcutjs.bat to create a shortcut:
call shortcutjs.bat -linkfile mybat3.lnk -target "%cd%\Ascii2All.bat" -iconlocation "%SystemRoot%\System32\SHELL32.dll,77"
You can use the -iconlocation switch to point to an icon.
A: You may use a program like a BAT to EXE converter, for example that one: link
This program lets you add your custom icon.
A: I recommend using a BAT to EXE converter for this.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/15423",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "47"
} |
Q: How do I get rid of Home in ASP.Net MVC? I know this site is written using ASP.Net MVC and I do not see "/Home" in the url. This proves to me that it can be done. What special route and do I need?
A: If you're running on IIS 7, you can simply delete the Default.aspx file that comes with ASP.NET MVC (assuming you're running on Preview 3 or higher). That file was needed due to an issue with Cassini that was fixed in .NET 3.5 SP1. For more details check out:
http://haacked.com/archive/2008/04/10/upcoming-changes-in-routing.aspx
and
http://haacked.com/archive/2008/05/12/sp1-beta-and-its-effect-on-mvc.aspx
A: I actually like having all of my home controller methods to be at the root of the site. Like this: /about, /contact, etc. I guess I'm picky. I use a simple route constraint to do it. Here is my blog post with a code sample.
A: Just change "Home" to an empty string.
routes.MapRoute(
"Home",
"",
new { action = "Index", controller = "Home" }
);
A: I'd add
routes.MapRoute("NoIndex", "{action}", new { controller = "Home", action = "Index" });
in RouteConfig.cs
A: This is what I did to get rid of Home. It will treat all routes with only one specifier as Home/Action and any with two as Controller/Action. The downside is that a controller now needs an explicit Index in the URL (/Controller != /Controller/Index), but it might help you or others.
routes.MapRoute(
"Default",
"{action}",
new { controller = "Home", action = "Index" }
);
routes.MapRoute(
"Actions",
"{controller}/{action}",
new { }
);
| {
"language": "en",
"url": "https://stackoverflow.com/questions/15470",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "15"
} |
Q: .Net drawing clipping bug GDI+ DrawLines function has a clipping bug that can be reproduced by running the following c# code. When running the code, two line paths appear, that should be identical, because both of them are inside the clipping region. But when the clipping region is set, one of the line segment is not drawn.
protected override void OnPaint(PaintEventArgs e)
{
PointF[] points = new PointF[] { new PointF(73.36f, 196),
new PointF(75.44f, 32),
new PointF(77.52f, 32),
new PointF(79.6f, 196),
new PointF(85.84f, 196) };
Rectangle b = new Rectangle(70, 32, 20, 164);
e.Graphics.SetClip(b);
e.Graphics.DrawLines(Pens.Red, points); // clipped incorrectly
e.Graphics.TranslateTransform(80, 0);
e.Graphics.ResetClip();
e.Graphics.DrawLines(Pens.Red, points);
}
Setting the anti-alias mode on the graphics object resolves this. But that is not a real solution.
Does anybody know of a workaround?
A: It appears that this is a known bug...
The following code appears to function as you requested:
protected override void OnPaint(PaintEventArgs e)
{
PointF[] points = new PointF[] { new PointF(73.36f, 196),
new PointF(75.44f, 32),
new PointF(77.52f, 32),
new PointF(79.6f, 196),
new PointF(85.84f, 196) };
e.Graphics.SmoothingMode = System.Drawing.Drawing2D.SmoothingMode.AntiAlias;
Rectangle b = new Rectangle(70, 32, 20, 165);
e.Graphics.SetClip(b);
e.Graphics.DrawLines(Pens.Red, points); // clipped incorrectly
e.Graphics.TranslateTransform(80, 0);
e.Graphics.ResetClip();
e.Graphics.DrawLines(Pens.Red, points);
}
Note: I have AntiAlias'ed the line and extended your clipping region by 1
it appears that the following work arounds might help (although not tested):
*
*The pen is more than one pixel thick
*The line is perfectly horizontal or vertical
*The clipping is against the window boundaries rather than a clip rectangle
The following is a list of articles that might / or then again might not help:
http://www.tech-archive.net/pdf/Archive/Development/microsoft.public.win32.programmer.gdi/2004-08/0350.pdf
http://www.tech-archive.net/Archive/Development/microsoft.public.win32.programmer.gdi/2004-08/0368.html
OR...
the following is also possible:
protected override void OnPaint ( PaintEventArgs e )
{
PointF[] points = new PointF[] { new PointF(73.36f, 196),
new PointF(75.44f, 32),
new PointF(77.52f, 32),
new PointF(79.6f, 196),
new PointF(85.84f, 196) };
Rectangle b = new Rectangle( 70, 32, 20, 164 );
Region reg = new Region( b );
e.Graphics.SetClip( reg, System.Drawing.Drawing2D.CombineMode.Union);
e.Graphics.DrawLines( Pens.Red, points ); // clipped incorrectly
e.Graphics.TranslateTransform( 80, 0 );
e.Graphics.ResetClip();
e.Graphics.DrawLines( Pens.Red, points );
}
This effectively clips using a region combined/unioned (I think) with the ClientRectangle of the canvas/control. As the region is defined from the rectangle, the results should be what is expected. This code can be proven to work by adding
e.Graphics.FillRectangle( new SolidBrush( Color.Black ), b );
after the SetClip() call. This clearly shows the black rectangle only appearing in the clipped region.
This could be a valid workaround if Anti-Aliasing the line is not an option.
Hope this helps
A: What appears to be the matter with the code?
OK, the question should be... what should the code do that it doesn't already.
When I run the code, I see 2 red 'spikes'; am I not meant to?
You appear to draw the first spike within the clipped rectangle region, verified by adding the following after the declaration of the Rectangle:
e.Graphics.FillRectangle( new SolidBrush( Color.Black ), b );
Then you perform a translation and reset the clip, so at this point I assume the ClientRectangle is being used as the appropriate clip region, and then you attempt to redraw the translated spike. Where's the bug?!?
A: The bug is that both line segments should be drawn identical but they are not because the spike that is drawn within the clipping region is completely within the clipping region and should not be clipped in any way but it is. This is a very annoying but that results in any software that uses drawlines heavily + clipping to look unprofessional because of gaps that can appear in the polygons.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/15478",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Java Coding standard / best practices - naming convention for break/continue labels Sometimes a labeled break or continue can make code a lot more readable.
OUTERLOOP: for ( ;/*stuff*/; ) {
//...lots of code
if ( isEnough() ) break OUTERLOOP;
//...more code
}
I was wondering what the common convention for the labels was. All caps? first cap?
A: I don't understand where this "don't use labels" rule comes from. When doing non-trivial looping logic, the test to break or continue isn't always neatly at the end of the surrounding block.
outer_loop:
for (...) {
// some code
for (...) {
// some code
if (...)
continue outer_loop;
// more code
}
// more code
}
Yes, cases like this do happen all the time. What are people suggesting I use instead? A boolean condition like this?
for (...) {
// some code
boolean continueOuterLoop = false;
for (...) {
// some code
if (...) {
continueOuterLoop = true;
break;
}
// more code
}
if (continueOuterLoop)
continue;
// more code
}
Yuck! Refactoring it as a method doesn't alleviate that either:
boolean innerLoop (...) {
for (...) {
// some code
if (...) {
return true;
}
// more code
}
return false;
}
for (...) {
// some code
if (innerLoop(...))
continue;
// more code
}
Sure it's a little prettier, but it's still passing around a superfluous boolean. And if the inner loop modified local variables, refactoring it into a method isn't always the correct solution.
So why are you all against labels? Give me some solid reasons, and practical alternatives for the above case.
A: If you have to use them, use capitals; this draws attention to them and keeps them from being mistakenly interpreted as class names. Drawing attention to them has the additional benefit of catching the eye of someone who will come along and refactor your code and remove them. ;)
A: The convention I've most seen is simply camel case, like a method name...
myLabel:
but I've also seen labels prefixed with an underscore
_myLabel:
or with lab...
labSomething:
You can probably sense though from the other answers that you'll be hard-pushed to find a coding standard that says anything other than 'Don't use labels'. The answer then I guess is that you should use whatever style makes sense to you, as long as it's consistent.
A: The convention is to avoid labels altogether.
There are very, very few valid reasons to use a label for breaking out of a loop. Breaking out is ok, but you can remove the need to break at all by modifying your design a little. In the example you have given, you would extract the 'Lots of code' sections and put them in individual methods with meaningful names.
for ( ;/*stuff*/; )
{
lotsOfCode();
if ( !isEnough() )
{
moreCode();
}
}
Edit: having seen the actual code in question (over here), I think the use of labels is probably the best way to make the code readable. In most cases using labels is the wrong approach, in this instance, I think it is fine.
A: Sun's Java code style seems to prefer naming labels in the same way as variables, meaning camel case with the first letter in lower case.
A: wrt sadie's code example:
You gave
outerloop:
for (...) {
// some code
for (...) {
// some code
if (...)
continue outerloop;
// more code
}
// more code
}
As an example. You make a good point. My best guess would be:
public void lookMumNoLabels() {
for (...) {
// some code
doMoreInnerCodeLogic(...);
}
}
private void doMoreInnerCodeLogic(...) {
for (...) {
// some code
if (...) return;
}
}
But there would be examples where that kind of refactoring doesn't sit correctly with whatever logic you're doing.
A: As labels are so rarely useful, it appears that there is no clear convention. The Java Language Specification has one example with labels, and there they are in lower case.
But since they are so rare, in my opinion it is best to think twice about whether they are really the right tool.
And if they are the right tool, make them all caps so that other developers (or yourself later on) realize them as something unusual right away. (as Craig already pointed out)
A: The convention/best practice would still be not to use them at all, and to refactor the code so that it is more readable using extract-method.
A: They are kind of the goto of Java - not sure if C# has them. I have never used them in practice, I can't think of a case where avoiding them wouldn't result in much more readable code.
But if you have to, I think all caps is OK. Most people won't use labelled breaks, so when they see the code, the caps will jump out at them and will force them to realise what is going on.
A:
I know, I should not use labels.
But just assume, I have some code, that could gain a lot in readability from labeled breaks, how do I format them.
Mo, your premise is wrong.
The question shouldn't be 'how do I format them?'
Your question should be 'I have code that has a large amount of logic inside loops - how do I make it more readable?'
The answer to that question is to move the code into individual, well named functions. Then you don't need to label the breaks at all.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/15481",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "24"
} |
Q: Sorting an IList in C# So I came across an interesting problem today. We have a WCF web service that returns an IList. Not really a big deal until I wanted to sort it.
Turns out the IList interface doesn't have a sort method built in.
I ended up using the ArrayList.Adapter(list).Sort(new MyComparer()) method to solve the problem but it just seemed a bit "ghetto" to me.
I toyed with writing an extension method, also with inheriting from IList and implementing my own Sort() method as well as casting to a List but none of these seemed overly elegant.
So my question is: does anyone have an elegant solution to sorting an IList?
A: You're going to have to do something like that, I think (convert it into a more concrete type).
Maybe take it into a List of T rather than ArrayList, so that you get type safety and more options for how you implement the comparer.
A: You can use LINQ:
using System.Linq;
IList<Foo> list = new List<Foo>();
IEnumerable<Foo> sortedEnum = list.OrderBy(f=>f.Bar);
IList<Foo> sortedList = sortedEnum.ToList();
A: This question inspired me to write a blog post: http://blog.velir.com/index.php/2011/02/17/ilistt-sorting-a-better-way/
I think that, ideally, the .NET Framework would include a static sorting method that accepts an IList<T>, but the next best thing is to create your own extension method. It's not too hard to create a couple of methods that will allow you to sort an IList<T> as you would a List<T>. As a bonus you can overload the LINQ OrderBy extension method using the same technique, so that whether you're using List.Sort, IList.Sort, or IEnumerable.OrderBy, you can use the exact same syntax.
public static class SortExtensions
{
// Sorts an IList<T> in place.
public static void Sort<T>(this IList<T> list, Comparison<T> comparison)
{
ArrayList.Adapter((IList)list).Sort(new ComparisonComparer<T>(comparison));
}
// Sorts in IList<T> in place, when T is IComparable<T>
public static void Sort<T>(this IList<T> list) where T: IComparable<T>
{
Comparison<T> comparison = (l, r) => l.CompareTo(r);
Sort(list, comparison);
}
// Convenience method on IEnumerable<T> to allow passing of a
// Comparison<T> delegate to the OrderBy method.
public static IEnumerable<T> OrderBy<T>(this IEnumerable<T> list, Comparison<T> comparison)
{
return list.OrderBy(t => t, new ComparisonComparer<T>(comparison));
}
}
// Wraps a generic Comparison<T> delegate in an IComparer to make it easy
// to use a lambda expression for methods that take an IComparer or IComparer<T>
public class ComparisonComparer<T> : IComparer<T>, IComparer
{
private readonly Comparison<T> _comparison;
public ComparisonComparer(Comparison<T> comparison)
{
_comparison = comparison;
}
public int Compare(T x, T y)
{
return _comparison(x, y);
}
public int Compare(object o1, object o2)
{
return _comparison((T)o1, (T)o2);
}
}
With these extensions, sort your IList just like you would a List:
IList<string> iList = new []
{
"Carlton", "Alison", "Bob", "Eric", "David"
};
// Use the custom extensions:
// Sort in-place, by string length
iList.Sort((s1, s2) => s1.Length.CompareTo(s2.Length));
// Or use OrderBy()
IEnumerable<string> ordered = iList.OrderBy((s1, s2) => s1.Length.CompareTo(s2.Length));
There's more info in the post: http://blog.velir.com/index.php/2011/02/17/ilistt-sorting-a-better-way/
A: The accepted answer by @DavidMills is quite good, but I think it can be improved upon. For one, there is no need to define the ComparisonComparer<T> class when the framework already includes a static method Comparer<T>.Create(Comparison<T>). This method can be used to create an IComparison on the fly.
Also, it casts IList<T> to IList which has the potential to be dangerous. In most cases that I have seen, List<T> which implements IList is used behind the scenes to implement IList<T>, but this is not guaranteed and can lead to brittle code.
Lastly, the overloaded List<T>.Sort() method has 4 signatures and only 2 of them are implemented.
*
*List<T>.Sort()
*List<T>.Sort(Comparison<T>)
*List<T>.Sort(IComparer<T>)
*List<T>.Sort(Int32, Int32, IComparer<T>)
The below class implements all 4 List<T>.Sort() signatures for the IList<T> interface:
using System;
using System.Collections.Generic;
public static class IListExtensions
{
public static void Sort<T>(this IList<T> list)
{
if (list is List<T> listImpl)
{
listImpl.Sort();
}
else
{
var copy = new List<T>(list);
copy.Sort();
Copy(copy, 0, list, 0, list.Count);
}
}
public static void Sort<T>(this IList<T> list, Comparison<T> comparison)
{
if (list is List<T> listImpl)
{
listImpl.Sort(comparison);
}
else
{
var copy = new List<T>(list);
copy.Sort(comparison);
Copy(copy, 0, list, 0, list.Count);
}
}
public static void Sort<T>(this IList<T> list, IComparer<T> comparer)
{
if (list is List<T> listImpl)
{
listImpl.Sort(comparer);
}
else
{
var copy = new List<T>(list);
copy.Sort(comparer);
Copy(copy, 0, list, 0, list.Count);
}
}
public static void Sort<T>(this IList<T> list, int index, int count,
IComparer<T> comparer)
{
if (list is List<T> listImpl)
{
listImpl.Sort(index, count, comparer);
}
else
{
var range = new List<T>(count);
for (int i = 0; i < count; i++)
{
range.Add(list[index + i]);
}
range.Sort(comparer);
Copy(range, 0, list, index, count);
}
}
private static void Copy<T>(IList<T> sourceList, int sourceIndex,
IList<T> destinationList, int destinationIndex, int count)
{
for (int i = 0; i < count; i++)
{
destinationList[destinationIndex + i] = sourceList[sourceIndex + i];
}
}
}
Usage:
class Foo
{
public int Bar;
public Foo(int bar) { this.Bar = bar; }
}
void TestSort()
{
IList<int> ints = new List<int>() { 1, 4, 5, 3, 2 };
IList<Foo> foos = new List<Foo>()
{
new Foo(1),
new Foo(4),
new Foo(5),
new Foo(3),
new Foo(2),
};
ints.Sort();
foos.Sort((x, y) => Comparer<int>.Default.Compare(x.Bar, y.Bar));
}
The idea here is to leverage the functionality of the underlying List<T> to handle sorting whenever possible. Again, most IList<T> implementations that I have seen use this. In the case when the underlying collection is a different type, fallback to creating a new instance of List<T> with elements from the input list, use it to do the sorting, then copy the results back to the input list. This will work even if the input list does not implement the IList interface.
A: How about using LINQ To Objects to sort for you?
Say you have an IList<Car>, and the Car has an Engine property; I believe you could sort as follows:
from c in list
orderby c.Engine
select c;
Edit: You do need to be quick to get answers in here. As I presented a slightly different syntax to the other answers, I will leave my answer - however, the other answers presented are equally valid.
A: Try this (use OrderBy):
public class Employee
{
public string Id { get; set; }
public string Name { get; set; }
}
private static IList<Employee> GetItems()
{
List<Employee> lst = new List<Employee>();
lst.Add(new Employee { Id = "1", Name = "Emp1" });
lst.Add(new Employee { Id = "2", Name = "Emp2" });
lst.Add(new Employee { Id = "7", Name = "Emp7" });
lst.Add(new Employee { Id = "4", Name = "Emp4" });
lst.Add(new Employee { Id = "5", Name = "Emp5" });
lst.Add(new Employee { Id = "6", Name = "Emp6" });
lst.Add(new Employee { Id = "3", Name = "Emp3" });
return lst;
}
var lst = GetItems().AsEnumerable();
var orderedLst = lst.OrderBy(t => t.Id).ToList();
orderedLst.ForEach(emp => Console.WriteLine("Id - {0} Name -{1}", emp.Id, emp.Name));
A: Found this thread while I was looking for a solution to the exact problem described in the original post. None of the answers met my situation entirely, however. Brody's answer was pretty close. Here is my situation and solution I found to it.
I have two ILists of the same type returned by NHibernate and have emerged the two IList into one, hence the need for sorting.
Like Brody said I implemented an ICompare on the object (ReportFormat) which is the type of my IList:
public class FormatCodeSorter:IComparer<ReportFormat>
{
public int Compare(ReportFormat x, ReportFormat y)
{
return x.FormatCode.CompareTo(y.FormatCode);
}
}
I then convert the merged IList to an array of the same type:
ReportFormat[] myReports = new ReportFormat[reports.Count]; //reports is the merged IList
reports.CopyTo(myReports, 0); //copy the elements into the array before sorting
Then sort the array:
Array.Sort(myReports, new FormatCodeSorter());//sorting using custom comparer
Since one-dimensional array implements the interface System.Collections.Generic.IList<T>, the array can be used just like the original IList.
A: Useful for grid sorting: this method sorts a list based on property names, as in the following example.
List<MeuTeste> temp = new List<MeuTeste>();
temp.Add(new MeuTeste(2, "ramster", DateTime.Now));
temp.Add(new MeuTeste(1, "ball", DateTime.Now));
temp.Add(new MeuTeste(8, "gimm", DateTime.Now));
temp.Add(new MeuTeste(3, "dies", DateTime.Now));
temp.Add(new MeuTeste(9, "random", DateTime.Now));
temp.Add(new MeuTeste(5, "call", DateTime.Now));
temp.Add(new MeuTeste(6, "simple", DateTime.Now));
temp.Add(new MeuTeste(7, "silver", DateTime.Now));
temp.Add(new MeuTeste(4, "inn", DateTime.Now));
SortList(ref temp, SortDirection.Ascending, "MyProperty");
private void SortList<T>(
ref List<T> lista
, SortDirection sort
, string propertyToOrder)
{
if (!string.IsNullOrEmpty(propertyToOrder)
&& lista != null
&& lista.Count > 0)
{
Type t = lista[0].GetType();
if (sort == SortDirection.Ascending)
{
lista = lista.OrderBy(
a => t.InvokeMember(
propertyToOrder
, System.Reflection.BindingFlags.GetProperty
, null
, a
, null
)
).ToList();
}
else
{
lista = lista.OrderByDescending(
a => t.InvokeMember(
propertyToOrder
, System.Reflection.BindingFlags.GetProperty
, null
, a
, null
)
).ToList();
}
}
}
A: Here's an example using the stronger typing. Not sure if it's necessarily the best way though.
static void Main(string[] args)
{
IList list = new List<int>() { 1, 3, 2, 5, 4, 6, 9, 8, 7 };
List<int> stronglyTypedList = new List<int>(Cast<int>(list));
stronglyTypedList.Sort();
}
private static IEnumerable<T> Cast<T>(IEnumerable list)
{
foreach (T item in list)
{
yield return item;
}
}
The Cast function is just a reimplementation of the extension method that comes with 3.5 written as a normal static method. It is quite ugly and verbose unfortunately.
A: In VS2008, when I click on the service reference and select "Configure Service Reference", there is an option to choose how the client de-serializes lists returned from the service.
Notably, I can choose between System.Array, System.Collections.ArrayList and System.Collections.Generic.List
A: Found a good post on this and thought I'd share. Check it out HERE
Basically.
You can create the following class and IComparer Classes
public class Widget {
public string Name = string.Empty;
public int Size = 0;
public Widget(string name, int size) {
this.Name = name;
this.Size = size;
}
}
public class WidgetNameSorter : IComparer<Widget> {
public int Compare(Widget x, Widget y) {
return x.Name.CompareTo(y.Name);
}
}
public class WidgetSizeSorter : IComparer<Widget> {
public int Compare(Widget x, Widget y) {
return x.Size.CompareTo(y.Size);
}
}
Then If you have an IList, you can sort it like this.
List<Widget> widgets = new List<Widget>();
widgets.Add(new Widget("Zeta", 6));
widgets.Add(new Widget("Beta", 3));
widgets.Add(new Widget("Alpha", 9));
widgets.Sort(new WidgetNameSorter());
widgets.Sort(new WidgetSizeSorter());
But Checkout this site for more information... Check it out HERE
A: using System.Linq;
var yourList = SomeDAO.GetRandomThings();
yourList.ToList().Sort( (thing, randomThing) => thing.CompareThisProperty.CompareTo( randomThing.CompareThisProperty ) );
That's pretty !ghetto.
A: Is this a valid solution?
IList<string> ilist = new List<string>();
ilist.Add("B");
ilist.Add("A");
ilist.Add("C");
Console.WriteLine("IList");
foreach (string val in ilist)
Console.WriteLine(val);
Console.WriteLine();
List<string> list = (List<string>)ilist;
list.Sort();
Console.WriteLine("List");
foreach (string val in list)
Console.WriteLine(val);
Console.WriteLine();
list = null;
Console.WriteLine("IList again");
foreach (string val in ilist)
Console.WriteLine(val);
Console.WriteLine();
The result was:
IList
B
A
C
List
A
B
C
IList again
A
B
C
A: This looks much simpler if you ask me, and it works perfectly for me.
You can use Cast() to turn the IList into an IEnumerable<T> and then use OrderBy():
var ordered = theIList.Cast<T>().OrderBy(e => e);
where T is the type, e.g. Model.Employee or Plugin.ContactService.Shared.Contact.
Then you can use a foreach loop and it's done.
ObservableCollection<Plugin.ContactService.Shared.Contact> ContactItems= new ObservableCollection<Contact>();
foreach (var item in ordered)
{
ContactItems.Add(item);
}
A: Convert your IList into List<T> or some other generic collection and then you can easily query/sort it using System.Linq namespace (it will supply bunch of extension methods)
| {
"language": "en",
"url": "https://stackoverflow.com/questions/15486",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "95"
} |
Q: Hidden Features of Java After reading Hidden Features of C# I wondered, What are some of the hidden features of Java?
A: My favorite: dump all thread stack traces to standard out.
Windows: CTRL-Break in your Java cmd/console window
Unix: kill -3 PID
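If the JDK's jstack tool is on your path (it ships with recent JDKs), you can get the same dump without touching the console of the target process:
jstack <pid>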
A: How about Properties files in your choice of encodings? Used to be, when you loaded your Properties, you provided an InputStream and the load() method decoded it as ISO-8859-1. You could actually store the file in some other encoding, but you had to use a disgusting hack like this after loading to properly decode the data:
String realProp = new String(prop.getBytes("ISO-8859-1"), "UTF-8");
But, as of JDK 1.6, there's a load() method that takes a Reader instead of an InputStream, which means you can use the correct encoding from the beginning (there's also a store() method that takes a Writer). This seems like a pretty big deal to me, but it appears to have been snuck into the JDK with no fanfare at all. I only stumbled upon it a few weeks ago, and a quick Google search turned up just one passing mention of it.
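So since JDK 1.6 you can simply do something like this (the file name and key are made up for the example):
import java.io.*;
import java.util.Properties;

public class PropsDemo {
    public static void main(String[] args) throws IOException {
        Properties props = new Properties();
        Reader reader = new InputStreamReader(
                new FileInputStream("app.properties"), "UTF-8");
        try {
            props.load(reader); // decodes as UTF-8, no ISO-8859-1 detour
        } finally {
            reader.close();
        }
        System.out.println(props.getProperty("greeting"));
    }
}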
A: Something that really surprised me was the custom serialization mechanism.
Although these methods are private(!), they are "mysteriously" called by the JVM during object serialization.
private void writeObject(ObjectOutputStream out) throws IOException;
private void readObject(ObjectInputStream in) throws IOException, ClassNotFoundException;
This way you can create your own custom serialization to make it more "whatever" (safe, fast, rare, easy etc. )
This is something that really should be considered if a lot of information has to be passed between nodes. The serialization mechanism can be changed to send half the data. Many times the bottleneck is not the platform but the amount of data sent over the wire; custom serialization may save you thousands of dollars in hardware.
Here is an article.
http://java.sun.com/developer/technicalArticles/Programming/serialization/
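A minimal sketch of what such a class can look like (the class and its fields are invented for the example):
import java.io.*;

public class CompactPoint implements Serializable {
    private transient int x, y; // we write these ourselves

    public CompactPoint(int x, int y) { this.x = x; this.y = y; }

    // "mysteriously" invoked by the JVM during serialization
    private void writeObject(ObjectOutputStream out) throws IOException {
        out.defaultWriteObject(); // handles non-transient fields, if any
        out.writeInt(x);
        out.writeInt(y);
    }

    private void readObject(ObjectInputStream in)
            throws IOException, ClassNotFoundException {
        in.defaultReadObject();
        x = in.readInt();
        y = in.readInt();
    }
}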
A: An optimization trick that makes your code easier to maintain and less susceptible to a concurrency bug.
public class Slow {
/** Loop counter; initialized to 0. */
private long i;
public static void main( String args[] ) {
Slow slow = new Slow();
slow.run();
}
private void run() {
while( i++ < 10000000000L )
;
}
}
$ time java Slow
real 0m15.397s
$ time java Slow
real 0m20.012s
$ time java Slow
real 0m18.645s
Average: 18.018s
public class Fast {
/** Loop counter; initialized to 0. */
private long i;
public static void main( String args[] ) {
Fast fast = new Fast();
fast.run();
}
private void run() {
long i = getI();
while( i++ < 10000000000L )
;
setI( i );
}
private void setI( long i ) {
this.i = i;
}
private long getI() {
return this.i;
}
}
$ time java Fast
real 0m12.003s
$ time java Fast
real 0m9.840s
$ time java Fast
real 0m9.686s
Average: 10.509s
It requires more bytecodes to reference a class-scope variable than a method-scope variable. The addition of a method call prior to the critical loop adds little overhead (and the call might be inlined by the compiler anyway).
Another advantage to this technique (always using accessors) is that it eliminates a potential bug in the Slow class. If a second thread were to continually reset the value of i to 0 (by calling slow.setI( 0 ), for example), the Slow class could never end its loop. Calling the accessor and using a local variable eliminates that possibility.
Tested using J2SE 1.6.0_13 on Linux 2.6.27-14.
A: Identifiers can contain foreign language chars like umlauts:
instead of writing:
String title="";
someone could write:
String Überschrift="";
A: I can add the Scanner object. It is the best for parsing.
String input = "1 fish 2 fish red fish blue fish";
Scanner s = new Scanner(input).useDelimiter("\\s*fish\\s*");
System.out.println(s.nextInt());
System.out.println(s.nextInt());
System.out.println(s.next());
System.out.println(s.next());
s.close();
A: A couple of people have posted about instance initializers, here's a good use for it:
Map map = new HashMap() {{
put("a key", "a value");
put("another key", "another value");
}};
Is a quick way to initialize maps if you're just doing something quick and simple.
Or using it to create a quick swing frame prototype:
JFrame frame = new JFrame();
JPanel panel = new JPanel();
panel.add( new JLabel("Hey there"){{
setBackground(Color.black);
setForeground( Color.white);
}});
panel.add( new JButton("Ok"){{
addActionListener( new ActionListener(){
public void actionPerformed( ActionEvent ae ){
System.out.println("Button pushed");
}
});
}});
frame.add( panel );
Of course it can be abused:
JFrame frame = new JFrame(){{
add( new JPanel(){{
add( new JLabel("Hey there"){{
setBackground(Color.black);
setForeground( Color.white);
}});
add( new JButton("Ok"){{
addActionListener( new ActionListener(){
public void actionPerformed( ActionEvent ae ){
System.out.println("Button pushed");
}
});
}});
}});
}};
A: Dynamic proxies (added in 1.3) allow you to define a new type at runtime that conforms to an interface. It's come in handy a surprising number of times.
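A minimal sketch of the idea (the Greeter interface and the handler logic are invented for illustration):
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

public class ProxyDemo {
    interface Greeter { String greet(String name); }

    public static void main(String[] args) {
        // define a Greeter implementation at runtime
        Greeter g = (Greeter) Proxy.newProxyInstance(
            Greeter.class.getClassLoader(),
            new Class<?>[] { Greeter.class },
            new InvocationHandler() {
                public Object invoke(Object proxy, Method m, Object[] a) {
                    return "Hello, " + a[0] + " (via " + m.getName() + ")";
                }
            });
        System.out.println(g.greet("world")); // Hello, world (via greet)
    }
}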
A: final initialization can be postponed.
It makes sure that even with a complex flow of logic return values are always set. It's too easy to miss a case and return null by accident. It doesn't make returning null impossible, just obvious that it's on purpose:
public Object getElementAt(int index) {
final Object element;
if (index == 0) {
element = "Result 1";
} else if (index == 1) {
element = "Result 2";
} else {
element = "Result 3";
}
return element;
}
A: The Annotation Processing API from Java 6 looks very promising for code generation and static code verification.
A: People are sometimes a bit surprised when they realize that it's possible to call private methods and access/change private fields using reflection...
Consider the following class:
public class Foo {
private int bar;
public Foo() {
setBar(17);
}
private void setBar(int bar) {
this.bar=bar;
}
public int getBar() {
return bar;
}
public String toString() {
return "Foo[bar="+bar+"]";
}
}
Executing this program...
import java.lang.reflect.*;
public class AccessibleExample {
public static void main(String[] args)
throws NoSuchMethodException,IllegalAccessException, InvocationTargetException, NoSuchFieldException {
Foo foo=new Foo();
System.out.println(foo);
Method method=Foo.class.getDeclaredMethod("setBar", int.class);
method.setAccessible(true);
method.invoke(foo, 42);
System.out.println(foo);
Field field=Foo.class.getDeclaredField("bar");
field.setAccessible(true);
field.set(foo, 23);
System.out.println(foo);
}
}
...will yield the following output:
Foo[bar=17]
Foo[bar=42]
Foo[bar=23]
A: Most people do not know they can clone an array.
int[] arr = {1, 2, 3};
int[] arr2 = arr.clone();
A: JVisualVM from the bin directory in the JDK distribution. Monitoring and even profiling any Java application, even one you didn't launch with any special parameters. Only in recent versions of the Java SE 6 JDK.
A: The power you can have over the garbage collector and how it manages object collection is very powerful, especially for long-running and time-sensitive applications. It starts with weak, soft, and phantom references in the java.lang.ref package. Take a look at those, especially for building caches (there is a java.util.WeakHashMap already). Now dig a little deeper into the ReferenceQueue and you'll start having even more control. Finally grab the docs on the garbage collector itself and you'll be able to control how often it runs, sizes of different collection areas, and the types of algorithms used (for Java 5 see http://java.sun.com/docs/hotspot/gc5.0/gc_tuning_5.html).
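As a small illustrative sketch (not production code; the class name is invented), a cache whose entries the collector may reclaim under memory pressure could look like this:
import java.lang.ref.SoftReference;
import java.util.HashMap;
import java.util.Map;

public class SoftCache<K, V> {
    private final Map<K, SoftReference<V>> map = new HashMap<K, SoftReference<V>>();

    public void put(K key, V value) {
        map.put(key, new SoftReference<V>(value));
    }

    public V get(K key) {
        SoftReference<V> ref = map.get(key);
        V value = (ref == null) ? null : ref.get();
        if (value == null) map.remove(key); // entry was collected or never present
        return value;
    }
}
A fuller version would drain a ReferenceQueue to purge stale entries eagerly instead of only on access.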
A: You can access final local variables and parameters in initialization blocks and methods of local classes. Consider this:
final String foo = "42";
new Thread() {
public void run() {
dowhatever(foo);
}
}.start();
A bit like a closure, isn't it?
A: You can build a string sprintf-style using String.format().
String w = "world";
String s = String.format("Hello %s %d", w, 3);
You can of course also use special specifiers to modify the output.
More here: http://java.sun.com/j2se/1.5.0/docs/api/java/util/Formatter.html#syntax
A: Actually, what I love about Java is how few hidden tricks there are. It's a very obvious language. So much so that after 15 years, almost every one I can think of is already listed on these few pages.
Perhaps most people know that Collections.synchronizedList() adds synchronization to a list. What you can't know unless you read the documentation is that you can safely iterate on the elements of that list by synchronizing on the list object itself.
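For example (the iteration must happen inside the synchronized block, per the Collections documentation):
import java.util.*;

List<String> list = Collections.synchronizedList(new ArrayList<String>());
// ... other threads may add and remove elements concurrently ...
synchronized (list) {               // lock the list itself while iterating
    for (String s : list) {
        System.out.println(s);
    }
}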
CopyOnWriteArrayList might be unknown to some, and Future represents an interesting way to abstract multithreaded result access.
You can attach to VMs (local or remote), get information on GC activity, memory use, file descriptors and even object sizes through the various management, agent and attach APIs.
Although TimeUnit is perhaps better than long, I prefer Wicket's Duration class.
A: I think another "overlooked" feature of java is the JVM itself. It is probably the best VM available. And it supports lots of interesting and useful languages (Jython, JRuby, Scala, Groovy). All those languages can easily and seamlessly cooperate.
If you design a new language (like in the scala-case) you immediately have all the existing libraries available and your language is therefore "useful" from the very beginning.
All those languages make use of the HotSpot optimizations. The VM can be monitored and debugged very well.
A: Joshua Bloch's new Effective Java is a good resource.
A: Some control-flow tricks, finally around a return statement:
int getCount() {
try { return 1; }
finally { System.out.println("Bye!"); }
}
The rules for definite assignment will check that a final variable is always assigned through a simple control-flow analysis:
final int foo;
if(...)
foo = 1;
else
throw new Exception();
int bar = foo + 1; // foo is definitely assigned here
A: Source code URLs. E.g. here is some legal java source code:
http://google.com
(Yes, it was in Java Puzzlers. I laughed...)
A: I haven't seen this mentioned yet:
Integer a = 1;
Integer b = 1;
Integer c = new Integer(1);
Integer d = new Integer(1);
Integer e = 128;
Integer f = 128;
assertTrue (a == b); // again: this is true!
assertFalse(e == f); // again: this is false!
assertFalse(c == d); // again: this is false!
Read more about this by searching for Java's integer pool (an internal 'cache' from -128 to 127 used for autoboxing) or look into Integer.valueOf.
A: You can define an anonymous subclass and directly call a method on it even if it implements no interfaces.
new Object() {
void foo(String s) {
System.out.println(s);
}
}.foo("Hello");
A: The asList method in java.util.Arrays allows a nice combination of varargs, generic methods and autoboxing:
List<Integer> ints = Arrays.asList(1,2,3);
A: Using the this keyword for accessing fields/methods of the containing class from an inner class. In the rather contrived example below, we want to use the sortAscending field of the container class from the anonymous inner class. Using ContainerClass.this.sortAscending instead of this.sortAscending does the trick.
import java.util.Comparator;
public class ContainerClass {
boolean sortAscending;
public Comparator createComparator(final boolean sortAscending){
Comparator comparator = new Comparator<Integer>() {
public int compare(Integer o1, Integer o2) {
if (sortAscending || ContainerClass.this.sortAscending) {
return o1 - o2;
} else {
return o2 - o1;
}
}
};
return comparator;
}
}
A: Not really a feature, but an amusing trick I discovered recently in some Web page:
class Example
{
public static void main(String[] args)
{
System.out.println("Hello World!");
http://Phi.Lho.free.fr
System.exit(0);
}
}
is a valid Java program (although it generates a warning).
If you don't see why, see Gregory's answer! ;-) Well, syntax highlighting here also gives a hint!
A: *
*Local classes.
*Instantiating Java inner-classes from outside of the containing class.
A: String Parameterised Class Factory.
Class.forName( className ).newInstance();
Load a resource (property file, xml, xslt, image etc) from deployment jar file.
this.getClass().getClassLoader().getResourceAsStream( ... ) ;
A: Instances of the same class can access private members of other instances:
class Thing {
private int x;
public int addThings(Thing t2) {
return this.x + t2.x; // Can access t2's private value!
}
}
A: The next-generation Java plugin found in Java 1.6 Update 10 and later has some very neat features:
*
*Pass java_arguments parameter to pass arguments to the JVM that is created. This allows you to control the amount of memory given to the applet.
*Create separate class loaders or even separate JVM's for each applet.
*Specify the JVM version to use.
*Install partial Java kernels in cases where you only need a subset of the full Java libraries' functionality.
*Better Vista support.
*Support (experimental) to drag an applet out of the browser and have it keep running when you navigate away.
Many other things that are documented here: http://jdk6.dev.java.net/plugin2/
More from this release here: http://jdk6.dev.java.net/6u10ea.html
A: Intersection types allow you to (kinda sorta) do enums that have an inheritance hierarchy. You can't inherit implementation, but you can delegate it to a helper class.
enum Foo1 implements Bar {}
enum Foo2 implements Bar {}
class HelperClass {
static <T extends Enum<T> & Bar> void fooBar(T theEnum) {}
}
This is useful when you have a number of different enums that implement some sort of pattern. For instance, a number of pairs of enums that have a parent-child relationship.
enum PrimaryColor {Red, Green, Blue;}
enum PastelColor {Pink, HotPink, Rockmelon, SkyBlue, BabyBlue;}
enum TransportMedium {Land, Sea, Air;}
enum Vehicle {Car, Truck, BigBoat, LittleBoat, JetFighter, HotAirBaloon;}
You can write generic methods that say "OK, given an enum value that's a parent of some other enum values, what percentage of all the possible child enums of the child type have this particular parent value as their parent?", and have it all typesafe and done without casting. (e.g.: "Sea" is 33% of all possible vehicles, and "Green" 20% of all possible pastels.)
The code looks like this. It's pretty nasty, but there are ways to make it better. Note in particular that the "leaf" classes themselves are quite neat - the generic classes have declarations that are horribly ugly, but you only write them once. Once the generic classes are there, using them is easy.
import java.util.EnumSet;
import javax.swing.JComponent;
public class zz extends JComponent {
public static void main(String[] args) {
System.out.println(PrimaryColor.Green + " " + ParentUtil.pctOf(PrimaryColor.Green) + "%");
System.out.println(TransportMedium.Air + " " + ParentUtil.pctOf(TransportMedium.Air) + "%");
}
}
class ParentUtil {
private ParentUtil(){}
static <P extends Enum<P> & Parent<P, C>, C extends Enum<C> & Child<P, C>> //
float pctOf(P parent) {
return (float) parent.getChildren().size() / //
(float) EnumSet.allOf(parent.getChildClass()).size() //
* 100f;
}
public static <P extends Enum<P> & Parent<P, C>, C extends Enum<C> & Child<P, C>> //
EnumSet<C> loadChildrenOf(P p) {
EnumSet<C> cc = EnumSet.noneOf(p.getChildClass());
for(C c: EnumSet.allOf(p.getChildClass())) {
if(c.getParent() == p) {
cc.add(c);
}
}
return cc;
}
}
interface Parent<P extends Enum<P> & Parent<P, C>, C extends Enum<C> & Child<P, C>> {
Class<C> getChildClass();
EnumSet<C> getChildren();
}
interface Child<P extends Enum<P> & Parent<P, C>, C extends Enum<C> & Child<P, C>> {
Class<P> getParentClass();
P getParent();
}
enum PrimaryColor implements Parent<PrimaryColor, PastelColor> {
Red, Green, Blue;
private EnumSet<PastelColor> children;
public Class<PastelColor> getChildClass() {
return PastelColor.class;
}
public EnumSet<PastelColor> getChildren() {
if(children == null) children=ParentUtil.loadChildrenOf(this);
return children;
}
}
enum PastelColor implements Child<PrimaryColor, PastelColor> {
Pink(PrimaryColor.Red), HotPink(PrimaryColor.Red), //
Rockmelon(PrimaryColor.Green), //
SkyBlue(PrimaryColor.Blue), BabyBlue(PrimaryColor.Blue);
final PrimaryColor parent;
private PastelColor(PrimaryColor parent) {
this.parent = parent;
}
public Class<PrimaryColor> getParentClass() {
return PrimaryColor.class;
}
public PrimaryColor getParent() {
return parent;
}
}
enum TransportMedium implements Parent<TransportMedium, Vehicle> {
Land, Sea, Air;
private EnumSet<Vehicle> children;
public Class<Vehicle> getChildClass() {
return Vehicle.class;
}
public EnumSet<Vehicle> getChildren() {
if(children == null) children=ParentUtil.loadChildrenOf(this);
return children;
}
}
enum Vehicle implements Child<TransportMedium, Vehicle> {
Car(TransportMedium.Land), Truck(TransportMedium.Land), //
BigBoat(TransportMedium.Sea), LittleBoat(TransportMedium.Sea), //
JetFighter(TransportMedium.Air), HotAirBaloon(TransportMedium.Air);
private final TransportMedium parent;
private Vehicle(TransportMedium parent) {
this.parent = parent;
}
public Class<TransportMedium> getParentClass() {
return TransportMedium.class;
}
public TransportMedium getParent() {
return parent;
}
}
A: Read "Java Puzzlers" by Joshua Bloch and you will be both enlightened and horrified.
A: It has already been mentioned that a final array can be used to pass a variable out of the anonymous inner classes.
Another, arguably better and less ugly approach though is to use AtomicReference (or AtomicBoolean/AtomicInteger/…) class from java.util.concurrent.atomic package.
One of the benefits in doing so is that these classes also provide such methods as compareAndSet, which may be useful if you're creating several threads which can modify the same variable.
Another useful related pattern:
final AtomicBoolean dataMsgReceived = new AtomicBoolean(false);
final AtomicReference<Message> message = new AtomicReference<Message>();
withMessageHandler(new MessageHandler() {
public void handleMessage(Message msg) {
if (msg.isData()) {
synchronized (dataMsgReceived) {
message.set(msg);
dataMsgReceived.set(true);
dataMsgReceived.notifyAll();
}
}
}
}, new Interruptible() {
public void run() throws InterruptedException {
synchronized (dataMsgReceived) {
while (!dataMsgReceived.get()) {
dataMsgReceived.wait();
}
}
}
});
In this particular example we could have simply waited on message for it to become non-null; however, null may often be a valid value, and then you need to use a separate flag to finish the wait.
withMessageHandler(…) above is yet another useful pattern: it sets up a handler somewhere, then starts executing the Interruptible which may throw an exception, and then removes the handler in the finally block, like so:
private final AtomicReference<MessageHandler> messageHandler = new AtomicReference<MessageHandler>();
public void withMessageHandler(MessageHandler handler, Interruptible logic) throws InterruptedException {
synchronized (messageHandler) {
try {
messageHandler.set(handler);
logic.run();
} finally {
messageHandler.set(null);
}
}
}
Here I assume that the messageHandler's (if it's not null) handleMessage(…) method is called by another thread when a message is received. messageHandler must not be simply of MessageHandler type: that way you will synchronize on a changing variable, which is clearly a bug.
Of course, it doesn't need to be InterruptedException, it could be something like IOException, or whatever makes sense in a particular piece of code.
A: Comma & array. It is legal syntax: String s[] = {
"123" ,
"234" ,
};
A: Java 6 (from Sun) comes with an embedded JavaScript interpreter.
http://java.sun.com/javase/6/docs/technotes/guides/scripting/programmer_guide/index.html#jsengine
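A minimal usage sketch with the javax.script API (the expression is arbitrary):
import javax.script.ScriptEngine;
import javax.script.ScriptEngineManager;
import javax.script.ScriptException;

public class EvalDemo {
    public static void main(String[] args) throws ScriptException {
        ScriptEngine js = new ScriptEngineManager().getEngineByName("JavaScript");
        Object result = js.eval("1 + 2 * 3");
        System.out.println(result); // 7.0 - the bundled Rhino engine returns a Double
    }
}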
A: This is not exactly "hidden features" and not very useful, but can be extremely interesting in some cases:
Class sun.misc.Unsafe - will allow you to implement direct memory management in Java (you can even write self-modifying Java code with this if you try a lot):
import java.lang.reflect.Field;
import sun.misc.Unsafe;
public class UnsafeUtil {
public static Unsafe unsafe;
private static long fieldOffset;
private static UnsafeUtil instance = new UnsafeUtil();
private Object obj;
static {
try {
Field f = Unsafe.class.getDeclaredField("theUnsafe");
f.setAccessible(true);
unsafe = (Unsafe)f.get(null);
fieldOffset = unsafe.objectFieldOffset(UnsafeUtil.class.getDeclaredField("obj"));
} catch (Exception e) {
throw new RuntimeException(e);
}
};
}
A: Double Brace Initialization took me by surprise a few months ago when I first discovered it, never heard of it before.
ThreadLocals are typically not so widely known as a way to store per-thread state.
Since JDK 1.5 Java has had extremely well implemented and robust concurrency tools beyond just locks, they live in java.util.concurrent and a specifically interesting example is the java.util.concurrent.atomic subpackage that contains thread-safe primitives that implement the compare-and-swap operation and can map to actual native hardware-supported versions of these operations.
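For instance, a lock-free counter bump via compare-and-swap - a tiny sketch:
import java.util.concurrent.atomic.AtomicInteger;

AtomicInteger hits = new AtomicInteger(0);
// the classic retry loop: re-read, then attempt the swap
int current;
do {
    current = hits.get();
} while (!hits.compareAndSet(current, current + 1));
// or simply: hits.incrementAndGet();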
A: When working in Swing I like the hidden Ctrl - Shift - F1 feature.
It dumps the component tree of the current window.
(Assuming you have not bound that keystroke to something else.)
A: Every class file starts with the hex value 0xCAFEBABE to identify it as valid JVM bytecode.
(Explanation)
A: Functors are pretty cool. They are pretty close to a function pointer, which everyone is usually quick to say is impossible in Java.
Functors in Java
A: SwingWorker for easily managing user interface callbacks from background threads.
A: Apparently with some debug builds there is an option which dumps the native (JIT) assembly code from HotSpot: http://weblogs.java.net/blog/kohsuke/archive/2008/03/deep_dive_into.html
Unfortunately I wasn't able to find the build via the link in that post, if anyone can find a more precise URL, I'd love to play with it.
A: You can switch(this) inside method definitions of enum classes. Made me shout "whut!" loudly when I discovered that this actually works.
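A quick sketch of what that looks like (the enum is invented for illustration):
enum TrafficLight {
    RED, YELLOW, GREEN;

    TrafficLight next() {
        switch (this) {           // switching on the enum instance itself
            case RED:    return GREEN;
            case GREEN:  return YELLOW;
            case YELLOW: return RED;
            default:     throw new AssertionError(this);
        }
    }
}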
A: You can add runtime checks of generic types using a Class<T> object. This comes in handy when a class is being created from a configuration file somewhere and there is no way to add a compile-time check for the generic type of the class. You don't want the class to blow up at runtime if the app happens to be configured wrong, and you don't want all your classes riddled with instanceof checks.
public interface SomeInterface {
void doSomething(Object o);
}
public abstract class RuntimeCheckingTemplate<T> {
private Class<T> clazz;
protected RuntimeCheckingTemplate(Class<T> clazz) {
this.clazz = clazz;
}
public void doSomething(Object o) {
if (clazz.isInstance(o)) {
doSomethingWithGeneric(clazz.cast(o));
} else {
// log it, do something by default, throw an exception, etc.
}
}
protected abstract void doSomethingWithGeneric(T t);
}
public class ClassThatWorksWithStrings extends RuntimeCheckingTemplate<String> {
public ClassThatWorksWithStrings() {
super(String.class);
}
protected void doSomethingWithGeneric(String s) {
// Do something with the generic and know that a runtime exception won't occur
// because of a wrong type
}
}
A: My vote goes to java.util.concurrent with its concurrent collections and flexible executors allowing among others thread pools, scheduled tasks and coordinated tasks. The DelayQueue is my personal favorite, where elements are made available after a specified delay.
java.util.Timer and TimerTask may safely be put to rest.
Also, not exactly hidden, but in a different package from the other date- and time-related classes: java.util.concurrent.TimeUnit is useful when converting between nanoseconds, microseconds, milliseconds and seconds.
It reads a lot better than the usual someValue * 1000 or someValue / 1000.
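For example (a small sketch):
import java.util.concurrent.TimeUnit;

long millis  = TimeUnit.SECONDS.toMillis(90);           // 90000
long seconds = TimeUnit.MILLISECONDS.toSeconds(5500);   // 5
TimeUnit.SECONDS.sleep(1); // like Thread.sleep(1000); throws InterruptedException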
A: Language-level assert keyword.
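A tiny example (note that assertions are disabled by default; enable them with the -ea flag):
public class AssertDemo {
    public static void main(String[] args) {
        int x = Integer.parseInt(args[0]);
        assert x > 0 : "expected a positive argument, got " + x;
        System.out.println("ok: " + x);
    }
}
// run with: java -ea AssertDemo 5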
A: Not really part of the Java language, but the javap disassembler which comes with Sun's JDK is not widely known or used.
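For instance, to dump the bytecode of a compiled class (assuming a Hello.class in the current directory):
javap -c Hello
Without -c, javap just prints the class's non-private members.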
A: The addition of the for-each loop construct in 1.5. I <3 it.
// For each Object, instantiated as foo, in myCollection
for(Object foo: myCollection) {
System.out.println(foo.toString());
}
And can be used in nested instances:
for (Suit suit : suits)
for (Rank rank : ranks)
sortedDeck.add(new Card(suit, rank));
The for-each construct is also applicable to arrays, where it hides the index variable rather than the iterator. The following method returns the sum of the values in an int array:
// Returns the sum of the elements of a
int sum(int[] a) {
int result = 0;
for (int i : a)
result += i;
return result;
}
Link to the Sun documentation
A: I personally discovered java.lang.Void very late -- it improves code readability in conjunction with generics, e.g. Callable<Void>.
A: Perhaps the most surprising hidden feature is the sun.misc.Unsafe class.
http://www.docjar.com/html/api/ClassLib/Common/sun/misc/Unsafe.java.html
You can;
*
*Create an object without calling a constructor.
*Throw any exception, even a checked Exception, without worrying about throws clauses on methods. (There are other ways to do this, I know.)
*Get/set randomly accessed fields in an object without using reflection.
*allocate/free/copy/resize a block of memory which can be long (64-bit) in size.
*Obtain the location of fields in an object or static fields in a class.
*independently lock and unlock an object lock. (like synchronize without a block)
*define a class from provided byte codes. Rather than the classloader determining what the byte code should be. (You can do this with reflection as well)
BTW: Incorrect use of this class will kill the JVM. I don't know which JVMs support this class, so it's not portable.
A: Since no one else has said it yet (I think), my favorite feature is autoboxing!
public class Example
{
public static void main(String[] Args)
{
int a = 5;
Integer b = a; // Box!
System.out.println("A : " + a);
System.out.println("B : " + b);
}
}
A: Some years ago when I had to do Java (1.4.x), I wanted an eval() method, and Sun's javac is (was?) written in Java, so it was just a matter of linking tools.jar and using it with some glue code around it.
A: Here's my list.
My favourite (and scariest) hidden feature is that you can throw checked exceptions from methods that do not declare any throws clause.
import java.rmi.RemoteException;
class Thrower {
public static void spit(final Throwable exception) {
class EvilThrower<T extends Throwable> {
@SuppressWarnings("unchecked")
private void sneakyThrow(Throwable exception) throws T {
throw (T) exception;
}
}
new EvilThrower<RuntimeException>().sneakyThrow(exception);
}
}
public class ThrowerSample {
public static void main( String[] args ) {
Thrower.spit(new RemoteException("go unchecked!"));
}
}
Also you may like to know you can throw 'null'...
public static void main(String[] args) {
throw null;
}
Guess what this prints:
Long value = new Long(0);
System.out.println(value.equals(0));
And, guess what this returns:
public int returnSomething() {
try {
throw new RuntimeException("foo!");
} finally {
return 0;
}
}
The above should not surprise good developers.
In Java you can declare an array in following valid ways:
String[] strings = new String[] { "foo", "bar" };
// the above is equivalent to the following:
String[] strings = { "foo", "bar" };
So following Java code is perfectly valid:
public class Foo {
public void doSomething(String[] arg) {}
public void example() {
String[] strings = { "foo", "bar" };
doSomething(strings);
}
}
Is there any valid reason why, instead, the following code shouldn't be valid?
public class Foo {
public void doSomething(String[] arg) {}
public void example() {
doSomething({ "foo", "bar" });
}
}
I think, that the above syntax would have been a valid substitute to the varargs introduced in Java 5. And, more coherent with the previously allowed array declarations.
A: Shutdown hooks. These let you register a thread that is created immediately but started only when the JVM exits! So it is a kind of "global JVM finalizer", and you can do useful things in this thread (for example, shutting down Java resources like an embedded HSQLDB server). This works with System.exit(), or with CTRL-C / kill -15 (but not with kill -9 on Unix, of course).
Moreover it's pretty easy to set up.
Runtime.getRuntime().addShutdownHook(new Thread() {
public void run() {
endApp();
}
});
A: Joint union in type parameter variance:
public class Baz<T extends Foo & Bar> {}
For example, if you wanted to take a parameter that's both Comparable and a Collection:
public static <A, B extends Collection<A> & Comparable<B>>
boolean foo(B b1, B b2, A a) {
return (b1.compareTo(b2) == 0) || b1.contains(a) || b2.contains(a);
}
This contrived method returns true if the two given collections are equal or if either one of them contains the given element, otherwise false. The point to notice is that you can invoke methods of both Comparable and Collection on the arguments b1 and b2.
A: The value of:
new URL("http://www.yahoo.com").equals(new URL("http://209.191.93.52"))
is true.
(From Java Puzzlers)
A: If you do a lot of JavaBean development and work with property change support, you generally wind up writing a lot of setters like this:
public void setFoo(Foo aFoo){
Foo old = this.foo;
this.foo = aFoo;
changeSupport.firePropertyChange("foo", old, aFoo);
}
I recently stumbled across a blog that suggested a more terse implementation of this that makes the code a lot easier to write:
public void setFoo(Foo aFoo){
changeSupport.firePropertyChange("foo", this.foo, this.foo = aFoo);
}
It actually simplified things to the point where I was able to adjust the setter template in Eclipse so the method gets created automatically.
A: static imports to "enhance" the language, so you can do nice literal things in type safe ways:
List<String> ls = List("a", "b", "c");
(can also do with maps, arrays, sets).
http://gleichmann.wordpress.com/2008/01/13/building-your-own-literals-in-java-lists-and-arrays/
Taking it further:
List<Map<String, String>> data = List(Map( o("name", "michael"), o("sex", "male")));
A: As a starter I really appreciate the JConsole monitoring software in Java 6, it has solved a couple of problems for me already and I keep on finding new uses for it.
Apparently JConsole was there already in Java 5, but I reckon it has improved and at least works much more stably now.
(Screenshots comparing JConsole in Java 5 and JConsole in Java 6 - images not included.)
And while you are at it, have a good look at the other tools in the series:
Java 6 troubleshooting tools
A: Not so hidden, but interesting.
You can have a "Hello, world" without a main method (it throws NoSuchMethodError afterwards, though).
Originally posted by RusselW on Strangest language feature
public class WithoutMain {
static {
System.out.println("Look ma, no main!!");
System.exit(0);
}
}
$ java WithoutMain
Look ma, no main!!
A: The Java compiler does a neat trick with variable definition if you do not use a default initializer.
{
int x;
if(whatever)
x=1;
if(x == 1)
...
}
This will give you an error at compile time that you have a path where x isn't properly defined. This has helped me a few times, and I've taken to considering default initializations like these:
int x=0;
String s=null;
to be a bad pattern since it blocks this helpful checking.
That said, sometimes it's difficult to get around--I have had to go back and edit in the =null when it made sense as a default, but I never put it in on the first pass any more.
A: I was surprised by instance initializers the other day. I was deleting some code-folded methods and ended up creating multiple instance initializers:
public class App {
public App(String name) { System.out.println(name + "'s constructor called"); }
static { System.out.println("static initializer called"); }
{ System.out.println("instance initializer called"); }
static { System.out.println("static initializer2 called"); }
{ System.out.println("instance initializer2 called"); }
public static void main( String[] args ) {
new App("one");
new App("two");
}
}
Executing the main method will display:
static initializer called
static initializer2 called
instance initializer called
instance initializer2 called
one's constructor called
instance initializer called
instance initializer2 called
two's constructor called
I guess these would be useful if you had multiple constructors and needed common code.
They also provide syntactic sugar for initializing your classes:
List<Integer> numbers = new ArrayList<Integer>(){{ add(1); add(2); }};
Map<String,String> codes = new HashMap<String,String>(){{
put("1","one");
put("2","two");
}};
A: This is not really a hidden feature but it did give me a big surprise when I saw this compiled fine:
public int aMethod(){
http://www.google.com
return 1;
}
The reason it compiles is that in the line http://www.google.com, the "http:" part is treated by the compiler as a label and the rest of the line is a comment.
So, if you want to write some bizarre code (or obfuscated code), just put a lot of http addresses there. ;-)
A: JDK 1.6_07+ contains an app called VisualVM (bin/jvisualvm.exe) that is a nice GUI on top of many of the tools. It seems more comprehensive than JConsole.
A: You can declare a class in a method:
public Foo foo(String in) {
class FooFormat extends Format {
    public Object parseObject(String s, ParsePosition pp) { /* parse stuff */ return null; }
    public StringBuffer format(Object o, StringBuffer sb, FieldPosition fp) { return sb; }
}
return (Foo) new FooFormat().parseObject(in, new ParsePosition(0));
}
A: You can override a method and have the superclass constructor call it (this may come as a surprise to C++ programmers.)
Example
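A small sketch of the surprise - the override runs before the subclass's field initializers do:
class Base {
    Base() { init(); }            // dispatches to the override, not Base.init()
    void init() { System.out.println("Base.init"); }
}

class Derived extends Base {
    int x = 42;
    @Override void init() { System.out.println("Derived.init, x=" + x); }
}

// new Derived() prints "Derived.init, x=0" because the superclass
// constructor runs before Derived's field initializers have executed.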
A: It took them long enough to add support for this: the system tray (java.awt.SystemTray, added in Java 6).
A: Classpath wild cards since Java 6.
java -classpath ./lib/* so.Main
Instead of
java -classpath ./lib/log4j.jar:./lib/commons-codec.jar:./lib/commons-httpclient.jar:./lib/commons-collections.jar:./lib/myApp.jar so.Main
See http://java.sun.com/javase/6/docs/technotes/tools/windows/classpath.html
A: I really like the java.util.concurrent API introduced in Java 5. Callables are great. They are basically tasks with a return value.
A: Self-bound generics:
class SelfBounded<T extends SelfBounded<T>> {
}
http://www.artima.com/weblogs/viewpost.jsp?thread=136394
A: I like the static import of methods.
For example create the following util class:
package mypackage;
public class util {
public static void doStuff1(){
//the end
}
public static String doStuff2(){
return "the end";
}
}
Then use it like this.
import static mypackage.util.*;
public class main{
public static void main(String[] args){
doStuff1(); // whee, no more typing util.doStuff1()
System.out.print(doStuff2()); // or util.doStuff2()
}
}
Static imports work with any class, even Math...
import static java.lang.Math.*;
import static java.lang.System.out;
public class HelloWorld {
public static void main(String[] args) {
out.println("Hello World!");
out.println("Considering a circle with a diameter of 5 cm, it has:");
out.println("A circumference of " + (PI * 5) + "cm");
out.println("And an area of " + (PI * pow(5,2)) + "sq. cm");
}
}
A: List.subList returns a view on the original list
A documented but little known feature of lists. This allows you to work with parts of a list with changes mirrored in the original list.
List subList(int fromIndex, int toIndex)
"This method eliminates the need for explicit range operations (of the sort that commonly exist for arrays). Any operation that expects a list can be used as a range operation by passing a subList view instead of a whole list. For example, the following idiom removes a range of elements from a list:
list.subList(from, to).clear();
Similar idioms may be constructed for indexOf and lastIndexOf, and all of the algorithms in the Collections class can be applied to a subList."
A: Oh, I almost forgot this little gem. Try this on any running java process:
jmap -histo:live PID
You will get a histogram of live heap objects in the given VM. Invaluable as a quick way to figure out certain kinds of memory leaks. Another technique I use to prevent them is to create and use size-bounded subclasses of all the collections classes. This causes quick failures in out-of-control collections that are easy to identify.
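A minimal sketch of that size-bounding idea (the class name and policy are invented; a real version would also guard addAll and add(int, E)):
import java.util.ArrayList;

public class BoundedArrayList<E> extends ArrayList<E> {
    private final int maxSize;

    public BoundedArrayList(int maxSize) { this.maxSize = maxSize; }

    @Override
    public boolean add(E e) {
        if (size() >= maxSize)
            throw new IllegalStateException("collection grew past " + maxSize);
        return super.add(e);
    }
}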
A: A feature with which you can display splash screens for your Java Console Based Applications.
Use the command line tool java or javaw with the option -splash
eg:
java -splash:C:\myfolder\myimage.png -classpath myjarfile.jar com.my.package.MyClass
the content of C:\myfolder\myimage.png will be displayed at the center of your screen, whenever you execute the class "com.my.package.MyClass"
A: For most people I interview for Java developer positions labeled blocks are very surprising. Here is an example:
// code goes here
getmeout:{
for (int i = 0; i < N; ++i) {
for (int j = i; j < N; ++j) {
for (int k = j; k < N; ++k) {
//do something here
break getmeout;
}
}
}
}
Who said goto in java is just a keyword? :)
A: Not really a feature, but it makes me chuckle that goto is a reserved word that does nothing except prompting javac to poke you in the eye. Just to remind you that you are in OO-land now.
A: Javadoc - when written properly (not always the case with some developers unfortunately), it gives you a clear, coherent description of what code is supposed to do, as opposed to what it actually does. It can then be turned into a nice browsable set of HTML documentation. If you use continuous integration etc it can be generated regularly so all developers can see the latest updates.
A: with static imports you can do cool stuff like:
List<String> myList = list("foo", "bar");
Set<String> mySet = set("foo", "bar");
Map<String, String> myMap = map(v("foo", "2"), v("bar", "3"));
A: How about covariant return types which have been in place since JDK 1.5? It is pretty poorly publicised, as it is an unsexy addition, but as I understand it, is absolutely necessary for generics to work.
Essentially, the compiler now allows a subclass to narrow the return type of an overridden method to be a subclass of the original method's return type. So this is allowed:
class Souper {
Collection<String> values() {
...
}
}
class ThreadSafeSortedSub extends Souper {
@Override
ConcurrentSkipListSet<String> values() {
...
}
}
You can call the subclass's values method and obtain a sorted thread safe Set of Strings without having to down cast to the ConcurrentSkipListSet.
A: Transfer of control in a finally block throws away any exception. The following code does not throw RuntimeException -- it is lost.
public static void doSomething() {
try {
//Normally you would have code that doesn't explicitly appear
//to throw exceptions so it would be harder to see the problem.
throw new RuntimeException();
} finally {
return;
}
}
From http://jamesjava.blogspot.com/2006/03/dont-return-in-finally-clause.html
A: Haven't seen anyone mention instanceof being implemented in such a way that checking for null is not necessary.
Instead of:
if( null != aObject && aObject instanceof String )
{
...
}
just use:
if( aObject instanceof String )
{
...
}
A: The strictfp keyword. (I never saw it used in a real application though :)
You can get the class for primitive types by using the following notation: int.class,
float.class, etc. Very useful when doing reflection.
Final arrays can be used to "return" values from anonymous inner classes (warning, useless example below):
final boolean[] result = new boolean[1];
SwingUtilities.invokeAndWait(new Runnable() {
public void run() { result[0] = true; }
});
A: You can define and invoke methods on anonymous inner classes.
Well they're not that hidden, but very few people know they can be used to define a new method in a class and invoke it like this:
(new Object() {
public String someMethod(){
return "some value";
}
}).someMethod();
It's probably not very common because it's not very useful either; you can only call the method when you define it (or via reflection).
A: Allowing methods and constructors in enums surprised me. For example:
enum Cats {
FELIX(2), SHEEBA(3), RUFUS(7);
private int mAge;
Cats(int age) {
mAge = age;
}
public int getAge() {
return mAge;
}
}
You can even have a "constant specific class body" which allows a specific enum value to override methods.
More documentation here.
A: I was aware that Java 6 included scripting support, but I just recently discovered jrunscript,
which can interpret and run JavaScript (and, one presumes, other scripting languages such as Groovy) interactively, sort of like the Python shell or irb in Ruby
A: The C-Style printf() :)
System.out.printf("%d %f %.4f", 3,Math.E,Math.E);
Output:
3 2.718282 2.7183
Binary search (and its return value):
int[] q = new int[] { 1,3,4,5};
int position = Arrays.binarySearch(q, 2);
Similar to C#, if '2' is not found in the array, it returns a negative value but if you take the 1's Complement of the returned value you actually get the position where '2' can be inserted.
In the above example, position = -2, ~position = 1 which is the position where 2 should be inserted...it also lets you find the "closest" match in the array.
I thinks its pretty nifty... :)
A: The type params for generic methods can be specified explicitly like so:
Collections.<String,Integer>emptyMap()
A: It's not exactly hidden, but reflection is incredibly useful and powerful. It is great to use a simple Class.forName("...").newInstance() where the class type is configurable. It's easy to write this sort of factory implementation.
A: I know this was added in release 1.5 but the new enum type is a great feature. Not having to use the old "int enum pattern" has greatly helped a bunch of my code. Check out JLS 8.9 for the sweet gravy on your potatoes!
A: Part feature, part bother: Java's String handling to make it 'appear' a native Type (use of operators on them, +, +=)
Being able to write:
String s = "A";
s += " String"; // so s == "A String"
is very convenient, but is simply syntactic sugar for (ie gets compiled to):
String s = new String("A");
s = new StringBuffer(s).append(" String").toString();
ergo an Object instantiation and 2 method invocations for a simple concatenation. Imagine building a long String inside a loop in this manner!? AND all of StringBuffer's methods are declared synchronized. Thankfully in (I think) Java 5 they introduced StringBuilder, which is identical to StringBuffer without the synchronization.
A loop such as:
String s = "";
for (int i = 0 ; i < 1000 ; ++i)
s += " " + i; // Really an Object instantiation & 3 method invocations!
can (should) be rewritten in your code as:
StringBuilder buf = new StringBuilder(); // Empty buffer
for (int i = 0 ; i < 1000 ; ++i)
buf.append(' ').append(i); // Cut out the object instantiation & reduce to 2 method invocations
String s = buf.toString();
and will run approximately 80+% faster than the original loop!
(up to 180% on some benchmarks I have run)
A: You can use enums to implement an interface.
public interface Room {
public Room north();
public Room south();
public Room east();
public Room west();
}
public enum Rooms implements Room {
FIRST {
public Room north() {
return SECOND;
}
},
SECOND {
public Room south() {
return FIRST;
}
};
public Room north() { return null; }
public Room south() { return null; }
public Room east() { return null; }
public Room west() { return null; }
}
EDIT: Years later....
I use this feature here
public enum AffinityStrategies implements AffinityStrategy {
https://github.com/peter-lawrey/Java-Thread-Affinity/blob/master/src/main/java/vanilla/java/affinity/AffinityStrategies.java
By using an interface, developers can define their own strategies. Using an enum means I can define a collection (of five) built in ones.
A: final for instance variables:
Really useful for multi-threaded code; it makes it a lot easier to reason about the instance state and correctness. I haven't seen it a lot in industry contexts, and it's often not taught in Java classes.
static {something;}:
Used to initialize static members (though I prefer a static method to do it, because it has a name). Not taught.
A: I just (re)learned today that $ is a legal name for a method or variable in Java. Combined with static imports it can make for some slightly more readable code, depending on your view of readable:
http://garbagecollected.org/2008/04/06/dollarmaps/
A: As of Java 1.5, Java now has a much cleaner syntax for writing functions of variable arity. So, instead of just passing an array, now you can do the following
public void foo(String... bars) {
for (String bar: bars)
System.out.println(bar);
}
bars is automatically converted to array of the specified type. Not a huge win, but a win nonetheless.
A: "const" is a keyword, but you can't use it.
int const = 1; // "not a statement"
const int i = 1; // "illegal start of expression"
I guess the compiler writers thought it might be used in the future and they'd better keep it reserved.
A: Use StringBuilder instead of StringBuffer when you don't need the synchronization that StringBuffer provides. It will increase the performance of your application.
Improvements for Java 7 would be even better than any hidden Java features:
*
*Diamond syntax: Link
No more repeating the full generic type at instantiation:
Map<String, List<String>> anagrams = new HashMap<String, List<String>>();
// Can now be replaced with this:
Map<String, List<String>> anagrams = new HashMap<>();
*
*Strings in switch: Link
Use String in switch, instead of old-C int:
String s = "something";
switch(s) {
case "quux":
processQuux(s);
// fall-through
case "foo":
case "bar":
processFooOrBar(s);
break;
case "baz":
processBaz(s);
// fall-through
default:
processDefault(s);
break;
}
*
*Automatic Resource Management Link
This old code:
static void copy(String src, String dest) throws IOException {
InputStream in = new FileInputStream(src);
try {
OutputStream out = new FileOutputStream(dest);
try {
byte[] buf = new byte[8 * 1024];
int n;
while ((n = in.read(buf)) >= 0)
out.write(buf, 0, n);
} finally {
out.close();
}
} finally {
in.close();
}
}
can now be replaced by this much simpler code:
static void copy(String src, String dest) throws IOException {
try (InputStream in = new FileInputStream(src);
OutputStream out = new FileOutputStream(dest)) {
byte[] buf = new byte[8192];
int n;
while ((n = in.read(buf)) >= 0)
out.write(buf, 0, n);
}
}
A: I enjoyed
*
*javadoc's taglet and doclet that enable us to customize javadoc output.
*JDK tools: jstat, jstack etc.
A: Java Bean property accessor methods do not have to start with "get" and "set".
Even Josh Bloch gets this wrong in Effective Java.
A: It surprises me that an interface can extend multiple interfaces, but a class can extend only one class.
A: I was surprised when I first noticed the Ternary-Operator which equals a simple if-then-else statement:
minVal = (a < b) ? a : b;
| {
"language": "en",
"url": "https://stackoverflow.com/questions/15496",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "295"
} |
Q: Use SVN instead of CVS on SourceForge I've just setup a new project on SourceForge and the admins set it up with CVS as the SCM, however, I want to use SVN. There is NO code in this project yet - empty directory.
How do I change this project from using CVS to SVN?
A: It's under the options.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/15504",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: How do I track down performance problems with page rendering? I've been tasked with improving the performance of an ASP.NET 2.0 application. The page I'm currently focused on has many problems but one that I'm having trouble digging into is the render time of the page. Using Trace.axd the duration between Begin Render and End Render is 1.4 seconds. From MSDN I see that
All ASP.NET Web server controls have a
Render method that writes out the
control's markup that is sent to the
browser.
If I had the source code for all the controls on the page, I would just instrument them to trace out their render time. Unfortunately, this particular page has lots of controls, most of them third-party. Is there tool or technique to get better visibility into what is going on during the render? I would like to know if there is a particularly poorly performing control, or if there are simply too many controls on the page.
A: <%@Page Trace="true" %>
See http://www.asp101.com/articles/robert/tracing/default.asp.
A: Download ANTS Profiler; this will give you a perfect overview of the lines causing the slowdown.
Also, when it's about rendering, make sure you don't use too much string concatenation (like string += "value"); use StringBuilder instead to improve performance.
A: It may not help if the problem is inside one of your controls - as you expect - but if the page is poorly designed and that's causing render to be slow, YSlow should help clean that up.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/15513",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Is there anyway to disable the client-side validation for dojo date text box? In my example below I'm using a dijit.form.DateTextBox:
<input type="text" name="startDate" dojoType="dijit.form.DateTextBox" constraints="{datePattern:'MM/dd/yyyy'}" value='<c:out value="${sessionScope.adminMessageForm.startDate}"/>' />
So for example, if the user starts to enter "asdf" into the date the field turns yellow and a popup error message appears saying The value entered is not valid.. Even if I remove the constraints="{datePattern:'MM/dd/yyyy'}" it still validates.
Without going into details as to why, I would like to be able keep the dojoType and still prevent validation in particular circumstances.
A: Try overriding the validate method in your markup.
This will work (just tested):
<input type="text" name="startDate" dojoType="dijit.form.DateTextBox"
constraints="{datePattern:'MM/dd/yyyy'}"
value='<c:out value="${sessionScope.adminMessageForm.startDate}"/>'
validate='return true;'
/>
A: My only suggestion is to programmatically remove the dojoType on the server side or client side. It is not possible to keep the dojoType and not have it validate, unless you create your own type that has your logic in it.
A: I had a similar problem, where the ValidationTextBox met all my needs but it was necessary to disable the validation routines until after the user had first pressed Submit.
My solution was to clone this into a ValidationConditionalTextBox with a couple new methods:
enableValidator:function() {
this.validatorOn = true;
},
disableValidator: function() {
this.validatorOn = false;
},
Then -- in the validator:function() I added a single check:
if (this.validatorOn)
{ ... }
Fairly straightforward, my default value for validatorOn is false (this appears right at the top of the javascript). When my form submits, simply call enableValidator(). You can view the full JavaScript here:
http://lilawnsprinklers.com/js/dijit/form/ValidationTextBox.js
| {
"language": "en",
"url": "https://stackoverflow.com/questions/15514",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: How do I generate ASCII codes 2 and 3 in a Bash command line? If I press Ctrl+B that ought to give me ASCII code 2, but Ctrl+C is going to be interpreted as a Break.
So I figure I've got to redirect a file in. How do I get these characters into a file?
A: echo $'\002\003' > ./myfile
A: perl -e 'print "\xFF"'
where FF is the hex code of the ASCII character you want to print. So for ASCII code 2, it would be \x02.
A: Ctrl-V escapes the next keystroke. That's how you can get a Ctrl-C out: Ctrl-V Ctrl-C
| {
"language": "en",
"url": "https://stackoverflow.com/questions/15537",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Choosing a new development machine I'm not sure how this question will be received here, but let's give it a shot...
It's time for me to get a new dev PC. What's the best choice these days?
I typically have 2-3 Visual Studios open along with mail and all that stuff. Ideally I would imagine 2+ GB of RAM would be nice as my current XP box is dying. =)
I hopped on the Dell site (my days of building PC's are behind me. I just need something that gets the job done.) and started browsing around only to be confused from all the processor choices. What does a typical dev box need these days? Duo? Quad?
Is it worth going to 64 bit Vista as well?
It's been a while since I got a new machine so I'm just looking for some guidance.
Thanks
A: I just built a quad core - 8 GB of RAM and run Server 2008 with Hyper-V on it. I have VMs for my build server, dev platform, and deployment options (XP, Vista, Server 2003/2008) with snapshots at the various service pack levels. What's nice is you can spin up a VM whenever you need it, and re-allocate the resources when you don't.. So if I want to have 4 or 5 GB of ram and four processors available for my dev platform, no problem.. when I need to test some installs, I can save my status and spin up my test machines.. (and it only ran about $800 US).
A: Jeff's ultimate developer rig series is great, but it's out of date. If you want to build your own ultimate developer rig, you can do hours of research to get the perfect list or use the tricks below to come up with a great component list in a short time.
Credits: Mehul taught me this method and it's a huge time-saver.
The Basic PC Builder Shopping List
Start with the basic system builder shopping list:
*
*Computer case
*Power supply
*Motherboard CPU
*Video card
*RAM
*Hard Drive
*DVD-ROM
*Monitors
*Optional: Extra fans
*Optional: Windows
(This list is good for most of us. Add/remove for your specific needs.)
The Short Version
Make a wish list at NewEgg.com to track your component choices and estimate price. For each item on the shopping list above, go to the NewEgg.com category and list the top sellers sorted by most reviews. Read some reviews on the top 3 items listed and add one to your wish list. You may want to check Dell.com and deal sites for monitor options. When you're finished you'll have a solid list of great components that have been well reviewed by a large group of talented system builders.
The Detailed Version
Start at Gear Geek Heaven:
Go to NewEgg.com, create an account and start a wish list to keep track of your selections. NewEgg.com selection, prices and service are good, but you don't have to buy at NewEgg.com. You're going to use the site to keep track of your component choices and get a good price estimate.
Let the Wisdom of the Geeks Narrow Your Options
The biggest problem with spec'ing a new developer rig is that there are too many options. To narrow your options, observe the behavior of a large group of hardware enthusiasts, record their preferences and use that data to guide your decision. (Everyone who comments at NewEgg.com isn't an expert, but there are many intelligent buyers here who write helpful reviews.)
In other words, find the top selling and best reviewed items on NewEgg.com, a popular hardware site for system builders.
Score = (Sales-Rank + Review-Count) * Rating * Price
NewEgg.com is the right place to learn what the system builders are doing, but it's not obvious at first glance how to do that. You'll have to drill down a bit to see the top selling items. You also need something more helpful than just the top selling items, you want gear that's been used and reviewed by a large group of active and enthusiastic gear geeks so you'll want to factor in customer reviews, too.
Find Top sellers in the item category, then sort by Most Reviews
Use the NewEgg.com top level menu to navigate to the category for that item type. Then use the left sidebar menu to drill down to a little more specific sub-category. Click the Top Sellers link on the left sidebar to list the top selling items for that category. Then sort by "Most Reviews" by selecting the dropdown next to the search box on the upper right part of the page. Don't input any search text.
Hands-On Example
Example: On the top menu bar of NewEgg.com, select Computer Hardware/Motherboards, then click a sub-category link on the left sidebar like Intel Motherboards. In the sub-category, you should see an option on the left sidebar for "Top Sellers"; select that link to list the top selling items in that category. The search listing should now show the top selling items in the category. Sort this listing by "Most Reviews". At the top of the listings on the upper right is a search box; next to the search box is the dropdown box with the option "Most Reviews". Leave the search box blank and select the "Most Reviews" sort option from the dropdown box.
Down to the Finalists
One of the 2 or 3 products at the top of the list should be a good choice for your new system. Scan the reviews to see if the general buzz makes you comfortable with the component. Use the sidebar links and search to filter the results if the top sellers are out of your price range or you need to refine the specs.
Judge the Judges
When you scan the item reviews look at the range of ratings, you want to see more than 100 reviews with mostly 4 and 5 star ratings. Steer clear of items that have a high average but also have a lot of low ratings. Avoid very new items and watch out for older items that are on special and may be closing out. You want something that's been out for 6 months or a year. The price will be lower and the reviews will be more realistic.
Pro-Choice
When you're satisified that the item is what you want and the price is right, add it to your wishlist.
Foreach component in system-shopping-list do
Repeat this process for each item. It's fast and fun.
If item == Monitor { search("Dell.com") };
Dell often has good monitor specials, so you might want to check that site for monitors. The best Dell deals are usually found on sites like techbargains.com and DealFire.com.
Go Forth and Multi-buy
When you're finished you'll have a solid list of popular components that are favored by enthusiastic system builders that frequent NewEgg.com. Order them from NewEgg.com or your favorite dealer and get building!
A: Not looking to travel. I'd rather get a powerful desktop for my dollar. I have a nice big panel here, so no problem with that. The majority of my development is ASP.NET stuff with some winforms projects.
A: Jeff built an Ultimate Developer Rig for Scott Hanselman a while back. You can check out his requirements and see if it matches closely to what you are looking for.
From what you've mentioned, an Intel Q9450, 4 or 8gigs of ram and a couple good sized hard drives will suit you well. I would say there is no reason not to get Vista x64 at this point. The ability to utilize more than 3.2gb of ram is very important for a developer.
If you're in the more than two monitor club, you'll need two video cards as well.
Hope this helps!
A: I recently built a version of the UDR as well but used Vista x64. It works great with the VMs. Just get lots of memory (8 GB) and fast hard drives. I've heard good things about Win Server 2008 but am not sure if driver support is available. On an older Dell laptop where I tried installing WinServer 2008, it kept crashing on the nvidia drivers. Good luck.
A: People are probably going to yell at me...but I've found that Vista 64 is mostly worth it. The main reason for me though is that I'm always maxing out my memory and having a 64bit OS allows me to go past the <4GB limit of 32bit.
But even if you don't get 64bit, just buy 2 2GB RAM cards anyways....you will be able to use most of it (my system shows 3.5GB on 32bit) and then you've got it for if you upgrade later and (if your system has 4 slots) you'll have room to expand to 8GB later on....
A: There are some additional questions that would make our answers more complete.
*
*Are you going to want to travel with it?
*How important is screen real estate to you?
*Will you be doing interpreted or compiled?
*Is it web based development, or client based?
I've seen some great deals on 17" HP laptops lately - one at Best Buy that had 4GB of RAM and a monster hard drive along with a 2.4+ Ghz Core 2 Duo for roughly $800 after tax.
A: You didn't provide a budget or other considerations like sound footprint. You also didn't say if you actually can use more than a few cores at one time with the applications you are developing. So, everything below is a guess.
If you have the budget, the Mac Pro with Boot Camp (or a VM if you are so inclined) might be a consideration. You won't want to upgrade your HDD or memory from Apple, but the parts are easy enough to find at Newegg.
I know this seems a little crazy, but, you can get a good value if you need the dual processors at 4 cores each. It is currently $2800 for 2 x 2.8GHz 8 cores total.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/15573",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: What are the correct stencils for object relational diagramming in visio? All of my Visio experience is with LAN/WAN documentation. I recently had a desire to visualize the relationship between objects in the Nagios configuration and I realized I didn't know how to do it properly and moved on to something more important. I was reading the responses to this thread and realized this is something software developers must do a lot.
So this is probably a soft-pitch question, but what is the proper method for documenting object relationships in Visio? Is there a better template to use? What stencil collection is the proper stencil in?
In my probably naive view I imagine an object being a large box with a single "reception" connector and containing multiple smaller boxes, each of which represents an object member and having its own connector. So, each object member field would connect out to the "reception" connector on the object of the member's type. In and of itself those objects are fairly easy to build. The problem I ran into is that the connector lines didn't respect the objects and ran over the top of them, making an awful, unusable mess.
Thanks for any pointers.
A: You can use a UML static class diagram with << stereotype >> annotations, which is the kind of thing you would do in Rational Rose for using UML for things that aren't necessarily classes and methods, such as databases.
A: I've been using these UML stencils for diagramming object models and entity relationship diagrams. It is fairly comprehensive. Be sure to take a look at the "tips" document... very important.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/15583",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Practical use of System.WeakReference I understand what System.WeakReference does, but what I can't seem to grasp is a practical example of what it might be useful for. The class itself seems to me to be, well, a hack. It seems to me that there are other, better means of solving a problem where a WeakReference is used in examples I've seen. What's the canonical example of where you've really got to use a WeakReference? Aren't we trying to get farther away from this type of behavior and use of this class?
A: One useful example comes from the folks who build the db4o object-oriented database. There, WeakReferences are used as a kind of light cache: they keep your objects in memory only as long as your application does, allowing you to put a real cache on top.
Another use would be in the implementation of weak event handlers. Currently, one big source of memory leaks in .NET applications is forgetting to remove event handlers. E.g.
public MyForm()
{
    // The long-lived MyApplication now holds a strong reference back to this form.
    MyApplication.Foo += someHandler;
}
See the problem? In the snippet above, MyForm will be kept alive for as long as MyApplication is alive in memory. Create 10 MyForms, close them all, and your 10 MyForms will still be in memory, kept alive by the event handler.
Enter WeakReference. You can build a weak event handler using WeakReferences so that someHandler is a weak event handler to MyApplication.Foo, thus fixing your memory leaks!
This isn't just theory. Dustin Campbell from the DidItWith.NET blog posted an implementation of weak event handlers using System.WeakReference.
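To make the mechanics concrete, here is a minimal sketch of my own (not Dustin Campbell's implementation; all type and member names are illustrative): the event source holds the subscriber only through a WeakReference, and the handler detaches itself once the subscriber has been collected.
using System;

class WeakEventHandler<T> where T : class
{
    readonly WeakReference target;              // weak link to the subscriber
    readonly Action<T, EventArgs> forward;      // how to call the subscriber
    readonly Action<EventHandler> unsubscribe;  // how to detach from the event

    public WeakEventHandler(T subscriber, Action<T, EventArgs> forward,
                            Action<EventHandler> unsubscribe)
    {
        this.target = new WeakReference(subscriber);
        this.forward = forward;
        this.unsubscribe = unsubscribe;
    }

    public void Handle(object sender, EventArgs e)
    {
        T subscriber = (T)target.Target;
        if (subscriber != null)
            forward(subscriber, e);   // still alive: forward the event
        else if (unsubscribe != null)
            unsubscribe(Handle);      // collected: remove ourselves from the event
    }
}

// Usage sketch (OnFoo is a hypothetical method on MyForm):
//   MyApplication.Foo += new WeakEventHandler<MyForm>(
//       this, (f, e) => f.OnFoo(e), h => MyApplication.Foo -= h).Handle;
Only the small WeakEventHandler wrapper is kept alive by the event; the form itself can now be collected.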
A: I use weak references for state-keeping in mixins. Remember, mixins are static, so when you use a static object to attach state to a non-static one, you never know how long that state will be required. So instead of keeping a Dictionary<myobject, myvalue> I keep a Dictionary<WeakReference,myvalue> to prevent the mixin from dragging things along for too long.
The only problem is that every time I do an access, I also check for dead references and remove them. Not that they hurt anyone, unless there are thousands, of course.
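A rough sketch of that pattern, with invented names (the extension methods, the label state, and the pruning policy are all illustrative):
using System;
using System.Collections.Generic;
using System.Linq;

static class LabelMixin
{
    // The static dictionary holds only WeakReferences, so it never
    // keeps the extended objects alive on its own.
    static readonly Dictionary<WeakReference, string> labels =
        new Dictionary<WeakReference, string>();

    public static void SetLabel(this object self, string label)
    {
        lock (labels)
        {
            PruneDeadReferences();
            WeakReference key = FindKey(self) ?? new WeakReference(self);
            labels[key] = label;
        }
    }

    public static string GetLabel(this object self)
    {
        lock (labels)
        {
            PruneDeadReferences();
            WeakReference key = FindKey(self);
            return key != null ? labels[key] : null;
        }
    }

    // WeakReference uses reference equality, so we scan for the live
    // entry whose Target is the object in hand.
    static WeakReference FindKey(object target)
    {
        foreach (WeakReference wr in labels.Keys)
            if (ReferenceEquals(wr.Target, target)) return wr;
        return null;
    }

    static void PruneDeadReferences()
    {
        List<WeakReference> dead = labels.Keys.Where(wr => !wr.IsAlive).ToList();
        foreach (WeakReference wr in dead) labels.Remove(wr);
    }
}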
A: I use it to implement a cache where unused entries are automatically garbage collected:
class Cache<TKey,TValue> : IEnumerable<KeyValuePair<TKey,TValue>>
{ Dictionary<TKey,WeakReference> dict = new Dictionary<TKey,WeakReference>();
public TValue this[TKey key]
{ get {lock(dict){ return getInternal(key);}}
  set {lock(dict){ setInternal(key,value);}}
}
// Assumes TValue is a reference type: Target returns null once collected.
TValue getInternal(TKey key)
{ WeakReference wr;
  if (dict.TryGetValue(key, out wr)) return (TValue)wr.Target;
  return default(TValue);
}
void setInternal(TKey key, TValue val)
{ if (dict.ContainsKey(key)) dict[key].Target = val;
  else dict.Add(key,new WeakReference(val));
}
public void Clear() { dict.Clear(); }
/// <summary>Removes any dead weak references</summary>
/// <returns>The number of cleaned-up weak references</returns>
public int CleanUp()
{ List<TKey> toRemove = new List<TKey>(dict.Count);
foreach(KeyValuePair<TKey,WeakReference> kv in dict)
{ if (!kv.Value.IsAlive) toRemove.Add(kv.Key);
}
foreach (TKey k in toRemove) dict.Remove(k);
return toRemove.Count;
}
public bool Contains(TKey key)
{ lock (dict) { return containsInternal(key); }
}
bool containsInternal(TKey key)
{ return (dict.ContainsKey(key) && dict[key].IsAlive);
}
public bool Exists(Predicate<TValue> match)
{ if (match==null) throw new ArgumentNullException("match");
lock (dict)
{ foreach (WeakReference weakref in dict.Values)
{ if ( weakref.IsAlive
&& match((TValue) weakref.Target)) return true;
}
}
return false;
}
/* ... */
}
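A quick usage sketch (LoadAvatar is a hypothetical expensive loader); because values are held only weakly, an entry can come back null whenever the GC has collected it in the meantime:
var cache = new Cache<int, byte[]>();
cache[42] = LoadAvatar(42);             // LoadAvatar is made up for illustration
byte[] bytes = cache[42];               // may be null if the GC already ran
if (bytes == null)
    cache[42] = bytes = LoadAvatar(42); // reload on a cache miss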
A: There are two reasons why you would use WeakReference.
*
*Instead of global objects declared as static: global objects are declared as static fields, and static fields cannot be GC'ed (garbage-collected) until the AppDomain is, so you risk out-of-memory exceptions. Instead, we can wrap the global object in a WeakReference: even though the WeakReference itself is declared static, the object it points to will be GC'ed when memory is low.
Basically, use wrStaticObject instead of staticObject.
class ThingsWrapper {
//private static object staticObject = new object();
private static WeakReference wrStaticObject
= new WeakReference(new object());
}
Simple app to prove that static object is garbage-collected when AppDomain is.
class StaticGarbageTest
{
public static void Main1()
{
var s = new ThingsWrapper();
s = null;
GC.Collect();
GC.WaitForPendingFinalizers();
}
}
class ThingsWrapper
{
private static Thing staticThing = new Thing("staticThing");
private Thing privateThing = new Thing("privateThing");
~ThingsWrapper()
{ Console.WriteLine("~ThingsWrapper"); }
}
class Thing
{
protected string name;
public Thing(string name) {
this.name = name;
Console.WriteLine("Thing() " + name);
}
public override string ToString() { return name; }
~Thing() { Console.WriteLine("~Thing() " + name); }
}
Note from the output below that staticThing is GC'ed at the very end, even after ThingsWrapper is; that is, it is only collected when the AppDomain is.
Thing() staticThing
Thing() privateThing
~Thing() privateThing
~ThingsWrapper
~Thing() staticThing
Instead we can wrap Thing in a WeakReference. As wrStaticThing can be GC'ed, we'll need a lazy-loaded method which I've left out for brevity.
class WeakReferenceTest
{
public static void Main1()
{
var s = new WeakReferenceThing();
s = null;
GC.Collect();
GC.WaitForPendingFinalizers();
if (WeakReferenceThing.wrStaticThing.IsAlive)
Console.WriteLine("WeakReference: {0}",
(Thing)WeakReferenceThing.wrStaticThing.Target);
else
Console.WriteLine("WeakReference is dead.");
}
}
class WeakReferenceThing
{
public static WeakReference wrStaticThing;
static WeakReferenceThing()
{ wrStaticThing = new WeakReference(new Thing("wrStaticThing")); }
~WeakReferenceThing()
{ Console.WriteLine("~WeakReferenceThing"); }
//lazy-loaded method to new Thing
}
Note from the output below that wrStaticThing is GC'ed as soon as the garbage collector runs.
Thing() wrStaticThing
~Thing() wrStaticThing
~WeakReferenceThing
WeakReference is dead.
*For objects that are time-consuming to initialize: you do not want objects that are time-consuming to initialize to be GC'ed. You can either keep a static reference to avoid that (with the cons from the point above) or use a WeakReference and lazily re-create the object whenever it has been collected; a sketch follows below.
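A hedged sketch of that lazy re-creation (ExpensiveThing and its construction cost are hypothetical):
using System;

class ExpensiveCache
{
    static WeakReference wrThing = new WeakReference(null);

    public static ExpensiveThing GetThing()
    {
        ExpensiveThing thing = (ExpensiveThing)wrThing.Target;
        if (thing == null)                 // collected, or never created yet
        {
            thing = new ExpensiveThing();  // pay the heavy init cost again
            wrThing.Target = thing;
        }
        return thing;                      // caller holds a strong ref while using it
    }
}

class ExpensiveThing
{
    public ExpensiveThing() { /* imagine slow, heavy initialization here */ }
}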
| {
"language": "en",
"url": "https://stackoverflow.com/questions/15593",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "39"
} |
Q: Best way to structure a repository in Subversion for Visual Studio projects? I have a few C# .dll projects which are common to many applications. Currently, I have one big repository. I have each DLL stored as a separate project within the repository and every application project stored as a project within the same repository.
I recently switched to Subversion for source control and I fear that I did not do a good job of structuring the repository. I would like to hear what others are doing.
A: Subversion repositories are typically subdivided into:
branch/
tags/
trunk/
You would either place all of your DLL and application projects into the trunk and then use branch and tags for all of them as necessary:
branch/
tags/
trunk/
project1/
project2/
Alternatively, you could create folders for each project in the root and then place the common branch, tags and trunk folders within them.
project1/
branch/
tags/
trunk/
project2/
branch/
tags/
trunk/
Note that this practice is simply convention and nothing in SVN requires (or really promotes) doing it exactly this way. However, everyone is used to it. So, you would be doing people a favor to go along.
To elaborate further, the trunk is where your main development will take place. When you want to mark a particular revision (e.g. a release version), then simply svn copy the project into the tags directory. Also, just copy code into the branch directory when you want to do something dramatic or prolonged and don't want to hinder progress in the trunk. Later you can svn merge your branch back into the trunk when it is ready for action!
If you want to correct mishaps in your current Subversion repository, then just use svn move to relocate them. Unlike the delete-and-add process of CVS, move will retain version history at the new location.
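For concreteness, here is roughly what those operations look like from the command line; the repository URL and paths below are made up, so adjust them to your own layout:
# Tag a release: a cheap server-side copy that preserves history.
svn copy http://svn.example.com/repo/trunk \
         http://svn.example.com/repo/tags/release-1.0 \
         -m "Tag the 1.0 release"

# Merge a finished branch back into a trunk working copy.
svn merge http://svn.example.com/repo/branch/big-refactor .

# Fix a misplaced project without losing its history.
svn move http://svn.example.com/repo/project1 \
         http://svn.example.com/repo/trunk/project1 \
         -m "Move project1 under trunk"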
A: Using the branch/trunk/tags repository structure is pretty standard, but if I'm understanding you properly, your issue is that you have a set of common DLL projects that get used across multiple applications. That can definitely become tricky to manage.
So the typical scenario here is that you have some class library called Common.Helpers that has code that is common to all your applications.
Let's say I'm starting a new application called StackOverflow.Web that needs to reference Common.Helpers.
Usually what you would do is create a new solution file and add a new project called Stackoverflow.Web and add the existing Common.Helpers project and then reference it from the new Stackoverflow.Web project.
What I usually try to do is create a repository for the Common.Helpers project and then reference it in Subversion as an external (see the sketch below). That way you can keep the code under source control in a single location, but still use it separately in multiple projects.
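A minimal sketch of setting that up, assuming a hypothetical repository URL and the pre-1.5 externals format:
# From the root of the Stackoverflow.Web working copy:
svn propset svn:externals \
    "Common.Helpers http://svn.example.com/common/trunk/Common.Helpers" .
svn commit -m "Reference Common.Helpers as an external"
svn update    # checks the external out into ./Common.Helpers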
A: If your subprojects can be released at different versions (like controls, web parts, etc.) then it may make sense to build your structure like this:
Solution/
    Project1/
        branch/
        tags/
        trunk/
    Project2/
        branch/
        tags/
        trunk/
This way you can manage each project release independently.
Otherwise the most common structure is:
branch/
tags/
trunk/
docs/ (optional)
A: I store everything in the repository to make it easy for developers (or rebuilt dev boxes) to check out from SVN and then run a build (with all necessary assemblies in relative paths). If you have multiple projects that should be separate, this also encourages the team behind your shared components to deliver high-quality assemblies. It can follow a normal release-to-production mentality, where the shared assemblies are updated in your downstream projects. This is a very natural software value chain, at the cost of a little disk space.
JP Boodhoo has a great series on the topic of automated builds, VS folder structure, and getting developers up and running quickly.
A: Thanks to everyone who answered. lomaxx, I spent the morning looking into using the external feature and it looks like this is the way to go. I was not aware of it, probably because it is not exactly prominent in Tortoise.
A: If you want to use the merge tracking of Subversion 1.5 across more than one project at the same time, you should use a single tree without externals.
A tracked merge is (just like a commit) always over a directory and its children.
The same rule applies to atomic commits. (They only work reliably within a single working copy; it might work in some other specific cases, but that behavior is not guaranteed.)
| {
"language": "en",
"url": "https://stackoverflow.com/questions/15621",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12"
} |