Q: SSRS - Uninstall Trial Version of VS Business Intelligence I want to know how to fully uninstall MSSQL 2005.
I've been using the trial version of SQL Server Reporting Services for a while now. My company finally purchased the software from an online distributor, and for Oracle support we needed to upgrade to MSSQL 2005 SP2. Anyway, the "full" version of the software would not install, as it was already installed (it seems the installer doesn't recognize that what was installed was the trial version). So I tried uninstalling MSSQL 2005 and everything related (including Visual Studio), but now I cannot seem to get it reinstalled. The error message is vague, and when I click the link to get more information, I get the usual "no information about this error was found" error.
Microsoft SQL Server 2005 Setup
There was an unexpected failure during
the setup wizard. You may review the
setup logs and/or click the help
button for more information.
For help, click:
http://go.microsoft.com/fwlink?LinkID=20476&ProdName=Microsoft+SQL+Server&ProdVer=9.00.1399.06&EvtSrc=setup.rll&EvtID=50000&EvtType=packageengine%5cinstallpackageaction.cpp%40InstallToolsAction.11%40sqls%3a%3aInstallPackageAction%3a%3aperform%400x643
BUTTONS:
OK
A: @Mark Struzinski
I actually discovered that it was a problem with the installer when installing the "full version". Since the product was downloaded instead of delivered on CD/DVD, the installer was looking for information in a path that was not correct. There was an MS Knowledge Base article on the topic. Thanks for your reply, though.
A: I had the exact same problem, and this article helped me clean up all the related files from my system and do a fresh install of both Visual Studio and the SQL client components. Give it a try and let me know if it helps you out:
http://support.citrix.com/article/CTX115270
| {
"language": "en",
"url": "https://stackoverflow.com/questions/15632",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "-1"
} |
Q: Using Visual Studio to develop for C++ for Unix Does anyone have battle stories to share trying to use Visual Studio to develop applications for Unix? And I'm not talking using .NET with a Mono or Wine virtual platform running beneath.
Our company has about 20 developers all running Windows XP/Vista and developing primarily for Linux & Solaris. Until recently we all logged into a main Linux server and modified/built code the good old fashioned way: Emacs, Vi, dtpad - take your pick. Then someone said, "hey - we're living in the Dark Ages, we should be using an IDE".
So we tried out a few and decided that Visual Studio was the only one that would meet our performance needs (yes, I'm sure that IDE X is a very nice IDE, but we chose VS).
The problem is, how do you set up your environment to have the files available locally to VS, but also available to a build server? We settled on writing a Visual Studio plugin - it writes our files locally and to the build server whenever we hit "Save", and we have a big fat "sync" button that we can push when our files change on the server side (for when we update to the latest files from our source control server).
The plugin also uses Visual Studio's external build system feature that ultimately just ssh's into the build server and calls our local "make" utility (which is Boost Build v2 - has great dependency checking, but is really slow to start as a result i.e. 30-60 seconds to begin). The results are piped back into Visual Studio so the developer can click on the error and be taken to the appropriate line of code (quite slick actually). The build server uses GCC and cross-compiles all of our Solaris builds.
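For anyone wiring up something similar, here is a minimal sketch of what such an external build command can look like. The host name, paths, and the choice of plink/sed are my assumptions for illustration, not necessarily what we used:

@rem build.cmd - hypothetical wrapper invoked as the VS external build command.
@rem Runs make on the build server over ssh, then rewrites gcc's
@rem "file:line: error" diagnostics into the "file(line): error" form that
@rem Visual Studio parses, so double-clicking an error jumps to the line.
plink -batch builduser@buildserver "cd /home/builduser/project && make 2>&1" | sed "s/^\([^:(]*\):\([0-9][0-9]*\):/\1(\2):/"

(sed here would come from Cygwin or a similar toolkit on the Windows side.)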
But even after we've done all this, I can't help but sigh whenever I start to write code in Visual Studio. I click a file, start typing, and VS chugs to catch up with me.
Is there anything more annoying than having to stop and wait for your tools? Are the benefits worth the frustration?
Thoughts, stories, help?
A: VS chugs to catch up with me.
Hmmm ... your machine needs more memory & grunt. I've never had performance problems with mine.
I've about a decade's experience doing exactly what you're proposing, most of it in the finance industry, developing real-time systems for customers in the banking, stock exchanges, stock brokerage niches.
Before proceeding further, I need to confess that all this was done in VS6 + CVS, and of late, SVN.
Source Code Versioning
Developers have separate SourceSafe repositories so that they can store their work and check in packages of work at logical milestones. When they feel they want to do an integration test, we run a script that checks the work into SVN.
Once checked into SVN, we've a process that kicks off that will automatically generate relevant makefiles to compile them on the target machines for continuous integration.
We've another set of scripts that syncs new stuff from SVN to the folders that VS looks after. There's a bit of a gap because VS can't automatically pick up new files; we usually handle that manually. This only happens regularly in the first few days of the project.
That's an overview of how we maintain code. I have to say I've probably glossed over some details (let me know if you're interested).
Coding
From the coding aspect, we rely heavily on the pre-processor (i.e. #define, etc.) and flags in the makefile to shape the compilation process. For cross-platform portability, we use GCC. A few times we were forced to use aCC on HP-UX and some other compilers, but we did not have much grief. The only thing that is a constant pain is that we had to watch out for thread heap spaces across platforms. The compiler does not spare us from that.
Why?
The question is usually, "Why the h*ll would you even want to have such a complicated way of developing?". Our answer is usually another question: "Have you any clue how insane it is to debug a multi-threaded application by examining the core dump or using gdb?". Basically, the fact that we can trace/step through each line of code when debugging an obscure bug makes it all worth the effort!
Plus!... VS's IntelliSense feature makes it so easy to find the methods/attributes belonging to classes. I also heard that VS2008 has refactoring capabilities. I've shifted my focus to Java on Eclipse, which has both features. You'd be more productive focusing on coding business logic rather than devoting energy to making your mind do stuff like remember!
Also! ... We'd end up with a product that can run on both Windows and Linux!
Good luck!
A: I feel your pain. We have an application which is 'cross-platform'. A typical client/server application where the client needs to be able to run on windows and linux. Since our client base mostly uses windows we work using VS2008 (the debugger makes life a lot easier) - however we still need to perform linux builds.
The major problem with this was that we were checking in code that we didn't know would build under gcc, which would more than likely break the CI stuff we had set up. So we installed MinGW on all our developers' machines, which allows us to test that a working copy will build under gcc before we commit it back to the repository.
A: We develop for Mac and PC. We just work locally in whatever ide we prefer, mostly VS but also xcode. When we feel our changes are stable enough for the build servers we check them in. The two build servers (Mac and PC) look for source control checkins, and each does a build. Build errors are emailed back to the team.
Editing files live on the build server sounds precarious to me. What happens if you request a build while another developer has edits that won't build?
A: I know this doesn't really answer your question, but you might want to consider setting up remote X sessions, and just run something like KDevelop, which, by the way, is a very nice IDE--or even Eclipse, which is more mainstream, and has a broader developer base. You could probably just use something like Xming as the X server on your Windows machines.
A: Wow, that sounds like a really strange use for Visual Studio. I'm very happy chugging away in vim. However, the one thing I love about Visual Studio is the debugger. It sounds like you are not even using it.
When I opened the question I thought you must be referring to developing portable applications in Visual Studio and then migrating them to Solaris. I have done this and had pleasant experiences with it.
A: Network shares.
Of course, then you have killer lag on the network, but at least there's only one copy of your files.
You don't want to hear what I did when I was developing on both platforms. But you're going to: drag-n-drop copy a few times a day. Local build and run, and periodically checking it out on Unix to make sure gcc was happy and that the unit tests were happy on that platform too. Not really a rapid turnaround cycle there.
A: @monjardin
The main reason we use it is because of the re-factoring/search tools provided through Visual Assist X (by Whole Tomato). Although there are a number of other nice to haves like Intelli-sense. We are also investigating integrations with our other tools AccuRev, Bugzilla and Totalview to complete the environment.
@roo
Using multiple compilers sounds like a pain. We have the luxury of just sticking with gcc for all our platforms.
@josh
Yikes! That sounds like a great way to introduce errors! :-)
A: I've had good experience developing Playstation2 code in Visual Studio
using gcc in cygwin. If you've got cygwin with gcc and glibc, it
should be nearly identical to your target environments. The fact that you
have to be portable across Solaris and Linux hints that cygwin should
work just fine.
A: Most of my programming experience is in Windows and I'm a big fan of Visual Studio (especially with ReSharper, if you happen to be doing C# coding). These days I've been writing an application for Linux in C++. After trying all the IDEs (NetBeans, KDevelop, Eclipse CDT, etc.), I found NetBeans to be the least crappy. For me, the absolute minimum requirements are that I be able to single-step through code and that I have intellisense, ideally with some refactoring functions as well. It's amazing to me how today's Linux IDEs are not even close to what Visual Studio 6 was over ten years ago. The biggest pain point right now is how slow and poorly implemented the intellisense in NetBeans is. It takes 2-3 seconds to populate on a fast machine with 8GB of RAM. Eclipse CDT's intellisense was even more laggy. I'm sorry, but a 2-second wait for intellisense doesn't cut it.
So now I'm looking into using VS from Windows, even though my only build target is linux...
Chris, you might want to look at the free build automation server 'CruiseControl', which integrates with all the main source control systems (svn, tfs, sourcesafe, etc.). Its whole purpose is to react to check-ins in a source control system. In general, you configure it so that any time anyone checks code in, a build is initiated and (ideally) unit tests are run. For some languages there are some great plugins that do code analysis, measure unit test code coverage, etc. Notifications are sent back to the team about successful / broken builds.
Here's a post describing how it can be set up for C++: link (thoughtworks.org).
I'm just getting started with converting from a Linux-only simple config (NetBeans + SVN, with no build automation) to using Windows VS 2008 with a build-automation back-end that runs unit tests in addition to doing builds in Linux. I shudder at the amount of time it's going to take me to get that all configured, but the sooner the better, I guess.
In my ideal end state I'll be able to auto-generate the NetBeans project file from the VS project, so that when I need to debug something in Linux I can do so from that IDE. VS project files are XML-based, so that shouldn't be too hard.
If anyone has any pointers for any of this, I'd really appreciate it.
Thanks,
Christophe
A: You could have developers work in private branches (easier if you're using a DVCS). Then, when you want to checkin some code, you check it into your private branch on [windows|unix], update your sandbox on [unix|windows] and build/test before committing back to the main branch.
A: We are using a similar solution to what you described.
We have our code stored on the Windows side of the world, and UNIX (QNX 4.25 to be exact) has access through an NFS mount (thanks to UNIX Services for Windows). We ssh into UNIX to run make and pipe the output into VS. Accessing the code is fast; builds are a little slower than before, but our longest compile is currently less than two minutes, not a big deal.
Using VS for UNIX development has been worth the effort to set it up, because we now have IntelliSense. Less typing = happy developer.
A: Check out "Final Builder" (http://www.finalbuilder.com/). Select a version control system (e.g. cvs or svn, to be honest, cvs would suit this particular use case better by the sounds of it) and then set up build triggers on FinalBuilder so that checkins cause a compile and send the results back to you.
You can set up rulesets in FinalBuilder that prevent you checking in / merging broken code into the baseline or certain branch folders but allow it to others (we don't allow broken commits to /baseline or /branches/*, but we have a /wip/ branching folder for devs who need to share potentially broken code or just want to be able to commit at the end of the day).
You can distribute FB over multiple "build servers" so that you don't wind up with 20 people trying to build on one box, or waiting for one box to process all the little bitty commits.
Our project has a Linux-based server with Mac and Win clients, all sharing a common codebase. This set up works ridiculously well for us.
A: I'm doing the exact same thing at work. The setup I use is VS for Windows development, with a Linux VM running under VirtualBox for local build / execute verification. VirtualBox has a facility where you can make a folder on the host OS (Windows 7 in my case) available as a mountable filesystem in the guest (Ubuntu LTS 12.04 in my case). That way, after I start a VS build, and it's saved the files, I can immediately start a make under Linux to verify it builds and runs OK there.
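For reference, the shared-folder wiring is roughly this (the VM name and paths below are made up for illustration):

rem On the Windows host: expose C:\dev\project to the VM as "project"
VBoxManage sharedfolder add "UbuntuVM" --name project --hostpath C:\dev\project

# In the Ubuntu guest (with Guest Additions installed): mount it
sudo mount -t vboxsf project /home/me/project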
We use SVN for source control; the final target is a Linux machine (it's a game server), so it uses the same makefile as my local Linux build. That way, if I add a file to the project or change a compiler option (usually adding / changing a -D), I make the modifications initially under VS, and then immediately change the Linux makefile to reflect the same changes. Then when I commit, the build system (Bamboo) picks up the change and does its own verification build.
Hard earned experience says this is an order of magnitude easier if you build like this from day one.
The first project I worked on started as Windows only, I was hired to port it to Linux, since they wanted to switch to a homogenous server environment, and everything else was Linux. Retrofitting a Windows project into this sort of a setup was a fairly major expenditure of effort.
Project number two was done "two-system build" right from day one. We wanted to maintain the ability to use VS for development / debugging since it is a very polished IDE, but we also had the requirement of final deployment to Linux servers. As I alluded to above, when the project was built with this in mind right from the start, it was quite painless. The worst part was a single file, system_os.cpp, that contained OS-specific routines, things like "get current time since Linux epoch start in milliseconds", etc.
At the risk of getting a little off topic, we also created unit tests for this, and having the unit tests for the OS specific pieces provided a great deal of confidence that they worked the same on both platforms.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/15635",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "17"
} |
Q: Can you really build a fast word processor with GoF Design Patterns? The Gang of Four's Design Patterns uses a word processor as an example for at least a few of their patterns, particularly Composite and Flyweight.
Other than by using C or C++, could you really use those patterns and the object-oriented overhead they entail to write a high-performing fully featured word processor?
I know that Eclipse is written in Java but I haven't used it much so I don't know if it's all that fast or as polished as something like Visual Studio, which has a C++ based text editing system.
I only used C++ and Java as examples. The question has more to do with the overhead of having a lot of in-memory objects like you would in an application such as a word processor or even a game.
Design patterns promote abstraction at the expense of parsimony even though they usually point out when you might take some kind of performance hit. Word processors and especially games get the most benefit from being as close to the metal as possible.
I was just wondering if anyone knew of a fast object-oriented word processor or text editor that wasn't written in C++, and whether they'd build one using patterns or forgo a lot of the abstracting away of things.
A: Flyweight really is just a way of conserving resources in situations where there are thousands of objects with intrinsic shared state, so it could be useful in higher level languages than C/C++. Maybe the GoF's example using glyphs in a document was not the best choice to illustrate this pattern.
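For what it's worth, here is a minimal sketch of the glyph flyweight in Java (illustrative only; the class and method names are mine, not GoF's). The per-character object holds the intrinsic, shared state, and extrinsic state such as position is passed in at draw time:

import java.util.HashMap;
import java.util.Map;

final class Glyph {
    private final char ch; // intrinsic (shared) state
    Glyph(char ch) { this.ch = ch; }
    // Extrinsic state (position) is supplied by the caller on every use.
    void draw(int x, int y) {
        System.out.printf("draw '%c' at (%d,%d)%n", ch, x, y);
    }
}

final class GlyphFactory {
    // One Glyph instance per distinct character, however long the document is.
    private static final Map<Character, Glyph> CACHE = new HashMap<>();
    static Glyph get(char ch) {
        return CACHE.computeIfAbsent(ch, Glyph::new);
    }
}

A million-character document then shares at most a few hundred Glyph objects, which is the whole point of the pattern.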
I think there's a lot more to building a high performance word processor than just these basic patterns though - not sure whether there is anything in GoF that rules out being able to successfully do this.
Generally, Visual Studio (VS) is more advanced and performs significantly better than Eclipse - at least, the versions of VS that I have seen. Eclipse is one of the most impressive Java applications out there though, it runs quite well on more recent machines with lots of RAM.
A: Well, flyweight is a ridiculous pattern to use in a word processor. IIRC, they had each character being referenced as an object [note: it was for each glyph, which is still crazy because your OS will happily draw that for you]. With a pointer being wider than a character, and all the processing associated with indirection, you'd be mad to use that particular pattern that way in a word processor.
If you're interested in the design of word processors, I found an article that doesn't address patterns but does look at some of the data structures underlying word processor design and design considerations.
Try to remember that design patterns are there to make your life easier, not for you to be pure. There has to be a reason to use a pattern, it has to offer some benefit.
A: The point of GoF and patterns in general is to talk about how to do things "right" as in correct, not necessarily "right" as in right for the circumstances. Where performance is an issue, and you find that no named pattern gives adequate performance, then perhaps you can justify going your own way. But a good knowledge of patterns gives you a "sensible default" and will probably mean that you sacrifice clarity / SoC / etc only so much as is necessary to give adequate performance.
Feeling that you are "deviating" from the norm encourages you to a) think twice, and b) comment the non-idiomatic code well.
Patterns are vital knowledge, but nothing is gospel and you must always apply judgement.
Having said all that - I can't think of any reason why you couldn't write a decent text editor using patterns and a modern JDK
A: This question actually seems to be about Java vs. C++ performance, and that's not the object orientation so much as running on a virtual machine with garbage collection and such.
This whitepaper on Java vs. C++ performance might be worth a read.
A:
One of the things you have to remember was that the GoF book was written in the early 90s, when the prevalent OSes did not have extensive graphic libraries. Even Windows was not yet an OS at that time.
IIRC GoF was released in 1994. Even in 1994 Windows 95 Beta was available (and running on my 486DX33) and Windows 3.x had been around since roughly 1990.
A: Eclipse, NetBeans, and IntelliJ are all written pretty much entirely in Java or something that runs on the JVM (not C++). In at least two of those IDEs I have spent some time with the editor code, so I can assure you it's all Java (and it's not easy, either).
VS 2005 was my last experience of visual studio, and even then I thought eclipse was much more responsive (intelliJ doubly so given time to warm up and index).
Not sure how that's relevant, but that's my experience. I am surprised Visual Studio is still written in C++ today - I would think that it would be in Microsoft's interest to use C#; if nothing else it would really push its performance hard. Nothing like eating your own dog food!
A: Yes, current machines are fast enough and have enough memory that that is possible. If you take a look at Squeak, you see a Smalltalk IDE written in Smalltalk, significantly slower than Java, but still fast enough. HD video editing on the other hand is something that currently has a need for some lower-level support.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/15649",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
} |
Q: SSRS - Process dies/goes to sleep after not being used Another SSRS question here:
We have a development, a QA, a Prod-Backup and a Production SSRS set of servers.
On our production and prod-backup, SSRS will go to sleep if not used for a period of time.
This does not occur on our development or QA server.
In the corporate environment we're in, we don't have physical (or even remote login) access to these machines, and have to work with a team of remote administrators to configure our SSRS application.
We have asked that they fix, if possible, this issue. So far, they haven't been able to identify the issue, and I would like to know if any of my peers know the answer to this question. Thanks.
A: For anybody using the integrated webserver that is built into SQL Reporting Services (and hence IIS may not even be installed on the box), the setting to control this actually lives in:
C:\Program Files\Microsoft SQL Server\
MSRS10_50.MSSQLSERVER\Reporting Services\ReportServer\rsreportserver.config
Your directory may be different; version 10_50 maps to SQL 2008 R2.
You'll be looking for the setting called RecycleTime.
Default is 720 (12 hours). Setting it to 0 will disable.
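For illustration, the element sits under the Service section of that file; a sketch of the relevant fragment (not the whole file) looks like:

<Service>
    <!-- ... other settings omitted ... -->
    <RecycleTime>720</RecycleTime>
</Service>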
A: In IIS, check the settings on the application pool that SSRS is running in. On the properties pane->Performance tab you can set the amount of time the worker process needs to be idle for before it shuts down. You can also disable this entirely.
A: I vaguely recall having problems with SSRS on one machine when we changed the "Enable HTTP Keep-Alives" setting in IIS. Try toggling that checkbox (I don't remember whether it was checked or unchecked when it caused us problems).
| {
"language": "en",
"url": "https://stackoverflow.com/questions/15656",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9"
} |
Q: Subversion revision number across multiple projects When using Subversion (svn) for source control with multiple projects I've noticed that the revision number increases across all of my projects' directories. To illustrate my svn layout (using fictitious project names):
/NinjaProg/branches
          /tags
          /trunk
/StealthApp/branches
           /tags
           /trunk
/SnailApp/branches
         /tags
         /trunk
When I perform a commit to the trunk of the Ninja Program, let's say I get that it has been updated to revision 7. The next day let's say that I make a small change to the Stealth Application and it comes back as revision 8.
The question is this: Is it commonly accepted practice, when maintaining multiple projects with one Subversion server, to have unrelated projects' revision numbers increase across all projects? Or am I doing it wrong and should be creating individual repositories for each project? Or is it something else entirely?
EDIT: I delayed in flagging an answer because it had become clear that there are reasons for both approaches, and even though this question came first, I'd like to point to some other questions that are ultimately asking the same question:
Should I store all projects in one repository or multiple?
One SVN Repository or many?
A: I think it is highly recommended that you create separate repositories for each project. If for nothing else than to avoid the scenario you are talking about.
With version control, especially Subversion, you can easily check out pieces of a repository into another working copy and then commit them back to their respective repositories. That allows you to keep them clearly separate and distinct while giving you a great deal of flexibility. Once you get into SVN a little more (I'm assuming you are new), you can start using hooks, and I can see where that could get difficult with your setup. If permissions are important to you, a single repository might prove more difficult than necessary.
Also, if you are concerned that it will take a lot of time to set up each repository, look into the SVNParentPath variable for the Apache configuration file. (Again, I'm assuming you are using Apache.)
A: This is due to how Subversion works. Each revision is really a snapshot of the repository, identified by that revision number. If all your projects share a repository, then it is unavoidable. Typically, in my experience, you would set up separate repositories for completely unrelated projects. So the short answer is: no, you are doing nothing wrong. It is a common question surrounding Subversion, but it makes sense when you think about how it stores repository information.
A: The revision number should really only be an identifier for a particular version. Whether it's sequential for a project or not shouldn't matter. That being said, I can understand that it's less than ideal.
Most projects I've encountered have been setup in a single repository and the revision ids behave in this way. I don't know any SVN configuration option to change this behavior, and IMHO, maintaining multiple repositories seems like an unnecessary overhead.
A: We just have one repository with everything in it, pretty much exactly like your example.
I can't see anything wrong with this - the only requirement for the revision number is that it is
* Unique
* Atomic
* Bigger than it was at the last checkin
It doesn't matter if it increases by 1 or 50 with each commit as far as I'm concerned.
@grom:
Then whenever I start a new project I just run:
svnadmin create /var/www/svn/myproject
I can see this working fine if you've only got 1 or 2 devs, but what happens if the people who are creating new projects don't have shell access on the SVN server to be able to create directories under /var/www ?
A: I recommend using a separate repository per project. In my Apache conf.d directory I have a subversion.conf that contains:
<Location /svn>
DAV svn
SVNParentPath /var/www/svn
AuthType Basic
AuthName "Subversion Repository"
AuthUserFile /var/www/svn/password
Require valid-user
</Location>
Then whenever I start a new project I just run:
svnadmin create /var/www/svn/myproject
A: Hm, where I work we have all our projects in the same repository. I really don't see the benefit of separating them; doesn't that just create a lot of extra work - creating new repositories, granting access to people, etc.? I guess separate repositories make sense if the projects are completely unrelated and you have, say, external customers that need to have access to the repo.
A: At my workplace, we have two repositories. One with public read access, and one for everything else. I'd use just one for everything, but we need different access rights for public/private projects.
That said, I personally don't see the problem with the revision numbers incrementing on every update. The revision numbers could skip prime and even numbers and still do what they're supposed to do: make it easy to get to a specific revision.
A: If having the revision numbers change based on other projects bothers you, then put the projects in separate repositories. That is the only way to make the revision numbers independent.
To me, the big reason to use different repositories is to provide separate access control for users and/or using different hook scripts.
A: Maybe it's best not to necessarily make one repo per "project", but rather one repo per "solution" (to use Visual Studio terms). If you have a bunch of "projects" in different folders but they're related to each other, then put them in the same repo.
A: I am surprised no one has mentioned that this is discussed in Version Control with Subversion, which is available free online, here.
I read up on the issue a while back and it really seems like a matter of personal choice; there is a good blog post on the subject here. EDIT: Since the blog appears to be down (archived version here), here is some of what Mark Phippard had to say on the subject.
These are some of the advantages of the single repository approach.
* Simplified administration. One set of hooks to deploy. One repository to backup. etc.
* Branch/tag flexibility. With the code all in one repository it makes it easier to create a branch or tag involving multiple projects.
* Move code easily. Perhaps you want to take a section of code from one project and use it in another, or turn it into a library for several projects. It is easy to move the code within the same repository and retain the history of the code in the process.
Here are some of the drawbacks to the single repository approach, advantages to the multiple repository approach.
* Size. It might be easier to deal with many smaller repositories than one large one. For example, if you retire a project you can just archive the repository to media and remove it from the disk and free up the storage. Maybe you need to dump/load a repository for some reason, such as to take advantage of a new Subversion feature. This is easier to do, and with less impact, if it is a smaller repository. Even if you eventually want to do it to all of your repositories, it will have less impact to do them one at a time, assuming there is not a pressing need to do them all at once.
* Global revision number. Even though this should not be an issue, some people perceive it to be one and do not like to see the revision number advance on the repository and for inactive projects to have large gaps in their revision history.
* Access control. While Subversion's authz mechanism allows you to restrict access as needed to parts of the repository, it is still easier to do this at the repository level. If you have a project that only a select few individuals should access, this is easier to do with a single repository for that project. (A minimal sketch of such an authz rule follows this list.)
* Administrative flexibility. If you have multiple repositories, then it is easier to implement different hook scripts based on the needs of the repository/projects. If you want uniform hook scripts, then a single repository might be better, but if each project wants its own commit email style then it is easier to have those projects in separate repositories.
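To make the access-control point concrete, a minimal authz fragment restricting one project inside a shared repository might look like this (group, user, and path names are hypothetical):

[groups]
secretteam = alice, bob

[/]
* = r

[/secret-project]
* =
@secretteam = rw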
When you really think about it, the revision numbers in a multiple-project repository are going to get high, but you are not going to run out. Keep in mind that you can view a history on a subdirectory and quickly see all the revision numbers that pertain to a project.
A: I store one project per repository, and like a previous commenter on this subversion question, I mark shared projects as external, so that they are only in source control once.
I'm just starting to add a CI build server (CruiseControl.NET), so I'll have to see how that all works out, but if my build scripts are right it should not be a problem.
Other than appearance though, it is really a matter of preference (in my opinion).
A:
When you really think about it, the revision numbers in a multiple-project repository are going to get high, but you are not going to run out. Keep in mind that you can view a history on a subdirectory and quickly see all the revision numbers that pertain to a project.
Actually, if you're building Microsoft code and you use the svn revision number as part of your version string, then you could run out. The Microsoft compiler will throw an error if any part of the version string is greater than 65535... In our case we have a massive repository at revision 68876, and we just hit this wall.
A: One repository per project.
Steven Murawski's comment about CC.NET is an interesting one. I would be interested to hear how it works if you need to specify several source control repositories.
A: @Daniel Fone: The SVN docs recommend one project per repository, so that is definitely the way the creators intended it to go. As you can have one server (apache or svnserve) maintain multiple repositories, I've never run into a problem of too much overhead. With VisualSVN Server, installing an apache server and configuring multiple repositories is a snap.
A: I'm not sure the SVN docs actually recommend one project per repository. Mostly they talk about the upsides and downsides of each path. I happen to use three different repositories: one for 7 or 8 projects that are all related, making it very nice to be able to send out compatible copies of all the projects just by building from one revision (or to verify they're compatible by looking at the revision numbers on each). The second repository has another group of related projects and documents, while the third is a much smaller one. That lets us take advantage of the fact that the related projects can be managed by a single revision number, while unrelated projects don't affect their repository.
A: The revision numbers have no semantic use. The only guarantee is that they are in sequential order. If you dump your project and import it into another repository, your versions can get new revision numbers. So NEVER use the revision numbers to mark your releases or similar things. Make tags for releases (copies of the relevant revision).
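For example, a tag is just a cheap server-side copy (the URLs here are hypothetical):

svn copy http://svn.example.com/repos/myproject/trunk \
         http://svn.example.com/repos/myproject/tags/1.0 \
         -m "Tag release 1.0"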
A: Had the same problem at my previous company. They used to have about 50 projects running in one repository, and it was a nightmare to work on the same projects, because when doing svn updates others would curse... lol...
One thing I have learned that always works out best: one project, one repo... you will never regret it.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/15674",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "24"
} |
Q: Can I configure VisualStudio 2008 to always build the startup project? I have a solution with several projects, where the startup project has a post-build event that does all the copying of "plugin" projects and other organizing tasks. After upgrading the solution from VS 2005 to VS 2008, it appears as though the post-build event only fires if I modify the startup project, which means my updated plugins don't get plugged in to the current debugging session. This makes sense, but it seems like a change in behavior. Is anyone else noticing a change in behavior with regard to which projects get built?
Does anyone know of a workaround that I can use to force the startup project to rebuild whenever I hit F5? Perhaps I configured VS 2005 to work this way so long ago that I've forgotten all about it ...
A: I think you need to reorganize the responsibilities. Each component should be responsible for itself and therefore copy its generated goodness where it needs to go. That way it doesn't matter if/who/what/when/where got built. Whatever is updated will put itself into the proper place.
IMO the other suggestions are no-nos since they'll circumvent the compiler's smarts to know when a rebuild is necessary for the main project. And hence killing any compile time-savings. If your "plugin" projects are assemblies (and not just project-references from the main project), then you do not need to rebuild the main project each time a plugin is rebuilt. The new assembly will get selected into the process / debugger w/o the main project needing a rebuild.
A: Why not just add a dependency to the "startup" project for each of the plugins? This will force the project to be rebuilt if any of the others change, and you won't have to mess with any other pre/post-build events.
A: I don't know if this is the right way to do it, but you could add a pre-build event to your startup project (if it's static) to clean the project, which will force a rebuild.
something like:
devenv project.csproj /clean
A: This is a pain. What we really need is for Microsoft to allow us to hook into a Post-Solution Build event. You can do this via macros but that's too complicated.
I'm assuming this is a C++ project because I don't have this problem with C#.
This is my solution, it's not elegant but it works:
* Create a new project whose only purpose is to run the post-build script. Mark it as dependent on every other project in the solution.
* Add a dummy file to that project called dummy.h or whatever.
* Right click on dummy.h in Solution Explorer and select Properties.
* Select 'Custom Build Step'.
* For the command line type 'echo' and for Outputs just type 'dummy' or something else that will never exist.
This project, and therefore the post-build script, will now be run on every build.
John.
A: flipdoubt: they are projects created originally in 2008. My suggestion, if it's not working in C#, is to look in the Build Events tab and check the setting of the "Run the post-build event:" drop-down. If it is set to 'When the build updates the project output', this might be your problem; try setting it to 'On successful build'.
John.
A: I'm having the same issue here and it is VERY annoying. John Richardson is right in that there should be a Post-Solution Build event (and a Pre-Solution Build event) that applies whenever ANY project in the solution is being built.
I don't think there is any good workaround to get this outcome in the current VS 2008 IDE.
A: Starting from @lomaxx suggestion, I got a very similar setup working by adding the following line at the end of the post-build event of the startup project:
"$(DevEnvDir)devenv.exe" "$(ProjectPath)" /clean
Note that this makes the startup project build the next time you need to debug, so you should make sure the project gets built at least once.
PS. I initially tried the pre-build as suggested, but that didn't work (and I think it makes sense - if VS thinks a project doesn't need building, it won't execute any events for that project).
| {
"language": "en",
"url": "https://stackoverflow.com/questions/15678",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Developer Setup for Starting Out with Cocoa/Mac Programming I'd like to start experimenting with Cocoa and programming for Mac OS X. I'm not terribly concerned with Objective-C syntax/constructs/behaviors at this point, but more curious as to an efficient setup in terms of an editor and/or IDE that will get me going quickly. Is there any IDE even remotely similar to Visual Studio (since that's where I've spent most of my time over the last 7 years) in terms of its solution/project concept? Any other tools, tips, suggestions and/or resources to get up and experimenting quickly?
I'd like to avoid a lot of the intro stuff and get into things like "If you want to create a Mac desktop application, you can use Acme IDE and set up your project like this."
I was afraid Xcode was going to be the answer! :P I tried playing around with that -- literally just getting it off the DVD and just diving in with no idea what to expect (before I even knew that you used Objective C as the language). I figured, the wise guy that I am, that I could just sort of fumble around and get a simple app working ... wrong.
@Andrew - Thanks for the insight on those config settings. Based on my Xcode first impression, I think those may help.
A: The first document to read and digest is the memory management guide; understand this before moving on. This is a great guide to Objective-C too. In fact, the developer site at Apple is very good - but you would probably want to read the Hillegass book first.
In regards to Xcode vs. Visual Studio - they are different. I wouldn't say one is better than the other. Windows developers come over from VS and expect it to be the same; this is just an arrogant attitude, so please don't fall into that crowd. Having used VS since the AppStudio days and Xcode for a year or so now, I can say both have strengths and weaknesses. Xcode is something that out of the box (and especially when coming from VS) doesn't seem that good, but once you start using and understanding it, it becomes very powerful.
Also, there are a lot more tools included with Xcode et al, such as Instruments and Shark that you simply can't get with VS, unless you open your wallet, and even then IMHO aren't as good.
Anyway, good luck. I still enjoy C#, but Objective-C/Cocoa somehow makes programming fun again once you get into it...
A: Don't bother digging up your OSX DVD as they've released a new version (3.1) of XCode since then.
First, you'll want to join Apple Developer Connection (it's free, and you need it to access their version of MSDN) - it uses your Apple ID so if you've ever had one for the itunes store etc, it's that same username/password
Once you've done that, click on downloads, then click on developer tools, to view this page, and go for the XCode 3.1 Developer DVD
A: One other suggestion: If you have feature or enhancement requests, or bugs that you've run into, be sure to file them at Apple's Bug Reporter. It's the best way for developers to communicate their needs to Apple, because every issue is tracked through the system.
A: I'd suggest you pick a fun little product and dive in. If you're looking for a book, I'd suggest Cocoa Programming for Mac OS X, which is a very good introduction both to Objective-C and Cocoa.
Xcode is pretty much the de facto IDE and free with OS X. It should be on your original install DVD. It's good, but not as good as Visual Studio (sorry, it really isn't).
As a long-time VS user, I found the default Xcode config a little odd and hard to adjust to, particularly the way a new floating window would open for every source file. Some tweaks I found particularly helpful:
* Settings/General -> All-In-One (unifies editor/debugger window)
* Settings/General -> Open counterparts in same editor (single-window edit)
* Settings/Debugging - "In Editor Debugger Controls"
* Settings/Debugging - "Auto Clear Debug Console"
* Settings/Key-binding - lots of bindings to match VS (Ctrl+F5/Shift+F5, Shift+Home, Shift+End, etc.)
I find the debugger has some annoying issues, such as breakpoints not correctly mapping to lines and exceptions not being immediately trapped by the debugger. Nothing deal-breaking, but a bit cumbersome.
I would recommend that you make use of the new property syntax that was introduced for Objective-C 2.0. They make for a heck of a lot less typing in many many places. They're limited to OSX 10.5 only though (yeah, language features are tied to OS versions which is a bit odd).
Also don't be fooled into downplaying the differences between C/C++ and Objective-C. They're very much related, but they ARE different languages. Try to start Objective-C without thinking about how you'd do X, Y, Z in C/C++. It'll make it a lot easier.
A: You might try the demo of textmate and see how you like it for working with objective-c or any other type of text really. It will import xcode project settings so you can still compile and run from textmate rather than having to go back to xcode.
A: Xcode is the standard for editing source files, though you can use another editor in conjunction with the command line xcodebuild tool if you really want. I used Vim for all my Cocoa editing before finally giving in to Xcode. It's not the greatest IDE in the world, but it gets the job done, and the recent 3.x releases have had some nice improvements.
The real power tool of Cocoa development is Interface Builder. IB does not generate source code like many UI tools. Instead it manipulates real Cocoa views, controls, and objects which it then bundles into an archive (nib) that is loaded by your program at runtime. Most Cocoa programs use at least one nib file, and often many more.
No matter what IDE/editor combination you choose for hacking on source files, I recommend using IB where you can. Even if you're not a fan of other UI layout/generation tools, I suggest keeping an open mind, giving "the Cocoa way" a chance and at least learning what Interface Builder can do for your development process.
A: AFAIK, pretty much every OS X developer uses Xcode.
That, and Interface Builder for creating the GUIs.
FWIW, try to get hold of a copy of Hillegas's book, as it's a great introductory tutorial, and the reference Docs Apple provides really aren't. (They are generally very good reference docs, however).
A: Cocoa is huge. The hardest part of learning how to write apps on the Mac is learning Cocoa. By the way, you do not need to know Objective-C (though it helps tons). You can write Cocoa apps with Python or Ruby (right in the IDE).
I agree VS is a better IDE than Xcode. But if you throw in Interface Builder and all the other tools, I'm not so sure. Mac development is not about one giant IDE for everything. But VS is "kinder" to the developer than Xcode is.
Also, if you want to do cross-platform apps, look at REALbasic. A fine tool (Basic, though. But it runs on Linux too.) You'd be surprised how many Mac apps are written with RB.
A: I've heard the books currently out there are pretty out of date. The whole ecosystem seems to evolve very fast with dramatic changes made in every OS release.
He wrote a tutorial which pulls together some Apple documentation and other tutorials which should get you started. I think it covers the basics of using the IDE, writing simple apps, and then goes on to more advanced stuff.
A: I've been dabbling in Cocoa for the past couple years, and recently picked up Fritz Anderson's "Xcode 3 Unleashed." Highly recommended for getting into Xcode — especially with some of the big changes 3.0/Leopard brought.
Don't forget Hillegass' defacto Cocoa bible, "Cocoa Programming for Mac OS X - Third Edition."
A: @peter: I don't know why you had trouble getting a simple app working; right off the bat, without doing anything, your app gets a lot of benefits from the Cocoa framework. If you mean you were trying to do stuff like connect a button to an action and have it print an alert on screen or something like that, then yes, I can see where you're going with it being difficult.
The problem for me starting with Cocoa many years back was that it was so different from anything else that it had a bit of a learning curve. Whereas many other systems are compile-time oriented, Cocoa is very dynamic and runtime oriented. Once you get past learning how actions hook up to classes, it just becomes a matter of learning how the Cocoa frameworks work.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/15681",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "25"
} |
Q: How can I convert all line endings to CRLF, LF, or CR during SVN operations So, you are all ready to do a big SVN Commit and it bombs because you have inconsistent line endings in some of your files. Fun part is, you're looking at 1,000s of files spanning dozens of folders of different depths.
What do you do?
A: First is to clean everything up. Are you on Windows or Unix/Linux/Mac?
If you're on Unix/Linux/Mac, you can try something like this:
$ find . -type f -name "*.java" -exec dos2unix {} \;
That's if you have dos2unix on your box. It's not on my Mac or any of the six Linux machines we have. Seems like we didn't install this particular package. Fortunately, it's easy enough to find.
Be careful using it, because you don't want to munge binary files.
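One possible hedge against that (a sketch; it relies on the file utility reporting the word "text" for text files):

$ find . -type f -exec sh -c 'file "$1" | grep -q text && dos2unix "$1"' _ {} \;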
Once you've cleaned everything up, you should put the svn:eol-style property on your files. Setting it to native will check out the file with the correct line ending for your machine, but store it in Unix line-ending format. The other three options are "LF" for Unix, "CRLF" for Windows, and "CR" for pre-Mac OS X Macs. Most people find "native" to work out the best. The only problem with native is that it won't check in a file with mixed line endings, while "LF" and "CRLF" will.
Once you do that, you should add a pre-commit hook that will allow you to enforce line endings on particular files. Then teach your developers to use auto-props. The pre-commit hook will prevent any commits unless the property is placed on the file. A developer gets their commit rejected once or twice, and they'll set up auto-props on their own.
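A bare-bones sketch of such a hook (untested, and the .java filter is only an example) built on svnlook:

#!/bin/sh
# pre-commit hook sketch: reject newly added .java files that lack
# the svn:eol-style property. REPOS and TXN are passed in by Subversion.
REPOS="$1"
TXN="$2"
svnlook changed -t "$TXN" "$REPOS" | awk '$1 == "A" { print $2 }' |
grep '\.java$' |
while read path; do
    if ! svnlook propget -t "$TXN" "$REPOS" svn:eol-style "$path" >/dev/null 2>&1; then
        echo "Commit rejected: $path has no svn:eol-style property." >&2
        exit 1
    fi
done || exit 1
exit 0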
A: I don't think the pre-commit hook can actually change the data that is being committed - it can disallow a commit, but I don't think it can do the conversion for you.
It sounds like you want the property 'svn:eol-style' set to 'native' - this will automatically convert newlines to whatever is used on your platform (use 'CRLF', 'CR' or 'LF' to get those regardless of what the OS wants).
You can use auto-properties so that all future files you create will have this property set (auto props are handled client-side, so you'd have to set this up for each user).
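To make that concrete: auto-props live in each client's Subversion config file (~/.subversion/config on Unix, or under %APPDATA%\Subversion on Windows); the file patterns below are just examples:

[miscellany]
enable-auto-props = yes

[auto-props]
*.java = svn:eol-style=native
*.cpp = svn:eol-style=native
*.h = svn:eol-style=native
*.txt = svn:eol-style=native

Note that auto-props only apply to newly added files; existing files still need the property set once, e.g. svn propset svn:eol-style native path/to/file.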
A: Add a pre-commit hook which parses the file content and performs the munging of CRLF/LF/CR/etc for you before it's written to SVN.
A: You may consider using a command like Linux's dos2unix for the conversion. Being a Linux command, it is easy to use it in batch mode with scripts etc. I do not know whether there is an equivalent for other operating systems.
A: you can use notepad++ to batch convert line endings.
Make regex search:
([^\r])\n
and replace it with
$1\r\n
You should then restrict the operation to a set of file types, like:
*.xml;*.txt;*.csv; and so forth.
This avoids accidentally modifying binary files.
NOTE: the regex pattern skips empty lines, so you have to run a second replace job that searches for \n\n and replaces it with \n\r\n.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/15687",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "24"
} |
Q: How do you begin designing a large system? It's been mentioned to me that I'll be the sole developer behind a large new system. Among other things I'll be designing a UI and database schema.
I'm sure I'll receive some guidance, but I'd like to be able to knock their socks off. What can I do in the meantime to prepare, and what will I need to keep in mind when I sit down at my computer with the spec?
A few things to keep in mind: I'm a college student at my first real programming job. I'll be using Java. We already have SCM set up with automated testing, etc...so tools are not an issue.
A: This sounds very much like my first job. Straight out of university, I was asked to design the database and business logic layer, while other people would take care of the UI. Meanwhile the boss was looking over my shoulder, unwilling to let go of what used to be his baby and was now mine, and poking his finger in it. Three years later, developers were fleeing the company and we were still X months away from actually selling anything.
The big mistake was in being too ambitious. If this is your first job, you will make mistakes and you will need to change how things work long after you've written them. We had all sorts of features that made the system more complicated than it needed to be, both on the database level and in the API that it presented to other developers. In the end, the whole thing was just far too complicated to support all at once and just died.
So my advice:
* If you're not sure about taking on such a big job single-handed, don't. Tell your employers, and get them to find or hire somebody for you to work with who can help you out. If people need to be added to the project, then it should be done near the start rather than after stuff starts going wrong.
* Think very carefully about what the product is for, and boil it down to the simplest set of requirements you can think of. If the people giving you the spec aren't technical, try to see past what they've written to what will actually work and make money. Talk to customers and salespeople, and understand the market.
* There's no shame in admitting you're wrong. If it turns out that the entire system needs to be rewritten because you made some mistake in your first version, then it's better to admit this as soon as possible so you can get to it. Correspondingly, don't try to make an architecture that can anticipate every possible contingency in your first version, because you don't know what every contingency is and will just get it wrong. Write once with an eye to throwing away and starting again - you may not have to, the first version may be fine, but admit it if you do.
A: I also disagree about starting with the database. The DB is simply an artifact of how your business objects are persisted. I don't know of an equivalent in Java, but .NET has stellar tools such as SubSonic that allow your DB design to stay fluid as you iterate through your business object design. I'd say first and foremost (even before deciding on what technologies to introduce), focus on the process and identify your nouns and verbs... then build out from those abstractions. Hey, it really does work in the "real world", just like OOP 101 taught you!
A: Before you start coding, plan out your database schema - everything else will flow from that. Getting the database reasonably correct early on will save you time and headaches later.
A: The main thing is being able to abstract the complexity of the system so that you don't get bogged down by it as soon as you start off.
* First read the spec like a story (skimming through it). Don't stop at every requirement to analyze it right there and then. This will allow you to get an overall picture of the system without too many details. At this point you would start identifying the major functional components of the system. Start putting these down (use a mindmap tool if you like).
* Then take each component and start exploding it (tying each detail to requirements in the spec document). Do this for all components, till you have covered all requirements.
* Now you should start looking at relationships between the components, and whether there are repetitions of features or functions across the various components (which you can then pull out to create utility components, or such). Around now, you would have a good detailed map of your requirements in your mind.
* NOW, you should think of designing the database, ER diagrams, class design, DFDs, deployment, etc.
The problem with doing the last step first is that you can get bogged down in the complexity of your system without really gaining an overall understanding in the first place.
A: Do you know much about OOP? If so, look into Spring and Hibernate to keep your implementation clean and orthogonal. If you get that, you should find TDD a good way to keep your design compact and lean, especially since you have "automated testing" up and running.
UPDATE:
Looking at the first slew of answers, I couldn't disagree more. Particularly in the Java space, you should find plenty of mentors/resources on working out your application with Objects, not a database-centric approach. Database design is typically the first step for Microsoft folks (which I do daily, but am in a recovery program, er, Alt.Net). If you keep the focus on what you need to deliver to a customer and let your ORM figure out how to persist your objects, your design should be better.
A: I do it the other way around. I find that doing the database schema first gets the system stuck in a data-driven design that is difficult to abstract from persistence. We try to do domain model designs first and then base the database schema on those.
And then there's the infrastructure design: the team should settle on conventions for how to structure the program first and foremost. Then we work together to agree on a design for the common functionality of the system (e.g., things everyone needs, like persistence, logging, etc.). This becomes the framework of the system.
We all work on that together first before we split the rest of the functionalities amongst ourselves.
A: It has been my experience that Java applications (.NET also) that consider the database last are highly likely to perform poorly when placed into a corporate environment. You need to really think about your audience. You didn't say if it was a web app or not. Either way the infrastructure that you are implementing on is important when considering how you handle your data.
No matter what methodology you consider, how you get and save your data and its impact on performance should be right up there as one of your #1 priorities.
A: I'd suggest thinking about how this application will be used. How will future users work with it? I'm sure you know at least a few things about what this application needs to handle, but my first advice is "think of the user and what he or she needs".
Draw it up on plain paper, thinking of where to section off the code. Remember not to mix logic with GUI code (a common error). This way you will be set to extend your application's reach in the future to servlets and/or applets or whatever platform comes along. Section in layers so that you can respond to large changes faster without rebuilding everything. Layers should not see any other layers than their closest neighbouring layers.
Begin with true core functionality. All that time-consuming fluff (that will make your project 4 weeks late) won't matter much to the vast majority of users. It can be added later once you are sure you can deliver on time.
Btw. Even though this has nothing to do with design I'd just like to say that you won't deliver on time. Make a realistic estimate on time consumption and then double it :-) I assume here that you will not be alone in this project and that people will come and go as the project progresses. You may need to train people midway through the project, people go on holiday / need surgery etc.
A: Split the big system to smaller pieces.
And don't think that it's so complex, because it usually isn't. By thinking too complex it just ruins your thoughts and eventually the design. Some point you just realize that you could do the same thing easier, and then you redesign it.
At least this has been my major mistake in designing.
Keep it simple!
A: I found very insightful ideas about starting a new large project, based on
*
*common good practices
*Test Driven Development
*and pragmatic approach
in the book Growing Object-Oriented Software, Guided by Tests.
It is still under development, but the first 3 chapters may be what you are looking for and IMHO worth reading.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/15690",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10"
} |
Q: Is the .NET Client Profile worth targeting? I've recently been looking into targeting the .NET Client Profile for a WPF application I am building. However, I was frustrated to notice that the Client Profile is only valid for the following OS configurations:
*
*Windows XP SP2+
*Windows Server 2003 Edit: Appears the Client Profile will not install on Windows Server 2003.
In addition, the client profile is not valid for x64 or ia64 editions; and will also not install if any previous version of the .NET Framework has been installed.
I'm wondering if the effort in adding the extra OS configurations to the testing matrix is worth the effort. Is there any metrics available that state the percentage of users that could possibly benefit from the client profile? I believe that once the .NET Framework has been installed, extra information is passed to a web server as part of a web request signifying that the framework is available. Granted, I would imagine that Windows XP SP2 users without the .NET Framework installed would be a large amount of people. It would then be a question of whether my application targeted those individuals specifically.
Has anyone else determined if it is worth the extra effort to target these specific users?
Edit: It seems that it is possible to get a compiler warning if you use features not included in the Client Profile. As I usually run with warnings as errors, this will hopefully be enough to minimise testing in this configuration. Of course, this configuration will still need to be tested, but it should be as simple as testing if the install/initial run works on XP with SP2+.
A: Ultimately, it will not hurt any users if you target the Client Profile. This is because the client profile is a subset of the .net framework v3.5 sp1, and if v3.5 sp1 is already installed you don't need to install anything.
The assemblies in the client profile are the same binaries as the full framework, so unless you're loading assemblies dynamically, then you shouldn't need to do any additional testing.
My thinking is that unless you must use assemblies which are NOT in the client profile, then you should target it.
As for the OS requirements, WPF won't run on pre-XP sp2, so if you need to run on other OSes, then you'll have to use WinForms anyways.
EDIT:
On IE, yes. It sends the .NET Framework version as part of the UA string, e.g.:
Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.0; .NET CLR 2.0.50727)
Actually so does FF3 + 3.5 SP1:
Mozilla/5.0 (Windows; U; Windows NT 6.0; en-US; rv:1.9.0.1) Gecko/2008070208 Firefox/3.0.1 (.NET CLR 3.5.30729)
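For completeness, a minimal server-side sketch of checking for that token in ASP.NET (the ".NET CLR" substring match is a simplification; adjust it to the framework version you actually require):
protected void Page_Load(object sender, EventArgs e)
{
    // Request.UserAgent is the raw UA string shown above.
    string ua = Request.UserAgent ?? string.Empty;
    if (!ua.Contains(".NET CLR"))
    {
        // No framework advertised - offer the framework/Client Profile download instead.
    }
}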
A: I think it is important to target as many users as you can, have you ever considered shipping your application without any managed code at all? You can convert your managed applications to pure machine code using tools such as http://www.xenocode.com/ or http://www.remotesoft.com/linker/ so you won't need any .NET framework on the client machines at all.
A:
I believe that once the .NET Framework has been installed, extra information is passed to a web server as part of a web request signifying that the framework is available.
On IE, yes. It sends the .NET Framework version as part of the UA string, e.g.:
Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.0; .NET CLR 2.0.50727).
| {
"language": "en",
"url": "https://stackoverflow.com/questions/15694",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
} |
Q: Good Stripes tutorials / examples? The company I just started working for is using Stripes for parts of its web page development these days, and while it seems to be a nice enough web framework it no one really uses it-- it is almost non existent on the 'net. It's not even first in it's google search and the result you do get is for its old home page.
So, do any of you people use Stripes? Of your own volition? Do you know of any good tutorials / examples?
A: I recommend checking out the book referenced by jko:
a book from The Pragmatic Bookshelf called Stripes: ...and Java web development is fun again
Whilst still in 'beta' the book covers everything very well.
Another good place to start is this ONJava article.
I have used Stripes on a few projects now and have liked it a lot.
It may sound crazy but the Stripes quickstart and sample application documentation on the website does a pretty good job of covering the bases.
This is helped by the fact there is little to Stripes, probably because it is relatively new and not trying to be all things to all people. I would say give the quick-start a try and if by the end of it you are unsatisfied look elsewhere. At the end of the day you and your company have to be happy (and productive) with what you are using irrespective of how many people are using it.
A: I've never used (or even heard of) Stripes.
Regardless, there's a book from The Pragmatic Bookshelf called Stripes: ...and Java web development is fun again that may be worth checking out. You could also check out the Stripes mailing list archive.
A: It's a shame that some people perceive Stripes as a framework for which "there really just isn't much support or information for it." In reality, the Stripes community is very supportive - have a look at the mailing list and you'll see how friendly and responsive people are. In fact, some have said on the #stripes IRC channel that they have had better response for Hibernate-related questions than on #hibernate itself!
Give Stripes a good, serious look instead of dismissing it because of misconceptions.
A: Stripes is a great framework. We converted a major project from a home grown framework to stripes and it took less than one week.
The book referenced above is a great resources, as is the mailing list.
There's also an active irc channel #stripes on freenode.
It's a very powerful framework that doesn't get in your way.
A: We considered it when we were looking at open source frameworks. But we saw the same thing you did: there really just isn't much support or information for it. You should always weigh the community support factor surrounding open source projects before picking one. (which is what you are doing here)
| {
"language": "en",
"url": "https://stackoverflow.com/questions/15695",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
} |
Q: Runtime Configuration in .Net (specifically the EntLib) I'm looking for a way to configure a DB connection at runtime; specifically using the Enterprise Library. I see that there's a *.Data.Configuration (or something close to this ... don't recall off the top of my head) assembly but am finding not much on the interwebs. Complicating matters is the fact that the API help is broken on Vista.
Now, I found this work-around:
Configuration cfg = ConfigurationManager.OpenExeConfiguration(ConfigurationUserLevel.None);
ConnectionStringSettings connection = new ConnectionStringSettings();
connection.Name = "Runtime Connection";
connection.ProviderName = "System.Data.OleDb";
connection.ConnectionString = "myconstring";
cfg.ConnectionStrings.ConnectionStrings.Add(connection);
cfg.Save(ConfigurationSaveMode.Modified);
ConfigurationManager.RefreshSection("connectionStrings");
var runtimeCon = DatabaseFactory.CreateDatabase("Runtime Connection");
And although it gives me what I want, it permanently edits the App.config. Sure I can go back and delete the changes, but I'd rather not go through this hassle.
A: If you're using a winforms app you could try using UserProperties to store this info. Another possible solution could be custom configuration sections.
A: If you don't want it saved, you do not need to execute the cfg.Save command.
The Configuration object will store your changes until it isn't needed anymore.
A: Nope, you must save in order for the EntLib (and, I suspect, any other tool) to see the changes.
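If all you actually need is a Database instance built from a runtime connection string, you may be able to bypass the configuration system entirely. A minimal sketch, assuming the GenericDatabase constructor from the Enterprise Library Data Access block (present in EntLib 2.0 and later; use SqlDatabase instead for SQL Server):
using System.Data.OleDb;
using Microsoft.Practices.EnterpriseLibrary.Data;

// Nothing is written to App.config here; the Database is built directly.
Database db = new GenericDatabase("myconstring", OleDbFactory.Instance);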
| {
"language": "en",
"url": "https://stackoverflow.com/questions/15700",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: How can I determine whether a specific file is open in Windows? One of my favourite tools for linux is lsof - a real swiss army knife!
Today I found myself wondering which programs on a WinXP system had a specific file open. Is there any equivalent utility to lsof? Additionally, the file in question was over a network share so I'm not sure if that complicates matters.
A: Use Process Explorer from the Sysinternals Suite, the Find Handle or DLL function will let you search for the process with that file open.
A: One equivalent of lsof could be combined output from Sysinternals' handle and listdlls, i.e.:
c:\SysInternals>handle
[...]
------------------------------------------------------------------------------
gvim.exe pid: 5380 FOO\alois.mahdal
10: File (RW-) C:\Windows
1C: File (RW-) D:\some\locked\path\OpenFile.txt
[...]
c:\SysInternals>listdlls
[...]
------------------------------------------------------------------------------
Listdlls.exe pid: 6840
Command line: listdlls
Base Size Version Path
0x00400000 0x29000 2.25.0000.0000 D:\opt\SysinternalsSuite\Listdlls.exe
0x76ed0000 0x180000 6.01.7601.17725 C:\Windows\SysWOW64\ntdll.dll
[...]
c:\SysInternals>listdlls
Unfortunately, you have to "run as Administrator" to be able to use them.
Also listdlls and handle do not produce continuous table-like form so filtering filename would hide PID. findstr /c:pid: /c:<filename> should get you very close with both utilities, though
c:\SysinternalsSuite>handle | findstr /c:pid: /c:Driver.pm
System pid: 4 \<unable to open process>
smss.exe pid: 308 NT AUTHORITY\SYSTEM
avgrsa.exe pid: 384 NT AUTHORITY\SYSTEM
[...]
cmd.exe pid: 7140 FOO\alois.mahdal
conhost.exe pid: 1212 FOO\alois.mahdal
gvim.exe pid: 3408 FOO\alois.mahdal
188: File (RW-) D:\some\locked\path\OpenFile.txt
taskmgr.exe pid: 6016 FOO\alois.mahdal
[...]
Here we can see that gvim.exe is the one having this file open.
A: Try Unlocker.
The Unlocker site has a nifty chart (scroll down after following the link) that shows a comparison to other tools. Obviously such comparisons are usually biased since they are typically written by the tool author, but the chart at least lists the alternatives so that you can try them for yourself.
A: If the file is a .dll then you can use the TaskList command line app to see whose got it open:
TaskList /M nameof.dll
A: The equivalent of lsof -p pid is the combined output from sysinternals handle and listdlls, ie
handle -p pid
listdlls -p pid
you can find out pid with sysinternals pslist.
A: There is a program "OpenFiles", seems to be part of windows 7. Seems that it can do what you want. It can list files opened by remote users (through file share) and, after calling
"openfiles /Local on" and a system restart, it should be able to show files opened locally. The latter is said to have performance penalties.
A: Use Process Explorer to find the process id. Then use Handle to find out what files are open.
E.g. handle -p <pid>
I like this approach because you are using utilities from Microsoft itself.
A: If you right-click on your "Computer" (or "My Computer") icon and select "Manage" from the pop-up menu, that'll take you to the Computer Management console.
In there, under System Tools\Shared Folders, you'll find "Open Files". This is probably close to what you want, but if the file is on a network share then you'd need to do the same thing on the server on which the file lives.
A: In OpenedFilesView, under the Options menu, there is a menu item named "Show Network Files". Perhaps with that enabled, the aforementioned utility is of some use.
A: Try Handle. Filemon & Regmon are also great for trying to figure out what the duce program foo is doing to your system.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/15708",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "98"
} |
Q: Is this valid YAML? So for my text parsing in C# question, I got directed at YAML. I'm hitting a wall with this library I was recommended, so this is a quickie.
heading:
name: A name
taco: Yes
age: 32
heading:
name: Another name
taco: No
age: 27
And so on. Is that valid?
A: Well, it appears YAML is gone out the window then. I want something both human writable and readable. Plus, this C# implementation...I have no idea if it's working or not, the documentation consists of a few one line code examples. It barfs on their own YAML files, and is an old student project. The only other C# YAML parser I've found uses the MS-PL which I'm not really comfortable using.
I might just end up rolling my own format. Best practices be damned, all I want to do is associate a key with a value.
A: Try this (online YAML parser).
You don't have to download anything or do something. Just go there, and copy & paste. That's it.
A: There appears to be a YAML validator called Kwalify which should give you the answer. You shoulda just gone with the String tokenizing, man. Writing parsers is fun :)
A: There is another YAML library for .NET which is under development. Right now it supports reading YAML streams. It has been tested on Windows and Mono. Write support is currently being implemented.
A: CodeProject has one at:
http://www.codeproject.com/KB/recipes/yamlparser.aspx
I haven't tried it too much, but it's worth a look.
A: Partially. YAML supports the notion of multiple consecutive "documents". If this is what you are trying to do here, then yes, it is correct - you have two documents (or document fragments). To make it more explicit, you should separate them with three dashes, like this:
---
heading:
name: A name
taco: Yes
age: 32
---
heading:
name: Another name
taco: No
age: 27
On the other hand if you wish to make them part of the same document (so that deserializing them would result in a list with two elements), you should write it like the following. Take extra care with the indentation level:
- heading:
name: A name
taco: Yes
age: 32
- heading:
name: Another name
taco: No
age: 27
In general YAML is concise and human readable / editable, but not really human writable, so you should always use libraries to generate it. Also, take care that there exist some breaking changes between different versions of YAML, which can bite you if you are using libraries in different languages which conform to different versions of the standard.
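Since the question came from C#, here is a sketch of what multi-document parsing looks like with YamlDotNet, a library that matured after this question was asked (assume text holds the YAML string above):
using System.IO;
using YamlDotNet.RepresentationModel;

var yaml = new YamlStream();
yaml.Load(new StringReader(text));
// each "---"-separated document becomes one entry in yaml.Documents
foreach (var doc in yaml.Documents)
{
    var root = (YamlMappingNode)doc.RootNode;
}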
A: You can see the output in the online yaml parser :
http://yaml-online-parser.appspot.com/?yaml=heading%3A%0D%0A+name%3A+A+name%0D%0A+taco%3A+Yes%0D%0A+age%3A+32%0D%0A%0D%0Aheading%3A%0D%0A+name%3A+Another+name%0D%0A+taco%3A+No%0D%0A+age%3A+27%0D%0A&type=json
As you can see, there is only one heading node created.
A: Just to make an explicit comment about it: You have a duplicate mapping key issue. A YAML processor will resolve this as a !!map, which prohibits duplicate keys. Not all processors enforce this constraint, though, so you might get an incorrect result if you pass an incorrect YAML stream to a processor.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/15709",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: Design problems with .Net UserControl I have created a UserControl that has a ListView in it. The ListView is publicly accessible through a property. When I put the UserControl in a form and try to design the ListView through the property, the ListView stays that way until I compile again and it reverts back to the default state.
How do I get my design changes to stick for the ListView?
A: You need to decorate the ListView property with the DesignerSerializationVisibility attribute, like so:
[DesignerSerializationVisibility(DesignerSerializationVisibility.Content)]
public ListView MyListView { get { return this.listView1; } }
This tells the designer's code generator to output code for it.
A: Fredrik is right; basically, you need to enable the designer to persist the property to the page so it can be instantiated at run time. There is only one way to do this, and that is to write its values to the ASPX page, which is then picked up by the runtime.
Otherwise, the control will simply revert to its default state each and every time.
Always keep in the back of your mind that the Page (and its contents) and the code are completely separate in ASP.NET; they are hooked up at run time. This means that you don't get the nice code-behind designer support like you do in a WinForms app (where the form is an instance of an object).
A: Just so I'm clear, you've done something like this, right?
public ListView MyListView { get { return this.listView1; } }
So then you are accessing (at design time) the MyListView property on your UserControl?
I think if you want proper design-time support you're better off changing the "Modifier" property on the ListView itself (back on the original UserControl) to Public - that way you can modify the ListView directly on instances of the UserControl. I've had success doing that anyway.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/15716",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Programming Glossary As I browse through the site, I find a lot of terms that many developers just starting out (and even some advanced developers) may be unfamiliar with.
It would be great if people could post here with a term and definition that might be unknown to beginners or those from different programming backgrounds.
Some not-so-common terms I've seen are 'auto boxing', 'tuples', 'orthogonal code', 'domain driven design', 'test driven development', etc.
Code snippets would also be helpful where applicable..
A: *
*http://en.wikipedia.org/wiki/Boxing_(Computer_science)#Boxing
*http://en.wikipedia.org/wiki/Tuples
*http://en.wikipedia.org/wiki/Orthogonal#Computer_science
*http://en.wikipedia.org/wiki/Domain_driven_design
*http://en.wikipedia.org/wiki/Test_driven_development
Someone may have beat us to it ;)
A: http://en.wikipedia.org/wiki/Boxing_%28Computer_science%29#Boxing
thats the correct link for boxing as related to computer science :D
A: Better yet, a site domain dictionary, containing a definition (over time) for every programming term on Stackoverflow, with the definition itself modded according to the Wiki-like aspects Atwood and others have been discussing.
There are coding dictionaries out there but they're all either a) crap or b) not extensible or editable in a collaborative way.
Right now if I come across an unfamiliar programming term or acronym my first stop is Google, followed by Wiki, followed by one of the many dedicated dictionaries. No reason why Stackoverflow shouldn't be on that list.
A: The c2 Wiki kicks butt. Great combination of concise definitions and examples, plus discussions that break it down when there are different interpretations.
A: It may actually be helpful to go around adding the tag 'glossary' to specific questions (I recently saw one about Expressions vs. Statements, for instance).
| {
"language": "en",
"url": "https://stackoverflow.com/questions/15729",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Looking for a specific FireFox extension / program for Form posting I am looking for either a FireFox extension, or a similar program, that allows you to craft GET and POST requests. The user would put in a form action, and as many form key/value pairs as desired. It would also send any cookie information (or send the current cookies from any domain the user chooses.) The Web Developer add-on is almost what I'm looking for; It lets you quickly see the form keys, but it doesn't let you change them or add new ones (which leads to a lot of painful JavaScript in the address bar...)
A: If you're a windows user, use Fiddler. It is invaluable for looking at the raw Http requests and responses. It also has the ability to create requests with the request builder and it has an auto responder also, so you can intercept requests. It even lets you inspect HTTPS traffic and it has a built in event scripting engine, where you can create your own rules.
A: Actually I think Poster is what you're looking for.
A Screen shot of an older Poster version
A: You may want to check out the Tamper Data extension which allows you to easily intercept and manipulate the request parameters among other features.
A: If you've got Greasemonkey installed you might want to try the XSS Assistant user script: http://www.whiteacid.org/greasemonkey/#xss_assistant
A: The DOM Inspector will let you add/edit/remove inputs - copying existing ones is the easiest way of adding new ones. I highly recommend getting Inspect This as well.
A: Look for a extension called Poster.
A: Poster looks nice and complete. For reference, I will add UrlParams to the list, but looks like Poster is better.
A: I think tamper data would do the trick....live http headers addon would also do the same.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/15731",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: How to validate an XML file against an XSD file? I'm generating some xml files that needs to conform to an xsd file that was given to me. How should I verify they conform?
A: Using Java 7 you can follow the documentation provided in package description.
// create a SchemaFactory capable of understanding WXS schemas
SchemaFactory factory = SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI);
// load a WXS schema, represented by a Schema instance
Source schemaFile = new StreamSource(new File("mySchema.xsd"));
Schema schema = factory.newSchema(schemaFile);
// create a Validator instance, which can be used to validate an instance document
Validator validator = schema.newValidator();
// validate the DOM tree
try {
validator.validate(new StreamSource(new File("instance.xml")));
} catch (SAXException e) {
// instance document is invalid!
}
A: The Java runtime library supports validation. Last time I checked this was the Apache Xerces parser under the covers. You should probably use a javax.xml.validation.Validator.
import javax.xml.XMLConstants;
import javax.xml.transform.Source;
import javax.xml.transform.stream.StreamSource;
import javax.xml.validation.*;
import java.net.URL;
import org.xml.sax.SAXException;
import java.io.File; // used by the local file example below
import java.io.IOException;
...
URL schemaFile = new URL("http://host:port/filename.xsd");
// webapp example xsd:
// URL schemaFile = new URL("http://java.sun.com/xml/ns/j2ee/web-app_2_4.xsd");
// local file example:
// File schemaFile = new File("/location/to/localfile.xsd"); // etc.
Source xmlFile = new StreamSource(new File("web.xml"));
SchemaFactory schemaFactory = SchemaFactory
.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI);
try {
Schema schema = schemaFactory.newSchema(schemaFile);
Validator validator = schema.newValidator();
validator.validate(xmlFile);
System.out.println(xmlFile.getSystemId() + " is valid");
} catch (SAXException e) {
System.out.println(xmlFile.getSystemId() + " is NOT valid reason:" + e);
} catch (IOException e) {}
The schema factory constant is the string http://www.w3.org/2001/XMLSchema which defines XSDs. The above code validates a WAR deployment descriptor against the URL http://java.sun.com/xml/ns/j2ee/web-app_2_4.xsd but you could just as easily validate against a local file.
You should not use the DOMParser to validate a document (unless your goal is to create a document object model anyway). This will start creating DOM objects as it parses the document - wasteful if you aren't going to use them.
A: One more answer: since you said you need to validate files you are generating (writing), you might want to validate content while you are writing, instead of first writing, then reading back for validation. You can probably do that with the JDK API for XML validation, if you use a SAX-based writer: if so, just link in the validator by calling 'Validator.validate(source, result)', where the source comes from your writer, and the result is where the output needs to go.
Alternatively, if you use Stax for writing content (or a library that uses or can use Stax), Woodstox can also directly support validation when using XMLStreamWriter. Here's a blog entry showing how that is done.
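For reference, a rough sketch of what the writer-side validation looks like, assuming Woodstox's Stax2 extensions (the file names here are placeholders):
import java.io.File;
import java.io.FileOutputStream;
import javax.xml.stream.XMLOutputFactory;
import org.codehaus.stax2.XMLStreamWriter2;
import org.codehaus.stax2.validation.XMLValidationSchema;
import org.codehaus.stax2.validation.XMLValidationSchemaFactory;

XMLValidationSchemaFactory sf =
    XMLValidationSchemaFactory.newInstance(XMLValidationSchema.SCHEMA_ID_W3C_SCHEMA);
XMLValidationSchema schema = sf.createSchema(new File("mySchema.xsd"));

XMLStreamWriter2 writer = (XMLStreamWriter2) XMLOutputFactory.newInstance()
    .createXMLStreamWriter(new FileOutputStream("out.xml"), "UTF-8");
writer.validateAgainst(schema); // invalid output now fails at write time
writer.writeStartDocument();
// ... write elements as usual ...
writer.writeEndDocument();
writer.close();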
A: If you have a Linux-Machine you could use the free command-line tool SAXCount. I found this very usefull.
SAXCount -f -s -n my.xml
It validates against DTD and XSD.
5s for a 50MB file.
In Debian Squeeze it is located in the package "libxerces-c-samples".
The definition of the DTD and XSD has to be in the XML! You can't configure them separately.
A: With JAXB, you could use the code below:
@Test
public void testCheckXmlIsValidAgainstSchema() {
logger.info("Validating an XML file against the latest schema...");
MyValidationEventCollector vec = new MyValidationEventCollector();
validateXmlAgainstSchema(vec, inputXmlFileName, inputXmlSchemaName, inputXmlRootClass);
assertThat(vec.getValidationErrors().isEmpty(), is(expectedValidationResult));
}
private void validateXmlAgainstSchema(final MyValidationEventCollector vec, final String xmlFileName, final String xsdSchemaName, final Class<?> rootClass) {
try (InputStream xmlFileIs = Thread.currentThread().getContextClassLoader().getResourceAsStream(xmlFileName);) {
final JAXBContext jContext = JAXBContext.newInstance(rootClass);
// Unmarshal the data from InputStream
final Unmarshaller unmarshaller = jContext.createUnmarshaller();
final SchemaFactory sf = SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI);
final InputStream schemaAsStream = Thread.currentThread().getContextClassLoader().getResourceAsStream(xsdSchemaName);
unmarshaller.setSchema(sf.newSchema(new StreamSource(schemaAsStream)));
unmarshaller.setEventHandler(vec);
unmarshaller.unmarshal(new StreamSource(xmlFileIs), rootClass).getValue(); // The Document class is the root object in the XML file you want to validate
for (String validationError : vec.getValidationErrors()) {
logger.trace(validationError);
}
} catch (final Exception e) {
logger.error("The validation of the XML file " + xmlFileName + " failed: ", e);
}
}
class MyValidationEventCollector implements ValidationEventHandler {
private final List<String> validationErrors;
public MyValidationEventCollector() {
validationErrors = new ArrayList<>();
}
public List<String> getValidationErrors() {
return Collections.unmodifiableList(validationErrors);
}
@Override
public boolean handleEvent(final ValidationEvent event) {
String pattern = "line {0}, column {1}, error message {2}";
String errorMessage = MessageFormat.format(pattern, event.getLocator().getLineNumber(), event.getLocator().getColumnNumber(),
event.getMessage());
if (event.getSeverity() == ValidationEvent.FATAL_ERROR) {
validationErrors.add(errorMessage);
}
return true; // you collect the validation errors in a List and handle them later
}
}
A: Here's how to do it using Xerces2. A tutorial for this, here (req. signup).
Original attribution: blatantly copied from here:
import org.apache.xerces.parsers.DOMParser;
import java.io.File;
import org.w3c.dom.Document;
public class SchemaTest {
public static void main (String args[]) {
File docFile = new File("memory.xml");
try {
DOMParser parser = new DOMParser();
parser.setFeature("http://xml.org/sax/features/validation", true);
parser.setProperty(
"http://apache.org/xml/properties/schema/external-noNamespaceSchemaLocation",
"memory.xsd");
ErrorChecker errors = new ErrorChecker();
parser.setErrorHandler(errors);
parser.parse("memory.xml");
} catch (Exception e) {
System.out.print("Problem parsing the file.");
}
}
}
A: We build our project using ant, so we can use the schemavalidate task to check our config files:
<schemavalidate>
<fileset dir="${configdir}" includes="**/*.xml" />
</schemavalidate>
Now naughty config files will fail our build!
http://ant.apache.org/manual/Tasks/schemavalidate.html
A: If you are generating XML files programatically, you may want to look at the XMLBeans library. Using a command line tool, XMLBeans will automatically generate and package up a set of Java objects based on an XSD. You can then use these objects to build an XML document based on this schema.
It has built-in support for schema validation, and can convert Java objects to an XML document and vice-versa.
Castor and JAXB are other Java libraries that serve a similar purpose to XMLBeans.
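As a sketch of the validation side (this assumes you have already compiled your XSD with XMLBeans' scomp tool so the generated types are on the classpath):
import java.io.File;
import java.util.ArrayList;
import java.util.List;
import org.apache.xmlbeans.XmlObject;
import org.apache.xmlbeans.XmlOptions;

XmlObject xml = XmlObject.Factory.parse(new File("instance.xml"));
List<Object> errors = new ArrayList<Object>();
XmlOptions opts = new XmlOptions().setErrorListener(errors);
if (!xml.validate(opts)) {
    for (Object err : errors) {
        System.out.println(err); // each entry describes one validation problem
    }
}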
A: Since this is a popular question, I will point out that java can also validate against "referred to" xsd's, for instance if the .xml file itself specifies XSD's in the header, using xsi:schemaLocation or xsi:noNamespaceSchemaLocation (or xsi for particular namespaces) ex:
<document xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:noNamespaceSchemaLocation="http://www.example.com/document.xsd">
...
or schemaLocation (always a list of namespace to xsd mappings)
<document xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://www.example.com/my_namespace http://www.example.com/document.xsd">
...
The other answers work here as well, because the .xsd files "map" to the namespaces declared in the .xml file: they declare a namespace, and if it matches up with the namespace in the .xml file, you're good. But sometimes it's convenient to be able to have a custom resolver...
From the javadocs: "If you create a schema without specifying a URL, file, or source, then the Java language creates one that looks in the document being validated to find the schema it should use. For example:"
SchemaFactory factory = SchemaFactory.newInstance("http://www.w3.org/2001/XMLSchema");
Schema schema = factory.newSchema();
and this works for multiple namespaces, etc.
The problem with this approach is that the xmlsns:xsi is probably a network location, so it'll by default go out and hit the network with each and every validation, not always optimal.
Here's an example that validates an XML file against any XSD's it references (even if it has to pull them from the network):
public static void verifyValidatesInternalXsd(String filename) throws Exception {
InputStream xmlStream = new FileInputStream(filename);
DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
factory.setValidating(true);
factory.setNamespaceAware(true);
factory.setAttribute("http://java.sun.com/xml/jaxp/properties/schemaLanguage",
"http://www.w3.org/2001/XMLSchema");
DocumentBuilder builder = factory.newDocumentBuilder();
builder.setErrorHandler(new RaiseOnErrorHandler());
builder.parse(new InputSource(xmlStream));
xmlStream.close();
}
public static class RaiseOnErrorHandler implements ErrorHandler {
public void warning(SAXParseException e) throws SAXException {
throw new RuntimeException(e);
}
public void error(SAXParseException e) throws SAXException {
throw new RuntimeException(e);
}
public void fatalError(SAXParseException e) throws SAXException {
throw new RuntimeException(e);
}
}
You can avoid pulling referenced XSD's from the network, even though the xml files reference url's, by specifying the xsd manually (see some other answers here) or by using an "XML catalog" style resolver. Spring apparently also can intercept the URL requests to serve local files for validations. Or you can set your own via setResourceResolver, ex:
Source xmlFile = new StreamSource(xmlFileLocation);
SchemaFactory schemaFactory = SchemaFactory
.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI);
Schema schema = schemaFactory.newSchema();
Validator validator = schema.newValidator();
validator.setResourceResolver(new LSResourceResolver() {
@Override
public LSInput resolveResource(String type, String namespaceURI,
String publicId, String systemId, String baseURI) {
InputSource is = new InputSource(
getClass().getResourceAsStream(
"some_local_file_in_the_jar.xsd"));
// or lookup by URI, etc...
return new Input(is); // for class Input see
// https://stackoverflow.com/a/2342859/32453
}
});
validator.validate(xmlFile);
See also here for another tutorial.
I believe the default is to use DOM parsing; you can do something similar with a validating SAX parser as well, e.g. saxReader.setEntityResolver(your_resolver_here);
A: Using Woodstox, configure the StAX parser to validate against your schema and parse the XML.
If exceptions are caught the XML is not valid, otherwise it is valid:
// create the XSD schema from your schema file
XMLValidationSchemaFactory schemaFactory = XMLValidationSchemaFactory.newInstance(XMLValidationSchema.SCHEMA_ID_W3C_SCHEMA);
XMLValidationSchema validationSchema = schemaFactory.createSchema(schemaInputStream);
// create the XML reader for your XML file
WstxInputFactory inputFactory = new WstxInputFactory();
XMLStreamReader2 xmlReader = (XMLStreamReader2) inputFactory.createXMLStreamReader(xmlInputStream);
try {
// configure the reader to validate against the schema
xmlReader.validateAgainst(validationSchema);
// parse the XML
while (xmlReader.hasNext()) {
xmlReader.next();
}
// no exceptions, the XML is valid
} catch (XMLStreamException e) {
// exceptions, the XML is not valid
} finally {
xmlReader.close();
}
Note: If you need to validate multiple files, you should try to reuse your XMLInputFactory and XMLValidationSchema in order to maximize the performance.
A: Are you looking for a tool or a library?
As far as libraries go, pretty much the de-facto standard is Xerces2, which has both C++ and Java versions.
Be forewarned though, it is a heavyweight solution. But then again, validating XML against XSD files is a rather heavyweight problem.
As for a tool to do this for you, XMLFox seems to be a decent freeware solution, but not having used it personally I can't say for sure.
A: Validate against online schemas
Source xmlFile = new StreamSource(Thread.currentThread().getContextClassLoader().getResourceAsStream("your.xml"));
SchemaFactory factory = SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI);
Schema schema = factory.newSchema(Thread.currentThread().getContextClassLoader().getResource("your.xsd"));
Validator validator = schema.newValidator();
validator.validate(xmlFile);
Validate against local schemas
Offline XML Validation with Java
A: I had to validate an XML against XSD just one time, so I tried XMLFox. I found it to be very confusing and weird. The help instructions didn't seem to match the interface.
I ended up using LiquidXML Studio 2008 (v6) which was much easier to use and more immediately familiar (the UI is very similar to Visual Basic 2008 Express, which I use frequently). The drawback: the validation capability is not in the free version, so I had to use the 30 day trial.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/15732",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "286"
} |
Q: Google Analytics Access with C# I know that there is no official API for Google Analytics but is there a way to access Google Analytics Reports with C#?
A: I wrote a small project that lets you generate pretty much any Analytics report. It's listed on Google's Analytics API page - http://code.google.com/apis/analytics/docs/gdata/gdataArticlesCode.html
You can read about it here and get the source code - http://www.reimers.dk/blogs/jacob_reimers_weblog/archive/2009/05/09/added-google-analytics-reader-for-net.aspx
A: I emailed them asking this same question a while back and here's the response I got:
Hello,
Thank you for your email. I apologize for the delay in replying to your email. Google Analytics does not currently provide an API to access the reporting data. However, we do offer export functionality for single reports in the following formats:
*
*PDF
*Tab separated value (TSV)
*XML
*Excel (CSV)
This feature allows you to easily import report data into your favorite spreadsheet application or to process the data otherwise.
Additionally, we're unable to provide support for custom implementations of Google Analytics. For this level of support, you can contact one of our highly qualified Google Analytics Authorized Consultants for assistance with advanced needs. These partners deliver a number of professional services such as installation support, training, and advanced filter and e-commerce configurations.
For a complete list of our worldwide partners and a more detailed description of the services they offer, please go to http://www.google.com/analytics/support_partner_provided.html
For additional questions, please visit the Analytics Help Center at http://www.google.com/support/googleanalytics/?utm_id=tf. You can also find helpful tips and information by visiting the Google Analytics Help Forum at http://groups.google.com/group/analytics-help?utm_id=tr.
Sincerely,
[snip]
Analytics Support
For the latest updates as well as some helpful tips on Google Analytics, check out the Google Analytics blog at http://analytics.blogspot.com
A: I have a completed library for called GoogleAnalytics.Net that allows you to fire page views/events/transactions from within .net code.
You can download the library from it's project home page:
http://www.diaryofaninja.com/projects/details/ga-dot-net
A: Update: Google launched a Google Analytics API today.
Google Analytics Blog - API Launched
A: This guy has had some success with at least some light Analytics integration. Now I realize this isn't exactly what you're looking for, but he does mention a book and perhaps you can get in touch with him.
A: Have a look at the SilverLight Google Analytics Snippet - http://code.google.com/apis/analytics/docs/tracking/silverlightTrackingIntro.html
http://msaf.codeplex.com/wikipage?title=Google%20Analytics
Because Silverlight is C#.
A: The Google Analytics API changed recently (2012) and because of that most of the code samples changed, so the link below will be helpful for C# developers
Google Analytics API in C# -Execution of request failed: https://www.google.com/analytics/feeds/accounts/default
A: Google has created their own client lib, the Google APIs Client Library for .NET, which allows access to most of the Google APIs using .NET.
It can be found on NuGet.
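For reference, the Analytics package was published under an id along these lines; verify the current name on nuget.org, since Google has reshuffled these packages over time:
PM> Install-Package Google.Apis.Analytics.v3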
A: Yet another analytics API for C#
https://github.com/igooana/igooana
This project is aimed at C# 5 and uses async/await and dynamic extensively.
I tried to make this API as simple as possible and maximum type-safe.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/15734",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "15"
} |
Q: How do you use #define? I'm wondering about instances when it makes sense to use #define and #if statements. I've known about it for a while, but never incorporated it into my way of coding. How exactly does this affect the compilation?
Is #define the only thing that determines if the code is included when compiled? If I have #define DEBUGme as a custom symbol, the only way to exclude it from compile is to remove this #define statement?
A: #define is used to define compile-time constants that you can use with #if to include or exclude bits of code.
#define USEFOREACH
#if USEFOREACH
foreach(var item in items)
{
#else
for(int i=0; i < items.Length; ++i)
{ var item = items[i]; //take item
#endif
doSomethingWithItem(item);
}
A:
Is #define the only thing that
determines if the code is included
when compiled? If I have #define
DEBUGme as a custom symbol, the only
way to exclude it from compile is to
remove this #define statement?
You can undefine symbols as well
#if defined(DEBUG)
#undef DEBUG
#endif
A: In C# #define macros, like some of Bernard's examples, are not allowed. The only common use of #define/#ifs in C# is for adding optional debug only code. For example:
static void Main(string[] args)
{
#if DEBUG
//this only compiles if in DEBUG
Console.WriteLine("DEBUG");
#endif
#if !DEBUG
//this only compiles if not in DEBUG
Console.WriteLine("RELEASE");
#endif
//This always compiles
Console.ReadLine();
}
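A related C# alternative worth knowing is the Conditional attribute, which removes the calls themselves instead of wrapping every call site in #if blocks:
using System;
using System.Diagnostics;

static class Log
{
    // Call sites compile away entirely when DEBUG is not defined.
    [Conditional("DEBUG")]
    public static void Debug(string message)
    {
        Console.WriteLine(message);
    }
}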
A: Well, defines are used often for compile time constants and macros. This can make your code a bit faster as there are really no function calls, the output values of the macros are determined at compile time. The #if's are very useful. The most simple example that I can think of is checking for a debug build to add in some extra logging or messaging, maybe even some debugging functions. You can also check different environment variables this way.
Others with more C/C++ experience can add more I am sure.
A: I often find myself defining some things that are done repetitively in certain functions. That makes the code much shorter and thus allows a better overview.
But as always, try to find a good measure to not create a new language out of it. Might be a little hard to read for the occasional maintenance later on.
A: It's for conditional compilation, so you can include or remove bits of code based upon project attributes which tend to be:
*
*Intended platform (Windows/Linux/XB360/PS3/Iphone.... etc)
*Release or Debug (Generally logging, asserts etc are only included in a debug build)
They can also be used to disable large parts of a system quickly,
for example, during development of a game, I might define
#define PLAYSOUNDS
and then wrap the final call to play a sound in:
#ifdef PLAYSOUNDS
// Do lots of funk to play a sound
return true;
#else
return true;
#endif
So it's very easy for me to turn on and off the playing of sounds for a build. (Typically I don't play sounds when debugging because it gets in the way of my personal music :) )
The benefit is that you're not introducing a branch through adding an if statement....
A: @Ed: When using C++, there is rarely any benefit to using #define over inline functions when creating macros. The idea of "greater speed" is a misconception. With inline functions you get the same speed, but you also get type safety, and no side-effects of preprocessor "pasting" due to the fact that parameters are evaluated before the function is called (for an example, try writing the ubiquitous MAX macro, and call it like this: MAX(x++, y)... you'll see what I'm getting at).
I have never had to use #define in my C#, and I very rarely use it for anything other that platform and compiler version checking for conditional compilation in C++.
A: Perhaps the most common usees of #define in C# is to differentiate between debug/release and different platforms (for example Windows and X-Box 360 in the XNA framework).
| {
"language": "en",
"url": "https://stackoverflow.com/questions/15744",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "18"
} |
Q: How do I automate repetitive tasks post-build? I run an ASP.NET website solution with a few other projects in it. I've known that MSBuild projects are capable of this, but is it the best way? Are they easy to create? Is nAnt, CruiseControl.NET or any other solution better?
When I build the site (using Web Deployment Projects), can I automate part of the build so that it does not copy certain folders from the project into the Release folder? For instance, I have folders with local search indexes, images and other content as part of the project, but I never need to upload those when deploying.
I'm also looking toward this type of solution to automatically increment build and version numbers.
A: Here's an example of a Web Deployment Project scripting this sort of task in the .wdproj file:
<Target Name="AfterBuild">
<!-- ============================ Script Compression============================ -->
<MakeDir Directories="$(OutputPath)\compressed" />
<Exec Command="java -jar c:\yuicompressor-2.2.5\build\yuicompressor-2.2.5.jar --charset UTF-8 styles.css -o compressed/styles.css" WorkingDirectory="$(OutputPath)" />
<Exec Command="move /Y .\compressed\* .\" WorkingDirectory="$(OutputPath)" />
<RemoveDir Directories="$(OutputPath)\sql" />
<Exec Command="c:\7zip-4.4.2\7za.exe a $(ZipName).zip $(OutputPath)\*" />
</Target>
This would allow you to delete a folder.
(I suspect that if you wanted to not have the folder copy over at all, the solution file would be the place to specify that, though I haven't had to use that.)
A: MaseBase, you can use Web Deployment Projects to build and package Web Sites. We do that all the time for projects with a web application aspect. After you assign a WDP to a Web Site, you can open up the .wdproj file as plain-text XML file. At the end is a commented section of MSBuild targets that represent the sequence of events that fire during a build process.
<!-- To modify your build process, add your task inside one of the targets below and uncomment it.
Other similar extension points exist, see Microsoft.WebDeployment.targets.
<Target Name="BeforeBuild">
</Target>
<Target Name="BeforeMerge">
</Target>
<Target Name="AfterMerge">
</Target>
<Target Name="AfterBuild">
</Target>
-->
You can uncomment the targets you want (e.g. "AfterBuild") and insert the necessary tasks there to carry out your repeated post-build activities.
A: You can set the Build Action/Copy to Output Directory property on individual files (select the file and hit F4 to open the properties window) to control what happens to them during build, but not for folders. This could probably be automated with a (pre) build task if you don't want to do it manually.
Alternatively, you can exclude these folders from the project (right click and 'exclude from project'); they'll still be there ("show all files" in solution explorer), but they won't be included when building the project.
A: CruiseControl.NET solves a different problem (continuous integration) ... however, I've had great success with NAnt for specifically what you're asking. There's a learning curve, but once you get proficient you'll wonder how you ever got along w/o it.
A: In addition to @Fredrik's tip about setting project items to "Copy to Output Directory", you can also specify a post-build action in the project's properties in the Build tab and include CMD commands like copy.exe and move.exe.
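For example, a post-build event along these lines (the folder names are placeholders for your own content) copies what you need and prunes what you never deploy:
xcopy "$(ProjectDir)Content" "$(TargetDir)Content" /E /I /Y
if exist "$(TargetDir)SearchIndexes" rd /S /Q "$(TargetDir)SearchIndexes"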
A: We use FinalBuilder to automate a bunch of post build / pre build tasks. There's also a web interface so you can kick off builds (or push websites) by logging in to the web site and clicking a button.
http://www.finalbuilder.com/
A: Can't you edit the Web Deployment project's MSBuild file for it to do what you want?
| {
"language": "en",
"url": "https://stackoverflow.com/questions/15774",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: Minimize javascript HTTP calls from AjaxControlToolkit controls? I love the ease that the ASP.NET Ajax Control Toolkit provides to ASP.NET developers. However, I've not found it to be a scalable solution at all. I have a page with 2 date input fields. By putting a TextBox and a CalendarExtender control attached to it for each date field, I get several (11, in fact) external Javascript calls to /ScriptResource.axd?d=xxxx
Is there any way to control this? Why does it suck so much? What's a better Ajax toolkit for .NET that can provide an equal (or better) set of functionality that is more scalable, straightforward and completely customizable? NOT looking to reinvent the wheel here.
A: I always preferred to write my Ajax calls in javascript using JQuery or Prototype. ASP.NET Ajax Toolkit does make things easier, but it never seems to do so elegantly.
I personally would make a new Calendar Controller. This way you can control the AJAX (using JQuery/Prototype) calls that are being made.
A: ASP.NET AJAX allows you to register web services with the ScriptManager which will create JavaScript proxies for you to call. See http://msdn.microsoft.com/en-us/library/bb515101.aspx.
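On the original complaint about the number of ScriptResource.axd requests: if you can move to .NET 3.5 SP1, the ScriptManager's script-combining feature can merge script references into a single request. A rough sketch (the two entries shown are framework scripts; the resource names for toolkit scripts vary, so verify them for your toolkit version):
<asp:ScriptManager ID="ScriptManager1" runat="server">
    <CompositeScript>
        <Scripts>
            <asp:ScriptReference Name="MicrosoftAjax.js" />
            <asp:ScriptReference Name="MicrosoftAjaxWebForms.js" />
        </Scripts>
    </CompositeScript>
</asp:ScriptManager>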
| {
"language": "en",
"url": "https://stackoverflow.com/questions/15788",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: How do I validate xml against a DTD file in Python I need to validate an XML string (and not a file)
against a DTD description file.
How can that be done in python?
A: from the examples directory in the libxml2 python bindings:
#!/usr/bin/python -u
import libxml2
import sys
# Memory debug specific
libxml2.debugMemory(1)
# contents of test.dtd (loaded from the file below):
#   <!ELEMENT foo EMPTY>
instance="""<?xml version="1.0"?>
<foo></foo>"""
dtd = libxml2.parseDTD(None, 'test.dtd')
ctxt = libxml2.newValidCtxt()
doc = libxml2.parseDoc(instance)
ret = doc.validateDtd(ctxt, dtd)
if ret != 1:
print "error doing DTD validation"
sys.exit(1)
doc.freeDoc()
dtd.freeDtd()
del dtd
del ctxt
A: Another good option is lxml's validation which I find quite pleasant to use.
A simple example taken from the lxml site:
from StringIO import StringIO
from lxml import etree
dtd = etree.DTD(StringIO("""<!ELEMENT foo EMPTY>"""))
root = etree.XML("<foo/>")
print(dtd.validate(root))
# True
root = etree.XML("<foo>bar</foo>")
print(dtd.validate(root))
# False
print(dtd.error_log.filter_from_errors())
# <string>:1:0:ERROR:VALID:DTD_NOT_EMPTY: Element foo was declared EMPTY this one has content
| {
"language": "en",
"url": "https://stackoverflow.com/questions/15798",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "31"
} |
Q: Why are my auto-run applications acting weird on Vista? The product we are working on allows the user to easily set it up to run automatically whenever the computer is started. This is helpful because the product is part of the basic work environment of most of our users.
This feature was implemented not so long ago and for a while all was well, but when we started testing this feature on Vista the product started behaving really weird on startup. Specifically, our product makes use of another product (let's call it X) that it launches whenever it needs its services. The actual problem is that whenever X is launched immediately after log-on, it crashes or reports critical errors related to disk access (this happens even when X is launched directly - not through our product).
This happens whenever we run our product by registering it in the "Run" key in the registry or place a shortcut to it in the "Startup" folder inside the "Start Menu", even when we put a delay of ~20 seconds before actually starting to run. When we changed the delay to 70 seconds, all is well.
We tried to reproduce the problem by launching our product manually immediately after logon (by double-clicking on a shortcut placed on the desktop) but to no avail.
Now how is it possible that applications that run normally a minute after logon report such hard errors when starting immediately after logon?
A: This is the effect of a new feature in Vista called "Boxing":
Windows has several mechanisms that allow the user/admin to set up applications to automatically run when windows starts. This feature is mostly used for one of these purposes:
1. Programs that are part of the basic work environment of the user, such that the first action the user would usually take when starting the computer is to start them.
2. All sorts of background "agents" - skype, messenger, winamp etc.
When too many (or too heavy) programs are registered to run on startup the end result is that the user can't actually do anything for the first few seconds/minutes after login, which can be really annoying. In comes Vista's "Boxing" feature:
Briefly, Vista forces all programs invoked through the Run key to operate at low priority for the first 60 seconds after login. This affects both I/O priority (which is set to Very Low) and CPU priority. Very Low priority I/O requests do not pass through the file cache, but go directly to disk. Thus, they are much slower than regular I/O.
The length of the boxing period is set by the registry value:
"HKLM\Software\Microsoft\Windows\CurrentVersion\Explorer\Advanced\DelayedApps\Delay_Sec".
For a more detailed explanation see here and here
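If shortening or zeroing the boxing window is acceptable for your deployment, the value can be changed from an elevated prompt; treat the exact effect as something to verify on your target build of Vista:
reg add "HKLM\Software\Microsoft\Windows\CurrentVersion\Explorer\Advanced\DelayedApps" /v Delay_Sec /t REG_DWORD /d 0 /f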
A: The program probably needs some more info put into its properties. It needs to "Run As", instead of just running.
Maybe this application should be developed as a service, instead of a program to be launched, or you could have service that launches the program when its determined the best window of opportunity.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/15805",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
} |
Q: SSRS - Post Publishing Tasks As part of my own publishing "best practices", I tend to archive report groups and republish the "updated" reports. However, with this strategy, I lose the users associated with each report or have to re-hide reports.
Is there an automated process I can use to hide reports or add users after deploying from Visual Studio?
A: Paul Stovell posted some examples of Reporting Services automation that might get you going.
EDIT: The link to the Subversion repository has been updated and is now working
| {
"language": "en",
"url": "https://stackoverflow.com/questions/15815",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Changing the resolution of a VNC session in linux I use VNC to connect to a Linux workstation at work. At work I have a 20" monitor that runs at 1600x1200, while at home I use my laptop with its resolution of 1440x900.
If I set the vncserver to run at 1440x900 I miss out on a lot of space on my monitor, whereas if I set it to run at 1600x1200 it doesn't fit on the laptop's screen, and I have to scroll it all the time.
Is there any good way to resize a VNC session on the fly?
My VNC server is RealVNC E4.x (I don't remember the exact version) running on SuSE64.
A: Found out that the vnc4server (4.1.1) shipped with Ubuntu (10.04) is patched to also support changing the resolution on the fly via xrandr. Unfortunately the feature was hard to find because it is undocumented. So here it is...
Start the server with multiple 'geometry' instances, like:
vnc4server -geometry 1280x1024 -geometry 800x600
From a terminal in a vncviewer (with: 'allow dymanic desktop resizing' enabled) use xrandr to view the available modes:
xrandr
to change the resulution, for example use:
xrandr -s 800x600
Thats it.
A: Adding to Nathan's (accepted) answer:
I wanted to cycle through the list of resolutions but didnt see anything for it:
function vncNextRes()
{
xrandr -s $(($(xrandr | grep '^*'|sed 's@^\*\([0-9]*\).*$@\1@')+1)) > /dev/null 2>&1 || \
xrandr -s 0
}
It gets the current index, steps to the next one and cycles back to 0 on error (i.e. end)
EDIT
Modified to match a later version of xrandr ("*" is on end of line and no leading resolution identifier).
function vncNextRes()
{
xrandr -s $(($(xrandr 2>/dev/null | grep -n '\* *$'| sed 's@:.*@@')-2)) || \
xrandr -s 0
}
A: Solution by @omiday worked for me in Xvnc TigerVNC 1.1.0, so I condensed it into a single bash function vncsize x y. Use it like this: vncsize 1400 1000. It works for any VNC output name, "default" or "VNC-0".
function vncsize {
local x=$1 y=$2
local mode
if mode=$(cvt "$x" "$y" 2>/dev/null)
then
if [[ $mode =~ Modeline\ (.*)$ ]]   # regex must be unquoted, or bash matches it literally
then
local newMode=${BASH_REMATCH[1]//\"/}
local modeName=${newMode%% *}
local newSize=( ${modeName//[\"x_]/ } )
local screen=$(xrandr -q|grep connected|cut -d' ' -f1)
xrandr --newmode $newMode
xrandr --addmode "$screen" "$modeName"
xrandr --size "${newSize[0]}x${newSize[1]}" &&
return 0
else
echo "Unable to parse modeline for ($x $y) from $mode"
return 2
fi
else
echo "\`$x $y' is not a valid X Y pair"
return 1
fi
}
A: I'm running TigerVNC on my Linux server, which has basic randr support.
I just start vncserver without any -randr or multiple -geometry options.
When I run xrandr in a terminal, it displays all the available screen resolutions:
bash> xrandr
SZ: Pixels Physical Refresh
0 1920 x 1200 ( 271mm x 203mm ) 60
1 1920 x 1080 ( 271mm x 203mm ) 60
2 1600 x 1200 ( 271mm x 203mm ) 60
3 1680 x 1050 ( 271mm x 203mm ) 60
4 1400 x 1050 ( 271mm x 203mm ) 60
5 1360 x 768 ( 271mm x 203mm ) 60
6 1280 x 1024 ( 271mm x 203mm ) 60
7 1280 x 960 ( 271mm x 203mm ) 60
8 1280 x 800 ( 271mm x 203mm ) 60
9 1280 x 720 ( 271mm x 203mm ) 60
*10 1024 x 768 ( 271mm x 203mm ) *60
11 800 x 600 ( 271mm x 203mm ) 60
12 640 x 480 ( 271mm x 203mm ) 60
Current rotation - normal
Current reflection - none
Rotations possible - normal
Reflections possible - none
I can then easily switch to another resolution (e.g. switch to 1360x768):
bash> xrandr -s 5
I'm using TightVnc viewer as the client and it automatically adapts to the new resolution.
A: As this question comes up first on Google I thought I'd share a solution using TigerVNC which is the default these days.
xrandr allows selecting the display modes (a.k.a. resolutions); however, due to the modelines being hard-coded, any additional modeline such as "2560x1600" or "1600x900" would need to be added into the code. I think the developers who wrote the code are much smarter and the hard-coded list is just a sample of values. It leads to the conclusion that there must be a way to add custom modelines, and man xrandr confirms it.
With that background, if the goal is to share a VNC session between two computers with the above resolutions, and assuming that the VNC server is the computer with the resolution of "1600x900":
*
*Start a VNC session with a geometry matching the physical display:
$ vncserver -geometry 1600x900 :1
*On the "2560x1600" computer start the VNC viewer (I prefer
Remmina) and connect to the remote VNC
session:
host:5901
*Once inside the VNC session start up a terminal window.
*Confirm that the new geometry is available in the VNC session:
$ xrandr
Screen 0: minimum 32 x 32, current 1600 x 900, maximum 32768 x 32768
VNC-0 connected 1600x900+0+0 0mm x 0mm
1600x900 60.00 +
1920x1200 60.00
1920x1080 60.00
1600x1200 60.00
1680x1050 60.00
1400x1050 60.00
1360x768 60.00
1280x1024 60.00
1280x960 60.00
1280x800 60.00
1280x720 60.00
1024x768 60.00
800x600 60.00
640x480 60.00
and you'll notice the screen being quite small.
*List the modeline (see the xrandr article in the ArchLinux wiki) for the "2560x1600" resolution:
$ cvt 2560 1600
# 2560x1600 59.99 Hz (CVT 4.10MA) hsync: 99.46 kHz; pclk: 348.50 MHz
Modeline "2560x1600_60.00" 348.50 2560 2760 3032 3504 1600 1603 1609 1658 -hsync +vsync
or if the monitor is old get the GTF timings:
$ gtf 2560 1600 60
# 2560x1600 @ 60.00 Hz (GTF) hsync: 99.36 kHz; pclk: 348.16 MHz
Modeline "2560x1600_60.00" 348.16 2560 2752 3032 3504 1600 1601 1604 1656 -HSync +Vsync
*Add the new modeline to the current VNC session:
$ xrandr --newmode "2560x1600_60.00" 348.16 2560 2752 3032 3504 1600 1601 1604 1656 -HSync +Vsync
*In the above xrandr output look for the display name on the second line:
VNC-0 connected 1600x900+0+0 0mm x 0mm
*Bind the new modeline to the current VNC virtual monitor:
$ xrandr --addmode VNC-0 "2560x1600_60.00"
*Use it:
$ xrandr -s "2560x1600_60.00"
A: I think your best bet is to run the VNC server with a different geometry on a different port. I would try, based on the man page:
$vncserver :0 -geometry 1600x1200
$vncserver :1 -geometry 1440x900
Then you can connect from work to one port and from home to another.
Edit: Then use xmove to move windows between the two x-servers.
A: Interestingly, no one has answered this. In TigerVNC, when you are logged into the session, go to System > Preferences > Display from the top menu bar (I was using CentOS as my remote server). Click on the resolution drop-down; there are various settings available, including 1080p. Select the one that you like. It will change on the fly.
Make sure you Apply the new setting when a dialog is prompted. Otherwise it will revert back to the previous setting just like in Windows
A: Perhaps the most ignorant answer I've posted but here goes: Use TigerVNC client/viewer and check 'Resize remote session to local window' under Screen tab of options.
I don't know what the $%#@ TigerVNC client tells remote vncserver or xrandr or Xvnc or gnome or ... but it resizes when I change the TigerVNC Client window.
My setup:
*
*Tiger VNC Server running on CentOS 6. Hosting GNOME desktop. (Works with RHEL 6.6 too)
*Windows some version with Tiger VNC Client.
With this the resolution changes to fit the size of the client window no matter what it is, and it's not zooming, it's actual resolution change (I can see the new resolution in xrandr output).
I tried all I could to add a new resolution to xrandr, but to no avail; I always ended up with the 'xrandr: Failed to get size of gamma for output default' error.
Versions with which it works for me right now (although I've not had issues with ANY versions in the past, I just install the latest using yum install gnome-* tigervnc-server and works fine):
OS: RHEL 6.6 (Santiago)
VNC Server:
Name : tigervnc-server
Arch : x86_64
Version : 1.1.0
Release : 16.el6
# May be this is relevant..
$ xrandr --version
xrandr program version 1.4.0
Server reports RandR version 1.4
$
# I start the server using vncserver -geometry 800x600
# Xvnc is started by vncserver with following args:
/usr/bin/Xvnc :1 -desktop plabb13.sgdcelab.sabre.com:1 (sg219898) -auth /login/sg219898/.Xauthority
-geometry 800x600 -rfbwait 30000 -rfbauth /login/sg219898/.vnc/passwd -rfbport 5901 -fp catalogue:/e
tc/X11/fontpath.d -pn
# I'm running GNOME (installed using sudo yum install gnome-*)
Name : gnome-desktop
Arch : x86_64
Version : 2.28.2
Release : 11.el6
Name : gnome-session
Arch : x86_64
Version : 2.28.0
Release : 22.el6
Connect using Tiger 32-bit VNC Client v1.3.1 on Windows 7.
A: Real VNC server 4.4 includes support for Xrandr, which allows resizing the VNC. Start the server with:
vncserver -geometry 1600x1200 -randr 1600x1200,1440x900,1024x768
Then resize with:
xrandr -s 1600x1200
xrandr -s 1440x900
xrandr -s 1024x768
A: Guys this is really simple.
Log in via SSH to your Pi and execute:
vncserver -geometry 1200x1600
This will generate a new session :1
connect with your vnc client at ipaddress:1
That's it.
A: I'm not sure about linux, but under windows, tightvnc will detect and adapt to resolution changes on the server.
So you should be able to VNC into the workstation, do the equivalent of right-click on desktop, properties, set resolution to whatever, and have your client vnc window resize itself accordingly.
A:
On the other hand, if there's a way to
move an existing window from one
X-server to another, that might solve
the problem.
I think you can use xmove to move windows between two separate x-servers. So if it works, this should at least give you a way to do what you want albeit not as easily as changing the resolution.
A: As far as I know there's no way to change the client's resolution just using VNC, as it is just a "monitor mirroring" application.
TightVNC however (which is a VNC client and server application) can resize the screen on the client side, i.e. making everything a little smaller (similar to image resizing techniques in graphics programs). That should work if you don't use too small font sizes. VNC should theoretically be compatible between different VNC applications.
A: I have a simple idea, something like this:
#!/bin/sh
# Remember the current resolution (e.g. "1920x1080") before starting
WIDTH=$(xrandr --current | grep current | awk '{print $8}')
HEIGHT=$(xrandr --current | grep current | awk '{print $10}' | sed 's/,//g')
RES="${WIDTH}x${HEIGHT}"
# Play The Game
# Finish The Game with Lower Resolution
xrandr -s "$RES"
Well, I need a better solution that works for all display devices under Linux and similar OSes.
A: I think that depends on your window manager.
I'm a windows user, so this might be a wrong guess, but: Isn't there something called X-Server running on linux machines - at least on ones that might be interesting targets for VNC - that you can connect to with "X-Clients"?
VNC just takes everything that's on the screen and "tunnels it through your network". If I'm not totally wrong then the "X" protocol should give you the chance to use your client's desktop resolution.
Give X-Server on Wikipedia a try, that might give you a rough overview.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/15816",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "172"
} |
Q: Reading Excel files from C# Is there a free or open source library to read Excel files (.xls) directly from a C# program?
It does not need to be too fancy, just to select a worksheet and read the data as strings. So far, I've been using Export to Unicode text function of Excel, and parsing the resulting (tab-delimited) file, but I'd like to eliminate the manual step.
A: If it is just simple data contained in the Excel file you can read the data via ADO.NET. See the connection strings listed here:
http://www.connectionstrings.com/?carrier=excel2007
or
http://www.connectionstrings.com/?carrier=excel
-Ryan
Update: then you can just read the worksheet via something like select * from [Sheet1$]
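A minimal sketch of that approach, assuming a .xls file read through the Jet provider (the file path and sheet name are placeholders):
using System;
using System.Data;
using System.Data.OleDb;

string connStr = @"Provider=Microsoft.Jet.OLEDB.4.0;Data Source=C:\data\book.xls;" +
                 "Extended Properties=\"Excel 8.0;HDR=Yes\"";
using (var conn = new OleDbConnection(connStr))
using (var da = new OleDbDataAdapter("SELECT * FROM [Sheet1$]", conn))
{
    var dt = new DataTable();
    da.Fill(dt); // Fill opens and closes the connection itself
    foreach (DataRow row in dt.Rows)
        Console.WriteLine(row[0]);
}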
A: I did a lot of reading from Excel files in C# a while ago, and we used two approaches:
*
*The COM API, where you access Excel's objects directly and manipulate them through methods and properties
*The ODBC driver that allows to use Excel like a database.
The latter approach was much faster: reading a big table with 20 columns and 200 lines would take 30 seconds via COM, and half a second via ODBC. So I would recommend the database approach if all you need is the data.
Cheers,
Carl
A: ExcelMapper is an open source tool (http://code.google.com/p/excelmapper/) that can be used to read Excel worksheets as Strongly Typed Objects. It supports both xls and xlsx formats.
A: I want to show a simple method to read xls/xlsx files with .NET. I hope that the following will be helpful for you.
private DataTable ReadExcelToTable(string path)
{
//Connection String
string connstring = "Provider=Microsoft.ACE.OLEDB.12.0;Data Source=" + path + ";Extended Properties='Excel 8.0;HDR=NO;IMEX=1';";
//the same thing using the Jet provider:
//string connstring = "Provider=Microsoft.Jet.OLEDB.4.0;Data Source=" + path + ";Extended Properties='Excel 8.0;HDR=NO;IMEX=1';";
using(OleDbConnection conn = new OleDbConnection(connstring))
{
conn.Open();
//Get All Sheets Name
DataTable sheetsName = conn.GetOleDbSchemaTable(OleDbSchemaGuid.Tables,new object[]{null,null,null,"Table"});
//Get the First Sheet Name
string firstSheetName = sheetsName.Rows[0][2].ToString();
//Query String
string sql = string.Format("SELECT * FROM [{0}]",firstSheetName);
OleDbDataAdapter ada =new OleDbDataAdapter(sql,connstring);
DataSet set = new DataSet();
ada.Fill(set);
return set.Tables[0];
}
}
Code is from article: http://www.c-sharpcorner.com/uploadfile/d2dcfc/read-excel-file-with-net/. You can get more details from it.
A: Not free, but with the latest Office there's a very nice automation .NET API (there has been an API for a long while, but it was nasty COM). You can do everything you want / need in code, all while the Office app remains a hidden background process.
A: Forgive me if I am off-base here, but isn't this what the Office PIA's are for?
A: Lately, partly to get better at LINQ.... I've been using Excel's automation API to save the file as XML Spreadsheet and then process that file using LINQ to XML.
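Roughly, that looks like the following hedged sketch; the namespace URI is the standard SpreadsheetML 2003 one and the file name is a placeholder:
using System.Linq;
using System.Xml.Linq;

XNamespace ss = "urn:schemas-microsoft-com:office:spreadsheet";
XDocument doc = XDocument.Load(@"C:\data\book.xml");

// One string list per row, one entry per cell
var rows = doc.Descendants(ss + "Row")
              .Select(r => r.Elements(ss + "Cell")
                            .Select(c => (string)c.Element(ss + "Data"))
                            .ToList());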
A: SpreadsheetGear for .NET is an Excel compatible spreadsheet component for .NET. You can see what our customers say about performance on the right hand side of our product page. You can try it yourself with the free, fully-functional evaluation.
A: SmartXLS is another excel spreadsheet component which support most features of excel Charts,formulas engines, and can read/write the excel2007 openxml format.
A: The .NET component Excel Reader .NET may satisfy your requirement. It's good enought for reading XLSX and XLS files. So try it from:
http://www.devtriogroup.com/ExcelReader
A: The ADO.NET approach is quick and easy, but it has a few quirks which you should be aware of, especially regarding how DataTypes are handled.
This excellent article will help you avoid some common pitfalls:
http://blog.lab49.com/archives/196
A: This is what I used for Excel 2003:
Dictionary<string, string> props = new Dictionary<string, string>();
props["Provider"] = "Microsoft.Jet.OLEDB.4.0";
props["Data Source"] = repFile;
props["Extended Properties"] = "Excel 8.0";
StringBuilder sb = new StringBuilder();
foreach (KeyValuePair<string, string> prop in props)
{
sb.Append(prop.Key);
sb.Append('=');
sb.Append(prop.Value);
sb.Append(';');
}
string properties = sb.ToString();
using (OleDbConnection conn = new OleDbConnection(properties))
{
conn.Open();
DataSet ds = new DataSet();
string columns = String.Join(",", columnNames.ToArray());
using (OleDbDataAdapter da = new OleDbDataAdapter(
"SELECT " + columns + " FROM [" + worksheet + "$]", conn))
{
DataTable dt = new DataTable(tableName);
da.Fill(dt);
ds.Tables.Add(dt);
}
}
A: How about Excel Data Reader?
http://exceldatareader.codeplex.com/
I've used it in anger, in a production environment, to pull large amounts of data from a variety of Excel files into SQL Server Compact. It works very well and it's rather robust.
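For reference, a hedged sketch of its basic usage (API names as documented on the project page at the time; the path is a placeholder):
using System;
using System.IO;
using Excel; // the project's Excel.dll assembly

using (FileStream stream = File.Open(@"C:\data\book.xls", FileMode.Open, FileAccess.Read))
using (IExcelDataReader reader = ExcelReaderFactory.CreateBinaryReader(stream))
{
    var ds = reader.AsDataSet(); // one DataTable per worksheet
    Console.WriteLine(ds.Tables[0].Rows.Count);
}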
A: I recommend the FileHelpers Library which is a free and easy to use .NET library to import/export data from EXCEL, fixed length or delimited records in files, strings or streams + More.
The Excel Data Link Documentation Section
http://filehelpers.sourceforge.net/example_exceldatalink.html
A: You can try using this open source solution that makes dealing with Excel a lot more cleaner.
http://excelwrapperdotnet.codeplex.com/
A: SpreadsheetGear is awesome. Yes it's an expense, but compared to twiddling with these other solutions, it's worth the cost. It is fast, reliable, very comprehensive, and I have to say after using this product in my fulltime software job for over a year and a half, their customer support is fantastic!
A: The solution that we used, needed to:
*
*Allow Reading/Writing of Excel produced files
*Be Fast in performance (not like using COMs)
*Be MS Office Independent (needed to be usable without clients having MS Office installed)
*Be Free or Open Source (but actively developed)
There are several choices, but we found NPoi (.NET port of Java's long existing Poi open source project) to be the best:
http://npoi.codeplex.com/
It also allows working with .doc and .ppt file formats
A: If it's just tabular data, I would recommend file data helpers by Marcos Melli, which can be downloaded here.
A: Late to the party, but I'm a fan of LinqToExcel
A: Here's some code I wrote in C# using .NET 1.1 a few years ago. Not sure if this would be exactly what you need (and may not be my best code :)).
using System;
using System.Data;
using System.Data.OleDb;
namespace ExportExcelToAccess
{
/// <summary>
/// Summary description for ExcelHelper.
/// </summary>
public sealed class ExcelHelper
{
private const string CONNECTION_STRING = "Provider=Microsoft.Jet.OLEDB.4.0;Data Source=<FILENAME>;Extended Properties=\"Excel 8.0;HDR=Yes;\";";
public static DataTable GetDataTableFromExcelFile(string fullFileName, ref string sheetName)
{
OleDbConnection objConnection = new OleDbConnection();
objConnection = new OleDbConnection(CONNECTION_STRING.Replace("<FILENAME>", fullFileName));
DataSet dsImport = new DataSet();
try
{
objConnection.Open();
DataTable dtSchema = objConnection.GetOleDbSchemaTable(OleDbSchemaGuid.Tables, null);
if( (null == dtSchema) || ( dtSchema.Rows.Count <= 0 ) )
{
//raise exception if needed
}
if( (null != sheetName) && (0 != sheetName.Length))
{
if( !CheckIfSheetNameExists(sheetName, dtSchema) )
{
//raise exception if needed
}
}
else
{
//Reading the first sheet name from the Excel file.
sheetName = dtSchema.Rows[0]["TABLE_NAME"].ToString();
}
new OleDbDataAdapter("SELECT * FROM [" + sheetName + "]", objConnection ).Fill(dsImport);
}
catch (Exception)
{
//raise exception if needed
}
finally
{
// Clean up.
if(objConnection != null)
{
objConnection.Close();
objConnection.Dispose();
}
}
return dsImport.Tables[0];
#region Commented code for importing data from CSV file.
// string strConnectionString = "Provider=Microsoft.Jet.OLEDB.4.0;" +"Data Source=" + System.IO.Path.GetDirectoryName(fullFileName) +";" +"Extended Properties=\"Text;HDR=YES;FMT=Delimited\"";
//
// System.Data.OleDb.OleDbConnection conText = new System.Data.OleDb.OleDbConnection(strConnectionString);
// new System.Data.OleDb.OleDbDataAdapter("SELECT * FROM " + System.IO.Path.GetFileName(fullFileName).Replace(".", "#"), conText).Fill(dsImport);
// return dsImport.Tables[0];
#endregion
}
/// <summary>
/// This method checks if the user entered sheetName exists in the Schema Table
/// </summary>
/// <param name="sheetName">Sheet name to be verified</param>
/// <param name="dtSchema">schema table </param>
private static bool CheckIfSheetNameExists(string sheetName, DataTable dtSchema)
{
foreach(DataRow dataRow in dtSchema.Rows)
{
if( sheetName == dataRow["TABLE_NAME"].ToString() )
{
return true;
}
}
return false;
}
}
}
A: var fileName = string.Format("{0}\\fileNameHere", Directory.GetCurrentDirectory());
var connectionString = string.Format("Provider=Microsoft.Jet.OLEDB.4.0; data source={0}; Extended Properties=Excel 8.0;", fileName);
var adapter = new OleDbDataAdapter("SELECT * FROM [workSheetNameHere$]", connectionString);
var ds = new DataSet();
adapter.Fill(ds, "anyNameHere");
DataTable data = ds.Tables["anyNameHere"];
This is what I usually use. It is a little different because I usually stick an AsEnumerable() at the end of the tables:
var data = ds.Tables["anyNameHere"].AsEnumerable();
as this lets me use LINQ to search and build structs from the fields.
var query = data.Where(x => x.Field<string>("phoneNumber") != string.Empty).Select(x =>
new MyContact
{
firstName= x.Field<string>("First Name"),
lastName = x.Field<string>("Last Name"),
phoneNumber =x.Field<string>("Phone Number"),
});
A: Koogra is an open-source component written in C# that reads and writes Excel files.
A: While you did specifically ask for .xls, implying the older file formats, for the OpenXML formats (e.g. xlsx) I highly recommend the OpenXML SDK (http://msdn.microsoft.com/en-us/library/bb448854.aspx)
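A hedged sketch of reading cell values with it; the file path is a placeholder, and the main gotcha is that string cells usually point into a shared-string table stored in a separate part:
using System;
using System.Linq;
using DocumentFormat.OpenXml.Packaging;
using DocumentFormat.OpenXml.Spreadsheet;

using (var doc = SpreadsheetDocument.Open(@"C:\data\book.xlsx", false))
{
    WorkbookPart wb = doc.WorkbookPart;
    WorksheetPart ws = wb.WorksheetParts.First();
    SharedStringTablePart shared = wb.SharedStringTablePart;

    foreach (Cell cell in ws.Worksheet.Descendants<Cell>())
    {
        string value = cell.CellValue == null ? "" : cell.CellValue.Text;
        // Shared strings store an index into the shared-string table
        if (cell.DataType != null && cell.DataType.Value == CellValues.SharedString)
            value = shared.SharedStringTable.ElementAt(int.Parse(value)).InnerText;
        Console.WriteLine("{0} = {1}", cell.CellReference, value);
    }
}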
A: you could write an excel spreadsheet that loads a given excel spreadsheet and saves it as csv (rather than doing it manually).
then you could automate that from c#.
and once its in csv, the c# program can grok that.
(also, if someone asks you to program in excel, it's best to pretend you don't know how)
(edit: ah yes, rob and ryan are both right)
A: I know that people have been making an Excel "extension" for this purpose.
You more or less make a button in Excel that says "Export to Program X", and then export and send off the data in a format the program can read.
http://msdn.microsoft.com/en-us/library/ms186213.aspx should be a good place to start.
Good luck
A: Just did a quick demo project that required managing some excel files. The .NET component from GemBox software was adequate for my needs. It has a free version with a few limitations.
http://www.gemboxsoftware.com/GBSpreadsheet.htm
A: Excel Package is an open-source (GPL) component for reading/writing Excel 2007 files. I used it on a small project, and the API is straightforward. Works with XLSX only (Excel 2007), not with XLS.
The source code also seems well-organized and easy to get around (if you need to expand functionality or fix minor issues as I did).
At first, I tried the ADO.Net (Excel connection string) approach, but it was fraught with nasty hacks -- for instance if second row contains a number, it will return ints for all fields in the column below and quietly drop any data that doesn't fit.
A: Take.io Spreadsheet will do this work for you, and at no charge. Just take a look at this.
A: We use ClosedXML in rather large systems.
*
*Free
*Easy to install
*Straight forward coding
*Very responsive support
*Developer team is extremly open to new suggestions. Often new features and bug fixes are implemented within the same week
A: I just used ExcelLibrary to load an .xls spreadsheet into a DataSet. Worked great for me.
A: Excel Data Reader is the way to go!
It's Open Source, at http://exceldatareader.codeplex.com/ and actively developed.
We've been using it for reading tabular (and sometimes not so tabular) worksheets for a couple of years now (in a financial application).
Works like a charm to read unit test data from human-readable sheets.
Just avoid the feature of trying to return DateTime's, as, for Excel, DateTime's are just double numbers.
A: If you have multiple tables in the same worksheet you can give each table an object name and read the table using the OleDb method as shown here: http://vbktech.wordpress.com/2011/05/10/c-net-reading-and-writing-to-multiple-tables-in-the-same-microsoft-excel-worksheet/
| {
"language": "en",
"url": "https://stackoverflow.com/questions/15828",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "232"
} |
Q: Optimizing a search algorithm in C Can the performance of this sequential search algorithm (taken from
The Practice of Programming) be improved using any of C's native utilities, e.g. if I set the i variable to be a register variable?
int lookup(char *word, char *array[])
{
    int i;
    for (i = 0; array[i] != NULL; i++)
        if (strcmp(word, array[i]) == 0)
            return i;
    return -1;
}
A: Yes, but only very slightly. A much bigger performance improvement can be achieved by using better algorithms (for example keeping the list sorted and doing a binary search).
In general optimizing a given algorithm only gets you so far. Choosing a better algorithm (even if it's not completely optimized) can give you a considerable (order of magnitude) performance improvement.
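For example, if you keep the array sorted (say with qsort up front), the lookup itself can use the standard library's bsearch. A hedged sketch, assuming you track the array length:
#include <stdlib.h>
#include <string.h>

/* key is the word itself; elem points at one char* slot in the array */
static int cmp_word(const void *key, const void *elem)
{
    return strcmp((const char *)key, *(const char * const *)elem);
}

int lookup_sorted(const char *word, char *array[], size_t n)
{
    char **hit = bsearch(word, array, n, sizeof *array, cmp_word);
    return hit ? (int)(hit - array) : -1;
}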
A: I think it will not make much of a difference. The compiler will already optimize it in that direction.
Besides, the variable i does not have much impact: word stays constant throughout the function, and the rest is too large to fit in any register. It is only a matter of how large the cache is and whether the whole array might fit in there.
String comparisons are rather expensive computationally.
Can you perhaps use some kind of hashing for the array before searching?
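As a hedged sketch of that idea: build a small open-addressing table over the NULL-terminated array once, then look words up in O(1) on average. TABLE_SIZE, build_table and lookup_hashed are made-up names, and djb2 is just one hash choice:
#include <string.h>

#define TABLE_SIZE 1024 /* power of two, comfortably larger than the array */

static int slot[TABLE_SIZE]; /* index into array, -1 = empty */

static unsigned hash(const char *s) /* djb2 */
{
    unsigned h = 5381;
    while (*s)
        h = h * 33 + (unsigned char)*s++;
    return h;
}

void build_table(char *array[])
{
    int i;
    memset(slot, -1, sizeof slot); /* all-ones bytes == -1 for ints */
    for (i = 0; array[i] != NULL; i++) {
        unsigned h = hash(array[i]) & (TABLE_SIZE - 1);
        while (slot[h] != -1)              /* linear probing */
            h = (h + 1) & (TABLE_SIZE - 1);
        slot[h] = i;
    }
}

int lookup_hashed(const char *word, char *array[])
{
    unsigned h = hash(word) & (TABLE_SIZE - 1);
    while (slot[h] != -1) {
        if (strcmp(word, array[slot[h]]) == 0)
            return slot[h];
        h = (h + 1) & (TABLE_SIZE - 1);
    }
    return -1;
}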
A: There is a well-known technique called the sentinel method.
To use the sentinel method, you must know the length of "array[]".
You can remove the "array[i] != NULL" comparison by using a sentinel.
int lookup(char *word, char *array[], int array_len)
{
    int i = 0;
    array[array_len] = word;   /* plant the sentinel */
    for (;; ++i)
        if (strcmp(word, array[i]) == 0)
            break;
    array[array_len] = NULL;   /* restore the terminator */
    return (i != array_len) ? i : -1;
}
A: If you're reading TPOP, you will next see how they make this search many times faster with different data structures and algorithms.
But you can make things a bit faster by replacing things like
for (i = 0; i < n; ++i)
foo(a[i]);
with
char **p = a;
for (i = 0; i < n; ++i)
foo(*p);
++p;
If there is a known value at the end of the array (e.g. NULL) you can eliminate the loop counter:
for (p = a; *p != NULL; ++p)
foo(*p)
Good luck, that's a great book!
A: To optimize that code the best bet would be to rewrite the strcmp routine since you are only checking for equality and don't need to evaluate the entire word.
Other than that you can't do much else. You can't sort as it appears you are looking for text within a larger text. Binary search won't work either since the text is unlikely to be sorted.
My 2p (C pseudocode):
wrd_end = wrd_ptr + wrd_len;
arr_end = arr_ptr + arr_len - wrd_len;
while (arr_ptr < arr_end)
{
    wrd_beg = wrd_ptr; arr_beg = arr_ptr;
    while (*wrd_ptr == *arr_ptr)
    {
        wrd_ptr++; arr_ptr++;
        if (wrd_ptr == wrd_end)
            return arr_beg;
    }
    wrd_ptr = wrd_beg;
    arr_ptr = arr_beg + 1;
}
A: Realistically, setting i to be a register variable won't do anything that the compiler wouldn't do already.
If you are willing to spend some time upfront preprocessing the reference array, you should google "The World's Fastest Scrabble Program" and implement that. Spoiler: it's a DAG optimized for character lookups.
A: Mark Harrison: Your for loop will never terminate! (++p is indented, but is not actually within the for :-)
Also, switching between pointers and indexing will generally have no effect on performance, nor will adding register keywords (as mat already mentions) -- the compiler is smart enough to apply these transformations where appropriate, and if you tell it enough about your cpu arch, it will do a better job of these than manual pseudo-micro-optimizations.
A: A faster way to match strings would be to store them Pascal style. If you don't need more than 255 characters per string, store them roughly like this, with the count in the first byte:
char s[] = "\x05Hello";
Then you can do:
for(i=0; i<len; ++i) {
s_len = strings[i][0];
if(
s_len == match_len
&& strings[i][s_len] == match[s_len-1]
&& 0 == memcmp(strings[i]+1, match, s_len-1)
) {
return 1;
}
}
And to get really fast, add memory prefetch hints for string start + 64, + 128 and the start of the next string. But that's just crazy. :-)
A: Another fast way to do it is to get your compiler to use an SSE2-optimized memcmp. Use fixed-length char arrays and align so the string starts on a 64-byte alignment. Then I believe you can get the good memcmp functions if you pass const char match[64] instead of const char *match into the function, or strncpy match into a 64,128,256,whatever byte array.
Thinking a bit more about this, these SSE2 match functions might be part of packages like Intel's and AMD's accelerator libraries. Check them out.
A: /* it does not get any quicker than this */
int lookup(char *word, char *array[])
{
    int i;
    for (i = 0; *array != NULL; i++, array++)
        if (strcmp(word, *array) == 0)
            return i;
    return -1;
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/15838",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9"
} |
Q: Refactoring away labeled loops After I was convinced that labeled breaks/continues are a total "nono" over here, I need help to remove the label out of my code.
I have a square matrix and a vector that has the same length. The vector has already some values in it an depending on the values in the matrix the vector is changed in the loop.
I hope, the code-fragment is basically understandable…
vectorLoop:
for( int idx = 0; idx < vectorLength; idx++) {
if( conditionAtVectorPosition( v, idx ) ) continue vectorLoop;
matrixLoop:
for( rowIdx = 0; rowIdx < n; rowIdx++ ) {
if( anotherConditionAtVector( v, rowIdx ) ) continue matrixLoop;
if( conditionAtMatrixRowCol( m, rowIdx, idx ) ) continue vectorLoop;
}
setValueInVector( v, idx );
}
Please convince me, that there is a more readable/better version without the labels.
A: Looking at the solutions presented so far:
*
*They all look less readable than the original, in that they involve spending more code on the mechanism of the code rather than on the algorithm itself
*Some of them are broken, or were before they were edited. Most damning is the fact that people are having to think quite hard about how to write the code without labels and not break anything.
*Some come with a performance penalty of running the same test twice, which may not always be trivial. The alternative to that is storing and passing round booleans, which gets ugly.
*Refactoring the relevant part of the code into a method is effectively a no-op: it rearranges how the code is laid out in the file, but has no effect on how it's executed.
All of which makes me believe that, at least in the case of this question as phrased, the label is the correct solution and doesn't need to be refactored away. Certainly there are cases where labels are used incorrectly and should be refactored away. I just don't think it should be treated as some unbreakable rule.
A: Easily, my good man.
for( int idx = 0; idx < vectorLength; idx++) {
if( conditionAtVectorPosition( v, idx ) ) continue;
for( rowIdx = 0; rowIdx < n; rowIdx++ ) {
if( anotherConditionAtVector( v, rowIdx ) ) continue;
if( conditionAtMatrixRowCol( m, rowIdx, idx ) ) break;
}
if( !conditionAtMatrixRowCol( m, rowIdx, idx ) )
setValueInVector( v, idx );
}
EDIT: Quite correct you are Anders. I've edited my solution to take that into account as well.
A: @Patrick you are assuming calling setValueInVector( v, idx ); at the end of the second loop is OK. If the code is to be identical, logically, it must be rewritten to something like this:
for( int idx = 0; idx
A: From reading your code.
*
*I noticed you're eliminating the invalid vector positions at conditionAtVectorPosition, then you remove the invalid rows at anotherConditionAtVector.
*It seems that checking rows at anotherConditionAtVector is redundant since whatever the value of idx is, anotherConditionAtVector only depends on the row index (assuming anotherConditionAtVector has no side effects).
So you can do this:
*
*Get the valid positions first using conditionAtVectorPosition (these are the valid columns).
*Then get the valid rows using anotherConditionAtVector.
*Finally, use conditionAtMatrixRowCol using the valid columns and rows.
I hope this helps.
A: @Sadie:
They all look less readable than the original, in that they involve spending more code on the mechanism of the code rather than on the algorithm itself
Externalizing the second loop outside the algorithm is not necessarily less readable. If the method name is well chosen, it can improve readability.
Some of them are broken, or were before they were edited. Most damning is the fact that people are having to think quite hard about how to write the code without labels and not break anything.
I have a different point of view: some of them are broken because it is hard to figure out the behavior of the original algorithm.
Some come with a performance penalty of running the same test twice, which may not always be trivial. The alternative to that is storing and passing round booleans, which gets ugly.
The performance penalty is minor. However I agree that running a test twice is not a nice solution.
Refactoring the relevant part of the code into a method is effectively a no-op: it rearranges how the code is laid out in the file, but has no effect on how it's executed.
I don't see the point. Yep, it doesn't change the behavior, like... refactoring?
Certainly there are cases where labels are used incorrectly and should be refactored away. I just don't think it should be treated as some unbreakable rule.
I totally agree. But as you have pointed out, some of us have difficulties while refactoring this example. Even if the initial example is readable, it is hard to maintain.
A: @Nicolas
Some of them are broken, or were before they were edited. Most damning is the fact that
people are having to think quite hard about how to write the code without labels and not
break anything.
I have a different point of view: some of them are broken because it is hard to figure out
the behavior of the original algorithm.
I realise that it's subjective, but I don't have any trouble reading the original algorithm. It's shorter and clearer than the proposed replacements.
What all the refactorings in this thread do is emulate the behaviour of a label using other language features - as if you were porting the code to a language that didn't have labels.
A: Some come with a performance penalty of running the same test twice, which may not always be trivial. The alternative to that is storing and passing round booleans, which gets ugly.
The performance penalty is minor. However I agree that running a test twice is not a nice solution.
I believe the question was how to remove the labels, not how to optimize the algorithm. It appeared to me that the original poster was unaware of how to use 'continue' and 'break' keywords without labels, but of course, my assumptions may be wrong.
When it comes to performance, the post does not give any information about the implementation of the other functions, so for all I know they might as well be downloading the results via FTP as consisting of simple calculations inlined by the compiler.
That being said, doing the same test twice is not optimal—in theory.
EDIT: On a second thought, the example is actually not a horrible use of labels. I agree that "goto is a no-no", but not because of code like this. The use of labels here does not actually affect the readability of the code in a significant way. Of course, they are not required and can easily be omitted, but not using them simply because "using labels is bad" is not a good argument in this case. After all, removing the labels does not make the code much easier to read, as others have already commented.
A: This question was not about optimizing the algorithm - but thanks anyway ;-)
At the time I wrote it, I considered the labeled continue as a readable solution.
I asked SO a question about the convention (having the label in all caps or not) for labels in Java.
Basically every answer told me "do not use them - there is always a better way! refactor!". So I posted this question to ask for a more readable (and therefore better?) solution.
Until now, I am not completely convinced by the alternatives presented so far.
Please don't get me wrong. Labels are evil most of the time.
But in my case, the conditional tests are pretty simple and the algorithm is taken from a mathematical paper and therefore very likely to not change in the near future. So I prefer having all the relevant parts visible at once instead of having to scroll to another method named something like checkMatrixAtRow(x).
Especially with more complex mathematical algorithms, I find it pretty hard to come up with "good" function names - but I guess that is yet another question
A: I think that labelled loops are so uncommon that you can pick whatever method of labelling works for you - what you have there makes your intentions with the continues perfectly clear.
After leading the charge to suggest refactoring the loops in the original question and now seeing the code in question, I think you've got a very readable loop there.
What I had imagined was a very different chunk of code - putting the actual example up, I can see it is much cleaner than I had thought.
My apologies for the misunderstanding.
A: Does this work for you? I extracted the inner loop into a method CheckedEntireMatrix (you can name it better than me). Also, my Java is a bit rusty… but I think it gets the message across
for( int idx = 0; idx < vectorLength; idx++) {
if( conditionAtVectorPosition( v, idx )
|| !CheckedEntireMatrix(v)) continue;
setValueInVector( v, idx );
}
private bool CheckedEntireMatrix(Vector v)
{
for( rowIdx = 0; rowIdx < n; rowIdx++ ) {
if( anotherConditionAtVector( v, rowIdx ) ) continue;
if( conditionAtMatrixRowCol( m, rowIdx, idx ) ) return false;
}
return true;
}
A: Gishu has the right idea :
for( int idx = 0; idx < vectorLength; idx++) {
if (!conditionAtVectorPosition( v, idx )
&& checkedRow(v, idx))
setValueInVector( v, idx );
}
private boolean checkedRow(Vector v, int idx) {
for( rowIdx = 0; rowIdx < n; rowIdx++ ) {
if( anotherConditionAtVector( v, rowIdx ) ) continue;
if( conditionAtMatrixRowCol( m, rowIdx, idx ) ) return false;
}
return true;
}
A: I'm not too sure I understand the first continue.
I would copy Gishu and write something like (sorry if there are some mistakes):
for( int idx = 0; idx < vectorLength; idx++) {
if( !conditionAtVectorPosition( v, idx ) && CheckedEntireMatrix(v))
setValueInVector( v, idx );
}
inline bool CheckedEntireMatrix(Vector v) {
for(rowIdx = 0; rowIdx < n; rowIdx++)
if ( !anotherConditionAtVector(v,rowIdx) && conditionAtMatrixRowCol(m,rowIdx,idx) )
return false;
return true;
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/15851",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "19"
} |
Q: CSS Drop-Down Menus - "Best"? Most feature-rich? I'm in the unfortunate position of having to implement a drop-down cascading menu on a site I'm building. I'm looking for a Suckerfish-style solution that is primarily CSS-based and works on a simple set of nested ULs and LIs.
Son of Suckerfish seems like the way to go, but I don't like the way it just disappears the second you move the mouse away, as users with co-ordination difficulties will have a nightmare navigating the site (or just not bother, but since it's a corporate site there are some who will probably have to use whatever I implement).
Neat features that I've not even thought about needing are welcome, but the two main elements I'm looking for are:
*
*Multi-level using a nested UL/LI structure
*Small (possibly configurable?) delay before disappearing when the menu is "mouseout"-ed, even if it is provided by some extra JavaScript.
A: I would strongly suggest that you use superfish, the jQuery adaptation of the suckerfish menu. It has loads of features (and delay is one of them), adds some fancy animation capabilities, and degrades to the normal suckerfish menu gracefully. It also doesn't need any extra markup.
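For reference, a typical initialisation looks something like this (hedged: option names are taken from the Superfish documentation of the time, so check them against your version):
$(function () {
    $('ul.sf-menu').superfish({
        delay: 800, // ms the menu stays open after mouseout
        animation: { opacity: 'show' }
    });
});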
A: You could use jQuery. Here is an example: http://www.jqueryplugins.com/plugin/47/
A: You won't be able to get a pure CSS drop down menu with the functionality you require. You'll have to use some kind of Javascript. Either a library like JQuery that has been mentioned or by modifying the Suckerfish code to use onclick instead of onmouseover/out.
But by going an all Javascript route you could be making it easier for one group of people ("users with co-ordination difficulties") but making it difficult for others (anyone with Javascript turned off for some reason).
You may want to look into adding some alternatives - mouse controlled hover menu for those comfortable with the mouse; keyboard based control via access keys and the like for others.
A: I am using the solution implemented on Steve Gibson's site grc.com. It does everything I need, and uses no javascript. The delay thing you are looking for isn't there however, so you will probably need to add some Javascript for that.
A: Part of the coordination problem can stem from bad design. Make sure you have fairly large buttons with, if possible, overlap on all sides. Ideally a top nav button would have a drop down menu appearing centered below it (instead of left aligned). Sub-menus of the drop-down would follow a similar pattern. I've found having this level of error padding accommodates uncoordinated users, and saves you the trouble of programming in javascript.
Every site is different of course, so I present this more as an alternative 'what-if' solution.
A: I can't see a way to add delay outside of JavaScript - but if you're going to use JavaScript you may as well use a JavaScript controlled menu.
If you follow a semantically-correct nav pattern and set it up so it display's normally (e.g. static) when JavaScript is not present you should be fine with whatever you use.
It's all about your target audience - who's larger? JS-disabled or users with co-ordination difficulties? I would guess that the latter require the priority (if not for percentage use then disability laws).
A: As Lee Theobald said, drop-downs need JavaScript, and jQuery is a great choice. But on the accessibility side, take a look at "Listamatic", a great list of menus, especially the nested ones.
A: My first recommendation echoes one already made - Steve Gibson's CSS Menu. It uses no JavaScript, is about as cross-platform compliant as you're going to get, and is relatively simple to implement.
If that doesn't work, my JS-based recommendation goes to mygosuMenu. I've been using it for quite some time on all my projects prior to finding Steve's menu. It's highly configurable; style, structure, and the menu code are all separate. It's a basic HTML table you can style via CSS to your heart's content.
I've still got two sites using the latter:
*
*Christian Rock group Jesus Joshua 24:15 (be kind on the traffic, the guys are still on shared hosting...)
*Eastover Fire Department
A: For anyone coming to this old thread now I would suggest looking in to various modifications of the bootstrap drop-down menu. For example this:
http://bootsnipp.com/snippets/featured/multi-level-dropdown-menu-bs3
Good luck
| {
"language": "en",
"url": "https://stackoverflow.com/questions/15857",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "15"
} |
Q: Getting developers to use a wiki I work on a complex application where different teams work on their own modules with a degree of overlap. A while back we got a Mediawiki instance set up, partly at my prompting. I have a hard job getting people to actually use it, let alone contribute.
I can see a lot of benefit in sharing information. It may at least reduce the times we reinvent the wheel.
The wiki is not very structured, but I'm not sure that is a problem as long as you can search for what you need.
Any hints?
A: As I mentioned before, a Wiki is very unorganized.
However, if that is the only argument from your developers, then invest some effort to create a simple index page and keep it updated (either do it yourself or ask people to link their contributions to the index). That way, the Wiki might grow into a very nice and quite comprehensive collection of documentation for all your work.
A: We've been using a wiki in some form or another for a while now, but it does take a while for people to get on board. You might find that you will be the only one writing articles for some time, but bear with it, other people will come on board eventually.
If someone sends an email around that contains information related to the project then helpfully point them in the direction of the wiki - and keep doing that - they should get the hint.
We have a SharePoint portal and use the wiki from there - we customised it with our own branding so that it "looks the part" - I really feel this has helped to improve the uptake of it.
Make sure that everyone is aware that the wiki is even more informal than email.... because there will be a "fear factor" that people may think anything they add to the wiki will be over-analysed.
A: I think most of the answers so far are spot on - the more you plug away at it yourself, the larger the body of useful information will become, so slowly but surely people will naturally start to use it.
The other approach you could use is this: Suggest that every time someone asks another team member a question about the project, they should answer the question as normal, but also add the answer to a section of the Wiki. This may take a few minutes extra, but it will mean that the next time someone asks the same question (which they inevitably will), you can save time by pointing them at the Wiki. This, in turn, should help people to start using the Wiki as a first source of information and help overall up-take.
A: You can't force developers to do something they do not have an incentive of using for; unfortunately wikis, like documentation (well, in fact wikis are documentation) rarely have any "cool" value for developers. Besides, they're already deep into dev work -- could you really bother them with a wiki?
That being said, the people who pushed for the wiki (e.g., you) should be primarily responsible for updating it, and you really would have a lot of work cut out for you if you're serious about it.
You might also try the following:
*
*It's not very structured you say -- a lot of people get turned off from ill-structured (hard-to-search/browse) wikis. So maybe you can fix that first
*Maybe you can ask lead developers/project managers to populate it with things that are issues for them: things like code conventions and API design for your particular project
*Lead by example: religiously document your part of the system. Setting a precedent may encourage others to do the same
A: Sell the idea of using the wiki to the developers. You've identified some benefits, share those with the developers. If they can see that they'll get something of value out of it they'll start using it.
Example advantages from What Is a Wiki
*
*Good for writing down quick ideas or longer ones, giving you more time for formal writing and editing.
*Instantly collaborative without emailing documents, keeping the group in sync.
*Accessible from anywhere with a web connection (if you don't mind writing in web-browser text forms).
*Your archive, because every page revision is kept.
*Exciting, immediate, and empowering--everyone has a say.
A: Some tips:
Any time someone sends information by email that really should be in a wiki, make a page for that topic and add what they put in the email. Then reply "Thanks for that info, I've put it into the wiki here so that it's easier to find in the future."
Likewise, if you have information you need to share that should be in the wiki, put it there and just send an email with a link to it, rather than email people.
When you ask people for information, phrase it so that putting such documentation in the wiki should be considered the default or standard: "I searched in the wiki but I couldn't find it. Have you put that info up there yet?"
If you are the "wiki champion", make sure other people know how to use it, e.g. "Did I go through how to create a new page with you yet?"
Edit the sidebar to make sure it is relevant to your work.
Use "nav box" style templates on related pages for easier navigation.
Put something like {{Special:NewPages/5}} on the front page, or recent changes, so that people can see the activity.
Take a peek at Recent changes every few days or week, and if you notice someone adding information without being prodded, send them an email or drop by and give them a little compliment.
A: I have done some selling and even run some training sessions. I think some people are turned off by the lack of WYSIWYG editing and ability to paste formatted text from Word or Outlook. I know there are some tools to work around these, but they are still barriers.
There are some areas where the wiki is being used to log certain activities, but the people who update those are not doing anything else with it.
I will use the wiki to document my specialised area regardless as it acts as a convenient brain extension. When starting a new development I use it as a notepad for ideas that I can expand on as it progresses.
It would help if management would give it some vocal support, even if it is not made mandatory.
A:
I have a hard job getting people to actually use it, let alone contribute.
One of the easiest ways to get people to contribute to a wiki, is to actually have them provide contents in a wiki-suitable fashion, i.e. so that whatever they post using their usual channels of communications (newsgroups, mailing lists, forums, issue trackers, chat), is basically suitable for inclusion on the wiki.
So that others (users/volunteers) can simply take such contents and put them on the wiki.
This sounds more complicated than it really is, it's mostly about generalizing questions and answers, so that they are not necessarily part of a conversation, but can be comprehensible, meaningful and useful in a standalone fashion.
For example a question like the following:
how do I get git to clone a remote repository???
Can be answered like this:
Hello,
Just use git clone git://...
But questions can also be answered in a less personal style:
In order to clone a git repository, you will want to use the clone parameter to git:
git clone git://....
What I am trying to say is that most discussions in a project can and should be easily used to become documentation eventually. With this sort of mindset, your documentation can actually grow rather rapidly. You only need to get people to keep in mind that useful information should be ideally provided in a fashion that is suitable for wiki inclusion.
I have witnessed several instances where open source projects started to use this approach to some extent and while some people (largely new users) complained that answers were not very personal, the body of documentation was increasing steadily, because other people simply monitored such discussions and started to copy/paste such responses to the wiki.
Basically, this is one of the easiest ways to get people to contribute to a wiki, without requiring them to actually use it themselves, the only thing that's required of them is a shift in thinking.
A: If the developers still need to maintain 'real' documentation (such as Word documents), I see no way to meaningfully duplicate that on a wiki.
*
*It does not make sense for people to write twice
*Any duplicated data is prone to get out of sync, soon.
What my current customer has done is move all this to Wiki. So I only document once, and I do it on the Wiki.
This is okay. Working with Wiki is more tedious than with Word, but at least the doc is online and others can mix-and-match with it.
Another working solution (imho) would be to store docs alongside the source, on Subversion. But then the merging system needs to be able to cope with rich text etc. as well. I don't know if any solution for that exists (other than using HTML or LaTeX, which actually would not be bad picks).
A: Find "sticky" items (sub-3 pg. docs / diagrams / etc) something that the team seems to be creating again and again & post it on the wiki. Make sure everyone has access to the wiki and knows its there - set up a notification mechanism if possible. With some luck, the next time they have to access, rather than dig it out of version control or their machines - they should hit the wiki.
If they still don't, try to see if the team has enough slack to actually use the wiki - Subtler issues may lie beneath their reluctance.
A: Take a look at the advice at http://www.ikiw.org/ Grow your Wiki
A: Just to add to some of the excellent advice being offered here...
As a dev in a small company that does largely gov't contract work in the 6-24 month range, I find that my time is often split between development and writing status reports (right up there with writing documentation, only worse!) Having a wiki to slap down unorganized thoughts and notes as we go along has made report-writing a lot less painful (not pain-LESS, but better all the same).
Further, if you're already in the Mediawiki world, you might want to look at SemanticMediawiki. It allows you to take the organization of your data to another level by semantically tagging it. That doesn't mean a lot on its own, I know, but I can tell you (for example) that it can drastically improve the relevance of the data returned from searches. It is definitely worth a look.
A: Generally good advice here. I'd like to add:
*
*You really need a champion - someone pushing this to developers and management (without being pushy - that's a challenge!) and providing support & tutorials when possible. This person also needs to be a peer (so a fellow developer, not someone in a remote IT department) and really customer focused i.e. ready to make changes when requested.
*Speaking of changes, some people here say wikis are unstructured. I disagree. Our MediaWiki installation is structured using categories, particularly with two extensions:WarnNoCategories (to require users to add a category when saving a page) and CategoryTree to show how all the categories fit together (this can be linked to from the sidebar). I've got more tips on how we keep this low threshold, if you're interested.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/15871",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "17"
} |
Q: Read from .msg files I need to read Outlook .MSG files in .NET without using the COM API for Outlook (because it will not be installed on the machines that my app will run on). Are there any free 3rd-party libraries to do that? I want to extract the From, To, CC and BCC fields. Sent/Received date fields would be good if they are also stored in MSG files.
A: Update: I have found a 3rd-party COM library called Outlook Redemption which is working fine for me at the moment. If you use it via COM interop in .NET, don't forget to release every COM object after you are done with it; otherwise your application will crash randomly.
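In .NET the release step looks roughly like this. A hedged sketch: it assumes the Redemption interop assembly is referenced, the RDOSession/GetMessageFromMsgFile names mirror the VBA sample in the next answer, and the file path is a placeholder:
using System;
using System.Runtime.InteropServices;

RDOSession session = new RDOSession();
RDOMail msg = null;
try
{
    msg = session.GetMessageFromMsgFile(@"C:\mail\test.msg");
    Console.WriteLine(msg.Subject);
}
finally
{
    // Release every COM object you touched, in reverse order
    if (msg != null) Marshal.ReleaseComObject(msg);
    Marshal.ReleaseComObject(session);
}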
A: Here's some sample VBA code using Outlook Redemption that Huseyint found.
Public Sub ProcessMail()
Dim Sess As RDOSession
Dim myMsg As RDOMail
Dim myString As String
Set Sess = CreateObject("Redemption.RDOSession")
Set myMsg = Sess.GetMessageFromMsgFile("C:\TestHarness\kmail.msg")
myString = myMsg.Body
myMsg.Body = Replace(myString, "8750", "XXXX")
myMsg.Save
End Sub
A: Microsoft has documented this: .MSG File Format Specification
A: It's a "Structured Storage" document. I've successfully used Andrew Peace's code to read these in the past, even under .NET (using C++/CLI) - it's clean and fairly easy to understand. Basically, you need to figure out which records you need, and query for those - it gets a little bit hairy, since different versions of Outlook and different types of messages will result in different records...
A: There is code avaliable on CodeProject for reading .msg files without COM. See here.
A: You can try our (commercial) Rebex Secure Mail library. It can read Outlooks MSG format. Following code shows how:
// Load message
MailMessage message = new MailMessage();
message.Load(@"c:\Temp\t\message.msg");
// show From, To and Sent date
Console.WriteLine("From: {0}", message.From);
Console.WriteLine("To: {0}", message.To);
Console.WriteLine("Sent: {0}", message.Date.LocalTime);
// find and try to parse the first 'Received' header
MailDateTime receivedDate = null;
string received = message.Headers.GetRaw("Received");
if (received != null)
{
int lastSemicolon = received.LastIndexOf(';');
if (lastSemicolon >= 0)
{
string rawDate = received.Substring(lastSemicolon + 1);
MimeHeader header = new MimeHeader("Date", rawDate);
receivedDate = header.Value as MailDateTime;
}
}
// display the received date if available
if (receivedDate != null)
Console.WriteLine("Received: {0}", receivedDate.LocalTime);
More info on Sent and Received dates and how are they represented in the message can be found at http://forum.rebex.net/questions/816/extract-senttime-receivetime-and-time-zones
A: If you open the .MSG file in a text editor, I believe you will find that the information you are after is stored as plain text inside the file. (It is on all the messages I have checked, at least.)
It would be pretty easy to write some code to parse the file looking for lines beginning with "From:" or "To:" etc. and then extracting the information you need.
If you need the body of the email as well, that may be a bit more complicated.
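Something along these lines, heavily hedged: .msg is really a binary (structured storage) format, so this only works when the fields happen to survive as readable text, and the path is a placeholder:
using System;
using System.IO;

foreach (string line in File.ReadAllLines(@"C:\mail\test.msg"))
{
    string t = line.Trim();
    if (t.StartsWith("From:") || t.StartsWith("To:") || t.StartsWith("CC:"))
        Console.WriteLine(t);
}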
| {
"language": "en",
"url": "https://stackoverflow.com/questions/15880",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "18"
} |
Q: How to insert/replace XML tag in XmlDocument? I have an XmlDocument in Java, created with the WebLogic XmlDocument parser.
I want to replace the content of a tag in this XMLDocument with my own data, or insert the tag if it isn't there.
<customdata>
<tag1 />
<tag2>mfkdslmlfkm</tag2>
<location />
<tag3 />
</customdata>
For example I want to insert a URL in the location tag:
<location>http://something</location>
but otherwise leave the XML as is.
Currently I use a XMLCursor:
XmlObject xmlobj = XmlObject.Factory.parse(a.getCustomData(), options);
XmlCursor xmlcur = xmlobj.newCursor();
boolean found = false;
while (xmlcur.hasNextToken()) {
if (xmlcur.isStart() && "schema-location".equals(xmlcur.getName().toString())) {
xmlcur.setTextValue("http://replaced");
System.out.println("replaced");
found = true;
} else if (xmlcur.isStart() && "customdata".equals(xmlcur.getName().toString())) {
xmlcur.push();
} else if (xmlcur.isEnddoc()) {
if (!found) {
xmlcur.pop();
xmlcur.toEndToken();
xmlcur.insertElementWithText("schema-location", "http://inserted");
System.out.println("inserted");
}
}
xmlcur.toNextToken();
}
I tried to find a "quick" XQuery way to do this since the XmlDocument has an execQuery method, but didn't find it very easy.
Does anyone have a better way than this? It seems a bit elaborate.
A: How about an XPath based approach? I like this approach as the logic is super-easy to understand. The code is pretty much self-documenting.
If your xml document is available to you as an org.w3c.dom.Document object (as most parsers return), then you could do something like the following:
// get the list of customdata nodes
NodeList customDataNodeSet = findNodes(document, "//customdata" );
for (int i=0 ; i < customDataNodeSet.getLength() ; i++) {
Node customDataNode = customDataNodeSet.item( i );
// get the location nodes (if any) within this one customdata node
NodeList locationNodeSet = findNodes(customDataNode, "location" );
if (locationNodeSet.getLength() > 0) {
// replace
locationNodeSet.item( 0 ).setTextContent( "http://stackoverflow.com/" );
}
else {
// insert
Element newLocationNode = document.createElement( "location" );
newLocationNode.setTextContent("http://stackoverflow.com/" );
customDataNode.appendChild( newLocationNode );
}
}
And here's the helper method findNodes that does the XPath search (it uses the standard javax.xml.xpath and org.w3c.dom APIs):
private NodeList findNodes( Object obj, String xPathString )
throws XPathExpressionException {
XPath xPath = XPathFactory.newInstance().newXPath();
XPathExpression expression = xPath.compile( xPathString );
return (NodeList) expression.evaluate( obj, XPathConstants.NODESET );
}
A: How about an object-oriented approach? You could deserialise the XML to an object, set the location value on the object, then serialise back to XML.
XStream makes this really easy.
For example, you would define the main object, which in your case is CustomData (I'm using public fields to keep the example simple):
public class CustomData {
public String tag1;
public String tag2;
public String location;
public String tag3;
}
Then you initialize XStream:
XStream xstream = new XStream();
// if you need to output the main tag in lowercase, use the following line
xstream.alias("customdata", CustomData.class);
Now you can construct an object from XML, set the location field on the object and regenerate the XML:
CustomData d = (CustomData)xstream.fromXML(xml);
d.location = "http://stackoverflow.com";
xml = xstream.toXML(d);
How does that sound?
A: If you don't know the schema the XStream solution probably isn't the way to go. At least XStream is on your radar now, might come in handy in the future!
A: You should be able to do this with XQuery;
try
fn:replace(string, pattern, replace)
I am new to XQuery myself and I have found it to be a painful query language to work with, but it does work quite well once you get over the initial learning curve.
I do still wish there were an easier way that was as efficient.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/15899",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
} |
Q: Entity diagrams in ASP.NET MVC What's the best way/tool to display nice diagrams with entity relationships in ASP MVC views?
i.e. servers and applications, or servers with other servers.
Are there any third party tools out there that can do this?
I've been searching around things like Telerik, but it's really hard to google for this!
A: Here are some 3rd-party diagramming tools:
*
*http://www.nevron.com/Products.DiagramFor.NET.Overview.aspx
*http://www.nwoods.com/GO/dotnet.htm
*http://www.syncfusion.com/products/diagram/web/default.aspx
| {
"language": "en",
"url": "https://stackoverflow.com/questions/15900",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Data Auditing in NHibernate and SqlServer I'm using NHibernate on a project and I need to do data auditing. I found this article on codeproject which discusses the IInterceptor interface.
What is your preferred way of auditing data? Do you use database triggers? Do you use something similar to what's discussed in the article?
A: [EDIT]
Post NH2.0 release, please look at the Event Listeners as suggested below. My answer is outdated.
The IInterceptor is the recommended way to modify any data in nhibernate in a non-invasive fashion. It's also useful for decryption / encryption of data without your application code needing to know.
Triggers on the database move the responsibility of logging (an application concern) into the DBMS layer, which effectively ties your logging solution to your database platform. By encapsulating the auditing mechanics in the persistence layer you retain platform independence and code transportability.
I use Interceptors in production code to provide auditing in a few large systems.
A: I prefer the CodeProject approach you mentioned.
One problem with database triggers is that it leaves you no choice but to use Integrated Security coupled with Active Directory for access to your SQL Server. The reason is that your connection should inherit the identity of the user who triggered it; if your application uses a named "sa" account or other user accounts, the "user" field will only reflect "sa".
This can be overridden by creating a named SQL Server account for each and every user of the application, but this will be impractical for non-intranet, public-facing web applications, for example.
A: I do like the Interceptor approach mentioned, and use this on the project I'm currently working on.
However, one obvious disadvantage that deserves highlighting is that this approach will only audit data changes made via your application. Any direct data modifications such as ad-hoc SQL scripts that you may need to execute from time to time (it always happens!) won't be audited, unless you remember to perform the audit table insertions at the same time.
A: I understand this is an old question. But I would like to answer this in the light of the new Event System in NH 2.0. Event Listeners are better for auditing-like-functions than Interceptors. Ayende wrote a great example on his blog last month. Here's the URL to his blog post -
ayende.com/Blog/archive/2009/04/29/nhibernate-ipreupdateeventlistener-amp-ipreinserteventlistener.aspx
A: As an entirely different approach, you could use the decorator pattern with your repositories.
Say I have
public interface IRepository<EntityType> where EntityType:IAuditably
{
public void Save(EntityType entity);
}
Then, we'd have our NHibernateRepository:
public class NHibernateRepository<EntityType>:IRepository<EntityType>
{
/*...*/
public void Save ( EntityType entity )
{
session.SaveOrUpdate(entity);
}
}
Then we could have an Auditing Repository:
public class AuditingRepository<EntityType>:IRepository<EntityType>
{
/*...*/
public void Save ( EntityType entity )
{
entity.LastUser = security.CurrentUser;
entity.LastUpdate = DateTime.UtcNow;
innerRepository.Save(entity)
}
}
Then, using an IoC framework (StructureMap, Castle Windsor, NInject) you could build it all up without the rest of your code ever knowing you had auditing going on.
Of course, how you audit the elements of cascaded collections is another issue entirely...
A: For NHibernate 2.0, you should also look at Event Listeners. These are the evolution of the IInterceptor interface and we use them successfully for auditing.
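For reference, here is a minimal sketch of such an event listener, following the pattern described above. The IAuditable marker interface and the "LastUpdated" property are assumptions for illustration, not part of NHibernate:
using System;
using NHibernate.Event;
using NHibernate.Persister.Entity;

public class AuditUpdateListener : IPreUpdateEventListener
{
    public bool OnPreUpdate(PreUpdateEvent e)
    {
        var auditable = e.Entity as IAuditable; // hypothetical marker interface
        if (auditable == null)
            return false; // false = do not veto the update

        var now = DateTime.UtcNow;
        // NHibernate persists the State array, not the entity's properties,
        // so both need to be kept in sync here.
        Set(e.Persister, e.State, "LastUpdated", now);
        auditable.LastUpdated = now;
        return false;
    }

    private static void Set(IEntityPersister persister, object[] state,
                            string propertyName, object value)
    {
        var index = Array.IndexOf(persister.PropertyNames, propertyName);
        if (index >= 0)
            state[index] = value;
    }
}
The listener would then be registered through the NHibernate configuration (for example via Configuration.EventListeners.PreUpdateEventListeners).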
| {
"language": "en",
"url": "https://stackoverflow.com/questions/15917",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "15"
} |
Q: Java+Tomcat, Dying database connection? I have a tomcat instance setup but the database connection I have configured in context.xml keeps dying after periods of inactivity.
When I check the logs I get the following error:
com.mysql.jdbc.exceptions.jdbc4.CommunicationsException:
The last packet successfully received from the server was 68051 seconds
ago. The last packet sent successfully to the server was 68051 seconds
ago, which is longer than the server configured value of
'wait_timeout'. You should consider either expiring and/or testing
connection validity before use in your application, increasing the
server configured values for client timeouts, or using the Connector/J
connection property 'autoReconnect=true' to avoid this problem.
Here is the configuration in context.xml:
<Resource name="dataSourceName"
auth="Container"
type="javax.sql.DataSource"
maxActive="100"
maxIdle="30"
maxWait="10000"
username="username"
password="********"
removeAbandoned = "true"
logAbandoned = "true"
driverClassName="com.mysql.jdbc.Driver"
url="jdbc:mysql://127.0.0.1:3306/databasename?autoReconnect=true&useEncoding=true&characterEncoding=UTF-8" />
I am using autoReconnect=true like the error says to do, but the connection keeps dying. I have never seen this happen before.
I have also verified that all database connections are being closed properly.
A: Just to clarify what is actually causing this. MySQL by default terminates open connections after 8 hours of inactivity. However the database connection pool will retain connections for longer than that.
So by setting timeBetweenEvictionRunsMillis=300000 you are instructing the connection pool to run through connections and evict and close idle ones every 5 minutes.
A: Tomcat Documentation
DBCP uses the Jakarta-Commons Database Connection Pool. It relies on a number of Jakarta-Commons components:
* Jakarta-Commons DBCP
* Jakarta-Commons Collections
* Jakarta-Commons Pool
This attribute may help you out.
removeAbandonedTimeout="60"
I'm using the same connection pooling stuff and I'm setting these properties to prevent the same thing it's just not configured through tomcat.
But if the first thing doesn't work try these.
testWhileIdle=true
timeBetweenEvictionRunsMillis=300000
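For reference, a sketch of how those attributes fit into the Resource definition from the question. The validationQuery is an assumption here; DBCP needs one for testWhileIdle to actually exercise the connection:
<Resource name="dataSourceName"
          auth="Container"
          type="javax.sql.DataSource"
          maxActive="100"
          maxIdle="30"
          maxWait="10000"
          username="username"
          password="********"
          testWhileIdle="true"
          validationQuery="SELECT 1"
          timeBetweenEvictionRunsMillis="300000"
          driverClassName="com.mysql.jdbc.Driver"
          url="jdbc:mysql://127.0.0.1:3306/databasename?useEncoding=true&characterEncoding=UTF-8" />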
A: The removeAbandoned option is deprecated as of DBCP 1.2 (though still present in the 1.3 branch). Here's a non-official explanation.
A: I do not know whether the above answer does basically the same thing, but some of our systems use the DB connection about once a week and I've seen that we provide a -Otimeout flag or something of that sort to mysql to set the connection timeout.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/15949",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "17"
} |
Q: How would a sdbm hash function be implemented in C#? How can an sdbm hash function (such as this) be implemented in C#?
A: You can take the C code almost without changes:
uint sdbm( string str )
{
uint hash = 0;
foreach( char ch in str )
{
hash = ch + (hash << 6) + (hash << 16) - hash;
}
return hash;
}
Or did you think of something more sophisticated?
A: I don't have a C compiler set up so I can't test to see if it performs the same, but I think the following is correct:
private static ulong SBDM(string str)
{
ulong hash = 0;
foreach (char c in str)
{
hash = c + (hash << 6) + (hash << 16) - hash;
}
return hash;
}
If you just need to get a hash of the string and it doesn't matter too much what the implementation is you can always do the String.GetHashCode();
A: The result from the hash differs between the C++ and C# implementation. I figured out that the str parameter needs to be passed as a byte array.
private uint sdbm(byte[] str)
{
uint hash = 0;
foreach (char ch in str)
hash = ch + (hash << 6) + (hash << 16) - hash;
return hash;
}
Call the method by converting the value to be hashed with the BitConverter.GetBytes method.
uint Hash = sdbm(BitConverter.GetBytes(myID));
| {
"language": "en",
"url": "https://stackoverflow.com/questions/15954",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: WildcardQuery error in Solr I use Solr to search for documents, and when trying to search using the query "id:*" I get this query parser exception telling me that it cannot parse a query with * or ? as the first character.
HTTP Status 400 - org.apache.lucene.queryParser.ParseException: Cannot parse 'id:*': '*' or '?' not allowed as first character in WildcardQuery
type Status report
message org.apache.lucene.queryParser.ParseException: Cannot parse 'id:*': '*' or '?' not allowed as first character in WildcardQuery
description The request sent by the client was syntactically incorrect (org.apache.lucene.queryParser.ParseException: Cannot parse 'id:*': '*' or '?' not allowed as first character in WildcardQuery).
Is there any patch for getting this to work with just * ? Or is it very costly to do such a query?
A: id:[a* TO z*] id:[0* TO 9*] etc.
I just did this in lukeall on my index and it worked, therefore it should work in Solr which uses the standard query parser. I don't actually use Solr.
In base Lucene there's a fine reason why you'd never query for every document: to query for a document you must open a new IndexReader("DirectoryName") and apply a query to it. Therefore you could skip applying a query entirely and use the IndexReader methods numDocs() to get a count of all the documents, and document(int n) to retrieve any of them.
A: If you are just trying to get all documents, Solr does support the *:* query. It's the only time I know of that Solr will let you begin a query with an *. I'm sure you've probably seen this as the default query in the Solr admin page.
If you are trying to do a more specific query with an * as the first character, like say id:*456 then one of the best ways I've seen is to index that field twice. Once normally (field name: id), and once with all the characters reversed (field name: reverse_id). Then you could essentially do the query id:456 by sending the query reverse_id:654 instead. Hope that makes sense.
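As a sketch, the double indexing could look like this in schema.xml (the field names are illustrative; the actual reversal of the value would happen in your indexing code before the document is posted to Solr):
<field name="id" type="string" indexed="true" stored="true"/>
<field name="reverse_id" type="string" indexed="true" stored="false"/>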
You can also search the Solr user group mailing list at http://www.mail-archive.com/solr-user@lucene.apache.org/ where questions like this come up quite often.
A: The following Solr issue is a request to be able to configure the default lucene query parser.
https://issues.apache.org/jira/browse/SOLR-218
In this issue you can find the following description how to 'patch' Solr. This modification would allow you to start queries with a *.
Jonas Salk: I've basically updated only one Java file: SolrQueryParser.java.
public SolrQueryParser(IndexSchema schema, String defaultField) {
...
setAllowLeadingWildcard(true);
setLowercaseExpandedTerms(true);
...
}
...
public SolrQueryParser(QParser parser, String defaultField, Analyzer analyzer) {
...
setAllowLeadingWildcard(true);
setLowercaseExpandedTerms(true);
...
}
I'm not sure if setLowercaseExpandedTerms is needed...
A: If you want all documents, do a query on *:*
If you want all documents with a certain field (e.g. id) try id:[* TO *]
A: Lucene doesn't allow you to start WildcardQueries with an asterisk by default, because those are incredibly expensive queries and will be very, very, very slow on large indexes.
If you're using the Lucene QueryParser, call setAllowLeadingWildcard(true) on it to enable it.
If you want all of the documents with a certain field set, you are much better off querying or walking the index programmatically than using QueryParser. You should really only use QueryParser to parse user input.
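A minimal sketch of that in Lucene terms (the field name "id" and the analyzer choice are placeholders):
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.queryParser.ParseException;
import org.apache.lucene.queryParser.QueryParser;
import org.apache.lucene.search.Query;

public class LeadingWildcardExample {
    public static Query build(String userInput) throws ParseException {
        QueryParser parser = new QueryParser("id", new StandardAnalyzer());
        parser.setAllowLeadingWildcard(true); // permits queries like *456
        return parser.parse(userInput);
    }
}
Bear in mind that the performance warning above still applies to any query built this way.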
A: I'm assuming with id:* you're just trying to match all documents, right?
I've never used solr before, but in my Lucene experience, when ingesting data, we've added a hidden field to every document, then when we need to return every record we do a search for the string constant in that field that's the same for every record.
If you can't add a field like that in your situation, you could use a RegexQuery with a regex that would match anything that could be found in the id field.
Edit: actually answering the question. I've never heard of a patch to get that to work, but I would be surprised if it could even be made to work reasonably well. See this question for a reason why unconstrained PrefixQuery's can cause a problem.
A: Actually, I have been using a workaround for this. I append a character to the id, e.g. A1, A2, etc.
With such values in the field, it is possible to search using the query id:A*
But I would love to find out whether a true solution exists.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/15979",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11"
} |
Q: Useful code which uses reduce()? Does anyone here have any useful code which uses the reduce() function in Python? Is there any code other than the usual + and * that we see in the examples?
Refer Fate of reduce() in Python 3000 by GvR
A: You could replace value = json_obj['a']['b']['c']['d']['e'] with:
value = reduce(dict.__getitem__, 'abcde', json_obj)
If you already have the path a/b/c/.. as a list. For example, Change values in dict of nested dicts using items in a list.
A: @Blair Conrad: You could also implement your glob/reduce using sum, like so:
files = sum([glob.glob(f) for f in args], [])
This is less verbose than either of your two examples, is perfectly Pythonic, and is still only one line of code.
So to answer the original question, I personally try to avoid using reduce because it's never really necessary and I find it to be less clear than other approaches. However, some people get used to reduce and come to prefer it to list comprehensions (especially Haskell programmers). But if you're not already thinking about a problem in terms of reduce, you probably don't need to worry about using it.
A: The other uses I've found for it besides + and * were with and and or, but now we have any and all to replace those cases.
foldl and foldr do come up in Scheme a lot...
Here's some cute usages:
Flatten a list
Goal: turn [[1, 2, 3], [4, 5], [6, 7, 8]] into [1, 2, 3, 4, 5, 6, 7, 8].
reduce(list.__add__, [[1, 2, 3], [4, 5], [6, 7, 8]], [])
List of digits to a number
Goal: turn [1, 2, 3, 4, 5, 6, 7, 8] into 12345678.
Ugly, slow way:
int("".join(map(str, [1,2,3,4,5,6,7,8])))
Pretty reduce way:
reduce(lambda a,d: 10*a+d, [1,2,3,4,5,6,7,8], 0)
A: reduce can be used to support chained attribute lookups:
reduce(getattr, ('request', 'user', 'email'), self)
Of course, this is equivalent to
self.request.user.email
but it's useful when your code needs to accept an arbitrary list of attributes.
(Chained attributes of arbitrary length are common when dealing with Django models.)
A: reduce() can be used to find Least common multiple for 3 or more numbers:
#!/usr/bin/env python
from math import gcd
from functools import reduce
def lcm(*args):
return reduce(lambda a,b: a * b // gcd(a, b), args)
Example:
>>> lcm(100, 23, 98)
112700
>>> lcm(*range(1, 20))
232792560
A: reduce is useful when you need to find the union or intersection of a sequence of set-like objects.
>>> reduce(operator.or_, ({1}, {1, 2}, {1, 3})) # union
{1, 2, 3}
>>> reduce(operator.and_, ({1}, {1, 2}, {1, 3})) # intersection
{1}
(Apart from actual sets, an example of these are Django's Q objects.)
On the other hand, if you're dealing with bools, you should use any and all:
>>> any((True, False, True))
True
A: reduce() could be used to resolve dotted names (where eval() is too unsafe to use):
>>> import __main__
>>> reduce(getattr, "os.path.abspath".split('.'), __main__)
<function abspath at 0x009AB530>
A: Not sure if this is what you are after but you can search source code on Google.
Follow the link for a search on 'function:reduce() lang:python' on Google Code search
At first glance the following projects use reduce()
*
*MoinMoin
*Zope
*Numeric
*ScientificPython
etc. etc. but then these are hardly surprising since they are huge projects.
The functionality of reduce can be done using function recursion which I guess Guido thought was more explicit.
Update:
Since Google's Code Search was discontinued on 15-Jan-2012, besides reverting to regular Google searches, there's something called Code Snippets Collection that looks promising. A number of other resources are mentioned in answers to this (closed) question: Replacement for Google Code Search?
Update 2 (29-May-2017):
A good source for Python examples (in open-source code) is the Nullege search engine.
A: I'm writing a compose function for a language, so I construct the composed function using reduce along with my apply operator.
In a nutshell, compose takes a list of functions to compose into a single function. If I have a complex operation that is applied in stages, I want to put it all together like so:
complexop = compose(stage4, stage3, stage2, stage1)
This way, I can then apply it to an expression like so:
complexop(expression)
And I want it to be equivalent to:
stage4(stage3(stage2(stage1(expression))))
Now, to build my internal objects, I want it to say:
Lambda([Symbol('x')], Apply(stage4, Apply(stage3, Apply(stage2, Apply(stage1, Symbol('x'))))))
(The Lambda class builds a user-defined function, and Apply builds a function application.)
Now, reduce, unfortunately, folds the wrong way, so I wound up using, roughly:
reduce(lambda x,y: Apply(y, x), reversed(args + [Symbol('x')]))
To figure out what reduce produces, try these in the REPL:
reduce(lambda x, y: (x, y), range(1, 11))
reduce(lambda x, y: (y, x), reversed(range(1, 11)))
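In a Python 2 shell (where reduce is a builtin and range returns a list) they produce left- and right-nested tuples respectively, which makes the fold direction visible:
>>> reduce(lambda x, y: (x, y), range(1, 11))
(((((((((1, 2), 3), 4), 5), 6), 7), 8), 9), 10)
>>> reduce(lambda x, y: (y, x), reversed(range(1, 11)))
(1, (2, (3, (4, (5, (6, (7, (8, (9, 10)))))))))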
A: Reduce isn't limited to scalar operations; it can also be used to sort things into buckets. (This is what I use reduce for most often).
Imagine a case in which you have a list of objects, and you want to re-organize it hierarchically based on properties stored flatly in the object. In the following example, I produce a list of metadata objects related to articles in an XML-encoded newspaper with the articles function. articles generates a list of XML elements, and then maps through them one by one, producing objects that hold some interesting info about them. On the front end, I'm going to want to let the user browse the articles by section/subsection/headline. So I use reduce to take the list of articles and return a single dictionary that reflects the section/subsection/article hierarchy.
from lxml import etree
from Reader import Reader
class IssueReader(Reader):
def articles(self):
arts = self.q('//div3') # inherited ... runs an xpath query against the issue
subsection = etree.XPath('./ancestor::div2/@type')
section = etree.XPath('./ancestor::div1/@type')
header_text = etree.XPath('./head//text()')
return map(lambda art: {
'text_id': self.id,
'path': self.getpath(art)[0],
'subsection': (subsection(art)[0] or '[none]'),
'section': (section(art)[0] or '[none]'),
'headline': (''.join(header_text(art)) or '[none]')
}, arts)
def by_section(self):
arts = self.articles()
def extract(acc, art): # acc for accumulator
section = acc.get(art['section'], False)
if section:
subsection = section.get(art['subsection'], False)
if subsection:
subsection.append(art)
else:
section[art['subsection']] = [art]
else:
acc[art['section']] = {art['subsection']: [art]}
return acc
return reduce(extract, arts, {})
I give both functions here because I think it shows how map and reduce can complement each other nicely when dealing with objects. The same thing could have been accomplished with a for loop, ... but spending some serious time with a functional language has tended to make me think in terms of map and reduce.
By the way, if anybody has a better way to set properties like I'm doing in extract, where the parents of the property you want to set might not exist yet, please let me know.
A: reduce can be used to get the list with the maximum nth element
reduce(lambda x,y: x if x[2] > y[2] else y,[[1,2,3,4],[5,2,5,7],[1,6,0,2]])
would return [5, 2, 5, 7] as it is the list with the maximum 3rd element
A: Find the intersection of N given lists:
input_list = [[1, 2, 3, 4, 5], [2, 3, 4, 5, 6], [3, 4, 5, 6, 7]]
result = reduce(set.intersection, map(set, input_list))
returns:
result = set([3, 4, 5])
via: Python - Intersection of two lists
A: After grepping my code, it seems the only thing I've used reduce for is calculating the factorial:
reduce(operator.mul, xrange(1, x+1) or (1,))
A: import os
files = [
# full filenames
"var/log/apache/errors.log",
"home/kane/images/avatars/crusader.png",
"home/jane/documents/diary.txt",
"home/kane/images/selfie.jpg",
"var/log/abc.txt",
"home/kane/.vimrc",
"home/kane/images/avatars/paladin.png",
]
# unfolding of plain filiname list to file-tree
fs_tree = ({}, # dict of folders
[]) # list of files
for full_name in files:
path, fn = os.path.split(full_name)
reduce(
# this function walks deep into the path
# and creates placeholders for subfolders
lambda d, k: d[0].setdefault(k, # walk deep
({}, [])), # or create subfolder storage
path.split(os.path.sep),
fs_tree
)[1].append(fn)
print fs_tree
#({'home': (
# {'jane': (
# {'documents': (
# {},
# ['diary.txt']
# )},
# []
# ),
# 'kane': (
# {'images': (
# {'avatars': (
# {},
# ['crusader.png',
# 'paladin.png']
# )},
# ['selfie.jpg']
# )},
# ['.vimrc']
# )},
# []
# ),
# 'var': (
# {'log': (
# {'apache': (
# {},
# ['errors.log']
# )},
# ['abc.txt']
# )},
# [])
#},
#[])
A: I just found a useful usage of reduce: splitting a string without removing the delimiter. The code is entirely from the Programatically Speaking blog. Here's the code:
reduce(lambda acc, elem: acc[:-1] + [acc[-1] + elem] if elem == "\n" else acc + [elem], re.split("(\n)", "a\nb\nc\n"), [])
Here's the result:
['a\n', 'b\n', 'c\n', '']
Note that it handles edge cases that the popular answer on SO doesn't. For a more in-depth explanation, I am redirecting you to the original blog post.
A: I used reduce to concatenate a list of PostgreSQL search vectors with the || operator in sqlalchemy-searchable:
vectors = (self.column_vector(getattr(self.table.c, column_name))
for column_name in self.indexed_columns)
concatenated = reduce(lambda x, y: x.op('||')(y), vectors)
compiled = concatenated.compile(self.conn)
A: I think reduce is a silly command. Hence:
reduce(lambda hold,next:hold+chr(((ord(next.upper())-65)+13)%26+65),'znlorabggbbhfrshy','')
A: The usage of reduce that I found in my code involved the situation where I had some class structure for logic expression and I needed to convert a list of these expression objects to a conjunction of the expressions. I already had a function make_and to create a conjunction given two expressions, so I wrote reduce(make_and,l). (I knew the list wasn't empty; otherwise it would have been something like reduce(make_and,l,make_true).)
This is exactly the reason that (some) functional programmers like reduce (or fold functions, as such functions are typically called). There are often already many binary functions like +, *, min, max, concatenation and, in my case, make_and and make_or. Having a reduce makes it trivial to lift these operations to lists (or trees or whatever you got, for fold functions in general).
Of course, if certain instantiations (such as sum) are often used, then you don't want to keep writing reduce. However, instead of defining the sum with some for-loop, you can just as easily define it with reduce.
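For instance, a sum built on the fold might look like this (a sketch; on Python 3 you would also need to import reduce from functools):
import operator

def my_sum(iterable):
    # the 0 initializer makes the empty case behave like the builtin sum()
    return reduce(operator.add, iterable, 0)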
Readability, as mentioned by others, is indeed an issue. You could argue, however, that the only reason people find reduce less "clear" is that it is not a function that many people know and/or use.
A: Function composition: If you already have a list of functions that you'd like to apply in succession, such as:
color = lambda x: x.replace('brown', 'blue')
speed = lambda x: x.replace('quick', 'slow')
work = lambda x: x.replace('lazy', 'industrious')
fs = [str.lower, color, speed, work, str.title]
Then you can apply them all consecutively with:
>>> call = lambda s, func: func(s)
>>> s = "The Quick Brown Fox Jumps Over the Lazy Dog"
>>> reduce(call, fs, s)
'The Slow Blue Fox Jumps Over The Industrious Dog'
In this case, method chaining may be more readable. But sometimes it isn't possible, and this kind of composition may be more readable and maintainable than a f1(f2(f3(f4(x)))) kind of syntax.
A: I have an old Python implementation of pipegrep that uses reduce and the glob module to build a list of files to process:
files = []
files.extend(reduce(lambda x, y: x + y, map(glob.glob, args)))
I found it handy at the time, but it's really not necessary, as something similar is just as good, and probably more readable
files = []
for f in args:
files.extend(glob.glob(f))
A: Let's say that there is some yearly statistical data stored in a list of Counters.
We want to find the MIN/MAX values in each month across the different years.
For example, for January it would be 10. And for February it would be 15.
We need to store the results in a new Counter.
from collections import Counter
stat2011 = Counter({"January": 12, "February": 20, "March": 50, "April": 70, "May": 15,
"June": 35, "July": 30, "August": 15, "September": 20, "October": 60,
"November": 13, "December": 50})
stat2012 = Counter({"January": 36, "February": 15, "March": 50, "April": 10, "May": 90,
"June": 25, "July": 35, "August": 15, "September": 20, "October": 30,
"November": 10, "December": 25})
stat2013 = Counter({"January": 10, "February": 60, "March": 90, "April": 10, "May": 80,
"June": 50, "July": 30, "August": 15, "September": 20, "October": 75,
"November": 60, "December": 15})
stat_list = [stat2011, stat2012, stat2013]
print reduce(lambda x, y: x & y, stat_list) # MIN
print reduce(lambda x, y: x | y, stat_list) # MAX
A: I have objects representing some kind of overlapping intervals (genomic exons), and redefined their intersection using __and__:
class Exon:
def __init__(self):
...
def __and__(self,other):
...
length = self.length + other.length # (e.g.)
return self.__class__(...length,...)
Then when I have a collection of them (for instance, in the same gene), I use
intersection = reduce(lambda x,y: x&y, exons)
A: def dump(fname, iterable):
    with open(fname, 'w') as f:
        # seed with None so the first element is written too; without an
        # initializer, reduce would consume it as the accumulator
        reduce(lambda x, y: f.write(unicode(y, 'utf-8')), iterable, None)
A: Using reduce() to find out if a list of dates are consecutive:
from datetime import date, timedelta
def checked(d1, d2):
"""
We assume the date list is sorted.
If d2 & d1 are different by 1, everything up to d2 is consecutive, so d2
can advance to the next reduction.
If d2 & d1 are not different by 1, returning d1 - 1 for the next reduction
will guarantee the result produced by reduce() to be something other than
the last date in the sorted date list.
Definition 1: 1/1/14, 1/2/14, 1/2/14, 1/3/14 is consider consecutive
Definition 2: 1/1/14, 1/2/14, 1/2/14, 1/3/14 is consider not consecutive
"""
#if (d2 - d1).days == 1 or (d2 - d1).days == 0: # for Definition 1
if (d2 - d1).days == 1: # for Definition 2
return d2
else:
return d1 + timedelta(days=-1)
# datelist = [date(2014, 1, 1), date(2014, 1, 3),
# date(2013, 12, 31), date(2013, 12, 30)]
# datelist = [date(2014, 2, 19), date(2014, 2, 19), date(2014, 2, 20),
# date(2014, 2, 21), date(2014, 2, 22)]
datelist = [date(2014, 2, 19), date(2014, 2, 21),
date(2014, 2, 22), date(2014, 2, 20)]
datelist.sort()
if datelist[-1] == reduce(checked, datelist):
print "dates are consecutive"
else:
print "dates are not consecutive"
| {
"language": "en",
"url": "https://stackoverflow.com/questions/15995",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "124"
} |
Q: LocationProvider We need to replace the menu system in our main ASP.NET application. So naturally we're looking at the ASP.NET SiteMapProvider and Menu controls. However we also need enough security to prevent users from directly entering URLs that they shouldn't have access to. We can do this by putting <location> entries in web.config and securing them individually but that's going to be a PITA to manage across multiple web servers.
Is there a Provider that can be used to, well, provide the equivalent of the <location> entries? I haven't been able to find one, and it's slightly frustrating given the existence of the ConfigurationLocation class.
Alternatively is there a configuration option we're missing in SiteMapProvider that will restrict users from getting to URLs they shouldn't?
A: Why don't you create rights & profiles to manage which pages a user can see?
I usually create a user class which implements the IPrincipal security interface. On every request to your application, you check the rights of the particular user and output only the SiteMap nodes that user is allowed to see.
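As a rough sketch of that idea (all names here are illustrative, not a drop-in implementation):
using System;
using System.Collections.Generic;
using System.Security.Principal;

// A principal whose roles decide which SiteMapNodes get rendered.
public class AppPrincipal : IPrincipal
{
    private readonly HashSet<string> roles;

    public AppPrincipal(IIdentity identity, IEnumerable<string> roles)
    {
        Identity = identity;
        this.roles = new HashSet<string>(roles, StringComparer.OrdinalIgnoreCase);
    }

    public IIdentity Identity { get; private set; }

    public bool IsInRole(string role)
    {
        return roles.Contains(role);
    }
}
When walking the site map you would then skip any node whose allowed roles don't intersect the current principal's roles, and apply the same check in a base page so that direct URL entry is also blocked.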
| {
"language": "en",
"url": "https://stackoverflow.com/questions/16003",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: How do I convert a file path to a URL in ASP.NET Basically I have some code to check a specific directory to see if an image is there, and if so I want to assign the image's URL to an Image control.
if (System.IO.Directory.Exists(photosLocation))
{
string[] files = System.IO.Directory.GetFiles(photosLocation, "*.jpg");
if (files.Length > 0)
{
// TODO: return the url of the first file found;
}
}
A: I've accepted Fredrik's answer as it appears to solve the problem with the least amount of effort; however, the Request object doesn't appear to contain the ResolveUrl method.
This can be accessed through the Page object or an Image control object:
myImage.ImageUrl = Page.ResolveUrl(photoURL);
myImage.ImageUrl = myImage.ResolveUrl(photoURL);
An alternative, if you are using a static class as I am, is to use the VirtualPathUtility:
myImage.ImageUrl = VirtualPathUtility.ToAbsolute(photoURL);
A: This worked for me:
HttpContext.Current.Request.Url.GetLeftPart(UriPartial.Authority) + HttpRuntime.AppDomainAppVirtualPath + "ImageName";
A: Maybe this is not the best way, but it works.
// Here is your path
String p = photosLocation + "whatever.jpg";
// Here is the page address
String pa = Page.Request.Url.AbsoluteUri;
// Take the page name
String pn = Page.Request.Url.LocalPath;
// Here is the server address
String sa = pa.Replace(pn, "");
// Take the physical location of the page
String pl = Page.Request.PhysicalPath;
// Replace the backslash with slash in your path
pl = pl.Replace("\\", "/");
p = p.Replace("\\", "/");
// Root path
String rp = pl.Replace(pn, "");
// Take out same path
String final = p.Replace(rp, "");
// So your picture's address is
String path = sa + final;
Edit: OK, somebody marked this as not helpful. Some explanation: take the physical path of the current page and split it into two parts: server and directory (like c:\inetpub\whatever.com\whatever) and page name (like /Whatever.aspx). The image's physical path should contain the server's path, so "subtract" them, leaving only the image's path relative to the server's (like \design\picture.jpg). Replace the backslashes with slashes and append it to the server's URL.
A: This is what I use:
private string MapURL(string path)
{
string appPath = Server.MapPath("/").ToLower();
return string.Format("/{0}", path.ToLower().Replace(appPath, "").Replace(@"\", "/"));
}
A: As far as I know, there's no method to do what you want; at least not directly. I'd store the photosLocation as a path relative to the application; for example: "~/Images/". This way, you could use MapPath to get the physical location, and ResolveUrl to get the URL (with a bit of help from System.IO.Path):
string photosLocationPath = HttpContext.Current.Server.MapPath(photosLocation);
if (Directory.Exists(photosLocationPath))
{
string[] files = Directory.GetFiles(photosLocationPath, "*.jpg");
if (files.Length > 0)
{
string filenameRelative = photosLocation + Path.GetFileName(files[0]);
return Page.ResolveUrl(filenameRelative);
}
}
A: The problem with all these answers is that they do not take virtual directories into account.
Consider:
Site named "tempuri.com/" rooted at c:\domains\site
virtual directory "~/files" at c:\data\files
virtual directory "~/files/vip" at c:\data\VIPcust\files
So:
Server.MapPath("~/files/vip/readme.txt")
= "c:\data\VIPcust\files\readme.txt"
But there is no way to do this:
MagicResolve("c:\data\VIPcust\files\readme.txt")
= "http://tempuri.com/files/vip/readme.txt"
because there is no way to get a complete list of virtual directories.
A: So far as I know there's no single function which does this (maybe you were looking for the inverse of MapPath?). I'd love to know if such a function exists. Until then, I would just take the filename(s) returned by GetFiles, remove the path, and prepend the URL root. This can be done generically.
A: The simple solution seems to be to have a temporary location within the website that you can access easily with a URL, and then move files to the physical location when you need to save them.
A: To get the left part of the URL:
?HttpContext.Current.Request.Url.GetLeftPart(UriPartial.Authority)
"http://localhost:1714"
To get the application (web) name:
?HttpRuntime.AppDomainAppVirtualPath
"/"
With this, you can append your relative path to obtain the complete URL.
A: I think this should work. It might be off on the slashes. Not sure if they are needed or not.
string url = Request.ApplicationPath + "/" + photosLocation + "/" + files[0];
| {
"language": "en",
"url": "https://stackoverflow.com/questions/16007",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "47"
} |
Q: What IDE to use for developing in Ruby on Rails on Windows?
Possible Duplicate:
What Ruby IDE do you prefer?
I've generally been doing stuff on Microsoft .NET out of college almost 2 years ago. I just started looking at Ruby on Rails. So what editor should I use? I'm using Notepad++ right now but can I get debugging etc. somehow?
A: Try both NetBeans and RadRails for maybe a week each, then you can find which works best for you. The best advice is to learn your tool. If you are not checking out something new about your editor, something that could potentially save you time (regexp, etc) then you are doing yourself a huge disservice.
I have been using Eclipse/Aptana/RadRails and unlike Gaius have been pretty happy with it.
I recommend the Eclipse IDE for Java Developers from Eclipse Downloads: http://www.eclipse.org/downloads/
Then grab Aptana Studio, following these instructions.
When Eclipse restarts Aptana will have a view, click on rad rails and you are good to go. Just make sure you have ruby installed already, or it becomes a pain to resolve.
A: Aptana Studio
I use it for all web development - HTML, CSS, PHP, JavaScript, Rails...
EDIT: For full disclosure, I'm biased toward Aptana and RadRails as I know a few members of the original RadRails dev team.
A: rubyMine is the most full featured IDE for Rails at the current time (2012).
Personally, for Rails development I used Eclipse for several months and then NetBeans for several weeks, and rubyMine is clearly better than them.
It's great in all the areas that count - code views, search and replace, source control management, testing, debugging and it's got features like viewing a model dependency diagram that are really neat.
It isn't free - it costs about $50-$100. This has recently become a key positive criterion for me. Too many "free" products that I invest thousands of hours getting proficient in eventually die and stop being developed, but paid products fund continued development. I've become weary of investing a lot of time and energy into such products only to have them wither and die. Given the hundreds of thousands of dollars one earns from Rails development, a $100 tool is a bargain.
Despite how much I love rubyMine I still use vim along side it. Sometimes my tasks works better with vim, sometimes with rubyMine.
A: I've been very happy with E. It's pretty lightweight and supports TextMate snippets and commands, which means you get access to a huge set of Rails-specific helpers.
However, it is decidedly an editor and not an IDE, so you won't get debugging, built in console, etc. But I've found that for Rails projects I prefer a light editor and a shell (like Console) for tests, debugging, etc.
A: I've been using Aptana/Eclipse/RadRails, but if I were to do it again, I'd definitely try NetBeans. Aptana has been a major headache.
I've never used IronRuby, but that might make you feel more at home.
A: The Netbeans IDE is a good, all around editor for many languages. I'm pretty sure the 6.5 beta has support for Ruby on Rails, along with Javascript and a few other web languages. It's worth checking out (Netbeans.org).
A: Sapphire in Steel integrates with Visual Studio.
A: I mainly code ColdFusion or PHP (and JS/CSS/xHTML), but have dabbled in a bit of RoR. RadRails/Apatana has been great for me, because it's built on Eclipse, which I was already using for my other work. It also integrates with Subversion via the Subclipse plugin.
The Eclipse platform is so extensible that it's worth investing a bit of time in to learn, but then again I like having a single IDE rather than having to switch between different apps.
I briefly looked at Netbeans, but TBH Eclipse just felt better for me, and Aptana itself is great when you come to do anything in JavaScript.
YMMV...
A: I use Emacs on Windows.
Installing and configuring it to work with rails is a pain though.
A: I found Geany to be a lightweight alternative (which works on linux as well with little modification), although I am checking out Gedit for features that not present or implemented as well in Geany.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/16064",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9"
} |
Q: Prototyping with Python code before compiling I have been mulling over writing a peak-fitting library for a while. I know Python fairly well and plan on implementing everything in Python to begin with but envisage that I may have to re-implement some core routines in a compiled language eventually.
IIRC, one of Python's original remits was as a prototyping language; however, Python is pretty liberal in allowing functions, functors, and objects to be passed to functions and methods, whereas I suspect the same is not true of, say, C or Fortran.
What should I know about designing functions/classes which I envisage will have to interface into the compiled language? And how much of these potential problems are dealt with by libraries such as cTypes, bgen, SWIG, Boost.Python, Cython or Python SIP?
For this particular use case (a fitting library), I imagine allowing users to define mathematical functions (Gaussian, Lorentzian, etc.) as Python functions, which can then be passed to and interpreted by the compiled fitting library. Passing and returning arrays is also essential.
A: The best way to plan for an eventual transition to compiled code is to write the performance sensitive portions as a module of simple functions in a functional style (stateless and without side effects), which accept and return basic data types.
This will provide a one-to-one mapping from your Python prototype code to the eventual compiled code, and will let you use ctypes easily and avoid a whole bunch of headaches.
For peak fitting, you'll almost certainly need to use arrays, which will complicate things a little, but is still very doable with ctypes.
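As a sketch of what that can look like with ctypes, here "libfit.so" and sum_residuals() are hypothetical names standing in for the compiled core routines discussed above:
import ctypes

lib = ctypes.CDLL("./libfit.so")

# Assumed C signature:
#   double sum_residuals(const double *data, int n, double (*model)(double));
ModelFunc = ctypes.CFUNCTYPE(ctypes.c_double, ctypes.c_double)
lib.sum_residuals.argtypes = (ctypes.POINTER(ctypes.c_double),
                              ctypes.c_int, ModelFunc)
lib.sum_residuals.restype = ctypes.c_double

data = [0.1, 0.4, 0.9, 0.4, 0.1]
arr = (ctypes.c_double * len(data))(*data)  # a plain C array of doubles

# Any Python callable of the right shape can be handed to the C code;
# keep a reference to the wrapper alive while C may call it.
model = ModelFunc(lambda x: 2.0 * x)
total = lib.sum_residuals(arr, len(data), model)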
If you really want to use more complicated data structures, or modify the passed arguments, SWIG or Python's standard C-extension interface will let you do what you want, but with some amount of hassle.
For what you're doing, you may also want to check out NumPy, which might do some of the work you would want to push to C, as well as offering some additional help in moving data back and forth between Python and C.
A: f2py (part of numpy) is a simpler alternative to SWIG and boost.python for wrapping C/Fortran number-crunching code.
A: Finally a question that I can really put a value answer to :).
I have investigated f2py, boost.python, swig, cython and pyrex for my work (PhD in optical measurement techniques). I used swig extensively, boost.python some and pyrex and cython a lot. I also used ctypes. This is my breakdown:
Disclaimer: This is my personal experience. I am not involved with any of these projects.
swig:
does not play well with C++. It should, but name-mangling problems in the linking step were a major headache for me on Linux & Mac OS X. If you have C code and want it interfaced to Python, it is a good solution. I wrapped GTS for my needs and needed to write basically a C shared library which I could connect to. I would not recommend it.
Ctypes:
I wrote a libdc1394 (IEEE camera library) wrapper using ctypes and it was a very straightforward experience. You can find the code on https://launchpad.net/pydc1394. It is a lot of work to convert headers to Python code, but then everything works reliably. This is a good way if you want to interface an external library. Ctypes is also in the stdlib of Python, so everyone can use your code right away. This is also a good way to play around with a new lib in Python quickly. I can recommend it for interfacing to external libs.
Boost.Python: Very enjoyable. If you already have C++ code of your own that you want to use in python, go for this. It is very easy to translate c++ class structures into python class structures this way. I recommend it if you have c++ code that you need in python.
Pyrex/Cython: Use Cython, not Pyrex. Period. Cython is more advanced and more enjoyable to use. Nowadays, I do everything with Cython that I used to do with SWIG or ctypes. It is also the best way if you have Python code that runs too slow. The process is absolutely fantastic: you convert your Python modules into Cython modules, build them, and keep profiling and optimizing like it still was Python (no change of tools needed). You can then apply as much (or as little) C code mixed with your Python code. This is by far faster than having to rewrite whole parts of your application in C; you only rewrite the inner loop.
Timings: ctypes has the highest call overhead (~700ns), followed by boost.python (322ns), then directly by swig (290ns). Cython has the lowest call overhead (124ns) and the best feedback where it spends time on (cProfile support!). The numbers are from my box calling a trivial function that returns an integer from an interactive shell; module import overhead is therefore not timed, only function call overhead is. It is therefore easiest and most productive to get python code fast by profiling and using cython.
Summary: For your problem, use Cython ;). I hope this rundown will be useful for some people. I'll gladly answer any remaining question.
Edit: I forget to mention: for numerical purposes (that is, connection to NumPy) use Cython; they have support for it (because they basically develop cython for this purpose). So this should be another +1 for your decision.
A: I haven't used SWIG or SIP, but I find writing Python wrappers with boost.python to be very powerful and relatively easy to use.
I'm not clear on what your requirements are for passing types between C/C++ and python, but you can do that easily by either exposing a C++ type to python, or by using a generic boost::python::object argument to your C++ API. You can also register converters to automatically convert python types to C++ types and vice versa.
If you plan use boost.python, the tutorial is a good place to start.
I have implemented something somewhat similar to what you need. I have a C++ function that
accepts a python function and an image as arguments, and applies the python function to each pixel in the image.
Image* unary(boost::python::object op, Image& im)
{
Image* out = new Image(im.width(), im.height(), im.channels());
for(unsigned int i=0; i<im.size(); i++)
{
(*out)[i] = extract<float>(op(im[i]));
}
return out;
}
In this case, Image is a C++ object exposed to python (an image with float pixels), and op is a python defined function (or really any python object with a __call__ attribute). You can then use this function as follows (assuming unary is located in a module called image that also contains Image and a load function):
import image
im = image.load('somefile.tiff')
double_im = image.unary(lambda x: 2.0*x, im)
As for using arrays with boost, I personally haven't done this, but I know the functionality to expose arrays to python using boost is available - this might be helpful.
A: In my experience, there are two easy ways to call into C code from Python code. There are other approaches, all of which are more annoying and/or verbose.
The first and easiest is to compile a bunch of C code as a separate shared library and then call functions in that library using ctypes. Unfortunately, passing anything other than basic data types is non-trivial.
The second easiest way is to write a Python module in C and then call functions in that module. You can pass anything you want to these C functions without having to jump through any hoops. And it's easy to call Python functions or methods from these C functions, as described here: https://docs.python.org/extending/extending.html#calling-python-functions-from-c
I don't have enough experience with SWIG to offer intelligent commentary. And while it is possible to do things like pass custom Python objects to C functions through ctypes, or to define new Python classes in C, these things are annoying and verbose and I recommend taking one of the two approaches described above.
A:
Python is pretty liberal in allowing functions, functors, objects to be passed to functions and methods, whereas I suspect the same is not true of say C or Fortran.
In C you cannot pass a function as an argument to a function, but you can pass a function pointer, which works just as well.
I don't know how much that would help when you are trying to integrate C and Python code but I just wanted to clear up one misconception.
A: In addition to the tools above, I can recommend using Pyrex
(for creating Python extension modules) or Psyco (as a JIT compiler for Python).
| {
"language": "en",
"url": "https://stackoverflow.com/questions/16067",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "22"
} |
Q: Visual Studio open files question Is it possible to open a project in Visual Studio 2008 without opening all the files that were previously opened last time I had the project open. I have a habit of keeping many files open as I am working on them, so next time I open the project, it (very slowly) loads up a bunch of files into the editor that I may not even need open. I have searched through the settings and cannot find anything to stop this behavior.
A: I never realized how much that annoyed me as well! I haven't been able to find a setting, but in Options > Environment > Keyboard you can bind a shortcut to Window.CloseAllDocuments. ALT+X was unbound for me so I just used that. I'm interested if there's some hidden setting to automatically do this on solution exit though (or load).
A: Edit: Totally read the question wrong at first - ignore my first (now gone) answer. :)
I changed the keyboard mapping for CTRL-SHIFT-C from bringing up the Class View to closing all document windows - something I use several orders of magnitude more often - and then I just clear my workspace before closing a solution.
A: Try the following:
*
*Close the program after closing all files.
*Make a copy of [whatever].suo
*Open the solution again, open some files, and exit.
*Copy (don't move) the old .suo file over the one that was just generated.
*Make the .suo file read only.
If you have a repository you might want to check that file in.
I suggest this because I was having the reverse problem, where it wasn't opening my old files automatically, and the cause was a .suo file that had been checked into the repository and was (for some reason) not being overwritten by Studio. The file wasn't even write protected.
A: Simply delete the .suo file.
It contains the list of open files.
A: A bit of research turns up the fact that you can do it with a macro:
*
*Create a new macro (or use an existing one). You should see a module called EnvironmentEvents in Macro Explorer. (For details, see here.)
*Open the EnvironmentEvents module.
*Put in this code:
Public Sub CloseDocsOnExit() Handles SolutionEvents.BeforeClosing
DTE.ExecuteCommand("Window.CloseAllDocuments")
End Sub
*Save and Build the macro.
*Open a whole bunch of documents in your solution, then close Visual Studio.
*Yay! No more open documents!
*(Note: Despite that it says SolutionEvents, it also works if you're working on a project that doesn't have a solution.)
A: I was hoping for something a little more automatic. VS will create a new .suo file every time the project is saved. So I would have to delete that file every time I open the project. I also don't want to have to remember to close all the files before closing VS.
Other IDEs that I have used have similar functionality, but also make it rather simple to turn on/off.
Thanks for your help.
A: Or you can close all open document from the Window menu before closing VS.
A: In Visual Studio 6.0 (VC++), the procedure is slightly different.
Delete the .ncb file (located normally in the same place as your .dsp or .dsw files).
A: The only way that works for me is to change the project location and then reopen the solution from there. :)
| {
"language": "en",
"url": "https://stackoverflow.com/questions/16074",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "31"
} |
Q: How to apply multiple styles in WPF In WPF, how would I apply multiple styles to a FrameworkElement? For instance, I have a control which already has a style. I also have a separate style which I would like to add to it without blowing away the first one. The styles have different TargetTypes, so I can't just extend one with the other.
A: Bea Stollnitz had a good blog post about using a markup extension for this, under the heading "How can I set multiple styles in WPF?"
That blog is dead now, so I'm reproducing the post here:
WPF and Silverlight both offer the ability to derive a Style from
another Style through the “BasedOn” property. This feature enables
developers to organize their styles using a hierarchy similar to class
inheritance. Consider the following styles:
<Style TargetType="Button" x:Key="BaseButtonStyle">
<Setter Property="Margin" Value="10" />
</Style>
<Style TargetType="Button" x:Key="RedButtonStyle" BasedOn="{StaticResource BaseButtonStyle}">
<Setter Property="Foreground" Value="Red" />
</Style>
With this syntax, a Button that uses RedButtonStyle will have its
Foreground property set to Red and its Margin property set to 10.
This feature has been around in WPF for a long time, and it’s new in
Silverlight 3.
What if you want to set more than one style on an element? Neither WPF
nor Silverlight provide a solution for this problem out of the box.
Fortunately there are ways to implement this behavior in WPF, which I
will discuss in this blog post.
WPF and Silverlight use markup extensions to provide properties with
values that require some logic to obtain. Markup extensions are easily
recognizable by the presence of curly brackets surrounding them in
XAML. For example, the {Binding} markup extension contains logic to
fetch a value from a data source and update it when changes occur; the
{StaticResource} markup extension contains logic to grab a value from
a resource dictionary based on a key. Fortunately for us, WPF allows
users to write their own custom markup extensions. This feature is not
yet present in Silverlight, so the solution in this blog is only
applicable to WPF.
Others
have written great solutions to merge two styles using markup
extensions. However, I wanted a solution that provided the ability to
merge an unlimited number of styles, which is a little bit trickier.
Writing a markup extension is straightforward. The first step is to
create a class that derives from MarkupExtension, and use the
MarkupExtensionReturnType attribute to indicate that you intend the
value returned from your markup extension to be of type Style.
[MarkupExtensionReturnType(typeof(Style))]
public class MultiStyleExtension : MarkupExtension
{
}
Specifying inputs to the markup extension
We’d like to give users of our markup extension a simple way to
specify the styles to be merged. There are essentially two ways in
which the user can specify inputs to a markup extension. The user can
set properties or pass parameters to the constructor. Since in this
scenario the user needs the ability to specify an unlimited number of
styles, my first approach was to create a constructor that takes any
number of strings using the “params” keyword:
public MultiStyleExtension(params string[] inputResourceKeys)
{
}
My goal was to be able to write the inputs as follows:
<Button Style="{local:MultiStyle BigButtonStyle, GreenButtonStyle}" … />
Notice the comma separating the different style keys. Unfortunately,
custom markup extensions don’t support an unlimited number of
constructor parameters, so this approach results in a compile error.
If I knew in advance how many styles I wanted to merge, I could have
used the same XAML syntax with a constructor taking the desired number
of strings:
public MultiStyleExtension(string inputResourceKey1, string inputResourceKey2)
{
}
As a workaround, I decided to have the constructor parameter take a
single string that specifies the style names separated by spaces. The
syntax isn’t too bad:
<Button Style="{local:MultiStyle BigButtonStyle GreenButtonStyle}" … />
private string[] resourceKeys;
public MultiStyleExtension(string inputResourceKeys)
{
if (inputResourceKeys == null)
{
throw new ArgumentNullException("inputResourceKeys");
}
this.resourceKeys = inputResourceKeys.Split(new char[] { ' ' }, StringSplitOptions.RemoveEmptyEntries);
if (this.resourceKeys.Length == 0)
{
throw new ArgumentException("No input resource keys specified.");
}
}
Calculating the output of the markup extension
To calculate the output of a markup extension, we need to override a
method from MarkupExtension called “ProvideValue”. The value returned
from this method will be set in the target of the markup extension.
I started by creating an extension method for Style that knows how to
merge two styles. The code for this method is quite simple:
public static void Merge(this Style style1, Style style2)
{
if (style1 == null)
{
throw new ArgumentNullException("style1");
}
if (style2 == null)
{
throw new ArgumentNullException("style2");
}
if (style1.TargetType.IsAssignableFrom(style2.TargetType))
{
style1.TargetType = style2.TargetType;
}
if (style2.BasedOn != null)
{
Merge(style1, style2.BasedOn);
}
foreach (SetterBase currentSetter in style2.Setters)
{
style1.Setters.Add(currentSetter);
}
foreach (TriggerBase currentTrigger in style2.Triggers)
{
style1.Triggers.Add(currentTrigger);
}
// This code is only needed when using DynamicResources.
foreach (object key in style2.Resources.Keys)
{
style1.Resources[key] = style2.Resources[key];
}
}
With the logic above, the first style is modified to include all
information from the second. If there are conflicts (e.g. both styles
have a setter for the same property), the second style wins. Notice
that aside from copying styles and triggers, I also took into account
the TargetType and BasedOn values as well as any resources the second
style may have. For the TargetType of the merged style, I used
whichever type is more derived. If the second style has a BasedOn
style, I merge its hierarchy of styles recursively. If it has
resources, I copy them over to the first style. If those resources are
referred to using {StaticResource}, they’re statically resolved before
this merge code executes, and therefore it isn’t necessary to move
them. I added this code in case we’re using DynamicResources.
The extension method shown above enables the following syntax:
style1.Merge(style2);
This syntax is useful provided that I have instances of both styles
within ProvideValue. Well, I don’t. All I get from the constructor is
a list of string keys for those styles. If there was support for
params in the constructor parameters, I could have used the following
syntax to get the actual style instances:
<Button Style="{local:MultiStyle {StaticResource BigButtonStyle}, {StaticResource GreenButtonStyle}}" … />
public MultiStyleExtension(params Style[] styles)
{
}
But that doesn’t work. And even if the params limitation didn’t exist,
we would probably hit another limitation of markup extensions, where
we would have to use property-element syntax instead of attribute
syntax to specify the static resources, which is verbose and
cumbersome (I explain this bug better in a previous blog
post).
And even if both those limitations didn’t exist, I would still rather
write the list of styles using just their names – it is shorter and
simpler to read than a StaticResource for each one.
The solution is to create a StaticResourceExtension using code. Given
a style key of type string and a service provider, I can use
StaticResourceExtension to retrieve the actual style instance. Here is
the syntax:
Style currentStyle = new StaticResourceExtension(currentResourceKey).ProvideValue(serviceProvider) as Style;
Now we have all the pieces needed to write the ProvideValue method:
public override object ProvideValue(IServiceProvider serviceProvider)
{
Style resultStyle = new Style();
foreach (string currentResourceKey in resourceKeys)
{
Style currentStyle = new StaticResourceExtension(currentResourceKey).ProvideValue(serviceProvider) as Style;
if (currentStyle == null)
{
throw new InvalidOperationException("Could not find style with resource key " + currentResourceKey + ".");
}
resultStyle.Merge(currentStyle);
}
return resultStyle;
}
Here is a complete example of the usage of the MultiStyle markup
extension:
<Window.Resources>
<Style TargetType="Button" x:Key="SmallButtonStyle">
<Setter Property="Width" Value="120" />
<Setter Property="Height" Value="25" />
<Setter Property="FontSize" Value="12" />
</Style>
<Style TargetType="Button" x:Key="GreenButtonStyle">
<Setter Property="Foreground" Value="Green" />
</Style>
<Style TargetType="Button" x:Key="BoldButtonStyle">
<Setter Property="FontWeight" Value="Bold" />
</Style>
</Window.Resources>
<Button Style="{local:MultiStyle SmallButtonStyle GreenButtonStyle BoldButtonStyle}" Content="Small, green, bold" />
A: But you can extend from another style. Take a look at the BasedOn property:
<Style TargetType="TextBlock">
<Setter Property="Margin" Value="3" />
</Style>
<Style x:Key="AlwaysVerticalStyle" TargetType="TextBlock"
BasedOn="{StaticResource {x:Type TextBlock}}">
<Setter Property="VerticalAlignment" Value="Top" />
</Style>
A: This is possible by creating a helper class to use and wrap your styles. CompoundStyle mentioned here shows how to do it. There are multiple ways, but the easiest is to do the following:
<TextBlock Text="Test"
local:CompoundStyle.StyleKeys="headerStyle,textForMessageStyle,centeredStyle"/>
Hope that helps.
A: Use an attached property to set multiple styles, as in the following code:
public static class Css
{
public static string GetClass(DependencyObject element)
{
if (element == null)
throw new ArgumentNullException("element");
return (string)element.GetValue(ClassProperty);
}
public static void SetClass(DependencyObject element, string value)
{
if (element == null)
throw new ArgumentNullException("element");
element.SetValue(ClassProperty, value);
}
public static readonly DependencyProperty ClassProperty =
DependencyProperty.RegisterAttached("Class", typeof(string), typeof(Css),
new PropertyMetadata(null, OnClassChanged));
private static void OnClassChanged(DependencyObject d, DependencyPropertyChangedEventArgs e)
{
var ui = d as FrameworkElement;
Style newStyle = new Style();
if (e.NewValue != null)
{
var names = e.NewValue as string;
var arr = names.Split(new char[] { ' ' }, StringSplitOptions.RemoveEmptyEntries);
foreach (var name in arr)
{
Style style = ui.FindResource(name) as Style;
foreach (var setter in style.Setters)
{
newStyle.Setters.Add(setter);
}
foreach (var trigger in style.Triggers)
{
newStyle.Triggers.Add(trigger);
}
}
}
ui.Style = newStyle;
}
}
Usage: (Point the xmlns:local="clr-namespace:style_a_class_like_css" to the right namespace)
<Window x:Class="MainWindow"
xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
xmlns:local="clr-namespace:style_a_class_like_css"
mc:Ignorable="d"
Title="MainWindow" Height="150" Width="325">
<Window.Resources>
<Style TargetType="TextBlock" x:Key="Red" >
<Setter Property="Foreground" Value="Red"/>
</Style>
<Style TargetType="TextBlock" x:Key="Green" >
<Setter Property="Foreground" Value="Green"/>
</Style>
<Style TargetType="TextBlock" x:Key="Size18" >
<Setter Property="FontSize" Value="18"/>
<Setter Property="Margin" Value="6"/>
</Style>
<Style TargetType="TextBlock" x:Key="Bold" >
<Setter Property="FontWeight" Value="Bold"/>
</Style>
</Window.Resources>
<StackPanel>
<Button Content="Button" local:Css.Class="Red Bold" Width="75"/>
<Button Content="Button" local:Css.Class="Red Size18" Width="75"/>
<Button Content="Button" local:Css.Class="Green Size18 Bold" Width="75"/>
</StackPanel>
</Window>
Result:
A: I think the simple answer is that you can't do (at least in this version of WPF) what you are trying to do.
That is, for any particular element only one Style can be applied.
However, as others have stated above, maybe you can use BasedOn to help you out. Check out the following piece of loose xaml. In it you will see that I have a base style that is setting a property that exists on the base class of the element that I want to apply two styles to. And, in the second style which is based on the base style, I set another property.
So, the idea here is: if you can somehow separate the properties that you want to set according to the inheritance hierarchy of the element you want to set multiple styles on, you might have a workaround.
<Page xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml">
<Page.Resources>
<Style x:Key="baseStyle" TargetType="FrameworkElement">
<Setter Property="HorizontalAlignment" Value="Left"/>
</Style>
<Style TargetType="Button" BasedOn="{StaticResource baseStyle}">
<Setter Property="Content" Value="Hello World"/>
</Style>
</Page.Resources>
<Grid>
<Button Width="200" Height="50"/>
</Grid>
</Page>
Note:
One thing in particular to note. If you change the TargetType in the second style (in first set of xaml above) to ButtonBase, the two Styles do not get applied. However, check out the following xaml below to get around that restriction. Basically, it means you need to give the Style a key and reference it with that key.
<Page xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml">
<Page.Resources>
<Style x:Key="baseStyle" TargetType="FrameworkElement">
<Setter Property="HorizontalAlignment" Value="Left"/>
</Style>
<Style x:Key="derivedStyle" TargetType="ButtonBase" BasedOn="{StaticResource baseStyle}">
<Setter Property="Content" Value="Hello World"/>
</Style>
</Page.Resources>
<Grid>
<Button Width="200" Height="50" Style="{StaticResource derivedStyle}"/>
</Grid>
</Page>
A: WPF/XAML doesn't provide this functionality natively, but it does provide the extensibility to allow you to do what you want.
We ran into the same need, and ended up creating our own XAML Markup Extension (which we called "MergedStylesExtension") to allow us to create a new Style from two other styles (which, if needed, could probably be used multiple times in a row to inherit from even more styles).
Due to a WPF/XAML bug, we need to use property element syntax to use it, but other than that it seems to work ok. E.g.,
<Button
Content="This is an example of a button using two merged styles">
<Button.Style>
<ext:MergedStyles
BasedOn="{StaticResource FirstStyle}"
MergeStyle="{StaticResource SecondStyle}"/>
</Button.Style>
</Button>
I recently wrote about it here:
http://swdeveloper.wordpress.com/2009/01/03/wpf-xaml-multiple-style-inheritance-and-markup-extensions/
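For reference, here is a minimal sketch of what such a two-style merge extension could look like (the class name matches the usage above, but the implementation details are my assumption, mirroring the Merge helper shown earlier in this thread):
using System;
using System.Windows;
using System.Windows.Markup;

[MarkupExtensionReturnType(typeof(Style))]
public class MergedStyles : MarkupExtension
{
    public Style BasedOn { get; set; }
    public Style MergeStyle { get; set; }

    public override object ProvideValue(IServiceProvider serviceProvider)
    {
        if (BasedOn == null || MergeStyle == null)
            throw new InvalidOperationException("Both BasedOn and MergeStyle must be set.");

        // Derive from BasedOn, then copy MergeStyle's setters and triggers on top.
        var result = new Style(MergeStyle.TargetType, BasedOn);
        foreach (SetterBase setter in MergeStyle.Setters)
            result.Setters.Add(setter);
        foreach (TriggerBase trigger in MergeStyle.Triggers)
            result.Triggers.Add(trigger);
        return result;
    }
}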
A: If you are not touching any type-specific properties, you can move all the base and common properties into a style whose target type is FrameworkElement. Then you can create specific flavours for each target type you need, without having to copy all those common properties again.
A: You can probably get something similar by applying this to a collection of items through the use of a StyleSelector. I have used this to approach a similar problem: using different styles on TreeViewItems depending on the bound object type in the tree. You may have to modify the class below slightly to adjust to your particular approach, but hopefully this will get you started.
public class MyTreeStyleSelector : StyleSelector
{
public Style DefaultStyle
{
get;
set;
}
public Style NewStyle
{
get;
set;
}
public override Style SelectStyle(object item, DependencyObject container)
{
ItemsControl ctrl = ItemsControl.ItemsControlFromItemContainer(container);
//apply to only the first element in the container (new node)
if (item == ctrl.Items[0])
{
return NewStyle;
}
else
{
//otherwise use the default style
return DefaultStyle;
}
}
}
You then apply it like so:
<TreeView>
<TreeView.ItemContainerStyleSelector>
<myassembly:MyTreeStyleSelector DefaultStyle="{StaticResource DefaultItemStyle}"
NewStyle="{StaticResource NewItemStyle}" />
</TreeView.ItemContainerStyleSelector>
</TreeView>
A: Sometimes you can approach this by nesting panels. Say you have a Style which changes Foreground and another which changes FontSize: you can apply the latter to a TextBlock and put it in a Grid whose Style is the first one. This might help and might be the easiest way in some cases, though it won't solve all the problems.
A: When you override SelectStyle, you can get the GroupBy property via reflection as below:
public override Style SelectStyle(object item, DependencyObject container)
{
PropertyInfo p = item.GetType().GetProperty("GroupBy", BindingFlags.NonPublic | BindingFlags.Instance);
PropertyGroupDescription propertyGroupDescription = (PropertyGroupDescription)p.GetValue(item);
if (propertyGroupDescription != null && propertyGroupDescription.PropertyName == "Title" )
{
return this.TitleStyle;
}
if (propertyGroupDescription != null && propertyGroupDescription.PropertyName == "Date")
{
return this.DateStyle;
}
return null;
}
A: If you are trying to apply a unique style to just one single element as an addition to a base style, there is a completely different way to do this that is IMHO much better for readable and maintainable code.
It's extremely common to need to tweak parameters per individual element. Defining dictionary styles just for use on one element is extremely cumbersome to maintain or make sense of. To avoid creating styles just for one-off element tweaks, read my answer to my own question here:
https://stackoverflow.com/a/54497665/1402498
| {
"language": "en",
"url": "https://stackoverflow.com/questions/16096",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "168"
} |
Q: What is the best way to replicate a version control repository? Here is the scenario that I have. I have a cvs repository in one location (A) and I want to replicate it and keep it in sync with a repository in another location(B). This would be a single directional sync from A to B. What is the best way to do this? If it is not really feasible in CVS then which source code control system would you recommend to accomplish this? Thanks
A: When using CVS, I don't know of any tools to do that other than file syncing. You can achieve it using tools like rsync (Unix) or xcopy/robocopy (Windows).
If you plan on migrating to Subversion, it provides a tool called svnsync that allows you to sync a repository from another one.
A: I would recommend you migrate from CVS to a proper distributed version control system such as git, which will provide this sort of functionality very naturally.
Subversion also provides svnsync which does the same sort of thing.
A: If you do take the rsync/filecopy approach with CVS, it is important to only sync the files at a time when there is not an active commit. Otherwise, the repository's lock file will get copied over and you will be unable to checkout/update on the target side until the next sync.
This reason alone may make CVS a bad choice. The migration path from CVS to Subversion is pretty smooth and there are tools to import a full CVS repo, with history, into Subversion.
Consider Git or Mercurial if you want to get into true distributed versioning, but it sounds like that would be overkill for your "read only" needs.
A: The best (and perhaps costliest) way is ClearCase MultiSite.
But if you are looking for open source, Git is quickly replacing SVN everywhere.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/16097",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Convert a string to an enum in C# What's the best way to convert a string to an enumeration value in C#?
I have an HTML select tag containing the values of an enumeration. When the page is posted, I want to pick up the value (which will be in the form of a string) and convert it to the corresponding enumeration value.
In an ideal world, I could do something like this:
StatusEnum MyStatus = StatusEnum.Parse("Active");
but that isn't valid code.
A: // str.ToEnum<EnumType>()
// Note: extension methods must be declared inside a static class.
public static T ToEnum<T>(this string str)
{
    return (T) Enum.Parse(typeof(T), str);
}
A: Not sure when this was added but on the Enum class there is now a
Parse<TEnum>(stringValue)
Used like so with example in question:
var MyStatus = Enum.Parse<StatusEnum>("Active");
or ignoring casing by:
var MyStatus = Enum.Parse<StatusEnum>("active", true);
Here is the decompiled methods this uses:
[NullableContext(0)]
public static TEnum Parse<TEnum>([Nullable(1)] string value) where TEnum : struct
{
return Enum.Parse<TEnum>(value, false);
}
[NullableContext(0)]
public static TEnum Parse<TEnum>([Nullable(1)] string value, bool ignoreCase) where TEnum : struct
{
TEnum result;
Enum.TryParse<TEnum>(value, ignoreCase, true, out result);
return result;
}
A: Most of the answers here require you to always pass in the default value of the enum each time you call on the extension method. If you don't want to go by that approach, you can implement it like below:
public static TEnum ToEnum<TEnum>(this string value) where TEnum : struct
{
if (string.IsNullOrWhiteSpace(value))
return default(TEnum);
return Enum.TryParse(value, true, out TEnum result) ? result : default(TEnum);
}
Using default literal (available from C# 7.1)
public static TEnum ToEnum<TEnum>(this string value, TEnum defaultValue = default) where TEnum : struct
{
if (string.IsNullOrWhiteSpace(value))
return defaultValue ;
return Enum.TryParse(value, true, out TEnum result) ? result : defaultValue ;
}
Better still:
public static TEnum ToEnum<TEnum>(this string value) where TEnum : struct
{
if (string.IsNullOrWhiteSpace(value))
return default;
return Enum.TryParse(value, true, out TEnum result) ? result : default;
}
A: Parses string to TEnum without try/catch and without TryParse() method from .NET 4.5
/// <summary>
/// Parses string to TEnum without try/catch and .NET 4.5 TryParse()
/// </summary>
public static bool TryParseToEnum<TEnum>(string probablyEnumAsString_, out TEnum enumValue_) where TEnum : struct
{
enumValue_ = (TEnum)Enum.GetValues(typeof(TEnum)).GetValue(0);
if(!Enum.IsDefined(typeof(TEnum), probablyEnumAsString_))
return false;
enumValue_ = (TEnum) Enum.Parse(typeof(TEnum), probablyEnumAsString_);
return true;
}
A: I like the extension method solution...
namespace System
{
public static class StringExtensions
{
public static bool TryParseAsEnum<T>(this string value, out T output) where T : struct
{
T result;
var isEnum = Enum.TryParse(value, out result);
output = isEnum ? result : default(T);
return isEnum;
}
}
}
Below is my implementation with tests.
using static Microsoft.VisualStudio.TestTools.UnitTesting.Assert;
using static System.Console;
private enum Countries
{
NorthAmerica,
Europe,
Rusia,
Brasil,
China,
Asia,
Australia
}
[TestMethod]
public void StringExtensions_On_TryParseAsEnum()
{
var countryName = "Rusia";
Countries country;
var isCountry = countryName.TryParseAsEnum(out country);
WriteLine(country);
IsTrue(isCountry);
AreEqual(Countries.Rusia, country);
countryName = "Don't exist";
isCountry = countryName.TryParseAsEnum(out country);
WriteLine(country);
IsFalse(isCountry);
AreEqual(Countries.NorthAmerica, country); // the 1rst one in the enumeration
}
A: Super simple code using TryParse:
var value = "Active";
StatusEnum status;
if (!Enum.TryParse<StatusEnum>(value, out status))
status = StatusEnum.Unknown;
A: You can use extension methods now:
public static T ToEnum<T>(this string value, bool ignoreCase = true)
{
return (T) Enum.Parse(typeof (T), value, ignoreCase);
}
And you can call them by the below code (here, FilterType is an enum type):
FilterType filterType = type.ToEnum<FilterType>();
A: For performance this might help:
private static Dictionary<Type, Dictionary<string, object>> dicEnum = new Dictionary<Type, Dictionary<string, object>>();
public static T ToEnum<T>(this string value, T defaultValue)
{
var t = typeof(T);
Dictionary<string, object> dic;
if (!dicEnum.ContainsKey(t))
{
dic = new Dictionary<string, object>();
dicEnum.Add(t, dic);
foreach (var en in Enum.GetValues(t))
dic.Add(en.ToString(), en);
}
else
dic = dicEnum[t];
if (!dic.ContainsKey(value))
return defaultValue;
else
return (T)dic[value];
}
A: Use Enum.TryParse<T>(String, T) (≥ .NET 4.0):
StatusEnum myStatus;
Enum.TryParse("Active", out myStatus);
It can be simplified even further with C# 7.0's parameter type inlining:
Enum.TryParse("Active", out StatusEnum myStatus);
A: BEWARE:
enum Example
{
One = 1,
Two = 2,
Three = 3
}
Enum.(Try)Parse() accepts multiple, comma-separated arguments, and combines them with binary 'or' |. You cannot disable this and in my opinion you almost never want it.
var x = (Example)Enum.Parse(typeof(Example), "One,Two"); // x is now Three
Even if Three was not defined, x would still get int value 3. That's even worse: Enum.Parse() can give you a value that is not even defined for the enum!
I would not want to experience the consequences of users, willingly or unwillingly, triggering this behavior.
Additionally, as mentioned by others, performance is less than ideal for large enums, namely linear in the number of possible values.
I suggest the following:
public static bool TryParse<T>(string value, out T result)
where T : struct
{
var cacheKey = "Enum_" + typeof(T).FullName;
    // Use MemoryCache to retrieve or create & store a dictionary for this enum,
    // permanently or temporarily. (Implementation off-topic.)
var enumDictionary = CacheHelper.GetCacheItem(cacheKey, CreateEnumDictionary<T>, EnumCacheExpiration);
return enumDictionary.TryGetValue(value.Trim(), out result);
}
private static Dictionary<string, T> CreateEnumDictionary<T>()
{
return Enum.GetValues(typeof(T))
.Cast<T>()
.ToDictionary(value => value.ToString(), value => value, StringComparer.OrdinalIgnoreCase);
}
A: If the property name is different from what you want to call it (i.e., language differences), you can do it like this:
MyType.cs
using System;
using System.Runtime.Serialization;
using Newtonsoft.Json;
using Newtonsoft.Json.Converters;
[JsonConverter(typeof(StringEnumConverter))]
public enum MyType
{
[EnumMember(Value = "person")]
Person,
[EnumMember(Value = "annan_deltagare")]
OtherPerson,
[EnumMember(Value = "regel")]
Rule,
}
EnumExtensions.cs
using System;
using Newtonsoft.Json;
using Newtonsoft.Json.Converters;
public static class EnumExtensions
{
public static TEnum ToEnum<TEnum>(this string value) where TEnum : Enum
{
var jsonString = $"'{value.ToLower()}'";
return JsonConvert.DeserializeObject<TEnum>(jsonString, new StringEnumConverter());
}
public static bool EqualsTo<TEnum>(this string strA, TEnum enumB) where TEnum : Enum
{
TEnum enumA;
try
{
enumA = strA.ToEnum<TEnum>();
}
catch
{
return false;
}
return enumA.Equals(enumB);
}
}
Program.cs
public class Program
{
static public void Main(String[] args)
{
var myString = "annan_deltagare";
var myType = myString.ToEnum<MyType>();
var isEqual = myString.EqualsTo(MyType.OtherPerson);
//Output: true
}
}
A: At some point a generic version of Parse was added. For me this was preferable because I didn't need to "try" to parse and I also want the result inline without generating an output variable.
ColorEnum color = Enum.Parse<ColorEnum>("blue");
MS Documentation: Parse
A: Note that the performance of Enum.Parse() is not ideal, because it is implemented via reflection. (The same is true of Enum.ToString(), which goes the other way.)
If you need to convert strings to Enums in performance-sensitive code, your best bet is to create a Dictionary<String,YourEnum> at startup and use that to do your conversions.
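For illustration, a minimal sketch of that dictionary approach, using the question's StatusEnum (the helper names are made up; assumes using System.Collections.Generic and System.Linq):
private static readonly Dictionary<string, StatusEnum> StatusLookup =
    Enum.GetValues(typeof(StatusEnum))
        .Cast<StatusEnum>()
        .ToDictionary(v => v.ToString(), v => v, StringComparer.OrdinalIgnoreCase);

// Case-insensitive lookup with no reflection on the hot path.
public static bool TryGetStatus(string value, out StatusEnum status)
{
    return StatusLookup.TryGetValue(value, out status);
}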
A: object Enum.Parse(System.Type enumType, string value, bool ignoreCase);
So if you had an enum named mood it would look like this:
enum Mood
{
Angry,
Happy,
Sad
}
// ...
Mood m = (Mood) Enum.Parse(typeof(Mood), "Happy", true);
Console.WriteLine("My mood is: {0}", m.ToString());
A: In .NET Core and .NET Framework ≥4.0 there is a generic parse method:
Enum.TryParse("Active", out StatusEnum myStatus);
This also includes C#7's new inline out variables, so this does the try-parse, conversion to the explicit enum type and initialises+populates the myStatus variable.
If you have access to C#7 and the latest .NET this is the best way.
Original Answer
In .NET it's rather ugly (until 4 or above):
StatusEnum MyStatus = (StatusEnum) Enum.Parse(typeof(StatusEnum), "Active", true);
I tend to simplify this with:
public static T ParseEnum<T>(string value)
{
return (T) Enum.Parse(typeof(T), value, true);
}
Then I can do:
StatusEnum MyStatus = EnumUtil.ParseEnum<StatusEnum>("Active");
One option suggested in the comments is to add an extension, which is simple enough:
public static T ToEnum<T>(this string value)
{
return (T) Enum.Parse(typeof(T), value, true);
}
StatusEnum MyStatus = "Active".ToEnum<StatusEnum>();
Finally, you may want to have a default enum to use if the string cannot be parsed:
public static T ToEnum<T>(this string value, T defaultValue)
{
if (string.IsNullOrEmpty(value))
{
return defaultValue;
}
T result;
return Enum.TryParse<T>(value, true, out result) ? result : defaultValue;
}
Which makes this the call:
StatusEnum MyStatus = "Active".ToEnum(StatusEnum.None);
However, I would be careful adding an extension method like this to string as (without namespace control) it will appear on all instances of string whether they hold an enum or not (so 1234.ToString().ToEnum(StatusEnum.None) would be valid but nonsensical). It's often best to avoid cluttering Microsoft's core classes with extra methods that only apply in very specific contexts unless your entire development team has a very good understanding of what those extensions do.
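A minimal sketch of that kind of namespace control, assuming the ToEnum extension above (the namespace and class names are made up):
// Placing the extension in its own namespace means it only appears on
// string instances in files that opt in with a using directive.
namespace MyApp.EnumExtensions
{
    public static class StringEnumExtensions
    {
        public static T ToEnum<T>(this string value, T defaultValue) where T : struct
        {
            return Enum.TryParse<T>(value, true, out var result) ? result : defaultValue;
        }
    }
}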
A: I used a class (a strongly-typed version of Enum with parsing and performance improvements). I found it on GitHub, and it should work for .NET 3.5 too. It has some memory overhead since it buffers a dictionary.
StatusEnum MyStatus = Enum<StatusEnum>.Parse("Active");
The blogpost is Enums – Better syntax, improved performance and TryParse in NET 3.5.
And code:
https://github.com/damieng/DamienGKit/blob/master/CSharp/DamienG.Library/System/EnumT.cs
A: public static T ParseEnum<T>(string value) //function declaration
{
return (T) Enum.Parse(typeof(T), value);
}
Importance imp = EnumUtil.ParseEnum<Importance>("Active"); //function call
====================A Complete Program====================
using System;
class Program
{
enum PetType
{
None,
Cat = 1,
Dog = 2
}
static void Main()
{
// Possible user input:
string value = "Dog";
// Try to convert the string to an enum:
PetType pet = (PetType)Enum.Parse(typeof(PetType), value);
// See if the conversion succeeded:
if (pet == PetType.Dog)
{
Console.WriteLine("Equals dog.");
}
}
}
-------------
Output
Equals dog.
A: I found that the case of enum values that carry an EnumMember attribute was not considered here. So here we go:
using System.Runtime.Serialization;
public static TEnum ToEnum<TEnum>(this string value, TEnum defaultValue) where TEnum : struct
{
if (string.IsNullOrEmpty(value))
{
return defaultValue;
}
TEnum result;
var enumType = typeof(TEnum);
foreach (var enumName in Enum.GetNames(enumType))
{
var fieldInfo = enumType.GetField(enumName);
var enumMemberAttribute = ((EnumMemberAttribute[]) fieldInfo.GetCustomAttributes(typeof(EnumMemberAttribute), true)).FirstOrDefault();
if (enumMemberAttribute?.Value == value)
{
return Enum.TryParse(enumName, true, out result) ? result : defaultValue;
}
}
return Enum.TryParse(value, true, out result) ? result : defaultValue;
}
And example of that enum:
public enum OracleInstanceStatus
{
Unknown = -1,
Started = 1,
Mounted = 2,
Open = 3,
[EnumMember(Value = "OPEN MIGRATE")]
OpenMigrate = 4
}
A: You have to use Enum.Parse to get the object value from Enum; after that you have to change the object value to a specific enum value. Casting to the enum value can be done by using Convert.ChangeType. Please have a look at the following code snippet:
public T ConvertStringValueToEnum<T>(string valueToParse)
{
    return (T)Convert.ChangeType(Enum.Parse(typeof(T), valueToParse, true), typeof(T));
}
A: Try this sample:
public static T GetEnum<T>(string model)
{
    var newModel = GetStringForEnum(model);
    if (!Enum.IsDefined(typeof(T), newModel))
    {
        return (T)Enum.Parse(typeof(T), "None", true);
    }
    return (T)Enum.Parse(typeof(T), newModel, true);
}
// Requires: using System.Text.RegularExpressions;
private static string GetStringForEnum(string model)
{
    // Strip out any characters that cannot appear in an enum member name.
    Regex rgx = new Regex("[^a-zA-Z0-9 -]");
    return rgx.Replace(model, "");
}
In this sample you can send in any string and get your enum back. If your enum defines the value you wanted, it is returned as your enum type; otherwise the fallback member "None" is returned (so the enum is expected to define one).
A: public static TEnum ToEnum<TEnum>(this string value, TEnum defaultValue)
{
    if (string.IsNullOrEmpty(value))
        return defaultValue;
    return (TEnum)Enum.Parse(typeof(TEnum), value, true);
}
A: Enum.Parse is your friend:
StatusEnum MyStatus = (StatusEnum)Enum.Parse(typeof(StatusEnum), "Active");
A: You can extend the accepted answer with a default value to avoid exceptions:
public static T ParseEnum<T>(string value, T defaultValue) where T : struct
{
try
{
T enumValue;
if (!Enum.TryParse(value, true, out enumValue))
{
return defaultValue;
}
return enumValue;
}
catch (Exception)
{
return defaultValue;
}
}
Then you call it like:
StatusEnum MyStatus = EnumUtil.ParseEnum("Active", StatusEnum.None);
If the default value is not an enum, Enum.TryParse would fail and throw an exception, which is caught.
After years of using this function in our code in many places, maybe it's good to add the information that this operation costs performance!
A: We couldn't assume perfectly valid input, and went with this variation of @Keith's answer:
public static TEnum ParseEnum<TEnum>(string value) where TEnum : struct
{
TEnum tmp;
if (!Enum.TryParse<TEnum>(value, true, out tmp))
{
tmp = new TEnum();
}
return tmp;
}
A: You're looking for Enum.Parse.
SomeEnum result = (SomeEnum)Enum.Parse(typeof(SomeEnum), "EnumValue");
A: <Extension()>
Public Function ToEnum(Of TEnum)(ByVal value As String, ByVal defaultValue As TEnum) As TEnum
If String.IsNullOrEmpty(value) Then
Return defaultValue
End If
Return [Enum].Parse(GetType(TEnum), value, True)
End Function
A: If you want to use a default value when null or empty (e.g. when retrieving from config file and the value does not exist) and throw an exception when the string or number does not match any of the enum values. Beware of caveat in Timo's answer though (https://stackoverflow.com/a/34267134/2454604).
public static T ParseEnum<T>(this string s, T defaultValue, bool ignoreCase = false)
where T : struct, IComparable, IConvertible, IFormattable//If C# >=7.3: struct, System.Enum
{
if ((s?.Length ?? 0) == 0)
{
return defaultValue;
}
var valid = Enum.TryParse<T>(s, ignoreCase, out T res);
if (!valid || !Enum.IsDefined(typeof(T), res))
{
throw new InvalidOperationException(
$"'{s}' is not a valid value of enum '{typeof(T).FullName}'!");
}
return res;
}
A: I started to use this approach. Performance seems to be OK; however, it requires a bit of boilerplate code to set up.
public enum StatusType {
Success,
Pending,
Rejected
}
static class StatusTypeMethods {
public static StatusType GetEnum(string type) {
switch (type) {
case nameof(StatusType.Success): return StatusType.Success;
case nameof(StatusType.Pending): return StatusType.Pending;
case nameof(StatusType.Rejected): return StatusType.Rejected;
default:
throw new ArgumentOutOfRangeException(nameof(type), type, null);
};
}
}
And later on, you can use it like this:
StatusType status = StatusTypeMethods.GetEnum("Success");
A: First of all, you need to decorate your enum, like this:
public enum Store : short
{
[Description("Rio Big Store")]
Rio = 1
}
In .NET 5, I created this extension method:
// The class containing this method also needs to be static.
// Requires: using System.ComponentModel; using System.Reflection;
public static string GetDescription(this System.Enum enumValue)
{
FieldInfo fi = enumValue.GetType().GetField(enumValue.ToString());
DescriptionAttribute[] attributes = (DescriptionAttribute[])fi.GetCustomAttributes(
typeof(DescriptionAttribute), false);
if (attributes != null && attributes.Length > 0) return attributes[0].Description;
else return enumValue.ToString();
}
Now you have an extension method to use on any enum, like this:
var Desc = Store.Rio.GetDescription(); //Store is your Enum
| {
"language": "en",
"url": "https://stackoverflow.com/questions/16100",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1225"
} |
Q: C# Auto Clearing Winform Textbox I have a user that want to be able to select a textbox and have the current text selected so that he doesn't have to highlight it all in order to change the contents.
The contents need to be handled when enter is pushed. That part I think I have figured out, but any suggestions would be welcome.
The part I need help with is that once enter has been pushed, any entry into the textbox should clear the contents again.
Edit: The textbox controls an piece of RF hardware. What the user wants to be able to do is enter a setting and press enter. The setting is sent to the hardware. Without doing anything else the user wants to be able to type in a new setting and press enter again.
A: Hook into the KeyPress event on the TextBox, and when it encounters the Enter key, run your hardware setting code, and then highlight the full text of the textbox again (see below) - Windows will take care of clearing the text with the next keystroke for you.
TextBox1.Select(0, TextBox1.Text.Length);
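For illustration, a minimal sketch of that handler (SendToHardware is a hypothetical placeholder for whatever applies the setting to your RF hardware):
private void textBox1_KeyPress(object sender, KeyPressEventArgs e)
{
    if (e.KeyChar == (char)Keys.Enter)
    {
        SendToHardware(textBox1.Text); // hypothetical: push the new setting
        textBox1.SelectAll();          // Windows clears the text on the next keystroke
        e.Handled = true;              // suppress the error beep
    }
}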
A: OK, are you sure that is wise? I am picturing two scenarios here:
*
*There is a default button on the form, which is "clicked" when enter is pushed".
*There is no default button, and you want the user to have to press enter, regardless.
Both of these raise the same questions:
*
*Is there any validation that is taking place on the text?
*Why not create a user control to encapsulate this logic?
*If you know the enter button is being pushed and consumed fine, how are you having problems with TextBoxName.Text = string.Empty?
Also, as a polite note, can you please try and break up your question a bit? One big block is a bit of a pain to read..
| {
"language": "en",
"url": "https://stackoverflow.com/questions/16110",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: Software for Webserver Log Analysis? Can I get some recommendations (preferably with some reasons) for good log analysis software for Apache 2.2 access log files?
I have heard of Webalizer and AWStats, but have never really used any of them, and would like to know:
*
*What they can do
*Why they are useful
*Interesting uses for them
Any and all comments and thoughts are welcome.
A: I use Analog because it's free, but it is rather dated now (wow, last release was 4 years ago!) and I'm sure it doesn't have as many fancy features as newer ones.
A: Splunk > is great, and free. It allows you to visualize and search all of your logs in real time. And it's all web based, so you can view your logs from just about anywhere.
A: I'd go for awstats. I was getting my site usage reports from awstats for a while and then turned on Google Analytics to do this job as well. And, surprisingly, Google turned out to be rather unreliable.
I usually get no visits from a country like Finland. Then once, I knew for sure a friend of mine visited the site from Finland, and what happened? Awstats says - hey, you had a visit from Finland! And Google Analytics? I move my mouse over Finland and the bubble says "visits: 0". After that I could never quite make myself trust it again.
A: It seems that as JavaScript page-tagging becomes the more popular way of gathering web stats, there's not as much work being done on log-based analysis tools in the marketplace anymore. My office used to use a product called LiveStats.XSP. It wasn't the greatest tool by any means, but it did have some nice features. It was recently bought by Microsoft, however, and is no longer supported. Microsoft abandoned its log analysis and turned it into a proposed Google Analytics killer called Microsoft Gatineau, which supposedly is good at determining the demographics of your visitors, including age and gender (yeah sure...)
When I was looking for log analysis software a while ago, I wanted to avoid anything that looked overly bloated and enterprisey, which is what most stuff seemed to be, focusing more on the marketing and advertising aspects of reports.
One thing you may want to look at is the new version of Urchin, Urchin6 (see features here). Urchin I believe was bought by Google a few years ago. It's offered as a locally installed solution, and with it you have the option to use either page-tagging or log file analysis for any site that it monitors. There also seems to be some interface ties between Google's own web-based Google Analytics and Urchin. It's not free though, unfortunately, and I think you can only get it through authorized partners.
It does all the standard logfile analysis stuff, everything is browser-based, the reports it offers are pretty deep and comprehensive, and it also seems to have a few bells and whistles that other services don't offer. For example, I remember it being able to present a view of a web page it tracks with colored hot spots overlayed on top of it, based on how often users click on items on that area of the page. Worth checking out the demo of it anyways.
A: I get Awstats and Webalizer with my web hosting account and I find that neither is accurate or very useful. The reported numbers are inflated by up to 1000%, because the tools don't properly identify bots and spiders. Here is a comparison of the Visits metric between three tools over the past 3 weeks (I think Awstats has only partial data for 3/23, and no data for today 3/24, which is why I did not include the numbers).
Google Woopra Awstats
Sunday, March 1, 2009 10 11 69
Monday, March 2, 2009 13 14 85
Tuesday, March 3, 2009 13 14 96
Wednesday, March 4, 2009 21 28 91
Thursday, March 5, 2009 19 25 107
Friday, March 6, 2009 12 10 88
Saturday, March 7, 2009 12 14 100
Sunday, March 8, 2009 10 11 65
Monday, March 9, 2009 13 14 78
Tuesday, March 10, 2009 17 13 96
Wednesday, March 11, 2009 18 16 87
Thursday, March 12, 2009 19 18 87
Friday, March 13, 2009 12 13 66
Saturday, March 14, 2009 11 7 52
Sunday, March 15, 2009 11 12 57
Monday, March 16, 2009 13 15 92
Tuesday, March 17, 2009 24 22 102
Wednesday, March 18, 2009 18 16 79
Thursday, March 19, 2009 17 18 73
Friday, March 20, 2009 16 11 70
Saturday, March 21, 2009 24 26 67
Sunday, March 22, 2009 103 114 216
Monday, March 23, 2009 232 223 117
I personally prefer Woopra over Google. Although it is still in beta, can take a long time for your site to get approved, and will probably be a paid service at some point, the real-time monitoring capabilities are amazing. The new custom reporting capabilities in Google Analytics are superior to Woopra's, though. Woopra does not have any capabilities to produce printed reports.
A: My old company always used WebLog Expert. There is a free 'lite' version. It still has on-going development and can be combined with geo-location databases if you use one of the paid versions.
A: AWStats and Webalizer are both good and free (I think both free speech as well as free beer). I generally prefer the look of AWStats - it has a nice modern look whereas Webalizer looks like something created in about 1992.
They both give roughly the same information which includes:
*
*Most frequently accessed pages
*Which hosts (IPs and Domain Names) visitors come from
*Proportion of users using different browsers
*Proportion of downloads of different file types
All of this information is usually viewable on an hour-by-hour, day-by-day, month-by-month and year-by-year basis. Normally the raw data is available but also with bar charts and pie charts. Both AWStats and Webalizer will (I think) try to work out where your visitors come from by using services such as GeoIP, although I never bothered to set this up. Some also try to work out what order people have visited pages in and things like that - but that is very difficult to do, so the results are guesses at best.
I generally find them both useful - even if just to get an overview of what is going on with my server and who is accessing it. They are both relatively easy to install - although I seem to remember Webalizer being a little easier than AWStats, and they both have varied configuration options to let you decide exactly what you want to get out of them.
For more information see their sites at awstats.sourceforge.net/ and http://www.webalizer.org/.
Hope that helps.
Robin
A: I typically do a search every 30 or so days for new projects for better stats. I currently (still) use AWStats (and JAWStats as I mentioned in another thread).
One of the best things about AWStats when compared to it counterparts is ability to break up your datafiles to create reports by day, week, month, year or even hour using the databasebreak feature.
Why people are still using Webalizer and Analog (well, speed aside) is beyond me. At least in my experience they tend to bury as much information as they expose and I end up reading raw logs.
A: Try out XpoLog log analysis for web servers. It provides a log analysis platform to analyze multiple web and application servers from a centralized location, and to create automatic reports and monitors for application health. Find it at XpoLog log analysis
A: If you are looking for stats that are meant to be given to a customer and provide them with realistic data (not number of hits based on every file), then check out Google Analytics. Free and looks pretty.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/16113",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "25"
} |
Q: C# Include Derived Control in Toolbox This is in reference to my other question Auto Clearing Textbox.
If I choose to derive a new TextBox control from TextBox instead of implement a user control just containing my Textbox, how would I include that in the toolbox.
A:
Right-click the toolbox, click "Choose
Items" from the context menu, browse
to your DLL, and select it.
To extend on Greg's answer...
Just to clarify, you cannot add a user control to the tool box if the code for it is in the same project that you want to use it in. For some reason MS has never added this ability, which would make sense since we don't want to always have to create a User Control Library DLL everytime we want to use a user control. So, to get it in your tool box, you have to first create a separate "User Control Library" project (which can be in the same solution!) and then do what Greg said.
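As an illustration, a minimal derived control that could live in such a control-library project might look like this (the class name and the Enter-key behavior are hypothetical, tying back to the related question):
using System.Windows.Forms;

public class AutoClearTextBox : TextBox
{
    protected override void OnKeyPress(KeyPressEventArgs e)
    {
        base.OnKeyPress(e);
        if (e.KeyChar == (char)Keys.Enter)
        {
            SelectAll();     // the next keystroke replaces the contents
            e.Handled = true;
        }
    }
}
Once it is compiled into the control-library DLL, the "Choose Items" dialog will pick it up.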
A: Right-click the toolbox, click "Choose Items" from the context menu, browse to your DLL, and select it.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/16114",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
} |
Q: What's the best way to get started with OSGI? What makes a module/service/bit of application functionality a particularly good candidate for an OSGi module?
I'm interested in using OSGi in my applications. We're a Java shop and we use Spring pretty extensively, so I'm leaning toward using Spring Dynamic Modules for OSGi(tm) Service Platforms. I'm looking for a good way to incorporate a little bit of OSGi into an application as a trial. Has anyone here used this or a similar OSGi technology? Are there any pitfalls?
@Nicolas - Thanks, I've seen that one. It's a good tutorial, but I'm looking more for ideas on how to do my first "real" OSGi bundle, as opposed to a Hello World example.
@david - Thanks for the link! Ideally, with a greenfield app, I'd design the whole thing to be dynamic. What I'm looking for right now, though, is to introduce it in a small piece of an existing application. Assuming I can pick any piece of the app, what are some factors to consider that would make that piece better or worse as an OSGi guinea pig?
A: When learning a new technology, rich tooling gets you into things without big headaches.
At this point the community at ops4j.org provides a rich toolset called "PAX" which includes:
*
*Pax Runner: Run and switch between Felix, Equinox, Knopflerfish and Concierge easily
*Pax Construct: Construct, Organize & Build OSGi projects with maven easily
*Pax Drone: Test your OSGi bundles with Junit while being framework independent (uses PaxRunner)
Then there are many implementations of OSGi compendium services:
*
*Pax Logging (logging),
*Pax Web (http service),
*Pax Web Extender (war support),
*Pax Coin (configuration),
*Pax Shell (shell implementation, part of the next osgi release)
*and much more.
... and there is a helpful, framework-independent community - but that's now advertisement ;-)
A: This answer comes nearly 3 years after the question was asked, but the link I just found is really good, especially for starters using maven. A step-by-step explanation.
A: Well, since you can not have one part OSGi and one part non-OSGi you'll need to make your entire app OSGi. In its simplest form you make a single OSGi bundle out of your entire application. Clearly this is not a best practice but it can be useful to get a feel for deploying a bundle in an OSGi container (Equinox, Felix, Knoplerfish, etc).
To take it to the next level you'll want to start splitting your app into components, components should typically have a set of responsibilities that can be isolated from the rest of your application through a set of interfaces and class dependencies. Identifying these purely by hand can range from rather straightforward for a well designed highly cohesive but loosely coupled application to a nightmare for interlocked source code that you are not familiar with.
Some help can come from tools like JDepend which can show you the coupling of Java packages against other packages/classes in your system. A package with low efferent coupling should be easier to extract into an OSGi bundle than one with high efferent coupling. Even more architectural insight can be had with pro tools like Structure 101.
Purely on a technical level, working daily with an application that consists of 160 OSGi bundles and using Spring DM I can confirm that the transition from "normal" Spring to Spring DM is largely pain free. The extra namespace and the fact that you can (and should) isolate your OSGi specific Spring configuration in separate files makes it even easier to have both with and without OSGi deployment scenarios.
OSGi is a deep and wide component model, documentation I recommend:
*
*OSGi R4 Specification: Get the PDFs of the Core and Compendium specification, they are canonical, authoritative and very readable. Have a shortcut to them handy at all times, you will consult them.
*Read up on OSGi best practices, there is a large set of things you can do but a somewhat smaller set of things you should do and there are some things you should never do (DynamicImport: * for example).
Some links:
*
*OSGi best practices and using Apache Felix
*Peter Kriens and BJ Hargrave in a Sun presentation on OSGi best practices
*one key OSGi concept are Services, learn why and how they supplant the Listener pattern with the Whiteboard pattern
*The Spring DM Google Group is very responsive and friendly in my experience
The Spring DM Google Group is no longer active and has moved to Eclipse.org as the Gemini Blueprint project which has a forum here.
A: Is your existing application monolithic or tiered in separate processes/layers?
If tiered, you can convert the middle/app-tier to run in an OSGi container.
In my team's experience, we've found trying to do web-stuff in OSGi painful. Other pain points are Hibernate and Jakarta Commons Logging.
I find the OSGi specs pretty readable and I recommend you print out the flowchart that shows the algorithm for class loading. I'll guarantee you'll have moments of, "why am I getting a NoClassDefFoundError?": the flowchart will tell you why.
A: Try http://neilbartlett.name/blog/osgibook/. The book has hands-on examples with OSGi best practices.
A: Try http://njbartlett.name/files/osgibook_preview_20091217.pdf
OR
http://www.manning.com/hall/
The second is not a book I have read myself, but I have heard good things about it.
The first was very useful for me. He takes you through the architecture initially, and then it's hands-on OSGi.
A: There are a couple of things to keep in mind if you are starting with OSGi.
As mentioned elsewhere in this thread, knowing about classloading is really important. In my experience everybody sooner or later runs into problems with it.
Another important thing to remember is: never hold references! Have a look at the whiteboard pattern, on which the services concept of OSGi is built (see the link in one of the other answers).
In my experience you should not try to convert a monolithic application into an OSGi-based one. This usually leads to an unmanageable mess. Start anew.
Download one of the freely available stand-alone OSGi implementations. I found Knopflerfish rather good and stable (I use it in many projects). It also comes with lots of source code. You can find it here: http://www.knopflerfish.org
Another good tutorial can be found here. https://pro40.abac.com/deanhiller/cgi-bin/moin.cgi/OsgiTutorial
Peter Kriens of the OSGi Alliance gave a nice interview: http://www.infoq.com/interviews/osgi-peter-kriens. His homepage and blog (which are always a good read) can be found here: http://www.aqute.biz
A: I really like the Apache Felix tutorials. However, I think in general leveraging OSGi in your application isn't one of those "let's use this framework because it's hyped" decisions. It's more of a design question, but then everything that OSGi gives you in terms of design, you can have with vanilla Java as well.
As for the runtime, you cannot just take an existing application and make it OSGi-enabled. It needs to be designed to be dynamic. Spring DM makes it easy to hide that from you, but it's still there and you need to be aware of it.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/16140",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "45"
} |
Q: What do "branch", "tag" and "trunk" mean in Subversion repositories? I've seen these words a lot around Subversion (and I guess general repository) discussions.
I have been using SVN for my projects for the last few years, but I've never grasped the complete concept of these directories.
What do they mean?
A: In addition to what Nick has said you can find out more at Streamed Lines: Branching Patterns for Parallel Software Development
In this figure main is the trunk, rel1-maint is a branch and 1.0 is a tag.
A: Hmm, not sure I agree with Nick re a tag being similar to a branch. A tag is just a marker:
*
*Trunk would be the main body of development, originating from the start of the project until the present.
*Branch will be a copy of code derived from a certain point in the trunk that is used for applying major changes to the code while preserving the integrity of the code in the trunk. If the major changes work according to plan, they are usually merged back into the trunk.
*Tag will be a point in time on the trunk or a branch that you wish to preserve. The two main reasons for preservation would be that either this is a major release of the software, whether alpha, beta, RC or RTM, or this is the most stable point of the software before major revisions on the trunk were applied.
In open source projects, major branches that are not accepted into the trunk by the project stakeholders can become the bases for forks -- e.g., totally separate projects that share a common origin with other source code.
The branch and tag subtrees are distinguished from the trunk in the following ways:
Subversion allows sysadmins to create hook scripts which are triggered for execution when certain events occur; for instance, committing a change to the repository. It is very common for a typical Subversion repository implementation to treat any path containing "/tag/" to be write-protected after creation; the net result is that tags, once created, are immutable (at least to "ordinary" users). This is done via the hook scripts, which enforce the immutability by preventing further changes if tag is a parent node of the changed object.
Subversion also has added features, since version 1.5, relating to "branch merge tracking" so that changes committed to a branch can be merged back into the trunk with support for incremental, "smart" merging.
A: The trunk directory is the directory that you're probably most familiar with, because it is used to hold the most recent changes. Your main codebase should be in trunk.
The branches directory is for holding your branches, whatever they may be.
The tags directory is basically for tagging a certain set of files. You do this for things like releases, where you want "1.0" to be these files at these revisions and "1.1" to be these files at these revisions. You usually don't modify tags once they're made. For more information on tags, see Chapter 4. Branching and Merging (in Version Control with Subversion).
A: One of the reasons why everyone has a slightly different definition is because Subversion implements zero support for branches and tags. Subversion basically says: we looked at full-featured branches and tags in other systems and did not find them useful, so we did not implement anything. Just make a copy into a new directory with a naming convention instead. Then of course everyone is free to have slightly different conventions. To understand the difference between a real tag and a mere copy + naming convention,
see the Wikipedia entry Subversion tags & branches.
A:
Tag = a defined slice in time, usually used for releases
I think this is what one typically means by "tag". But in Subversion:
They don't really have any formal meaning. A folder is a folder to SVN.
which I find rather confusing: a revision control system that knows nothing about branches or tags. From an implementation point of view, I think the Subversion way of creating "copies" is very clever, but me having to know about it is what I'd call a leaky abstraction.
Or perhaps I've just been using CVS far too long.
A: In general (tool agnostic view), a branch is the mechanism used for parallel development. An SCM can have from 0 to n branches. Subversion has 0.
*
*Trunk is a main branch recommended by Subversion, but you are in no way forced to create it. You could call it 'main' or 'releases', or not have one at all!
*Branch represents a development effort. It should never be named after a resource (like 'vonc_branch') but after:
*
*a purpose 'myProject_dev' or 'myProject_Merge'
*a release perimeter: 'myProject1.0_dev', 'myProject2.3_Merge' or 'myProject6.2_Patch1'...
*Tag is a snapshot of files in order to easily get back to that state.
The problem is that tag and branch is the same in Subversion. And I would definitely recommend the paranoid approach:
you can use one of the access control scripts provided with Subversion to prevent anyone from doing anything but creating new copies in the tags area.
A tag is final. Its content should never change. NEVER. Ever. You forgot a line in the release note? Create a new tag. Obsolete or remove the old one.
Now, I read a lot about "merging back such and such in such and such branches, then finally in the trunk branch".
That is called a merge workflow, and there is nothing mandatory here. Having a trunk branch does not by itself mean you have to merge anything back into it.
By convention, the trunk branch can represent the current state of your development, but that is for a simple sequential project, that is a project which has:
*
*no 'in advance' development (for preparing the next-next version, implying changes so large that they are not compatible with the current 'trunk' development)
*no massive refactoring (for testing a new technical choice)
*no long-term maintenance of a previous release
Because with one (or all) of those scenario, you get yourself four 'trunks', four 'current developments', and not all you do in those parallel development will necessarily have to be merged back in 'trunk'.
A: I think that some of the confusion comes from the difference between the concept of a tag and the implementation in SVN. To SVN a tag is a branch which is a copy. Modifying tags is considered wrong and in fact tools like TortoiseSVN will warn you if you attempt to modify anything with ../tags/.. in the path.
A: First of all, as @AndrewFinnell and @KenLiu point out, in SVN the directory names themselves mean nothing -- "trunk, branches and tags" are simply a common convention that is used by most repositories. Not all projects use all of the directories (it's reasonably common not to use "tags" at all), and in fact, nothing is stopping you from calling them anything you'd like, though breaking convention is often confusing.
I'll describe probably the most common usage scenario of branches and tags, and give an example scenario of how they are used.
*
*Trunk: The main development area. This is where your next major release of the code lives, and generally has all the newest features.
*Branches: Every time you release a major version, it gets a branch created. This allows you to do bug fixes and make a new release without having to release the newest - possibly unfinished or untested - features.
*Tags: Every time you release a version (final release, release candidates (RC), and betas) you make a tag for it. This gives you a point-in-time copy of the code as it was at that state, allowing you to go back and reproduce any bugs if necessary in a past version, or re-release a past version exactly as it was. Branches and tags in SVN are lightweight - on the server, it does not make a full copy of the files, just a marker saying "these files were copied at this revision" that only takes up a few bytes. With this in mind, you should never be concerned about creating a tag for any released code. As I said earlier, tags are often omitted and instead, a changelog or other document clarifies the revision number when a release is made.
For example, let's say you start a new project. You start working in "trunk", on what will eventually be released as version 1.0.
*
*trunk/ - development version, soon to be 1.0
*branches/ - empty
Once 1.0.0 is finished, you branch trunk into a new "1.0" branch, and create a "1.0.0" tag. Now work on what will eventually be 1.1 continues in trunk.
*
*trunk/ - development version, soon to be 1.1
*branches/1.0 - 1.0.0 release version
*tags/1.0.0 - 1.0.0 release version
You come across some bugs in the code, and fix them in trunk, and then merge the fixes over to the 1.0 branch. You can also do the opposite, and fix the bugs in the 1.0 branch and then merge them back to trunk, but commonly projects stick with merging one-way only to lessen the chance of missing something. Sometimes a bug can only be fixed in 1.0 because it is obsolete in 1.1. It doesn't really matter: you only want to make sure that you don't release 1.1 with the same bugs that have been fixed in 1.0.
*
*trunk/ - development version, soon to be 1.1
*branches/1.0 - upcoming 1.0.1 release
*tags/1.0.0 - 1.0.0 release version
Once you find enough bugs (or maybe one critical bug), you decide to do a 1.0.1 release. So you make a tag "1.0.1" from the 1.0 branch, and release the code. At this point, trunk will contain what will be 1.1, and the "1.0" branch contains 1.0.1 code. The next time you release an update to 1.0, it would be 1.0.2.
*
*trunk/ - development version, soon to be 1.1
*branches/1.0 - upcoming 1.0.2 release
*tags/1.0.0 - 1.0.0 release version
*tags/1.0.1 - 1.0.1 release version
Eventually you are almost ready to release 1.1, but you want to do a beta first. In this case, you likely do a "1.1" branch, and a "1.1beta1" tag. Now, work on what will be 1.2 (or 2.0 maybe) continues in trunk, but work on 1.1 continues in the "1.1" branch.
*
*trunk/ - development version, soon to be 1.2
*branches/1.0 - upcoming 1.0.2 release
*branches/1.1 - upcoming 1.1.0 release
*tags/1.0.0 - 1.0.0 release version
*tags/1.0.1 - 1.0.1 release version
*tags/1.1beta1 - 1.1 beta 1 release version
Once you release 1.1 final, you do a "1.1" tag from the "1.1" branch.
You can also continue to maintain 1.0 if you'd like, porting bug fixes between all three branches (1.0, 1.1, and trunk). The important takeaway is that for every main version of the software you are maintaining, you have a branch that contains the latest version of code for that version.
Another use of branches is for features. This is where you branch trunk (or one of your release branches) and work on a new feature in isolation. Once the feature is completed, you merge it back in and remove the branch.
*
*trunk/ - development version, soon to be 1.2
*branches/1.1 - upcoming 1.1.0 release
*branches/ui-rewrite - experimental feature branch
The idea of this is when you're working on something disruptive (that would hold up or prevent other people from doing their work), something experimental (that may not even make it in), or possibly just something that takes a long time (and you're afraid of it holding up a 1.2 release when you're ready to branch 1.2 from trunk), you can do it in isolation in a branch. Generally you keep it up to date with trunk by merging changes into it all the time, which makes it easier to re-integrate (merge back to trunk) when you're finished.
Also note, the versioning scheme I used here is just one of many. Some teams would do bug fix/maintenance releases as 1.1, 1.2, etc., and major changes as 1.x, 2.x, etc. The usage here is the same, but you may name the branch "1" or "1.x" instead of "1.0" or "1.0.x". (Aside, semantic versioning is a good guide on how to do version numbers).
A: I'm not really sure what 'tag' is, but branch is a fairly common source control concept.
Basically, a branch is a way to work on changes to the code without affecting trunk. Say you want to add a new feature that's fairly complicated. You want to be able to check in changes as you make them, but don't want it to affect trunk until you're done with the feature.
First you'd create a branch. This is basically a copy of trunk as-of the time you made the branch. You'd then do all your work in the branch. Any changes made in the branch don't affect trunk, so trunk is still usable, allowing others to continue working there (like doing bugfixes or small enhancements). Once your feature is done you'd integrate the branch back into trunk. This would move all your changes from the branch to trunk.
There are a number of patterns people use for branches. If you have a product with multiple major versions being supported at once, usually each version would be a branch. Where I work we have a QA branch and a Production branch. Before releasing our code to QA we integrate changes to the QA branch, then deploy from there. When releasing to production we integrate from the QA branch to the Production branch, so we know the code running in production is identical to what QA tested.
Here's the Wikipedia entry on branches, since they probably explain things better than I can. :)
A: Trunk: after the completion of every sprint in agile we come out with a partially shippable product. These releases are kept in the trunk.
Branches: all parallel development code for each ongoing sprint is kept in branches.
Tags: every time we release a partially shippable product, such as a beta version, we make a tag for it. This gives us the code that was available at that point in time, allowing us to go back to that state if required at some point during development.
A: For people familiar with Git, master in Git is equivalent to trunk in SVN.
Branch and tag have the same meaning in both Git and SVN.
A: In SVN a tag and branch are really similar.
Tag = a defined slice in time, usually used for releases
Branch = also a defined slice in time that development can continue on, usually used for major versions like 1.0, 1.5, 2.0, etc.; then when you release, you tag the branch. This allows you to continue to support a production release while moving forward with breaking changes in the trunk
Trunk = development work space, this is where all development should happen, and then changes merged back from branch releases.
A: They don't really have any formal meaning. A folder is a folder
to SVN. They are a generally accepted way to organize your project.
*
*The trunk is where you keep your main line of development. The branch folder is where you might create, well, branches, which are hard to explain in a short post.
*A branch is a copy of a subset of your project that you work on separately from the trunk. Maybe it's for experiments that might not go anywhere, or maybe it's for the next release, which you will later merge back into the trunk when it becomes stable.
*And the tags folder is for creating tagged copies of your repository, usually at release checkpoints.
But like I said, to SVN, a folder is a folder. branch, trunk and tag are just a convention.
I'm using the word 'copy' liberally. SVN doesn't actually make full copies of things in the repository.
A: The trunk is the development line that holds the latest source code and features. It should have the latest bug fixes in it as well as the latest features added to the project.
The branches are usually used to do something away from the trunk (or other development line) that would otherwise break the build. New features are often built in a branch and then merged back into the trunk. Branches often contain code that is not necessarily approved for the development line it branched from. For example, a programmer could try an optimization on something in a branch and only merge it back into the development line once the optimization is satisfactory.
The tags are snapshots of the repository at a particular time. No development should occur on these. They are most often used to take a copy of what was released to a client so that you can easily have access to what a client is using.
Here's a link to a very good guide to repositories:
*
*Source Control HOWTO
The articles in Wikipedia are also worth reading.
A: Now that's the thing about software development: there's no consistent knowledge about anything, and everybody seems to have their own way of doing it. But that's because it is a relatively young discipline anyway.
Here's my plain simple way,
trunk - The trunk directory contains the most current, approved, and merged body of work. Contrary to what many have confessed, my trunk is only for clean, neat, approved work, and not a development area, but rather a release area.
At some given point in time when the trunk seems all ready to release, then it is tagged and released.
branches - The branches directory contains experiments and ongoing work. Work under a branch stays there until is approved to be merged into the trunk. For me, this is the area where all the work is done.
For example: I can have an iteration-5 branch for a fifth round of development on the product, maybe a prototype-9 branch for a ninth round of experimenting, and so on.
tags - The tags directory contains snapshots of approved branches and trunk releases. Whenever a branch is approved to merge into the trunk, or a release is made of the trunk, a snapshot of the approved branch or trunk release is made under tags.
I suppose with tags I can jump back and forth through time to points of interest quite easily.
A: I found this great tutorial regarding SVN when I was looking up the website of the author of the OpenCV 2 Computer Vision Application Programming Cookbook and I thought I should share.
He has a tutorial on how to use SVN and what the phrases 'trunk', 'tag' and 'branch' mean.
Cited directly from his tutorial:
The current version of your software project, on which your team is currently working, is usually located under a directory called trunk. As the project evolves, the developers update that version (fix bugs, add new features) and submit their changes under that directory.
At any given point in time, you may want to freeze a version and capture a snapshot of the software as it is at this stage of the development. This generally corresponds to the official versions of your software, for example, the ones you will deliver to your clients. These snapshots are located under the tags directory of your project.
Finally, it is often useful to create, at some point, a new line of development for your software. This happens, for example, when you wish to test an alternative implementation in which you have to modify your software but you do not want to submit these changes to the main project until you decide whether to adopt the new solution. The main team can then continue to work on the project while other developers work on the prototype. You would put these new lines of development of the project under a directory called branches.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/16142",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1210"
} |
Q: What's the best way to report errors from a SharePoint workflow? I have a custom action in a workflow and would like to report an error to the user when something goes wrong. What's the best way of doing that?
UPD: Ideally I would like to put the workflow in the error state and log a message to the workflow log. That doesn't seem possible. What's the closest I can get to it? I want a reusable solution, something that's easy for users to set up when using my custom action in SharePoint Designer.
Added more details to the question.
@mauro that takes care of storing the error, but how do I display the error to the user in a way which makes sense?
@AdamSane That's seems like a rather fragile solution. It's not a matter of setting this up once on a single site. I need something others can add using SPD.
A: When you throw the error, your error handler can then email the user; or better, if the list is massive, add the error state to the workflow item - I think this is default functionality though, as the error would be mentioned there.
http://www.sharepointsecurity.com/blog/sharepoint/sharepoint-2007-development/fault-handling-in-sharepoint-workflows/
A: Add the error to a hidden list with that user's name. Set the visibility on the list (for users) to only read/write their own values. Then use a custom web part or FlexListViewer to view the contents of that list and display it to the user. Once they acknowledge the error, remove it from the list.
If necessary, you can add a different workflow action on that message list, that says pause for 2 days and then email. Whatever, depending on your requirements.
Otherwise you can have a custom db table that you use for pretty much the same thing, this way sharepoint does most of the work for you.
Update This can be packaged up as a feature and deployed to each site as needed. The strengths of this approach (adding a list item to a list, querying, alerting a user, and emailing a user) are all built into the sharepoint itself. In this case you can focus on your custom logic only, while letting sharepoint focus on the implementation details.
A: Personally I would log it to either a log file or the event log depending on the issue. I think storing it using a user's permissions would be a bad idea: what happens if that user does not have the correct rights? Or worse still, they get elevated permissions by browsing the list in explorer view?
The log file would be the best way; that way you rely only on the file system being available - you don't have to worry about trapping errors happening whilst connecting to the database etc.
Mauro
A: If you need the user to take some action as a result of the error (e.g. retrying the workflow) is it possible to create a task for that user with information on the error and the location of the workflow?
| {
"language": "en",
"url": "https://stackoverflow.com/questions/16145",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Making one interface overwrite a method it inherits from another interface in PHP Is there a way in PHP to overwrite a method declared by one interface in an interface extending that interface?
The Example:
I'm probably doing something wrong, but here is what I have:
interface iVendor{
public function __construct($vendors_no = null);
public function getName();
public function getVendors_no();
public function getZip();
public function getCountryCode();
public function setName($name);
public function setVendors_no($vendors_no);
public function setZip($zip);
public function setCountryCode($countryCode);
}
interface iShipper extends iVendor{
public function __construct($vendors_no = null, $shipment = null);
public function getTransitTime($shipment = null);
public function getTransitCost($shipment = null);
public function getCurrentShipment();
public function setCurrentShipment($shipment);
public function getStatus($shipment = null);
}
Normally in PHP, when you extend something, you can overwrite any method contained therein (right?). However, when one interface extends another, it won't let you. Unless I'm thinking about this wrong... When I implement the iShipper interface, I don't have to make the Shipper object extend the Vendor object (that implements the iVendor interface). I just say:
class FedEx implements iShipper{}
and make FedEx implement all of the methods from iVendor and iShipper. However, I need the __construct functions in iVendor and iShipper to be unique. I know I could take out the $shipment = null, but then it wouldn't be as convenient to create Shippers (by just passing in the vendors_no and the shipment while instantiating).
Anyone know how to make this work? My fallback is to have to set the shipment by calling $shipper->setShipment($shipment); on the Shipper after I instantiate it, but I'm hoping for a way to get around having to do that...
A little more explanation for the curious:
The FedEx Object has methods that go to the FedEx site (using cURL) and gets an estimate for the Shipment in question. I have a UPS Object, a BAXGlobal Object, a Conway Object, etc. Each one has COMPLETELY different methods for actually getting the shipping estimate, but all the system needs to know is that they are a "shipper" and that the methods listed in the interface are callable on them (so it can treat them all exactly the same, and loop through them in a "shippers" array calling getTransitX() to find the best shipper for a shipment).
Each "Shipper" is also a "Vendor" though, and is treated as such in other parts of the system (getting and putting in the DB, etc. Our data design is a pile of crap, so FedEx is stored right alongside companies like Dunder Mifflin in the "Vendors" table, which means it gets to have all the properties of every other Vendor, but needs the extra properties and methods supplied by iShipper).
A: @cmcculloh Yeah, in Java you don't define constructors in Interfaces. This allows you to both extend interfaces and also have a class that implements multiple interfaces (both allowed, and very useful in many cases) without worrying about having to satisfy a particular constructor.
EDIT:
Here's my new model:
A. Each interface no longer has a constructor method.
B. All Shippers (UPS, FedEx, etc) now implement iShipper (which extends iVendor) and extend the abstract class Shipper (which has all common non-abstract methods for shippers defined in it, getName(), getZip() etc).
C. Each Shipper has its own unique __construct method which overwrites the abstract __construct($vendors_no = null, $shipment = null) method contained in Shipper (I don't remember why I'm allowing those to be optional now, though. I'd have to go back through my documentation...).
So:
interface iVendor{
public function getName();
public function getVendors_no();
public function getZip();
public function getCountryCode();
public function setName($name);
public function setVendors_no($vendors_no);
public function setZip($zip);
public function setCountryCode($countryCode);
}
interface iShipper extends iVendor{
public function getTransitTime($shipment = null);
public function getTransitCost($shipment = null);
public function getCurrentShipment();
public function setCurrentShipment($shipment);
public function getStatus($shipment = null);
}
abstract class Shipper implements iShipper{
abstract public function __construct($vendors_no = null, $shipment = null);
//a bunch of non-abstract common methods...
}
class FedEx extends Shipper implements iShipper{
public function __construct($vendors_no = null, $shipment = null){
//a bunch of setup code...
}
//all my FedEx specific methods...
}
Thanks for the help!
ps. since I have now added this to "your" answer, if there is something about it you don't like/think should be different, feel free to change it...
A: You could drop the constructor from the interfaces and just declare one in each individual class. Then each class has its own __construct, which is probably much the same for every shipper or for every vendor. If you want those constructors defined only once, I don't think you want to go down that route.
What I think you want to do is make an abstract class that implements vendor, and one that implements shipper. There you could define the constructors differently.
abstract class Vendor implements iVendor {
public function __construct() {
whatever();
}
}
abstract class Shipper implements iShipper {
public function __construct() {
something();
}
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/16155",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: cvs checkin: Identifying the names of the files checked in The cvsnt manual provides a detailed list of parameters than can be passed to the postcommand module, but none of them specify the file name.
Is anybody aware of an option not listed here that would provide the name of the file being checked in?
ColinYounger - The %c command is just the command, e.g. "Commit"
A: The answer (thanks to an answer to a different question by Sally) is to not use the postcommand file, but use the loginfo file and provide the arguments ‘%{s}’
A: Doesn't the %c parameter pass in the command issued (including the files being committed)?
| {
"language": "en",
"url": "https://stackoverflow.com/questions/16161",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: RSS/Atom for professional use I wondered if anyone can give an example of a professional use of RSS/Atom feeds in a company product. Does anyone use feeds for other things than updating news?
For example, did you create a product that gives results as RSS/Atom feeds? Like price listings or current inventory, or maybe dates of training lessons?
Or am I thinking in a wrong way of use cases for RSS/Atom feeds anyway?
edit @abyx has a really good example of a somewhat unexpected use of RSS as a way to get debug information from program transactions. I like the idea of this process. This is the type of use I was thinking of - besides publishing search results or last changes (like mediawiki)
A: Some of my team's new systems generate RSS feeds that the developers syndicate.
These feeds push out events that interest the developers at certain times and the information is controlled using different loggers. Thus when debugging you can get the debugging feed, when you want to see completed transactions you go to the transactions feeds etc.
This allows all the developers to get the information they want in a comfortable way and without any need to mess a lot with configuration. If you don't want to get it there's no need to remove yourself from a mailing list or edit a configuration file - simply remove the feed and be done with it.
Very cool, and the idea was stolen from Pragmatic Project Automation.
A: Most digital libraries use RSS/Atom to display their search results and data updates, according to the OAI-PMH protocol
A: With our internal TRAC server, I'm subscribed to the timeline view for each project that I work on. It's great for keeping track of checkins and bug tickets. This is pretty exclusive to a developer position though.
I also am subscribed to the recent changes for our installation of MediaWiki that we use for our intranet. That way it's easy to see if documents that I need have been changed, or if there's new policies etc.
Our website has a news page that I wrote an RSS feed for as well. While you mentioned that you weren't really interested in recent news, it is nice to keep up with our press releases.
A: I have seen RSS used to syndicate gas prices from a service for a specific zip code.
A: ImmobilienScout24
They use RSS feeds for updates on your search.
A: There are many examples. Here are a couple.
SharePoint provides RSS feeds from its lists.
Many faceted navigation products allow you to get an RSS feed based on a selected filter. For example, you can navigate to view 24" LCD Monitors on newegg.com and then get an RSS feed of that view.
A: Mantis bug tracker includes RSS feeds although I wish they were more configurable. Also we use MediaWiki for documentation which has all sorts of RSS Feeds including a per page watch, and recent changes.
A: I just added RSS feeds to the ticketing system I use at work (TicketDesk) and that feature should be in the next release of the product.
It's nice because it basically provides me a custom search view of outstanding trouble tickets or work requests that come to me, rather than me having to go to the application. It also allows users to get feeds of issues they may be interested in, without requiring them to get emails on each update.
I'm looking at implementing an RSS feed for calls for service that our agency takes, to provide the administrators a quick and easy way to see what has been going on.
A: Atom feed documents and Atom entry documents are used as the representation format for RESTful web services that follow the Atom Publication Protocol (AtomPub).
I personally have used syndication feeds to expose a sub-set of the Windows Event Log information so that I could subscribe and be notified of critical events on a server.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/16164",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
} |
Q: Good refactoring support for C++ The Visual Studio refactoring support for C# is quite good nowadays (though not half as good as some Java IDE's I've seen already) but I'm really missing C++ support.
I have seen Refactor! and am currently trying it out, but maybe one of you guys know a better tool or plugin?
I've been working with Visual Assist X now for a week or two and got totally addicted. Thanks for the tip, I'll try to convince my boss to get me a license at work too.
I've been bughunting for a few days since Visual Assist X kept messing up my Visual Studio after a few specific refactorings, It took me (and customer support) a week to hunt down but let's say for now that Visual Assist X is not a good combination with ClipX.
A: I have tried Refactor!, as its features seemed promising, and it passed testing with a simple test project, but it failed to work with our real project at all: lots of CPU activity, sometimes even a frozen VS IDE, and the refactoring UI not appearing at all for most of the code.
We are using Visual Assist X instead. While it does not offer than many refactorings and it seems to me somewhat more complicated to use, it works.
A: I didn't find this post and created another one. There is a great response about VS2010 there.
If you are like me, who wishes VS2010 comes with C++ refactoring support, please visit my Microsoft Connect ticket and vote for it. Hopefully with enough votes, MS may give it a higher priority.
A: Visual Assist X by Whole Tomato software is not free, but it's absolutely worth the money if you use Visual Studio for C++.
http://www.wholetomato.com/
A: Mozilla's Taras Glek worked the last year or two on C++ analysis and code rewriting tools. His blog is at http://blog.mozilla.com/tglek/, you can find links to the tools they created there. They are of course free and open-source. No GUI, but I thought I'd link it in case it's interesting to anybody.
A: If you like emacs then Xrefactory is a good choice.
A: I'm not familiar with the tools you mentioned but the refactoring support for C++ in Eclipse 3.4 is getting pretty useful and growing.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/16167",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "27"
} |
Q: Best Way To Determine If .NET 3.5 Is Installed I need to programatically determine whether .NET 3.5 is installed. I thought it would be easy:
<% Response.Write(Environment.Version.ToString()); %>
Which returns "2.0.50727.1434" so no such luck...
In my research I have that there are some rather obscure registry keys I can look at but I'm not sure if that is the route to go. Does anyone have any suggestions?
A: That is because technically .NET 3.5 is an extension of the 2.0 framework. The quickest way is to include an assembly from .NET 3.5 and see if it breaks.
System.Web.Extensions
Is a good assembly that is only included in version 3.5. Also it seems that you are using ASP.NET to run this check; this really limits you because you will be unable to check the file system or the registry running in the protected mode of ASP.NET. Or you can always programmatically try loading an assembly from the GAC that should only be in .NET 3.5; however, you may run into problems with permissions again.
This may be one of those times where you ask your self "What am I trying to accomplish?" and see if there are alternative routes.
A: You could try:
static bool HasNet35()
{
try
{
AppDomain.CurrentDomain.Load(
"System.Core, Version=3.5.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089");
return true;
}
catch
{
return false;
}
}
@Nick: Good question, I'll try it in a bit.
Kev
A: A good resource I found:
http://www.walkernews.net/2008/05/16/how-to-check-net-framework-version-installed/
A: @Kev, really like your solution. Thanks for the help.
Using the registry the code would look something like this:
// Requires: using Microsoft.Win32;
RegistryKey key = Registry
    .LocalMachine
    .OpenSubKey("Software\\Microsoft\\NET Framework Setup\\NDP\\v3.5");
return (key != null);
I would be curious if either of these would work in a medium trust environment (although I am working in full trust so it doesn't matter to what I am currently working on).
A: @komradekatz, your solution below from MSDN for convenience for others looking into this. I do not like this solution because it uses the user agent to determine the version. This is not viable for what I need (I am writing a class library that needs to know whether .NET 3.5 is installed). I also question how reliable this solution may prove to be.
<%@ Page Language="C#" %>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<HTML>
<HEAD>
<TITLE>Test for the .NET Framework 3.5</TITLE>
<META HTTP-EQUIV="Content-Type" CONTENT="text/html; charset=utf-8" />
<SCRIPT LANGUAGE="JavaScript">
<!--
var dotNETRuntimeVersion = "3.5.0.0";
function window::onload()
{
if (HasRuntimeVersion(dotNETRuntimeVersion))
{
result.innerText =
"This machine has the correct version of the .NET Framework 3.5."
}
else
{
result.innerText =
"This machine does not have the correct version of the .NET Framework 3.5." +
" The required version is v" + dotNETRuntimeVersion + ".";
}
result.innerText += "\n\nThis machine's userAgent string is: " +
navigator.userAgent + ".";
}
//
// Retrieve the version from the user agent string and
// compare with the specified version.
//
function HasRuntimeVersion(versionToCheck)
{
var userAgentString =
navigator.userAgent.match(/.NET CLR [0-9.]+/g);
if (userAgentString != null)
{
var i;
for (i = 0; i < userAgentString.length; ++i)
{
if (CompareVersions(GetVersion(versionToCheck),
GetVersion(userAgentString[i])) <= 0)
return true;
}
}
return false;
}
//
// Extract the numeric part of the version string.
//
function GetVersion(versionString)
{
var numericString =
versionString.match(/([0-9]+)\.([0-9]+)\.([0-9]+)/i);
return numericString.slice(1);
}
//
// Compare the 2 version strings by converting them to numeric format.
//
function CompareVersions(version1, version2)
{
for (i = 0; i < version1.length; ++i)
{
var number1 = new Number(version1[i]);
var number2 = new Number(version2[i]);
if (number1 < number2)
return -1;
if (number1 > number2)
return 1;
}
return 0;
}
-->
</SCRIPT>
</HEAD>
<BODY>
<div id="result" />
</BODY>
</HTML>
On my machine this outputs:
This machine has the correct version
of the .NET Framework 3.5.
This machine's userAgent string is:
Mozilla/4.0 (compatible; MSIE 7.0;
Windows NT 6.0; SLCC1; .NET CLR
2.0.50727; .NET CLR 3.0.04506; InfoPath.2; .NET CLR 1.1.4322; .NET
CLR 3.5.21022; Zune 2.5).
A: Another interesting find is the presence of assemblies here:
C:\Program Files\Reference
Assemblies\Microsoft\Framework\v3.5
You'd think Microsoft would build a check for "latest version" into the framework.
A: If you want to require a specific version of .net to be installed and can control the distribution of your application, you should really use ClickOnce. It allows you to specify the minimum required version of the .Net framework that should be installed, and it will only check when it is being installed so that all your subsequent startups are not impeded by an unnecessary check.
Also, with ClickOnce you get updating for free. Why wouldn't somebody want to use it?
To set up a ClickOnce application, just right click on the project within Visual Studio and go to the Publish Settings. This will create a special build of your application that you can place on your website. When users download the program, the installer will check for any prerequisites like .Net for you.
A: One option is to detect 4.0 using the version string:
Environment.Version.CompareTo(new Version(4, 0));
then, since 2.0, 3.0 and 3.5 share a CLR version number, these need to be distinguished by checking the registry. Since those versions are already released, the strings to look for are known.
A: Without any assembly loading and exception catching (which is slow), check for class API changes between 2.0 and 3.5. Mono Class Status is very helpful for this. For example, you could check for the GC.Collect(Int32, GCCollectionMode) method, which is in mscorlib and was added in 3.5.
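A rough sketch of that probe via reflection (an illustration of the idea rather than a definitive check; the type and overload are the ones noted above):
using System;
using System.Reflection;
public static class Net35Probe
{
    public static bool HasNet35()
    {
        // System.GCCollectionMode was added to mscorlib in .NET 3.5.
        Type mode = typeof(object).Assembly.GetType("System.GCCollectionMode", false);
        if (mode == null)
            return false;
        // Look for the 3.5-only GC.Collect(Int32, GCCollectionMode) overload.
        MethodInfo collect = typeof(GC).GetMethod("Collect", new Type[] { typeof(int), mode });
        return collect != null;
    }
}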
| {
"language": "en",
"url": "https://stackoverflow.com/questions/16178",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "15"
} |
Q: Web App - Dashboard Type GUI - Interface I'm looking to create a dashboard type gui for a web application. I'm looking for the user to be able to drag and drop different elements (probably either image buttons, anchor tags, or maybe just divs) to different (defined) places and be able to save their setup (in a cookie or on the server). I'm working with c# in the .Net 2.0 framework. I've tried using mootools but their recent update has left their drag/drop capabilities un-useful for me. I'm looking for a bit of direction because I know there is something out there that is just what I'm looking for so I wont have to build from scratch.
Thanks.
A: I have been looking at this kind of functionality myself recently and have decided on using jQuery with the help of jQuery UI. I came across a large amount of information that also suggested Yahoo UI (YUI), I had already started learning jQuery due to the AJAX support that it offers, so I stuck with it.
jQuery UI Site
jQuery UI Documentation
Example of a drag and drop screen layout with jQuery UI
Introduction to jQuery UI
If you decide to use the YUI javascript library, here is a link to a vast amount of videos to help get you started.
http://developer.yahoo.com/yui/theater/
A: If you still want to give MooTools a second chance, I'd recommend taking a look at Mocha UI.
A: I prefer using jQuery for AJAXy stuff like that. It also has a lot of very good plugins that make writing client-side code very easy.
Here is the plugin page specifically for Drag-n-Drop.
http://plugins.jquery.com/project/Plugins/category/45
Ajax callback are also very easy so saving the setup should be fairly easy as well.
A: I used the Microsoft ASP.Net Ajax and AjaxControlToolkit to do something like this. They have a ResizeableControl and a DragPanel. I used these, then hosted an IFrame inside the panel to display the content.
Worked pretty well.
This site:
http://www.asp.net/learn/videos/default.aspx?tabid=63#ajax
Has lots of tutorial videos that show you how to get started using the controls.
A: You might want to look at DropThings on Codeplex.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/16199",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: Displaying Version Information in a Web Service Can anyone suggest a way of getting version information into a Web Service? (VB.NET)
I would like to dynamically use the assembly version in the title or description, but the attributes require constants.
Is manually writing the version info as a string the only way of displaying the information on the .asmx page?
A: Yeah, attributes cannot have anything but constants in them, so you cannot use reflection to get the version number. The WebServiceAttribute class is sealed too, so you cannot inherit it and do what you want from there.
A solution might be to use some kind of placeholder text as the Name, and set up an MsBuild task to replace it with the version number when building the project.
A: You need to pick a type in your assembly and then do the following:
typeof(Some.Object.In.My.Assembly).Assembly.GetName().Version;
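Since the attribute itself only accepts constants, one option is to surface the version through the service instead. A sketch (the service class name here is hypothetical):
// Requires: using System.Web.Services;
[WebMethod]
public string GetVersion()
{
    return typeof(MyWebService).Assembly.GetName().Version.ToString();
}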
A: via reflection you can get the Assembly object which contains the assembly version.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/16209",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Fonts on the Web The collection of fonts available to a web developer is depressingly limited. I remember reading long ago about TrueDoc, as a way of shipping fonts alongside a website - but it seems to have languished. Has anybody used this, or something similar? Is it supported by enough browsers? Am I missing a good solution?
Note that a responsible web developer does not use fonts that are only available on Windows (and especially ones that are only available on Vista), nor do they use a technology that isn't supported by at least the majority of browsers.
Update: As several people have pointed out, there's nothing wrong with providing a list of fallback fonts for people who don't have the specific font you use. I do in fact always do this, and didn't mean to suggest that this was wrong.
While my question was badly phrased, what I meant was that a designer should not make too many assumptions about what the client will have available. You should plan for how all users will see your site, not just for people using your own preferred setup.
A: You can of course use SIFR.
This degrades gracefully in browsers that do not support it and is accessible.
It's not really suitable for using on loads of text but for headings and highlight text it's perfect.
Of course this is a work around to an intrinsic limitation of browsers and the web at this time, but when was this not the case for the majority of web technologies and techniques.
A: You can do that with the new @font-face declaration available in CSS3. It has very good support for a CSS3 feature (i.e. since IE4), too.
The general syntax is:
@font-face {
src: url('path to your font') format('woff|ttf|svg|eot|…');
font-family: the name to use;
font-weight: an optional weight;
font-style: an optional style;
}
There's also a generator available that converts the font to multiple formats and creates the appropriate CSS.
Nowadays, I would recommend providing only a WOFF file; it’s convenient, easy to create.
Also, make sure to quote the name of the format (e.g. format('woff')); it won’t work on Firefox otherwise.
A: Safari, and to a lesser extent, Firefox 3 have support for @font-face in CSS, which lets you use custom fonts. You need to have the appropriate licence to distribute the font files though. These articles explain it in more detail:
*
*http://www.css3.info/preview/web-fonts-with-font-face/
*http://www.alistapart.com/articles/cssatten
*http://www.sitepoint.com/blogs/2008/07/30/custom-web-fonts-pick-your-poison/
A:
Note that a responsible web developer does not use fonts that are only
available on Windows (and especially ones that are only available on
Vista), nor do they use a technology that isn't supported by at least
the majority of browsers.
There's nothing wrong or incorrect about using Windows/Vista-specific fonts provided you gracefully degrade to a widely-available font. For example:
font-family: Calibri, Tahoma, Helvetica, Sans-Serif;
In fact that's the whole point!
A: This is a timely thread; we switched to Arial because Calibri is WAY small compared to all the other fallback fonts! It pained me greatly to switch to (gag) Arial because it's a crap copy of Helvetica:
http://www.ms-studio.com/articles.html
The sizing difficulties (too big if you go with a "c" font as your standard; too small if you go with something normal) are described in detail here:
http://neosmart.net/blog/2006/css-vistas-new-fonts/
I will miss Calibri's beautiful hand-tuned RGB aliasing a lot, but it was just impossible to deliver a good experience for most users without demanding Calibri be installed. It's reasonably common, as it comes with Office 2007 (Win/Mac) and of course Vista.. but it's far from universal, so it's a little irresponsible to rely heavily on it for a global web audience.
A: CSS2 offers:
@font-face {
font-family: Garamond;
src: url(garamond.eot), url(garamond.pfr);
}
A: IE supports @font-face (it started out as their proprietary technology in MS Word). Here's a blog post from the IE team about it just about a month ago.
A:
Note that a responsible web developer does not use fonts that are only available on Windows (and especially ones that are only available on Vista), nor do they use a technology that isn't supported by at least the majority of browsers.
I think this is rather missing the point. It wouldn't matter if you did; everyone would get something sensible that they could read easily, and the ones who need to can change the font to whatever they want anyway because it's just text and all major browsers let you customise the font you see regardless of the preferences of the site designer.
There is nothing broken about suggesting fonts in your CSS that some users don't have; they just see something different from you. Different is not broken. They won't even wonder why you're using default fonts because they won't know that other people see anything different.
This is the whole point of font sets:
Verdana, Arial, Helvetica, sans-serif
It's good practice precisely because it acknowledges that people will see different things.
This is good practice too:
Gill Sans, Verdana, Arial, Helvetica, sans-serif
So most people don't have Gill – who cares? They get a perfectly good site regardless.
And this would be fine too, but a bit weird and lazy:
Gill Sans
Irresponsible web design is doing things like setting text as images without using alt text, not using interesting fonts in font sets.
A:
Note that a responsible web developer does not use fonts that are only available on Windows (and especially ones that are only available on Vista), nor do they use a technology that isn't supported by at least the majority of browsers.
Well… You can, as long as you know how it will render on non-Vista/non-Windows OS.
Otherwise: yep, @font-face in CSS2 is the best standard alternative, even if it is not widely supported.
A: I'd imagine any method of sending fonts with a web page would pose some sort of security risk. I have heard of methods where, in the case that the client doesn't have the required font, the text can be dynamically replaced with an image, or some sort of Flash is displayed. The only one like this I can find now is at A List Apart, but it may already be redundant.
Said method would also require css, images and javascript to work and could be painful to implement across browsers.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/16233",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "26"
} |
Q: Getting the subversion repository number into code I'd like to implement a way of recording the version of a project within code, so that it can be used when testing and to help track bugs.
It seems the best version number to use would just be the current revision number from Subversion. Is there an easy way to hook this number into a (C++ in my case) header file or something, which I can then get at in code? I guess this is a post commit hook or something?
Does anyone have any experience of implementing this (with code to share, please?), or can suggest a better alternative?
Thanks.
A: in your Makefile, add:
SVNDEV := -D'SVN_REV="$(shell svnversion -n .)"'
CFLAGS := $(SVNDEV) ...
then you can use macro SVN_REV anywhere in your code, eg:
printf ("Version: SVN %s\n", SVN_REV);
A: You can use the svn:keywords property to enable the Rev keyword.
You can then use $Rev$ in your code and SVN will expand it automatically when updating to $Rev: 256 $, which you can then parse...
More info on the Subversion manual
A: Two ways:
Embed $Id$ or $Revision$ within the code. Then set svn:keywords="Id Revision" property on the file. This will give you the last modified revision of that source file. Good for smaller projects and scripts.
Alternatively, use a Makefile driven process and the command line tool svnversion. (Language specific - this should work for C/C++)
echo -n "#define VERSION 1.0.1-" > version.h
svnversion -n . >> version.h
Or some more complex build script with sed and version.h.in. Then just #include version.h
That will give you the repository version number, which will change with every commit / update, and is probably a more appropriate version number for most projects.
Note: I also used a human readable version string that I manually update. The example would give: Version: 1.0.1-r13445
~J
A: While nifty, the revision keyword trick only updates the file when it's changed in that revision - if you don't change the file, then it will continue to reflect the old revision.
If you want the software to always reflect the overall revision number, then you'll have to delve into the relevant SVN entries file and extract it, which isn't too difficult (it's an XML file).
Wikipedia does this on their version page to indicate the revision of the software that's running live; the code is here - look for the getSvnRevision() method.
A: You can also use SubWCRev which is part of TortoiseSVN.
SubWCRev is a Windows console program which can be used to read the status of a Subversion working copy and optionally perform keyword substitution in a template file. This is often used as part of the build process as a means of incorporating working copy information into the object you are building. Typically it might be used to include the revision number in an “About” box.
http://tortoisesvn.net/docs/release/TortoiseSVN_en/tsvn-subwcrev.html
A: A good up-to-date solution:
Create a Makefile containing the following line (in the same folder as YourFile.dox):
sed "s~RevNumber~$(shell svnversion ../)~g" YourFile.dox > YourFileDummy.dox; doxygen YourFileDummy.dox
And YourFile.dox should contain this:
...
PROJECT_NUMBER = "Revision RevNumber"
...
Now:
*
*sed replaces RevNumber in the .dox with the output of svnversion (executed in the main folder of your repository) and saves the modified file to YourFileDummy.dox
*doxygen is executed on YourFileDummy.dox to generate the documentation
*Your documentation will now contain the revision number!
| {
"language": "en",
"url": "https://stackoverflow.com/questions/16248",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "40"
} |
Q: LINQ to SQL Mapping From Money to Double I'm working with LINQ for the first time and wanted to get the Mapping to work when I have a money type in SQL, but my domain object property is of type double. How can I express this in the XML file, or in code so that the mapping does not throw the usual "invalid cast" exception?
A: Slightly off topic, but this is something everyone should know when working with the Money type in SQLServer.
You don't want to use a double, you want to use a decimal.
A Double is a long float, and floating point arithmetic should never be used for financial calculations.
Think of this: the fractions 1/3, 1/3 and 1/3 add up to 1. However, when each is expressed with limited precision, i.e.:
.3333 + .3333 + .3333 = .9999, not 1.
You may think that losing a thousandth of a cent is trivial, but it's not when you're working with someone else's money.
Use System.Decimal.
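A quick C# illustration of the drift (snippet assumes a using System; directive):
double d = 0.1 + 0.1 + 0.1;
decimal m = 0.1m + 0.1m + 0.1m;
Console.WriteLine(d == 0.3);  // False: binary floating point cannot represent 0.1 exactly
Console.WriteLine(m == 0.3m); // True: decimal keeps base-10 fractions exact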
A: pretty sure Money maps to System.Decimal
Check here
A: In the DBML XML file, you can set the Expression attribute of a Column element to something like this:
<Column Name="Table1.Amount" DbType="smallint" Type="System.Int32"
Expression="CAST(Table1.Amount as int)" />
| {
"language": "en",
"url": "https://stackoverflow.com/questions/16265",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: How to redirect siteA to siteB with A or CNAME records I have 2 hosts and I would like to point a subdomain on host one to a subdomain on host two:
subdomain.hostone.com --> subdomain.hosttwo.com
I added a CNAME record to host one that points to subdomain.hosttwo.com but all I get is a '400 Bad Request' Error.
Can anyone see what I'm doing wrong?
A: It sounds like the web server on hosttwo.com doesn't allow undefined domains to be passed through. You also said you wanted to do a redirect, this isn't actually a method for redirecting. If you bought this domain through GoDaddy you may just want to use their redirection service.
A: These days, many site owners are using CDN services which pull data from a CDN server. If that's your case, then you are left with two options:
*
*Create a subdomain and edit DNS by Adding a CNAME record
*Don't create a subdomain but only create a CNAME record pointing back to your temporary DNS URL.
This only applies to pulling code from a CDN: the site will appear to be fetching data from cdn.sitename.com, but practically it's pulling from your CDN host.
A: Try changing it to "subdomain -> subdomain.hosttwo.com"
The CNAME is an alias for a certain domain, so when you go to the control panel for hostone.com, you shouldn't have to enter the whole name into the CNAME alias.
As far as the error you are getting, can you log onto subdomain.hosttwo.com and check the logs?
A: I think several of the answers hit around the possible solution to your problem.
I agree the easiest (and best solution for SEO purposes) is the 301 redirect. In IIS this is fairly trivial, you'd create a site for subdomain.hostone.com, after creating the site, right-click on the site and go into properties. Click on the "Home Directory" tab of the site properties window that opens. Select the radio button "A redirection to a URL", enter the url for the new site (http://subdomain.hosttwo.com), and check the checkboxes for "The exact URL entered above", "A permanent redirection for this resource" (this second checkbox causes a 301 redirect, instead of a 302 redirect). Click OK, and you're done.
Or you could create a page on the site of http://subdomain.hostone.com, using one of the following methods (depending on what the hosting platform supports)
PHP Redirect:
<?php
Header( "HTTP/1.1 301 Moved Permanently" );
Header( "Location: http://subdomain.hosttwo.com" );
?>
ASP Redirect:
<%@ Language=VBScript %>
<%
Response.Status="301 Moved Permanently"
Response.AddHeader "Location","http://subdomain.hosttwo.com"
%>
ASP .NET Redirect:
<script runat="server">
private void Page_Load(object sender, System.EventArgs e)
{
Response.Status = "301 Moved Permanently";
Response.AddHeader("Location","http://subdomain.hosttwo.com");
}
</script>
Now assuming your CNAME record is correctly created, then the only problem you are experiencing is that the site created for http://subdomain.hosttwo.com is using a shared IP, and host headers to determine which site should be displayed. To resolve this issue under IIS, in IIS Manager on the web server, you'd right-click on the site for subdomain.hosttwo.com, and click "Properties". On the displayed "Web Site" tab, you should see an "Advanced" button next to the IP address that you'll need to click. On the "Advanced Web Site Identification" window that appears, click "Add". Select the same IP address that is already being used by subdomain.hosttwo.com, enter 80 as the TCP port, and then enter subdomain.hosttwo.com as the Host Header value. Click OK until you are back to the main IIS Manager window, and you should be good to go. Open a browser, and browse to http://subdomain.hostone.com, and you'll see the site at http://subdomain.hosttwo.com appear, even though your URL shows http://subdomain.hostone.com
Hope that helps...
A: You can only make a DNS name point to a different IP address, so if you are using virtual hosts, redirecting with DNS won't work.
When you enter subdomain.hostone.com in your browser it will use DNS to get its IP address (if it's a CNAME it will continue trying until it gets an IP from an A record), then it will connect to that IP and send an HTTP request with
Host: subdomain.hostone.com
somewhere in the http headers.
A: It's probably best/easiest to set up a 301 redirect. No DNS hacking required.
A: You can do this a number of non-DNS ways. The landing page at subdomain.hostone.com can have an HTTP redirect. The webserver at hostone.com can be configured to redirect (easy in Apache, not sure about IIS), etc.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/16298",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "46"
} |
Q: How to get only directory name from SaveFileDialog.FileName What would be the easiest way to separate the directory name from the file name when dealing with SaveFileDialog.FileName in C#?
A: You could construct a FileInfo object. It has a Name, FullName, and DirectoryName property.
var file = new FileInfo(saveFileDialog.FileName);
Console.WriteLine("File is: " + file.Name);
Console.WriteLine("Directory is: " + file.DirectoryName);
A: Use:
System.IO.Path.GetDirectoryName(saveDialog.FileName)
(and the corresponding System.IO.Path.GetFileName). The Path class is really rather useful.
A: The Path object in System.IO parses it pretty nicely.
A: Since the backslash is not allowed in the file name itself, one simple way is to divide the SaveFileDialog.FileName using String.LastIndexOf; for example:
string filename = dialog.FileName;
string path = filename.Substring(0, filename.LastIndexOf("\\"));
string file = filename.Substring(filename.LastIndexOf("\\") + 1);
| {
"language": "en",
"url": "https://stackoverflow.com/questions/16306",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
} |
Q: Creating test data in a database I'm aware of some of the test data generators out there, but most seem to just fill name and address style databases [feel free to correct me].
We have a large integrated and normalised application - e.g. invoices have part numbers linked to stocking tables, customer numbers linked to customer tables, change logs linked to audit information, etc which are obviously difficult to fill randomly. Currently we obfuscate real life data to get test data (but not very well).
What tools\methods do you use to create large volumes of data to test with?
A: Where I work we use RedGate Data Generator to generate test data.
Since we work in the banking domain, when we have to work with nominative data (credit card numbers, personal IDs, phone numbers) we developed an application that can mask these database fields so we can work with them as if they were real data.
I can say that with Redgate you can get close to what your real data looks like on a production server, since you can customize every field of every table in your DB.
A: You can generate data plans with VSTS Database Edition (with the latest 2008 Power tools).
It includes a Data Generation Wizard which allows automated data generation by pointing to an existing database, so you get something that is realistic but contains entirely different data.
A: I've rolled my own data generator that generates random data conforming to regular expressions. The basic idea is to use validation rules twice: first you use them to generate valid random data, and then you use them to validate new input in production.
I've started a rewrite of the utility, as it seems like a nice learning project. It's available at Google Code.
A: I just completed a project creating 3,500,000+ health insurance claim lines. Due to HIPAA and PHI restrictions, using even scrubbed real data is a PITA. I used a tool called Datatect for this (http://www.datatect.com/).
Some of the things I like about this tool:
*
*Uses ODBC so you can generate data into any ODBC data source. I've used this for Oracle, SQL and MS Access databases, flat files, and Excel spreadsheets.
*Extensible via VBScript. You can write hooks at various parts of the data generation workflow to extend the abilities of the tool. I used this feature to "sync up" dependent columns in the database, and to control the frequency distribution of values to align with real world observed frequencies.
*Referentially aware. When populating foreign key columns, pulls valid keys from parent table.
A: The Red Gate product is good...but not perfect.
I found that I did better when I wrote my own tools to generate the data. I use it when I want to generate say Customers...but it's not great if you wanted to simulate randomness that customers might engage in like creating orders...some with one item some with multiple items.
Homegrown tools will provide the most 'realistic' data I think.
A: Joel also mentioned RedGate in podcast #11
| {
"language": "en",
"url": "https://stackoverflow.com/questions/16317",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "24"
} |
Q: Should DB layer members be static or instance? I've seen projects where the classes in the DB layer have just static functions in them and other projects where those classes need to be instantiated to get access to the member functions.
Which is "better" and why?
A: I like a single object to be correlated to a single record in the database, i.e. an object must be instantiated. This is your basic ActiveRecord pattern. In my experience, the one-object-to-one-row approach creates a much more fluid and literate presentation in code. Also, I like to treat objects as records and the class as the table. For example to change the name of a record I do:
var person = new Person(id);
person.Name = "George";
person.Save();
while to get all people who live in Louisiana I might do
var people = Person.GetPeopleFromState("LA");
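A bare-bones C# sketch of what such an Active Record class might look like (the data access is stubbed out in comments; table and column names are made up):
using System.Collections.Generic;

public class Person
{
    public int Id { get; private set; }
    public string Name { get; set; }

    // One object maps to one row: load it by primary key.
    public Person(int id)
    {
        Id = id;
        // SELECT name FROM people WHERE id = @id ... populate Name here.
    }

    // Persist this object's state back to its row.
    public void Save()
    {
        // UPDATE people SET name = @name WHERE id = @id
    }

    // Class-level query: the class stands in for the table.
    public static List<Person> GetPeopleFromState(string state)
    {
        // SELECT id FROM people WHERE state = @state ... build the list.
        return new List<Person>();
    }
}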
There are plenty of criticisms of Active Record. You can especially run into problems where you are querying the database for each record or your classes are tightly coupled to your database, creating inflexibility in both. In that case you can move up a level and go with something like DataMapper.
Many of the modern frameworks and ORM's are aware of some of these drawbacks and provide solutions for them. Do a little research and you will start to see that this is a problem that has a number of solutions and it all depend on your needs.
A: It's all about the purpose of the DB Layer.
If you use an instance to access the DB layer, you are allowing multiple versions of that class to exist. This is desirable if you want to use the same DB layer to access multiple databases for example.
So you might have something like this:
DbController archive = new DbController("dev");
DbController prod = new DbController("prod");
Which allows you to use multiple instances of the same class to access different databases.
Conversely you might want to allow only one database to be used within your application at a time. If you want to do this then you could look at using a static class for this purpose.
A: As lomaxx mentioned, it's all about the purpose of the DB model.
I find it best to use static classes, as I usually only want one instance of my DAL classes being created. I'd rather use static methods than deal with the overhead of potentially creating multiple instances of my DAL classes where only 1 should exist that can be queried multiple times.
A: I would say that it depends on what you want the "DB layer" to do...
If you have general routines for executing a stored procedure, or sql statement, that return a dataset, then using static methods would make more sense to me, since you don't need a permanent reference to an object that created the dataset for you.
I'd use a static method as well if I created a DB Layer that returned a strongly-typed class or collection as its result.
If on the other hand you want to create an instance of a class, using a given parameter like an ID (see @barret-conrad's answer), to connect to the DB and get the necessary record, then you'd probably not want to use a static method on the class. But even then I'd say you'd probably have some sort of DB Helper class that DID have static methods that your other class was relying on.
A: Another "it depends". However, I can also think of a very common scenario where static just won't work. If you have a web site that gets a decent amount of traffic, and you have a static database layer with a shared connection, you could be in trouble. In ASP.Net, there is one instance of your application created by default, and so if you have a static database layer you may only get one connection to the database for everyone who uses your web site.
A: It depends which model you subscribe to: ORM (Object-Relational Mapping) or the Interface Model. ORM is very popular right now because of frameworks like NHibernate, LINQ to SQL, Entity Framework, and many others. The ORM lets you customize some business constraints around your object model and pass it around without actually knowing how it should be committed to the database. Everything related to inserting, updating, and deleting happens in the object and doesn't really have to worry the developer too much.
The Interface Model like the Enterprise Data Pattern made popular by Microsoft, requires you to know what state your object is in and how it should be handled. It also requires you to create the necessary SQL to perform the actions.
I would say go with ORM.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/16320",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Learning about LINQ Overview
One of the things I've asked a lot about on this site is LINQ. The questions I've asked have been wide and varied and often don't have much context behind them. So in an attempt to consolidate the knowledge I've acquired on Linq I'm posting this question with a view to maintaining and updating it with additional information as I continue to learn about LINQ.
I also hope that it will prove to be a useful resource for other people wanting to learn about LINQ.
What is LINQ?
From MSDN:
The LINQ Project is a codename for a
set of extensions to the .NET
Framework that encompass
language-integrated query, set, and
transform operations. It extends C#
and Visual Basic with native language
syntax for queries and provides class
libraries to take advantage of these
capabilities.
What this means is that LINQ provides a standard way to query a variety of datasources using a common syntax.
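For example, here is the query syntax applied to a plain in-memory array (LINQ to Objects); the same shape of query works against SQL, XML, and so on:
using System;
using System.Linq;

class LinqDemo
{
    static void Main()
    {
        string[] names = { "Bill", "Steve", "Anders", "Scott" };

        // Declarative filter, sort and projection over any IEnumerable.
        var shortNames = from n in names
                         where n.Length <= 5
                         orderby n
                         select n;

        foreach (var name in shortNames)
            Console.WriteLine(name); // Bill, Scott, Steve
    }
}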
What flavours of LINQ are there?
Currently there are a few different LINQ providers provided by Microsoft:
*
*Linq to Objects which allows you to execute queries on any IEnumerable object.
*Linq to SQL which allows you to execute queries against a database in an object oriented manner.
*Linq to XML which allows you to query, load, validate, serialize and manipulate XML documents.
*Linq to Entities as suggested by Andrei
*Linq to Dataset
There are quite a few others, many of which are listed here.
What are the benefits?
*
*Standardized way to query multiple datasources
*Compile time safety of queries
*Optimized way to perform set based operations on in memory objects
*Ability to debug queries
So what can I do with LINQ?
Chook provides a way to output CSV files
Jeff shows how to remove duplicates from an array
Bob gets a distinct ordered list from a datatable
Marxidad shows how to sort an array
Dana gets help implementing a Quick Sort Using Linq
Where to start?
A summary of links from GateKiller's question are below:
Scott Guthrie provides an intro to Linq on his blog
An overview of LINQ on MSDN
ChrisAnnODell suggests checking out:
*
*Hooked on Linq
*101 Linq Samples
*LinqPad
What do I need to use LINQ?
Linq is currently available in VB.Net 9.0 and C# 3.0 so you'll need Visual Studio 2008 or greater to get the full benefits. (You could always write your code in notepad and compile using MSBuild)
There is also a tool called LinqBridge which will allow you to run LINQ-like queries in C# 2.0.
Tips and tricks using LINQ
This question has some tricky ways to use LINQ
A: Mention LINQ to Entities since ADO.NET Entity Framework will be an important .NET module.
A: A few LINQ Tips:
*
*Apply filters before a join to improve query performance
*Filter LINQ queries using object reference comparison
*Apply aggregates to empty collections in LINQ to SQL queries
*Delay loading a property in LINQ to SQL
*Use table-valued functions with eager loading turned on
*Put joins in the correct order in a LINQ to Objects query
*Compose a LINQ query inside a loop
http://www.aspnetpro.com/articles/2009/04/asp200904zh_f/asp200904zh_f.asp
A: Get the book LINQ in Action; it is an easy read for a coding book and really teaches you how to use LINQ and the new features of .NET 3.5, including some of the cool parts they put into the language.
A: IMHO, an overlooked, but important, benefit is the coding efficiency of LINQ, e.g how much can be accomplished with so little code. I personally find the query syntax easy to read and comprehend.
A: Some caveats about using LINQ to SQL:
Has Microsoft really killed LINQ to SQL?
Is LINQ to SQL DOA?
There's also some controversy about the first version of Entity Framework, including a petition.
A: I think, the answer to "What flavors of LINQ are there?" is incomplete.
First of all, you can create your own "flavor". Yes, it is an advanced task, but there are a lot of different LINQ implementations now.
Here is the list of existing LINQ providers (plus some more resources on learning LINQ) on Charlie Calvert's blog: Links to LINQ.
And also there is an excellent series of blog posts by Matt Warren on how to create your own LINQ Provider: LINQ: Building an IQueryable provider series
A: My 2 cents: read the chapters "11 Query expressions and LINQ to Objects" and "12 LINQ beyond collections" in the "C# in Depth" book to understand how LINQ works.
A: LINQ to entities:
*
*Video walkthroughs
*Channel 9 video
*Entity framework FAQ
*Entity framework performance
I've got a lot more I tagged on Delicious.com.
A: For Linq Practice
If you want some practice on LINQ with exercises and answers, really easy to set up and, in my opinion, awesome:
https://github.com/walkhard/linq-exercises
Download from git, open in Visual Studio. Your job is to make the tests pass.
[disclosure: I learned some LINQ from it and I contribute to the project, so yeah, I think it's an awesome, fast and efficient way to learn.]
| {
"language": "en",
"url": "https://stackoverflow.com/questions/16322",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "190"
} |
Q: How can I install VS 2008 without using ~6.5GB of space on my C drive? I am trying to run the VS 2008 SP1 installer, but it says that I need 6,366MB of available space, and my C drive currently only has 2,452MB available. Is there any good way to install that doesn't require so much free space?
I have also tried downloading the ISO image and mounting that, but it still requires 5,864MB free. Am I missing any obvious command line switches?
A: Burn it to a DVD and install it from there. Also remove any development software that you don't need from Visual Studio, such as C++, VB.NET, Crystal Reports, etc.
A: Since hard drives are very cheap these days, I would suggest buying a larger hard drive and installing VS on that drive.
You should never run your OS hard drive close to max capacity; this can seriously reduce the performance of your system.
Also, you may be able to install VS, but I'm quite sure it'll use a lot of disk space during install (temp files) and while you actually use VS (again, temp files).
A: Just as an update in case anyone else has this problem, I installed the .Net 3.5 SP1 update separately. Now I'm down to "only" 5045MB required.
A: You can have Visual Studio install components onto a separate hard drive from your primary but be warned, you still end up with a lot of data on your primary drive.
I tried to install VS 2008 on a machine with 2gb of hard drive space on the primary and a lot on the secondary; you still end up with about 1gb on the primary though.
Try methods to clean up your hard drive, run the Disk Cleaner tool (I found 10gb of MS error emails the other day!). Also, try removing features you won't need. MSDN is a huge install, but if you're always online, Google is just a few clicks away; if you aren't doing VB, don't install VB (or C#, or J#, etc).
Scott Hanselman has a recent post on freeing up disk space in Vista: http://www.hanselman.com/blog/GuideToFreeingUpDiskSpaceUnderWindowsVista.aspx
| {
"language": "en",
"url": "https://stackoverflow.com/questions/16326",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "-2"
} |
Q: How do I handle page flow in MVC (particularly asp.net) If you had to provide a wizard-like form entry experience in MVC, how would you abstract the page flow?
A: Investigate the post-redirect-get pattern.
http://weblogs.asp.net/mhawley/archive/tags/MVC/default.aspx
http://devlicio.us/blogs/tim_barcz/archive/2008/08/22/prg-pattern-in-the-asp-net-mvc-framework.aspx
Use that along with a robust domain model (for tracking steps or form completion state or whatever you call it) and you're golden.
A: In order to keep track of the steps you could implement a page flow action filter, which provides an experience like this one:
[RequiredStep(FlowStart = true)]
public ActionResult Confirm()
{
return View();
}
[RequiredStep (PreviousStep = "Confirm")]
public ActionResult ExecuteOrder()
{
return RedirectToAction("ThankYou");
}
[RequiredStep(PreviousStep = "ExecuteOrder")]
public ActionResult ThankYou()
{
return View();
}
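RequiredStep isn't a built-in attribute, so here is a minimal sketch of how it might be implemented (tracking the last completed step in session state; the session key and redirect target are made up):
using System.Web.Mvc;

public class RequiredStepAttribute : ActionFilterAttribute
{
    private const string LastStepKey = "PageFlow.LastStep";

    public bool FlowStart { get; set; }
    public string PreviousStep { get; set; }

    public override void OnActionExecuting(ActionExecutingContext filterContext)
    {
        var session = filterContext.HttpContext.Session;
        var lastStep = session[LastStepKey] as string;

        // Unless this action starts the flow, the declared previous step
        // must be the one that just ran; otherwise restart the wizard.
        if (!FlowStart && lastStep != PreviousStep)
        {
            filterContext.Result = new RedirectResult("~/");
            return;
        }

        session[LastStepKey] = filterContext.ActionDescriptor.ActionName;
    }
}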
A: I left the page flow up to the view, where I believe it belongs, so different views could have different page flows (e.g. for desktop browser clients or mobile phone clients etc.) I wrote it up on my blog: A RESTful Wizard Using ASP.Net MVC… Perhaps?
A: public class CreateAccountWizardController : Controller
{
public ActionResult Step1()
{
}
public ActionResult Step2()
{
}
}
A: There are a couple of ways: create an action for each step of the wizard process, or pass a parameter into the action method, like a step number, that lets you know what state the wizard is in.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/16330",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: How do I generate a hashcode from a byte array in C#? Say I have an object that stores a byte array and I want to be able to efficiently generate a hashcode for it. I've used the cryptographic hash functions for this in the past because they are easy to implement, but they are doing a lot more work than they should to be cryptographically one-way, and I don't care about that (I'm just using the hashcode as a key into a hashtable).
Here's what I have today:
struct SomeData : IEquatable<SomeData>
{
private readonly byte[] data;
public SomeData(byte[] data)
{
if (null == data || data.Length <= 0)
{
throw new ArgumentException("data");
}
this.data = new byte[data.Length];
Array.Copy(data, this.data, data.Length);
}
public override bool Equals(object obj)
{
return obj is SomeData && Equals((SomeData)obj);
}
public bool Equals(SomeData other)
{
if (other.data.Length != data.Length)
{
return false;
}
for (int i = 0; i < data.Length; ++i)
{
if (data[i] != other.data[i])
{
return false;
}
}
return true;
}
public override int GetHashCode()
{
return BitConverter.ToInt32(new MD5CryptoServiceProvider().ComputeHash(data), 0);
}
}
Any thoughts?
dp: You are right that I missed a check in Equals; I have updated it. Using the existing hashcode from the byte array will result in reference equality (or at least that same concept translated to hashcodes).
for example:
byte[] b1 = new byte[] { 1 };
byte[] b2 = new byte[] { 1 };
int h1 = b1.GetHashCode();
int h2 = b2.GetHashCode();
With that code, despite the two byte arrays having the same values within them, they are referring to different parts of memory and will result in (probably) different hash codes. I need the hash codes for two byte arrays with the same contents to be equal.
A: The hash code of an object does not need to be unique.
The checking rule is:
*
*Are the hash codes equal? Then call the full (slow) Equals method.
*Are the hash codes not equal? Then the two items are definitely not equal.
All you want is a GetHashCode algorithm that splits up your collection into roughly even groups - it doesn't have to form a unique key, as the Hashtable or Dictionary<> will only use the hash to optimise retrieval.
How long do you expect the data to be? How random? If lengths vary greatly (say for files) then just return the length. If lengths are likely to be similar look at a subset of the bytes that varies.
GetHashCode should be a lot quicker than Equals, but doesn't need to be unique.
Two identical things must never have different hash codes. Two different objects should not have the same hash code, but some collisions are to be expected (after all, there are more permutations than possible 32 bit integers).
A: Don't use cryptographic hashes for a hashtable, that's ridiculous/overkill.
Here ya go... Modified FNV Hash in C#
http://bretm.home.comcast.net/hash/6.html
public static int ComputeHash(params byte[] data)
{
unchecked
{
const int p = 16777619;
int hash = (int)2166136261;
for (int i = 0; i < data.Length; i++)
hash = (hash ^ data[i]) * p;
hash += hash << 13;
hash ^= hash >> 7;
hash += hash << 3;
hash ^= hash >> 17;
hash += hash << 5;
return hash;
}
}
A: Have you compared with the SHA1CryptoServiceProvider.ComputeHash method? It takes a byte array and returns a SHA1 hash, and I believe it's pretty well optimized. I used it in an Identicon Handler that performed pretty well under load.
A: I found interesting results:
I have the class:
public class MyHash : IEquatable<MyHash>
{
public byte[] Val { get; private set; }
public MyHash(byte[] val)
{
Val = val;
}
/// <summary>
/// Test if this Class is equal to another class
/// </summary>
/// <param name="other"></param>
/// <returns></returns>
public bool Equals(MyHash other)
{
if (other.Val.Length == this.Val.Length)
{
for (var i = 0; i < this.Val.Length; i++)
{
if (other.Val[i] != this.Val[i])
{
return false;
}
}
return true;
}
else
{
return false;
}
}
public override int GetHashCode()
{
var str = Convert.ToBase64String(Val);
return str.GetHashCode();
}
}
Then I created a dictionary with keys of type MyHash in order to test how fast I could insert, and also to count how many collisions there are. I did the following:
// dictionary we use to check for collisions
Dictionary<MyHash, bool> checkForDuplicatesDic = new Dictionary<MyHash, bool>();
// used to generate random arrays
Random rand = new Random();
var now = DateTime.Now;
for (var j = 0; j < 100; j++)
{
for (var i = 0; i < 5000; i++)
{
// create new array and populate it with random bytes
byte[] randBytes = new byte[byte.MaxValue];
rand.NextBytes(randBytes);
MyHash h = new MyHash(randBytes);
if (checkForDuplicatesDic.ContainsKey(h))
{
Console.WriteLine("Duplicate");
}
else
{
checkForDuplicatesDic[h] = true;
}
}
Console.WriteLine(j);
checkForDuplicatesDic.Clear(); // clear dictionary every 5000 iterations
}
var elapsed = DateTime.Now - now;
Console.Read();
Every time I insert a new item into the dictionary, the dictionary will calculate the hash of that object. So you can tell which method is most efficient by placing the various answers found in here into the method public override int GetHashCode(). The method that was by far the fastest and had the least number of collisions was:
public override int GetHashCode()
{
var str = Convert.ToBase64String(Val);
return str.GetHashCode();
}
that took 2 seconds to execute. The method
public override int GetHashCode()
{
// 7.1 seconds
unchecked
{
const int p = 16777619;
int hash = (int)2166136261;
for (int i = 0; i < Val.Length; i++)
hash = (hash ^ Val[i]) * p;
hash += hash << 13;
hash ^= hash >> 7;
hash += hash << 3;
hash ^= hash >> 17;
hash += hash << 5;
return hash;
}
}
also had no collisions, but it took 7 seconds to execute!
A: If you are looking for performance, I tested a few hash keys, and I recommend Bob Jenkins' hash function. It is both crazy fast to compute and will give as few collisions as the cryptographic hash you used until now.
I don't know C# at all, and I don't know if it can link with C, but here is its implementation in C.
A: Borrowing from the code generated by JetBrains software, I have settled on this function:
public override int GetHashCode()
{
unchecked
{
var result = 0;
foreach (byte b in _key)
result = (result*31) ^ b;
return result;
}
}
The problem with just XOring the bytes is that 3/4 (3 bytes) of the returned value has only 2 possible values (all on or all off). This spreads the bits around a little more.
Setting a breakpoint in Equals was a good suggestion. Adding about 200,000 entries of my data to a Dictionary, sees about 10 Equals calls (or 1/20,000).
A: Is using the existing hashcode from the byte array field not good enough? Also note that in the Equals method you should check that the arrays are the same size before doing the compare.
A: Generating a good hash is easier said than done. Remember, you're basically representing n bytes of data with m bits of information. The larger your data set and the smaller m is, the more likely you'll get a collision ... two pieces of data resolving to the same hash.
The simplest hash I ever learned was simply XORing all the bytes together. It's easy, faster than most complicated hash algorithms and a halfway decent general-purpose hash algorithm for small data sets. It's the Bubble Sort of hash algorithms really. Since the simple implementation would leave you with 8 bits, that's only 256 hashes ... not so hot. You could XOR chunks instead of individual bytes, but then the algorithm gets much more complicated.
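For illustration, an XOR-by-chunks version might look like this (just a sketch, not a recommendation):
public static int XorChunkHash(byte[] data)
{
    int hash = data.Length; // seed with the length so short arrays differ
    for (int i = 0; i < data.Length; i += 4)
    {
        int chunk = 0;
        // Pack up to four bytes into one int, then fold it in.
        for (int j = i; j < i + 4 && j < data.Length; j++)
            chunk = (chunk << 8) | data[j];
        hash ^= chunk;
    }
    return hash;
}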
So certainly, the cryptographic algorithms are maybe doing some stuff you don't need ... but they're also a huge step up in general-purpose hash quality. The MD5 hash you're using has 128 bits, with billions and billions of possible hashes. The only way you're likely to get something better is to take some representative samples of the data you expect to be going through your application and try various algorithms on it to see how many collisions you get.
So until I see some reason to not use a canned hash algorithm (performance, perhaps?), I'm going to have to recommend you stick with what you've got.
A: Whether you want a perfect hash function (a different value for every pair of objects that aren't equal) or just a pretty good one is always a performance tradeoff; it normally takes time to compute a good hash function, and if your dataset is smallish you're better off with a fast function. The most important thing (as your second post points out) is correctness, and to achieve that all you need is to return the Length of the array. Depending on your dataset that might even be ok. If it isn't (say all your arrays are equally long) you can go with something cheap like looking at the first and last values and XORing them, and then add more complexity as you see fit for your data.
A quick way to see how your hashfunction performs on your data is to add all the data to a hashtable and count the number of times the Equals function gets called, if it is too often you have more work to do on the function. If you do this just keep in mind that the hashtable's size needs to be set bigger than your dataset when you start, otherwise you are going to rehash the data which will trigger reinserts and more Equals evaluations (though possibly more realistic?)
For some objects (not this one) a quick HashCode can be generated by ToString().GetHashCode(), certainly not optimal, but useful as people tend to return something close to the identity of the object from ToString() and that is exactly what GetHashCode is looking for.
Trivia: The worst performance I have ever seen was when someone by mistake returned a constant from GetHashCode, easy to spot with a debugger though, especially if you do lots of lookups in your hashtable
A: RuntimeHelpers.GetHashCode might help:
From Msdn:
Serves as a hash function for a
particular type, suitable for use in
hashing algorithms and data structures
such as a hash table.
A: private int? hashCode;
public override int GetHashCode()
{
if (!hashCode.HasValue)
{
var hash = 0;
for (var i = 0; i < bytes.Length; i++)
{
hash = (hash << 4) + bytes[i];
}
hashCode = hash;
}
return hashCode.Value;
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/16340",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "54"
} |
Q: How do you configure VS2008 to only open one webserver in a solution with multiple projects? Starting with the 2005 version, VS has this behavior where starting a debugging session spawns a webserver for every project in a solution. I have a solution with 15 projects, so it takes a while and is a waste of resources. Is there a way to configure it differently besides just using IIS?
A: I know this is an old question, but in Visual Studio 2010 choosing properties from a web project brings you to the big configuration screen/grid. The Always Start When Debugging setting is buried a bit.
With focus on the desired web project, look at the properties window/tab (CTRL+W, P) and set the property there.
A: Some details here on why it does it and how you can overcome it:
http://vishaljoshi.blogspot.com/2007/12/tips-tricks-start-up-options-and.html
There are instances when you might have many web applications or web sites in the same solution and you may be actually debugging only one of them... In such scenario it might not be desirable to have multiple instances of ASP.NET Development Server running... VS provides an explicit setting in the property grid of web application/site called Development Web Server - "Always Start When Debugging" which is set to True by default... If you set this Property to be False only one web server instance will be created for the start up web project...
A: In Visual Studio 2008, there is an entry on the Properties page for the project called "Always Start When Debugging".
Note you have to get to this by selecting the project and going to the Properties pane (or right-clicking Properties). This option is not present when you double-click the project and open it in the main editing pane.
VS by default sets this value to on for all your web projects. Turning it off will solve this problem.
[editorial]This is fairly annoying and I wish the default were false![/editorial]
A: Set the web service project's "Always Start When Debugging" property to false. To get to the property, click on the project node and then hit F4 or click View | Properties Window (not Property Pages).
Be careful: this is not in the properties you reach by clicking the project node then clicking Properties; or by double-clicking the project's Properties sub-node; or by clicking View | Property Pages.
Also annoying is that this is property only persists as a user setting, in the .csproj.user file.
A: I have also been highly annoyed by that behavior. The only solution I have found is to manually change the properties page for each web application so it hits a real running instance in IIS.
I prefer this anyway, because debugging with the integrated web server can give you a very false impression of how your application will interact with the IIS security model.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/16363",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13"
} |
Q: Easy way for Crystal Reports to MS SQL Server Reporting Services conversion Is there a way to easily convert Crystal Reports reports to Reporting Services RDL format?
We have quite a few reports that will be needing conversion soon.
I know about the manual process (which is basically rebuilding all your reports from scratch in SSRS), but my searches pointed to a few possibilities with automatic conversion "acceleration" with several consulting firms. (As described on .... - link broken).
Do any of you have any valid experiences or recommendations regarding this particular issue?
Are there any tools around that I do not know about?
A: An alternative would be to use a much cheaper reporting solution which can read Crystal Reports templates, such as (our very own) Java-based i-net Clear Reports (used to be i-net Crystal-Clear).
Note that unlike most solutions, we do NOT lose information such as data source information, formulas, SQL expressions, etc. from the original templates. Even charts are converted quite well.
Also, we now offer a fully functional, free report designer (which can also run reports).
A: My original VB code was converted to C#. See RptToXml.
A: We're in a very similar situation at the moment. Dozens of crystal reports in place and we're shifting to Reporting Services (mainly for its ease of deployment and more flexible UI for the end user) - our solution is to leave them be, but anything new is being created in RptgSvcs. Ideally we'd translate them, but the .rpt format is too opaque.
A: In my search I got to know a product of KTL Solutions with the name KTL Crystal Converter which is capable of converting a Crystal report to SSRS.
A: I have searched previously for this, with no luck. There does not seem to be any tools available for this conversion, the manual method thereby becomes the only method. And yes, there are consulting firms who will do the manual work for you, but they still do it manually.
Crystal Reports and Reporting Services have different architectural styles, making it a difficult task for a conversion tool, so I view it as unlikely that someone will build one anytime soon.
A: I work for one of those consulting firms mentioned by Carlton, and let me tell you that we are the only company to have developed an in-house (proprietary) tool that helps us jump start the conversion process from RPTs to RDLs. Our current tool only supports Crystal Rpts 9 and older, and the RDL 2000 and 2005 formats.
I have done several of these conversion projects myself, and there are lots of things that cannot be automatically converted due to the nature of each product. Things like expressions, data source information, and dataset queries just cannot be automagically "translated" by a tool.
Cheers.
A: As suggested by Microsoft you may try any of the following migration tools for SAP Crystal Reports to SSRS conversion:
*
*Hitachi Consulting http://www.hitachiconsulting.com/page.cfm?ID=bi
*TCS http://www.tcs.com/Microsoft2005Launch/download/Microsoft_Recast.pdf
*Sonata Software Ltd. http://microsoftcc.sonata-software.com/html/mcc_bi.htm
*KTL Solutions http://www.ktlsolutions.com/t-crystalconverter.aspx
*90DegreeSoftware http://www.90degreesoftware.com
*Jeff-Net http://www.rpttosql.com/index.html
*Neudesic http://www.neudesic.com
*PLUS-IT http://www.plus-it.de
| {
"language": "en",
"url": "https://stackoverflow.com/questions/16396",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "32"
} |
Q: Easy installation method for windows/ Batch Reference needed? I have a bunch of files that I need to be able to transport and install quickly. My current method for doing so is moving a flash drive with a readme file of where stuff goes whenever I need to move stuff, which is rather inelegant and cumbersome.
My idea for a solution would be to write up a quick script to move files around that I could just click on. I've done some bash scripting before but batch scripting is a little odd to me. Does anyone have a good online reference guide I could use?
An alternative solution I could accept would be a program that makes an installer for you, though I'm a bit against that as I would lose a lot of control. However, I'd be alright with it if it were extremely simple.
A: Sounds like the robocopy tool is exactly what you need.
Very powerful replication command-line tool.
*
*MS TechNet reference,
*Wikipedia article about robocopy,
*Full command switch guide,
*Batch scripting guide.
A: I like to use VBScript for this kind of thing. The VBS engine is on every recent Windows machine and the language is a little more like real programming than a batch script.
Also, if your installer grows to require WMI functions too, this becomes a piece of cake.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/16402",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: How do you handle white space in your HTML One of my biggest typographical frustrations about HTML is the way that it mangles conjoined whitespace. For example if I have:
<span>Following punctuation rules. With two spaces after the period. </span>
One of the two spaces following the period will be considered to be insignificant whitespace and be removed. I can, of course, force the whitespace to be significant with:
<span>Following punctuation rules.&nbsp; With two spaces after the period. </span>
but it just irks me to have to do that and I usually don't bother. Does anyone out there automatically insert significant whitespace into external content submissions that are intended for a web page?
A: If you really want your white space to be preserved, try the css property: white-space: pre;
Or, you could just use a <pre> tag in your markup.
By the way, it's a good thing that HTML browsers ignore white space in general, it allows us to have clearly formatted source code, without affecting the output.
A: For your specific example, there is no need to worry about it. Web browsers perform typographical rendering and place the correct amount of space between periods and whatever character follows (and it's different depending on the next character, according to kerning rules.)
If you want line breaks, <br/> isn't really a big deal, is it?
Not sure what's worthy of a downmod here... You should not be forcing two spaces after a period, unless you're using a monospace font. For proportional fonts, the renderer kerns the right amount of space after a period. See here and here for detailed discussions.
A: It may not be very elegant, but I apply CSS to a <pre> tag.
There's always the "white-space" CSS attribute, but it can be a bit hit and miss.
A: You can use a styled pre block to preserve whitespace. Most WYSIWYG editors also insert &nbsp; for you...
Overall, it's good that the browser ignores whitespace. Just view the source on this website for yourself and imagine how crazy the site would look if every space was displayed.
A: Take a look at the pre tag. It might do what you want.
A: You'd better use white-space: pre-wrap than white-space: pre or
With your example, the latter solutions can start a new line on "rules. " just because your non-breakable space hit the end of the line.
A: The PRE tag can be a valid solution, depending on your needs. However, if you are trying to use the 2-space rule in sentences throughout your site, you'll soon find that the other characters the PRE tag preserves - the line feeds/carriage returns (or lack thereof) - will muck up any styling you try to do.
In general, I tend to ignore the "2 spaces after a sentence" rule, or if you're a stickler for it, I'd stick with the &nbsp;, but you'll occasionally run into the issue Nicolas stated.
A: There is a page regarding this topic on webtypography.net. That site has many other interesting things about creating text for the web from the point of view of typography, things that web page designers often don't even think about. It's worth reading.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/16403",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
} |
Q: Best way to implement a dirty flag in EF You can easily use the PropertyChanged events to set the flag. But how do you easily reset it after a save to the ObjectContext?
A: what about the ObjectContext.SavingChanges event? See also http://www.thedatafarm.com/blog/2008/07/13/OverridingObjectContextSaveChanges.aspx.
A: The above method calls for using the SavingChanges event which is called before the changes are persisted. If there is an error during the save, you have already cleared your dirty flag. I would think there would be a SavedChanges event exposed as well.
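One way around that timing problem is to override SaveChanges (as in the post linked above, assuming your EF version allows the override) and reset the flags only after the base call succeeds. A minimal sketch, where IDirtyAware is your own hypothetical interface exposing the flag:
using System.Data;

public partial class MyEntities // your generated ObjectContext
{
    public override int SaveChanges()
    {
        int affected = base.SaveChanges(); // throws if the save fails

        // Only reached on success, so it is now safe to reset the flags.
        foreach (var entry in ObjectStateManager
                     .GetObjectStateEntries(EntityState.Unchanged))
        {
            var dirty = entry.Entity as IDirtyAware; // hypothetical interface
            if (dirty != null)
                dirty.IsDirty = false;
        }
        return affected;
    }
}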
| {
"language": "en",
"url": "https://stackoverflow.com/questions/16406",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Parse usable Street Address, City, State, Zip from a string Problem: I have an address field from an Access database which has been converted to SQL Server 2005. This field has everything all in one field. I need to parse out the address's individual sections into their appropriate fields in a normalized table. I need to do this for approximately 4,000 records, and it needs to be repeatable.
Assumptions:
*
*Assume an address in the US (for now)
*assume that the input string will sometimes contain an addressee (the person being addressed) and/or a second street address (e.g., Suite B)
*states may be abbreviated
*zip code could be standard 5 digits or zip+4
*there are typos in some instances
UPDATE: In response to the questions posed: standards were not universally followed; I need to store the individual values, not just geocode them; and "errors" means typos (corrected above)
Sample Data:
*
*A. P. Croll & Son 2299 Lewes-Georgetown Hwy, Georgetown, DE 19947
*11522 Shawnee Road, Greenwood DE 19950
*144 Kings Highway, S.W. Dover, DE 19901
*Intergrated Const. Services 2 Penns Way Suite 405 New Castle, DE 19720
*Humes Realty 33 Bridle Ridge Court, Lewes, DE 19958
*Nichols Excavation 2742 Pulaski Hwy Newark, DE 19711
*2284 Bryn Zion Road, Smyrna, DE 19904
*VEI Dover Crossroads, LLC 1500 Serpentine Road, Suite 100 Baltimore MD 21
*580 North Dupont Highway Dover, DE 19901
*P.O. Box 778 Dover, DE 19903
A: I think outsourcing the problem is the best bet: send it to the Google (or Yahoo) geocoder. The geocoder returns not only the lat/long (which aren't of interest here), but also a rich parsing of the address, with fields filled in that you didn't send (including ZIP+4 and county).
For example, parsing "1600 Amphitheatre Parkway, Mountain View, CA" yields
{
"name": "1600 Amphitheatre Parkway, Mountain View, CA, USA",
"Status": {
"code": 200,
"request": "geocode"
},
"Placemark": [
{
"address": "1600 Amphitheatre Pkwy, Mountain View, CA 94043, USA",
"AddressDetails": {
"Country": {
"CountryNameCode": "US",
"AdministrativeArea": {
"AdministrativeAreaName": "CA",
"SubAdministrativeArea": {
"SubAdministrativeAreaName": "Santa Clara",
"Locality": {
"LocalityName": "Mountain View",
"Thoroughfare": {
"ThoroughfareName": "1600 Amphitheatre Pkwy"
},
"PostalCode": {
"PostalCodeNumber": "94043"
}
}
}
}
},
"Accuracy": 8
},
"Point": {
"coordinates": [-122.083739, 37.423021, 0]
}
}
]
}
Now that's parseable!
A:
This won't solve your problem, but if
you only needed lat/long data for
these addresses, the Google Maps API
will parse non-formatted addresses
pretty well.
Good suggestion, alternatively you can execute a CURL request for each address to Google Maps and it will return the properly formatted address. From that, you can regex to your heart's content.
A: +1 on James A. Rosen's suggested solution as it has worked well for me; however, for completists, this site is a fascinating read and the best attempt I've seen in documenting addresses worldwide: http://www.columbia.edu/kermit/postal.html
A: Are there any standards at all in the way that the addresses are recorded? For example:
*
*Are there always commas or new-lines separating street1 from street2 from city from state from zip?
*Are address types (road, street, boulevard, etc) always spelled out? always abbreviated? Some of each?
*Define "error".
My general answer is a series of Regular Expressions, though the complexity of this depends on the answer. And if there is no consistency at all, then you may only be able to achieve partial success with a Regex (ie: filtering out zip code and state) and will have to do the rest by hand (or at least go through the rest very carefully to make sure you spot the errors).
A: Another request for sample data.
As has been mentioned I would work backwards from the zip.
Once you have a zip I would query a zip database, store the results, and remove them & the zip from the string.
That will leave you with the address mess. MOST (All?) addresses will start with a number so find the first occurrence of a number in the remaining string and grab everything from it to the (new) end of the string. That will be your address. Anything to the left of that number is likely an addressee.
You should now have the City, State, & Zip stored in a table and possibly two strings, addressee and address. For the address, check for the existence of "Suite" or "Apt." etc. and split that into two values (address lines 1 & 2).
For the addressee I would punt and grab the last word of that string as the last name and put the rest into the first name field. If you don't want to do that, you'll need to check for salutation (Mr., Ms., Dr., etc.) at the start and make some assumptions based on the number of spaces as to how the name is made up.
I don't think there's any way you can parse with 100% accuracy.
A: Try www.address-parser.com. We use their web service, which you can test online
A: Based on the sample data:
*
*I would start at the end of the string. Parse a Zip-code (either format). Read end to first space. If no Zip Code was found Error.
*Trim the end then for spaces and special chars (commas)
*Then move on to State, again use the Space as the delimiter. Maybe use a lookup list to validate 2 letter state codes, and full state names. If no valid state found, error.
*Trim spaces and commas from the end again.
*City gets tricky, I would actually use a comma here, at the risk of getting too much data in the city. Look for the comma, or beginning of the line.
*If you still have chars left in the string, shove all of that into an address field.
This isn't perfect, but it should be a pretty good starting point.
A: If it's human entered data, then you'll spend too much time trying to code around the exceptions.
Try:
*
*Regular expression to extract the zip code (a sketch follows this list)
*Zip code lookup (via appropriate government DB) to get the correct address
*Get an intern to manually verify the new data matches the old
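The first step might be as small as this (a sketch; the ZIP is assumed to sit at the end of the string):
using System.Text.RegularExpressions;

static string ExtractZip(string address)
{
    // Matches a 5-digit ZIP or ZIP+4 at the end of the string.
    Match m = Regex.Match(address, @"\b\d{5}(-\d{4})?\s*$");
    return m.Success ? m.Value.Trim() : null;
}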
A: This won't solve your problem, but if you only needed lat/long data for these addresses, the Google Maps API will parse non-formatted addresses pretty well.
A: RecogniContact is a Windows COM object that parses US and European addresses. You can try it right on
http://www.loquisoft.com/index.php?page=8
A: You might want to check this out!! http://jgeocoder.sourceforge.net/parser.html
Worked like a charm for me.
A: This type of problem is hard to solve because of underlying ambiguities in the data.
Here is a Perl based solution that defines a recursive descent grammar tree based on regular expressions to parse many valid combination of street addresses: http://search.cpan.org/~kimryan/Lingua-EN-AddressParse-1.20/lib/Lingua/EN/AddressParse.pm . This includes sub properties within an address such as:
12 1st Avenue N Suite # 2 Somewhere CA 12345 USA
It is similar to http://search.cpan.org/~timb/Geo-StreetAddress-US-1.03/US.pm mentioned above, but also works for addresses that are not from the USA, such as the UK, Australia and Canada.
Here is the output for one of your sample addresses. Note that the name section would need to be removed first from "A. P. Croll & Son 2299 Lewes-Georgetown Hwy, Georgetown, DE 19947" to reduce it to "2299 Lewes-Georgetown Hwy, Georgetown, DE 19947". This is easily achieved by removing all data up to the first number found in the string.
Non matching part ''
Error '0'
Error descriptions ''
Case all '2299 Lewes-Georgetown Hwy Georgetown DE 19947'
COMPONENTS ''
country ''
po_box_type ''
post_box ''
post_code '19947'
pre_cursor ''
property_identifier '2299'
property_name ''
road_box ''
street 'Lewes-Georgetown'
street_direction ''
street_type 'Hwy'
sub_property_identifier ''
subcountry 'DE'
suburb 'Georgetown'
A: The original poster has likely long moved on, but I took a stab at porting the Perl Geo::StreetAddress:US module used by geocoder.us to C#, dumped it on CodePlex, and think that people stumbling across this question in the future may find it useful:
US Address Parser
On the project's home page, I try to talk about its (very real) limitations. Since it is not backed by the USPS database of valid street addresses, parsing can be ambiguous and it can't confirm nor deny the validity of a given address. It can just try to pull data out from the string.
It's meant for the case when you need to get a set of data mostly in the right fields, or want to provide a shortcut to data entry (letting users paste an address into a textbox rather than tabbing among multiple fields). It is not meant for verifying the deliverability of an address.
It doesn't attempt to parse out anything above the street line, but one could probably diddle with the regex to get something reasonably close--I'd probably just break it off at the house number.
A: Since there is a chance of typos in the words, think about using SOUNDEX combined with the LCS (longest common subsequence) algorithm to compare strings; this will help a lot!
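For reference, a compact C# Soundex sketch (the classic four-character variant; strictly, H and W are handled slightly differently in the official algorithm, and the LCS part is left out):
using System.Text;

static string Soundex(string word)
{
    const string codes = "01230120022455012623010202"; // digit for each letter A..Z

    var result = new StringBuilder();
    char previous = '0';
    foreach (char ch in word.ToUpper())
    {
        if (ch < 'A' || ch > 'Z') continue;
        char code = codes[ch - 'A'];
        if (result.Length == 0)
            result.Append(ch);        // keep the first letter itself
        else if (code != '0' && code != previous)
            result.Append(code);      // skip vowels and runs of the same code
        previous = code;
        if (result.Length == 4) break;
    }
    return result.ToString().PadRight(4, '0'); // e.g. Soundex("Robert") == "R163"
}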
A: using google API
$d=str_replace(" ", "+", $address_url);
$completeurl ="http://maps.googleapis.com/maps/api/geocode/xml?address=".$d."&sensor=true";
$phpobject = simplexml_load_file($completeurl);
print_r($phpobject);
A: For Ruby or Rails developers there is a nice gem available called street_address.
I have been using this on one of my project and it does the work I need.
The only issue I had was that whenever an address was in the format P. O. Box 1410 Durham, NC 27702 it returned nil, so I had to replace "P. O. Box" with '' and after this it was able to parse it.
A: I've done this in the past.
Either do it manually (build a nice GUI that helps the user do it quickly) or have it automated and check against a recent address database (you have to buy that) and manually handle errors.
Manual handling will take about 10 seconds each, meaning you can do 3600/10 = 360 per hour, so 4000 should take you approximately 11-12 hours. This will give you a high rate of accuracy.
For automation, you need a recent US address database, and tweak your rules against that. I suggest not going fancy on the regex (hard to maintain long-term, so many exceptions). Go for 90% match against the database, do the rest manually.
Do get a copy of Postal Addressing Standards (USPS) at http://pe.usps.gov/cpim/ftp/pubs/Pub28/pub28.pdf and notice it is 130+ pages long. Regexes to implement that would be nuts.
For international addresses, all bets are off. US-based workers would not be able to validate.
Alternatively, use a data service. I have, however, no recommendations.
Furthermore: when you do send out the stuff in the mail (that's what it's for, right?) make sure you put "address correction requested" on the envelope (in the right place) and update the database. (We made a simple gui for the front desk person to do that; the person who actually sorts through the mail)
Finally, when you have scrubbed data, look for duplicates.
A: After the advice here, I have devised the following function in VB which creates passable, although not always perfect, usable data (if a company name and a suite line are given, it combines the suite and city). Please feel free to comment/refactor/yell at me for breaking one of my own rules, etc.:
Public Function parseAddress(ByVal input As String) As Collection
input = input.Replace(",", "")
input = input.Replace(" ", " ")
Dim splitString() As String = Split(input)
Dim streetMarker() As String = New String() {"street", "st", "st.", "avenue", "ave", "ave.", "blvd", "blvd.", "highway", "hwy", "hwy.", "box", "road", "rd", "rd.", "lane", "ln", "ln.", "circle", "circ", "circ.", "court", "ct", "ct."}
Dim address1 As String
Dim address2 As String = ""
Dim city As String
Dim state As String
Dim zip As String
Dim streetMarkerIndex As Integer
' The last two tokens are assumed to be the zip and the state.
zip = splitString(splitString.Length - 1).ToString()
state = splitString(splitString.Length - 2).ToString()
' Everything between the last street-type marker and the state is the city.
streetMarkerIndex = getLastIndexOf(splitString, streetMarker) + 1
Dim sb As New StringBuilder
For counter As Integer = streetMarkerIndex To splitString.Length - 3
sb.Append(splitString(counter) + " ")
Next counter
city = RTrim(sb.ToString())
' The street address starts at the first numeric token (or a "PO" prefix).
Dim addressIndex As Integer = 0
For counter As Integer = 0 To streetMarkerIndex
If IsNumeric(splitString(counter)) _
Or splitString(counter).ToString.ToLower = "po" _
Or splitString(counter).ToString().ToLower().Replace(".", "") = "po" Then
addressIndex = counter
Exit For
End If
Next counter
sb = New StringBuilder
For counter As Integer = addressIndex To streetMarkerIndex - 1
sb.Append(splitString(counter) + " ")
Next counter
address1 = RTrim(sb.ToString())
sb = New StringBuilder
If addressIndex = 0 Then
If splitString(splitString.Length - 2).ToString() <> splitString(streetMarkerIndex + 1) Then
For counter As Integer = streetMarkerIndex To splitString.Length - 2
sb.Append(splitString(counter) + " ")
Next counter
End If
Else
For counter As Integer = 0 To addressIndex - 1
sb.Append(splitString(counter) + " ")
Next counter
End If
address2 = RTrim(sb.ToString())
Dim output As New Collection
output.Add(address1, "Address1")
output.Add(address2, "Address2")
output.Add(city, "City")
output.Add(state, "State")
output.Add(zip, "Zip")
Return output
End Function
Private Function getLastIndexOf(ByVal sArray As String(), ByVal checkArray As String()) As Integer
Dim sourceIndex As Integer = 0
Dim outputIndex As Integer = 0
For Each item As String In checkArray
For Each source As String In sArray
If source.ToLower = item.ToLower Then
outputIndex = sourceIndex
If item.ToLower = "box" Then
outputIndex = outputIndex + 1
End If
End If
sourceIndex = sourceIndex + 1
Next
sourceIndex = 0
Next
Return outputIndex
End Function
Passing the parseAddress function "A. P. Croll & Son 2299 Lewes-Georgetown Hwy, Georgetown, DE 19947" returns:
2299 Lewes-Georgetown Hwy
A. P. Croll & Son
Georgetown
DE
19947
A: I've been working in the address processing domain for about 5 years now, and there really is no silver bullet. The correct solution is going to depend on the value of the data. If it's not very valuable, throw it through a parser as the other answers suggest. If it's even somewhat valuable you'll definitely need to have a human evaluate/correct all the results of the parser. If you're looking for a fully automated, repeatable solution, you probably want to talk to an address correction vendor like Group1 or Trillium.
A: SmartyStreets has a new feature that extracts addresses from arbitrary input strings. (Note: I don't work at SmartyStreets.)
It successfully extracted all addresses from the sample input given in the question above. (By the way, only 9 of those 10 addresses are valid.)
Here's the CSV-formatted output of that request:
ID,Start,End,Segment,Verified,Candidate,Firm,FirstLine,SecondLine,LastLine,City,State,ZIPCode,County,DpvFootnotes,DeliveryPointBarcode,Active,Vacant,CMRA,MatchCode,Latitude,Longitude,Precision,RDI,RecordType,BuildingDefaultIndicator,CongressionalDistrict,Footnotes
1,32,79,"2299 Lewes-Georgetown Hwy, Georgetown, DE 19947",N,,,,,,,,,,,,,,,,,,,,,,
2,81,119,"11522 Shawnee Road, Greenwood DE 19950",Y,0,,11522 Shawnee Rd,,Greenwood DE 19950-5209,Greenwood,DE,19950,Sussex,AABB,199505209226,Y,N,N,Y,38.82865,-75.54907,Zip9,Residential,S,,AL,N#
3,121,160,"144 Kings Highway, S.W. Dover, DE 19901",Y,0,,144 Kings Hwy,,Dover DE 19901-7308,Dover,DE,19901,Kent,AABB,199017308444,Y,N,N,Y,39.16081,-75.52377,Zip9,Commercial,S,,AL,L#
4,190,232,"2 Penns Way Suite 405 New Castle, DE 19720",Y,0,,2 Penns Way Ste 405,,New Castle DE 19720-2407,New Castle,DE,19720,New Castle,AABB,197202407053,Y,N,N,Y,39.68332,-75.61043,Zip9,Commercial,H,,AL,N#
5,247,285,"33 Bridle Ridge Court, Lewes, DE 19958",Y,0,,33 Bridle Ridge Cir,,Lewes DE 19958-8961,Lewes,DE,19958,Sussex,AABB,199588961338,Y,N,N,Y,38.72749,-75.17055,Zip7,Residential,S,,AL,L#
6,306,339,"2742 Pulaski Hwy Newark, DE 19711",Y,0,,2742 Pulaski Hwy,,Newark DE 19702-3911,Newark,DE,19702,New Castle,AABB,197023911421,Y,N,N,Y,39.60328,-75.75869,Zip9,Commercial,S,,AL,A#
7,341,378,"2284 Bryn Zion Road, Smyrna, DE 19904",Y,0,,2284 Bryn Zion Rd,,Smyrna DE 19977-3895,Smyrna,DE,19977,Kent,AABB,199773895840,Y,N,N,Y,39.23937,-75.64065,Zip7,Residential,S,,AL,A#N#
8,406,450,"1500 Serpentine Road, Suite 100 Baltimore MD",Y,0,,1500 Serpentine Rd Ste 100,,Baltimore MD 21209-2034,Baltimore,MD,21209,Baltimore,AABB,212092034250,Y,N,N,Y,39.38194,-76.65856,Zip9,Commercial,H,,03,N#
9,455,495,"580 North Dupont Highway Dover, DE 19901",Y,0,,580 N DuPont Hwy,,Dover DE 19901-3961,Dover,DE,19901,Kent,AABB,199013961803,Y,N,N,Y,39.17576,-75.5241,Zip9,Commercial,S,,AL,N#
10,497,525,"P.O. Box 778 Dover, DE 19903",Y,0,,PO Box 778,,Dover DE 19903-0778,Dover,DE,19903,Kent,AABB,199030778781,Y,N,N,Y,39.20946,-75.57012,Zip5,Residential,P,,AL,
I was the developer who originally wrote the service. The algorithm we implemented is a bit different from any specific answers here, but each extracted address is verified against the address lookup API, so you can be sure if it's valid or not. Each verified result is guaranteed, but we know the other results won't be perfect because, as has been made abundantly clear in this thread, addresses are unpredictable, even for humans sometimes.
A: I've done a lot of work on this kind of parsing. Because there are errors you won't get 100% accuracy, but there are a few things you can do to get most of the way there, and then do a visual BS test. Here's the general way to go about it. It's not code, because it's pretty academic to write it; there's no weirdness, just lots of string handling.
(Now that you've posted some sample data, I've made some minor changes)
*
*Work backward. Start from the zip code, which will be near the end, and in one of two known formats: XXXXX or XXXXX-XXXX. If this doesn't appear, you can assume you're in the city, state portion, below.
*The next thing, before the zip, is going to be the state, and it'll be either in a two-letter format, or as words. You know what these will be, too -- there are only 50 of them. Also, you could soundex the words to help compensate for spelling errors.
*Before that is the city, and it's probably on the same line as the state. You could use a zip-code database to check the city and state based on the zip, or at least use it as a BS detector.
*The street address will generally be one or two lines. The second line will generally be the suite number if there is one, but it could also be a PO box.
*It's going to be near-impossible to detect a name on the first or second line, though if it's not prefixed with a number (or if it's prefixed with an "attn:" or "attention to:") it could give you a hint as to whether it's a name or an address line.
I hope this helps somewhat. (A rough sketch of the first two steps follows.)
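In C#, ignoring plenty of edge cases and with the state list truncated:
using System;
using System.Text.RegularExpressions;

static class BackwardParse
{
    // A real list would contain all fifty state codes.
    static readonly string[] States = { "DE", "MD", "CA", "NY" };

    static void Parse(string input)
    {
        // Step 1: the ZIP (either format) sits at the end of the string.
        Match zip = Regex.Match(input, @"(\d{5}(-\d{4})?)\s*$");
        string remainder = zip.Success
            ? input.Substring(0, zip.Index).TrimEnd(' ', ',')
            : input;

        // Step 2: the token before the ZIP should be a two-letter state code.
        string state = null;
        int lastSpace = remainder.LastIndexOf(' ');
        string token = remainder.Substring(lastSpace + 1).ToUpper();
        if (Array.IndexOf(States, token) >= 0)
        {
            state = token;
            remainder = remainder.Substring(0, lastSpace).TrimEnd(' ', ',');
        }

        Console.WriteLine("zip={0} state={1} rest={2}", zip.Value, state, remainder);
    }
}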
A: There are data services that, given a zip code, will give you the list of street names in that zip code.
Use a regex to extract the Zip or City State - find the correct one, or if there's an error, get both.
Pull the list of streets from a data source and correct the city and state, and then the street address. Once you get a valid Address line 1, city, state, and zip you can then make assumptions on address lines 2..3
A: I don't know HOW FEASIBLE this would be, but I haven't seen this mentioned so I thought I would go ahead and suggest this:
If you are strictly in the US... get a huge database of all zip codes, states, cities and streets. Now look for these in your addresses. You can validate what you find by testing if, say, the city you found exists in the state you found, or by checking if the street you found exists in the city you found. If not, chances are "John" isn't part of the street name, but is the name of the addressee... Basically, get the most information you can and check your addresses against it.
An extreme example would be to get A LIST OF ALL THE ADDRESSES IN THE US OF A and then find which one has the most relevant match to each of your addresses...
A: There is javascript port of perl Geo::StreetAddress::US package: https://github.com/hassansin/parse-address . It's regex-based and works fairly well.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/16413",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "133"
} |
Q: String output: format or concat in C#? Let's say that you want to output or concat strings. Which of the following styles do you prefer?
*
*var p = new { FirstName = "Bill", LastName = "Gates" };
*Console.WriteLine("{0} {1}", p.FirstName, p.LastName);
*Console.WriteLine(p.FirstName + " " + p.LastName);
Do you rather use format or do you simply concat strings? What is your favorite? Is one of these hurting your eyes?
Do you have any rational arguments to use one and not the other?
I'd go for the second one.
A: Concatenating strings is fine in a simple scenario like that, but it gets complicated with anything beyond it, even just LastName, FirstName. With the format you can see, at a glance, what the final structure of the string will be when reading the code; with concatenation it becomes almost impossible to immediately discern the final result (except with a very simple example like this one).
What that means in the long run is that when you come back to make a change to your string format, you will either have the ability to pop in and make a few adjustments to the format string, or wrinkle your brow and start moving around all kinds of property accessors mixed with text, which is more likely to introduce problems.
If you're using .NET 3.5 you can use an extension method like this one and get an easy flowing, off the cuff syntax like this:
string str = "{0} {1} is my friend. {3}, {2} is my boss.".FormatWith(prop1,prop2,prop3,prop4);
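The linked extension method isn't reproduced in the thread; a minimal sketch of what such a FormatWith helper might look like (assuming it simply delegates to String.Format):
using System;
public static class StringExtensions
{
    // hypothetical helper: formats 'format' with the supplied arguments
    public static string FormatWith(this string format, params object[] args)
    {
        return string.Format(format, args);
    }
}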
Finally, as your application grows in complexity you may decide that to sanely maintain strings in your application you want to move them into a resource file to localize or simply into a static helper. This will be MUCH easier to achieve if you have consistently used formats, and your code can be quite simply refactored to use something like
string name = String.Format(ApplicationStrings.General.InformalUserNameFormat,this.FirstName,this.LastName);
A: Try this code.
It's a slightly modified version of your code.
*
*I removed Console.WriteLine as it's probably a few orders of magnitude slower than what I'm trying to measure.
*I'm starting the Stopwatch before the loop and stopping it right after, this way I'm not losing precision if the function takes for example 26.4 ticks to execute.
*The way you divided the result by the number of iterations was wrong. See what happens if you have 1,000 milliseconds and 100 milliseconds: in both situations you will get 0 ms after dividing by 1,000,000.
Code:
Stopwatch s = new Stopwatch();
var p = new { FirstName = "Bill", LastName = "Gates" };
int n = 1000000;
long fElapsedMilliseconds = 0, fElapsedTicks = 0, cElapsedMilliseconds = 0, cElapsedTicks = 0;
string result;
s.Start();
for (var i = 0; i < n; i++)
result = (p.FirstName + " " + p.LastName);
s.Stop();
cElapsedMilliseconds = s.ElapsedMilliseconds;
cElapsedTicks = s.ElapsedTicks;
s.Reset();
s.Start();
for (var i = 0; i < n; i++)
result = string.Format("{0} {1}", p.FirstName, p.LastName);
s.Stop();
fElapsedMilliseconds = s.ElapsedMilliseconds;
fElapsedTicks = s.ElapsedTicks;
s.Reset();
Console.Clear();
Console.WriteLine(n.ToString()+" x result = string.Format(\"{0} {1}\", p.FirstName, p.LastName); took: " + (fElapsedMilliseconds) + "ms - " + (fElapsedTicks) + " ticks");
Console.WriteLine(n.ToString() + " x result = (p.FirstName + \" \" + p.LastName); took: " + (cElapsedMilliseconds) + "ms - " + (cElapsedTicks) + " ticks");
Thread.Sleep(4000);
Those are my results:
1000000 x result = string.Format("{0} {1}", p.FirstName, p.LastName); took: 618ms - 2213706 ticks
1000000 x result = (p.FirstName + " " + p.LastName); took: 166ms - 595610 ticks
A: Starting from C# 6.0 interpolated strings can be used to do this, which simplifies the format even more.
var name = "Bill";
var surname = "Gates";
MessageBox.Show($"Welcome to the show, {name} {surname}!");
An interpolated string expression looks like a template string that contains expressions. An interpolated string expression creates a string by replacing the contained expressions with the ToString representations of the expressions’ results.
Interpolated strings have a similar performance to String.Format, but improved readability and shorter syntax, due to the fact that values and expressions are inserted in-line.
Please also refer to this dotnetperls article on string interpolation.
If you are looking for a default way to format your strings, this makes sense in terms of readability and performance (except if microseconds are going to make a difference in your specific use case).
A: For very simple manipulation I'd use concatenation, but once you get beyond 2 or 3 elements Format becomes more appropriate IMO.
Another reason to prefer String.Format is that .NET strings are immutable and doing it this way creates fewer temporary/intermediate copies.
A: While I totally understand the style preference and picked concatenation for my first answer partly based on my own preference, part of my decision was based on the thought that concatenation would be faster. So, out of curiosity, I tested it and the results were staggering, especially for such a small string.
Using the following code:
System.Diagnostics.Stopwatch s = new System.Diagnostics.Stopwatch();
var p = new { FirstName = "Bill", LastName = "Gates" };
s.Start();
Console.WriteLine("{0} {1}", p.FirstName, p.LastName);
s.Stop();
Console.WriteLine("Console.WriteLine(\"{0} {1}\", p.FirstName, p.LastName); took: " + s.ElapsedMilliseconds + "ms - " + s.ElapsedTicks + " ticks");
s.Reset();
s.Start();
Console.WriteLine(p.FirstName + " " + p.LastName);
s.Stop();
Console.WriteLine("Console.WriteLine(p.FirstName + \" \" + p.LastName); took: " + s.ElapsedMilliseconds + "ms - " + s.ElapsedTicks + " ticks");
I got the following results:
Bill Gates
Console.WriteLine("{0} {1}", p.FirstName, p.LastName); took: 2ms - 7280 ticks
Bill Gates
Console.WriteLine(p.FirstName + " " + p.LastName); took: 0ms - 67 ticks
Using the formatting method is over 100 times slower!! Concatenation didn't even register as 1ms, which is why I output the timer ticks as well.
A: Oh dear - after reading one of the other replies I tried reversing the order of the operations - so performing the concatenation first, then the String.Format...
Bill Gates
Console.WriteLine(p.FirstName + " " + p.LastName); took: 8ms - 30488 ticks
Bill Gates
Console.WriteLine("{0} {1}", p.FirstName, p.LastName); took: 0ms - 182 ticks
So the order of the operations makes a HUGE difference, or rather the very first operation is ALWAYS much slower.
Here are the results of a run where operations are completed more than once. I have tried changing the orders but things generally follow the same rules, once the first result is ignored:
Bill Gates
Console.WriteLine(FirstName + " " + LastName); took: 5ms - 20335 ticks
Bill Gates
Console.WriteLine(FirstName + " " + LastName); took: 0ms - 156 ticks
Bill Gates
Console.WriteLine(FirstName + " " + LastName); took: 0ms - 122 ticks
Bill Gates
Console.WriteLine("{0} {1}", FirstName, LastName); took: 0ms - 181 ticks
Bill Gates
Console.WriteLine("{0} {1}", FirstName, LastName); took: 0ms - 122 ticks
Bill Gates
String.Concat(FirstName, " ", LastName); took: 0ms - 142 ticks
Bill Gates
String.Concat(FirstName, " ", LastName); took: 0ms - 117 ticks
As you can see subsequent runs of the same method (I refactored the code into 3 methods) are incrementally faster. The fastest appears to be the Console.WriteLine(String.Concat(...)) method, followed by normal concatenation, and then the formatted operations.
The initial delay in startup is likely the initialisation of the console stream, as placing a Console.WriteLine("Start!") before the first operation brings all times back into line.
A: Generally, I prefer the former, as, especially when the strings get long, it can be much easier to read.
The other benefit is, I believe, performance, as the latter actually performs two string creations before passing the final string to the Console.Write method. String.Format uses a StringBuilder under the covers I believe, so multiple concatenations are avoided.
It should be noted however that if the parameters you are passing into String.Format (and other such methods like Console.Write) are value types then they will be boxed before being passed in, which can bring its own performance hits. Blog post on this here.
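To illustrate the boxing point with a quick sketch (the values here are arbitrary):
int count = 42;
// count is boxed to object before String.Format sees it
string a = string.Format("Count: {0}", count);
// no boxing: the int is converted to a string first
string b = string.Format("Count: {0}", count.ToString());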
A: For basic string concatenation, I generally use the second style - easier to read and simpler. However, if I am doing a more complicated string combination I usually opt for String.Format.
String.Format saves on lots of quotes and pluses...
Console.WriteLine("User {0} accessed {1} on {2}.", user.Name, fileName, timestamp);
vs
Console.WriteLine("User " + user.Name + " accessed " + fileName + " on " + timestamp + ".");
Only a few characters saved, but I think, in this example, format makes it much cleaner.
A: A better test would be to watch your memory using Perfmon and the CLR memory counters. My understanding is that the whole reason you want to use String.Format instead of just concatenating strings is that, since strings are immutable, you are unnecessarily burdening the garbage collector with temporary strings that need to be reclaimed in the next pass.
StringBuilder and String.Format, although potentially slower, are more memory efficient.
What is so bad about string concatenation?
A: A week from now Aug 19, 2015, this question will be exactly seven (7) years old. There is now a better way of doing this. Better in terms of maintainability as I haven't done any performance test compared to just concatenating strings (but does it matter these days? a few milliseconds in difference?). The new way of doing it with C# 6.0:
var p = new { FirstName = "Bill", LastName = "Gates" };
var fullname = $"{p.FirstName} {p.LastName}";
This new feature is better, IMO, and actually better in our case as we have code where we build query strings whose values depend on some factors. Imagine one query string with 6 arguments. So instead of doing, for example:
var qs = string.Format("q1={0}&q2={1}&q3={2}&q4={3}&q5={4}&q6={5}",
someVar, anotherVarWithLongName, var3, var4, var5, var6)
it can be written like this and it's easier to read:
var qs=$"q1={someVar}&q2={anotherVarWithLongName}&q3={var3}&q4={var4}&q5={var5}&q6={var6}";
A: *
*Formatting is the “.NET” way of doing it. Certain refactoring tools (Refactor! for one) will even propose to refactor the concat-style code to use the formatting style.
*Formatting is easier to optimize for the compiler (although the second will probably be refactored to use the 'Concat' method which is fast).
*Formatting is usually clearer to read (especially with “fancy” formatting).
*Formatting means implicit calls to '.ToString' on all variables, which is good for readability.
*According to “Effective C#”, the .NET 'WriteLine' and 'Format' implementations are messed up, they autobox all value types (which is bad). “Effective C#” advises to perform '.ToString' calls explicitly, which IMHO is bogus (see Jeff's posting)
*At the moment, formatting type hints are not checked by the compiler, resulting in runtime errors. However, this could be amended in future versions.
A: I'd use the String.Format, but I would also have the format string in the resource files so it can be localised for other languages. Using a simple string concat doesn't allow you to do that. Obviously, if you won't ever need to localise that string, this isn't a reason to think about it. It really depends on what the string is for.
If it's going to be shown to the user, I'd use String.Format so I can localize if I need to - and FxCop will spell-check it for me, just in case :)
If it contains numbers or any other non-string things (e.g. dates), I'd use String.Format because it gives me more control over the formatting.
If it's for building a query like SQL, I'd use Linq.
If for concatenating strings inside a loop, I'd use StringBuilder to avoid performance problems.
If it's for some output the user won't see and isn't going to affect performance I'd use String.Format because I'm in the habit of using it anyway and I'm just used to it :)
A: I choose based on readability.
I prefer the format option when there's some text around the variables. In this example:
Console.WriteLine("User {0} accessed {1} on {2}.",
user.Name, fileName, timestamp);
you understand the meaning even without variable names, whereas the concat is cluttered with quotes and + signs and confuses my eyes:
Console.WriteLine("User " + user.Name + " accessed " + fileName +
" on " + timestamp + ".");
(I borrowed Mike's example because I like it)
If the format string doesn't mean much without variable names, I have to use concat:
Console.WriteLine("{0} {1}", p.FirstName, p.LastName);
The format option makes me read the variable names and map them to the corresponding numbers. The concat option doesn't require that. I'm still confused by the quotes and + signs, but the alternative is worse. Ruby?
Console.WriteLine(p.FirstName + " " + p.LastName);
Performance-wise, I expect the format option to be slower than the concat, since the format requires the string to be parsed. I don't remember having to optimize this kind of instruction, but if I did, I'd look at string methods like Concat() and Join().
The other advantage of the format is that the format string can be put in a configuration file. Very handy with error messages and UI text.
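For example, a sketch of pulling the format from configuration (the appSettings key is made up, and p is the object from the question):
using System.Configuration; // requires a reference to System.Configuration
// "UserNameFormat" is a hypothetical key, e.g. "{0} {1}" or "{1}, {0}"
string fmt = ConfigurationManager.AppSettings["UserNameFormat"] ?? "{0} {1}";
Console.WriteLine(string.Format(fmt, p.FirstName, p.LastName));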
A: Strings are immutable, which means the same tiny piece of memory is used over and over in your code. Adding the same two strings together and creating the same new string over and over again doesn't impact memory; .NET is smart enough to just use the same memory reference. Therefore your code doesn't truly test the difference between the two concat methods.
Try this on for size:
Stopwatch s = new Stopwatch();
int n = 1000000;
long fElapsedMilliseconds = 0, fElapsedTicks = 0, cElapsedMilliseconds = 0, cElapsedTicks = 0, sbElapsedMilliseconds = 0, sbElapsedTicks = 0;
Random random = new Random(DateTime.Now.Millisecond);
string result;
s.Start();
for (var i = 0; i < n; i++)
result = (random.Next().ToString() + " " + random.Next().ToString());
s.Stop();
cElapsedMilliseconds = s.ElapsedMilliseconds;
cElapsedTicks = s.ElapsedTicks;
s.Reset();
s.Start();
for (var i = 0; i < n; i++)
result = string.Format("{0} {1}", random.Next().ToString(), random.Next().ToString());
s.Stop();
fElapsedMilliseconds = s.ElapsedMilliseconds;
fElapsedTicks = s.ElapsedTicks;
s.Reset();
StringBuilder sb = new StringBuilder();
s.Start();
for(var i = 0; i < n; i++){
sb.Clear();
sb.Append(random.Next().ToString());
sb.Append(" ");
sb.Append(random.Next().ToString());
result = sb.ToString();
}
s.Stop();
sbElapsedMilliseconds = s.ElapsedMilliseconds;
sbElapsedTicks = s.ElapsedTicks;
s.Reset();
Console.WriteLine(n.ToString() + " x result = string.Format(\"{0} {1}\", p.FirstName, p.LastName); took: " + (fElapsedMilliseconds) + "ms - " + (fElapsedTicks) + " ticks");
Console.WriteLine(n.ToString() + " x result = (p.FirstName + \" \" + p.LastName); took: " + (cElapsedMilliseconds) + "ms - " + (cElapsedTicks) + " ticks");
Console.WriteLine(n.ToString() + " x sb.Clear();sb.Append(random.Next().ToString()); sb.Append(\" \"); sb.Append(random.Next().ToString()); result = sb.ToString(); took: " + (sbElapsedMilliseconds) + "ms - " + (sbElapsedTicks) + " ticks");
Console.WriteLine("****************");
Console.WriteLine("Press Enter to Quit");
Console.ReadLine();
Sample Output:
1000000 x result = string.Format("{0} {1}", p.FirstName, p.LastName); took: 513ms - 1499816 ticks
1000000 x result = (p.FirstName + " " + p.LastName); took: 393ms - 1150148 ticks
1000000 x sb.Clear();sb.Append(random.Next().ToString()); sb.Append(" "); sb.Append(random.Next().ToString()); result = sb.ToString(); took: 405ms - 1185816 ticks
A: If you're dealing with something that needs to be easy to read (and this is most code), I'd stick with the operator overload version UNLESS:
*
*The code needs to be executed millions of times
*You're doing tons of concats (more than 4 is a ton)
*The code is targeted towards the Compact Framework
Under at least two of these circumstances, I would use StringBuilder instead.
A: If you intend to localise the result, then String.Format is essential because different natural languages might not even have the data in the same order.
A: Pity the poor translators
If you know your application will stay in English, then fine, save the clock ticks. However, many cultures would usually see Lastname Firstname in, for instance, addresses.
So use string.Format(), especially if you're going to ever have your application go anywhere that English is not the first language.
A: I think this depends heavily on how complex the output is. I tend to choose whichever scenario works best at the time.
Pick the right tool based on the job :D Whichever looks cleanest!
A: I prefer the second as well but I have no rational arguments at this time to support that position.
A: Nice one!
Just added
s.Start();
for (var i = 0; i < n; i++)
result = string.Concat(p.FirstName, " ", p.LastName);
s.Stop();
ceElapsedMilliseconds = s.ElapsedMilliseconds;
ceElapsedTicks = s.ElapsedTicks;
s.Reset();
And it is even faster (I guess string.Concat is called in both examples, but the first one requires some sort of translation).
1000000 x result = string.Format("{0} {1}", p.FirstName, p.LastName); took: 249ms - 3571621 ticks
1000000 x result = (p.FirstName + " " + p.LastName); took: 65ms - 944948 ticks
1000000 x result = string.Concat(p.FirstName, " ", p.LastName); took: 54ms - 780524 ticks
A: Since I don't think the answers here cover everything, I'd like to make a small addition.
Console.WriteLine(string format, params object[] pars) calls string.Format. The '+' implies string concatenation. I don't think this always has to do with style; I tend to mix the two styles depending on the context I'm in.
Short answer
The decision you're facing has to do with string allocation. I'll try to make it simple.
Say you have
string s = a + "foo" + b;
If you execute this, it will evaluate as follows:
string tmp1 = a;
string tmp2 = "foo";
string tmp3 = concat(tmp1, tmp2);
string tmp4 = b;
string s = concat(tmp3, tmp4);
tmp here is not really a local variable, but it is temporary for the JIT (it's pushed on the IL stack). If you push a string on the stack (such as ldstr in IL for literals), you put a reference to a string pointer on the stack.
The moment you call concat this reference becomes a problem because there isn't any string reference available that contains both strings. This means that .NET needs to allocate a new block of memory, and then fill it with the two strings. The reason this is a problem is that allocation is relatively expensive.
Which changes the question to: How can you reduce the number of concat operations?
So, the rough answer is: string.Format for >1 concats, '+' will work just fine for 1 concat. And if you don't care about doing micro-performance optimizations, string.Format will work just fine in the general case.
A note about Culture
And then there's something called culture...
string.Format enables you to use CultureInfo in your formatting. A simple operator '+' uses the current culture.
This is especially an important remark if you're writing file formats and f.ex. double values that you 'add' to a string. On different machines, you might end up with different strings if you don't use string.Format with an explicit CultureInfo.
F.ex. consider what happens if you change a '.' for a ',' while writing your comma-separated-values file... in Dutch, the decimal separator is a comma, so your user might just get a 'funny' surprise.
More detailed answer
If you don't know the exact size of the string beforehand, it's best to use a policy that over-allocates the buffers you use. Appends fill the slack space first; only once it runs out does the buffer have to grow.
Growing means allocating a new block of memory and copying the old data to the new buffer. The old block of memory can then be released. You get the bottom line at this point: growing is an expensive operation.
The most practical way to do this is to use an overallocation policy. The most common policy is to over allocate buffers in powers of 2. Of course, you have to do it a bit smarter than that (since it makes no sense to grow from 1,2,4,8 if you already know you need 128 chars) but you get the picture. The policy ensures you don't need too many of the expensive operations I described above.
StringBuilder is a class that basically over allocates the underlying buffer in powers of two. string.Format uses StringBuilder under the hood.
This makes your decision a basic trade-off between over-allocate-and-append (-multiple) (w/w.o. culture) or just allocate-and-append.
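If you do know a reasonable upper bound up front, you can sidestep most of the growing by passing an initial capacity - a quick sketch (the 64 is an arbitrary guess):
using System.Text;
// pre-sized buffer avoids repeated grow-and-copy cycles
StringBuilder sb = new StringBuilder(64);
sb.Append(p.FirstName).Append(' ').Append(p.LastName);
string s = sb.ToString();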
A: I'm amazed that so many people immediately want to find the code that executes the fastest. If ONE MILLION iterations STILL take less than a second to process, is this going to be in ANY WAY noticeable to the end user? Not very likely.
Premature optimization = FAIL.
I'd go with the String.Format option, only because it makes the most sense from an architectural standpoint. I don't care about the performance until it becomes an issue (and if it did, I'd ask myself: Do I need to concatenate a million names at once? Surely they won't all fit on the screen...)
Consider if your customer later wants to change it so that they can configure whether to display "Firstname Lastname" or "Lastname, Firstname." With the Format option, this is easy - just swap out the format string. With the concat, you'll need extra code. Sure that doesn't sound like a big deal in this particular example but extrapolate.
A: Here are my results over 100,000 iterations:
Console.WriteLine("{0} {1}", p.FirstName, p.LastName); took (avg): 0ms - 689 ticks
Console.WriteLine(p.FirstName + " " + p.LastName); took (avg): 0ms - 683 ticks
And here is the bench code:
Stopwatch s = new Stopwatch();
var p = new { FirstName = "Bill", LastName = "Gates" };
//First print to remove the initial cost
Console.WriteLine(p.FirstName + " " + p.LastName);
Console.WriteLine("{0} {1}", p.FirstName, p.LastName);
int n = 100000;
long fElapsedMilliseconds = 0, fElapsedTicks = 0, cElapsedMilliseconds = 0, cElapsedTicks = 0;
for (var i = 0; i < n; i++)
{
s.Start();
Console.WriteLine(p.FirstName + " " + p.LastName);
s.Stop();
cElapsedMilliseconds += s.ElapsedMilliseconds;
cElapsedTicks += s.ElapsedTicks;
s.Reset();
s.Start();
Console.WriteLine("{0} {1}", p.FirstName, p.LastName);
s.Stop();
fElapsedMilliseconds += s.ElapsedMilliseconds;
fElapsedTicks += s.ElapsedTicks;
s.Reset();
}
Console.Clear();
Console.WriteLine("Console.WriteLine(\"{0} {1}\", p.FirstName, p.LastName); took (avg): " + (fElapsedMilliseconds / n) + "ms - " + (fElapsedTicks / n) + " ticks");
Console.WriteLine("Console.WriteLine(p.FirstName + \" \" + p.LastName); took (avg): " + (cElapsedMilliseconds / n) + "ms - " + (cElapsedTicks / n) + " ticks");
So, I don't know whose reply to mark as an answer :)
A: Personally, the second one, as everything you are using is in the direct order it will be output in. Whereas with the first you have to match up the {0} and {1} with the proper var, which is easy to mess up.
At least it's not as bad as the C++ sprintf, where if you get the variable type wrong the whole thing will blow up.
Also, since the second is all inline and it doesn't have to do any searching and replacing for all the {0} things, the latter should be faster... though I don't know for sure.
A: I actually like the first one because when there are a lot of variables intermingled with the text it seems easier to read to me. Plus, it is easier to deal with quotes when using the string.Format(), uh, format. Here is a decent analysis of string concatenation.
A: I've always gone the string.Format() route. Being able to store formats in variables like Nathan's example is a great advantage. In some cases I may append a variable but once more than 1 variable is being concatenated I refactor to use formatting.
A: Oh, and just for completeness, the following is a few ticks faster than normal concatenation:
Console.WriteLine(String.Concat(p.FirstName," ",p.LastName));
A: The first one (format) looks better to me. It's more readable and you are not creating extra temporary string objects.
A: I was curious where StringBuilder stood with these tests. Results below...
class Program {
static void Main(string[] args) {
var p = new { FirstName = "Bill", LastName = "Gates" };
var tests = new[] {
new { Name = "Concat", Action = new Action(delegate() { string x = p.FirstName + " " + p.LastName; }) },
new { Name = "Format", Action = new Action(delegate() { string x = string.Format("{0} {1}", p.FirstName, p.LastName); }) },
new { Name = "StringBuilder", Action = new Action(delegate() {
StringBuilder sb = new StringBuilder();
sb.Append(p.FirstName);
sb.Append(" ");
sb.Append(p.LastName);
string x = sb.ToString();
}) }
};
var Watch = new Stopwatch();
foreach (var t in tests) {
for (int i = 0; i < 5; i++) {
Watch.Reset();
long Elapsed = ElapsedTicks(t.Action, Watch, 10000);
Console.WriteLine(string.Format("{0}: {1} ticks", t.Name, Elapsed.ToString()));
}
}
}
public static long ElapsedTicks(Action ActionDelg, Stopwatch Watch, int Iterations) {
Watch.Start();
for (int i = 0; i < Iterations; i++) {
ActionDelg();
}
Watch.Stop();
return Watch.ElapsedTicks / Iterations;
}
}
Results:
Concat: 406 ticks
Concat: 356 ticks
Concat: 411 ticks
Concat: 299 ticks
Concat: 266 ticks
Format: 5269 ticks
Format: 954 ticks
Format: 1004 ticks
Format: 984 ticks
Format: 974 ticks
StringBuilder: 629 ticks
StringBuilder: 484 ticks
StringBuilder: 482 ticks
StringBuilder: 508 ticks
StringBuilder: 504 ticks
A: According to the MCSD prep material, Microsoft suggests using the + operator when dealing with a very small number of concatenations (probably 2 to 4). I'm still not sure why, but it's something to consider.
A: The most readable would be to use the string interpolation feature of C# 6.0:
Console.WriteLine($"{p.FirstName} {p.LastName}");
Its performance is similar to using "+".
A: Actually, I ran these tests yesterday, but it was getting late so I didn't put up my responses.
The bottom line seems to be that they both take the same time on average. I did the test over 100,000 iterations.
I'll try with StringBuilder as well, and I'll post the code and results when I get home.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/16432",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "184"
} |
Q: How to test numerical analysis routines? Are there any good online resources for how to create, maintain and think about writing test routines for numerical analysis code?
One of the limitations I can see for something like testing matrix multiplication is that the obvious tests (like having one matrix being the identity) may not fully test the functionality of the code.
Also, there is the fact that you are usually dealing with large data structures as well. Does anyone have some good ideas about ways to approach this, or have pointers to good places to look?
A: It sounds as if you need to think about testing in at least two different ways:
*
*Some numerical methods allow for some meta-thinking. For example, invertible operations allow you to set up test cases to see if the result is within acceptable error bounds of the original. For example, matrix M-inverse times the matrix M * random vector V should result in V again, to within some acceptable measure of error.
Obviously, this example exercises matrix inverse, matrix multiplication and matrix-vector multiplication. I like chains like these because you can generate quite a lot of random test cases and get statistical coverage that would be a slog to have to write by hand. They don't exercise single operations in isolation, though.
*Some numerical methods have a closed-form expression of their error. If you can set up a situation with a known solution, you can then compare the difference between the solution and the calculated result, looking for a difference that exceeds these known bounds.
Fundamentally, this question illustrates the problem that testing complex methods well requires quite a lot of domain knowledge. Specific references would require a little more specific information about what you're testing. I'd definitely recommend that you at least have Steve Yegge's recommended book list on hand.
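A minimal C# sketch of that invert-and-multiply round trip, using hand-rolled 2x2 matrices and an arbitrary tolerance (an illustration of the idea, not production test code):
using System;
class RoundTripTest
{
    static void Main()
    {
        Random rng = new Random(42);
        for (int i = 0; i < 1000; i++)
        {
            // random, reasonably well-conditioned 2x2 matrix M = [[a,b],[c,d]] and vector v
            double a = rng.NextDouble() + 1, b = rng.NextDouble(),
                   c = rng.NextDouble(), d = rng.NextDouble() + 1;
            double[] v = { rng.NextDouble(), rng.NextDouble() };
            double det = a * d - b * c;
            if (Math.Abs(det) < 1e-6) continue; // skip near-singular matrices
            // w = M * v
            double[] w = { a * v[0] + b * v[1], c * v[0] + d * v[1] };
            // u = inverse(M) * w; should be v again, within tolerance
            double[] u = { (d * w[0] - b * w[1]) / det, (-c * w[0] + a * w[1]) / det };
            double err = Math.Max(Math.Abs(u[0] - v[0]), Math.Abs(u[1] - v[1]));
            if (err > 1e-9)
                Console.WriteLine("Round-trip error too large: " + err);
        }
    }
}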
A: If you're going to be doing matrix calculations, use LAPACK. This is very well-tested code. Very smart people have been working on it for decades. They've thought deeply about issues that the uninitiated would never think about.
In general, I'd recommend two kinds of testing: systematic and random. By systematic I mean exploring edge cases etc. It helps if you can read the source code. Often algorithms have branch points: calculate this way for numbers in this range, this other way for numbers in another range, etc. Test values close to the branch points on either side because that's where approximation error is often greatest.
Random input values are important too. If you rationally pick all the test cases, you may systematically avoid something that you don't realize is a problem. Sometimes you can make good use of random input values even if you don't have the exact values to test against. For example, if you have code to calculate a function and its inverse, you can generate 1000 random values and see whether applying the function and its inverse put you back close to where you started.
A: Check out a book by David Gries called The Science of Programming. It's about proving the correctness of programs. If you want to be sure that your programs are correct (to the point of proving their correctness), this book is a good place to start.
Probably not exactly what you're looking for, but it's the computer science answer to a software engineering question.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/16434",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
} |
Q: Create DB table from dataset table Is it possible (in Vb.Net 2005), without manually parsing the dataset table properties, to create the table and add it to the database?
We have old versions of our program on some machines, which obviously has our old database, and we are looking for a way to detect if there is a missing table and then generate the table based on the current status of the table in the dataset. We were re-scripting the table every time we released a new version (if new columns were added) but we would like to avoid this step if possible.
A: See this MSDN Forum Post: Creating a new Table in SQL Server from ADO.net DataTable.
Here the poster seems to be trying to do the same thing as you, and provides code that generates a Create Table statement using the schema contained in a DataTable.
Assuming this works as it should, you could then take that code, and submit it to the database through SqlCommand.ExecuteNonQuery() in order to create your table.
A: Here is the code:
SqlConnection con = new SqlConnection("Data Source=.;uid=sa;pwd=sa123;database=Example1");
con.Open();
// Build a CREATE TABLE statement from the DataTable's columns.
// Note: every column is created as nvarchar(50) here; map real column types if you need them.
string sql = "Create Table abcd (";
foreach (DataColumn column in dt.Columns)
{
    sql += "[" + column.ColumnName + "] " + "nvarchar(50)" + ",";
}
sql = sql.TrimEnd(new char[] { ',' }) + ")";
// Create the table.
SqlCommand cmd = new SqlCommand(sql, con);
cmd.ExecuteNonQuery();
// Push the DataTable's rows into the newly created table.
using (var adapter = new SqlDataAdapter("SELECT * FROM abcd", con))
using (var builder = new SqlCommandBuilder(adapter))
{
    adapter.InsertCommand = builder.GetInsertCommand();
    adapter.Update(dt);
}
con.Close();
I hope you got the problem solved.
Here dt is the name of the DataTable.
Alternatively you can replace:
adapter.Update(dt);
with
//if you have a DataSet
adapter.Update(ds.Tables[0]);
| {
"language": "en",
"url": "https://stackoverflow.com/questions/16443",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9"
} |
Q: Select Query on 2 tables, on different database servers I am trying to generate a report by querying 2 databases (Sybase) in classic ASP.
I have created 2 connection strings:
connA for databaseA
connB for databaseB
Both databases are present on the same server (don't know if this matters)
Queries:
q1 = SELECT column1 INTO #temp FROM databaseA..table1 WHERE xyz="A"
q2 = SELECT columnA,columnB,...,columnZ FROM table2 a, #temp b WHERE b.column1=a.columnB
followed by:
response.Write(rstsql)
set rstSQL = CreateObject("ADODB.Recordset")
rstSQL.Open q1, connA
rstSQL.Open q2, connB
When I try to open up this page in a browser, I get error message:
Microsoft OLE DB Provider for ODBC Drivers error '80040e37'
[DataDirect][ODBC Sybase Wire Protocol driver][SQL Server]#temp not found. Specify owner.objectname or use sp_help to check whether the object exists (sp_help may produce lots of output).
Could anyone please help me understand what the problem is and help me fix it?
Thanks.
A: Your temp table is out of scope; it is only 'alive' during the first connection and will not be available in the second connection.
Just move all of it into one block of code and execute it inside one connection.
A: With both queries, it looks like you are trying to insert into #temp. #temp is located on one of the databases (for arguments sake, databaseA). So when you try to insert into #temp from databaseB, it reports that it does not exist.
Try changing it from Into #temp From to Into databaseA.dbo.#temp From in both statements.
Also, make sure that the connection strings have permissions on the other DB, otherwise this will not work.
Update: relating to the temp table going out of scope - if you have one connection string that has permissions on both databases, then you could use this for both queries (while keeping the connection alive). While querying the table in the other DB, be sure to use [DBName].[Owner].[TableName] format when referring to the table.
A: #temp is out of scope in q2.
All your work can be done in one query:
SELECT a.columnA, a.columnB,..., a.columnZ
FROM table2 a
INNER JOIN (SELECT databaseA..table1.column1
FROM databaseA..table1
WHERE databaseA..table1.xyz = 'A') b
ON a.columnB = b.column1
| {
"language": "en",
"url": "https://stackoverflow.com/questions/16447",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: Calling .NET Web Service (WSE 2/3, WS-Security) from Java I need to call a web service written in .NET from Java. The web service implements the WS-Security stack (either WSE 2 or WSE 3, it's not clear from the information I have).
The information that I received from the service provider included WSDL, a policyCache.config file, some sample C# code, and a sample application that can successfully call the service.
This isn't as useful as it sounds because it's not clear how I'm supposed to use this information to write a Java client. If the web service request isn't signed according to the policy then it is rejected by the service. I'm trying to use Apache Axis2 and I can't find any instructions on how I'm supposed to use the policyCache.config file and the WSDL to generate a client.
There are several examples that I have found on the Web but in all cases the authors of the examples had control of both the service and the client and so were able to make tweaks on both sides in order to get it to work. I'm not in that position.
Has anyone done this successfully?
A: @Mike
I recently did a test and this is the code I used.
I'm not using policy stuff, but I used WS-Security with plain text authentication.
CXF has really good documentation on how to accomplish this stuff.
I used wsdl2java and then added this code to use the web service with ws-security.
I hope this helps you out.
import java.io.IOException;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import javax.security.auth.callback.Callback;
import javax.security.auth.callback.CallbackHandler;
import javax.security.auth.callback.UnsupportedCallbackException;
import org.apache.cxf.ws.security.wss4j.WSS4JOutInterceptor;
import org.apache.ws.security.WSConstants;
import org.apache.ws.security.WSPasswordCallback;
import org.apache.ws.security.handler.WSHandlerConstants;
public class ServiceTest implements CallbackHandler
{
public void handle(Callback[] callbacks) throws IOException, UnsupportedCallbackException {
WSPasswordCallback pc = (WSPasswordCallback) callbacks[0];
// set the password for our message.
pc.setPassword("buddah");
}
public static void main(String[] args){
PatientServiceImplService locator = new PatientServiceImplService();
PatientService service = locator.getPatientServiceImplPort();
org.apache.cxf.endpoint.Client client = org.apache.cxf.frontend.ClientProxy.getClient(service);
org.apache.cxf.endpoint.Endpoint cxfEndpoint = client.getEndpoint();
Map<String, Object> outProps = new HashMap<String, Object>();
outProps.put(WSHandlerConstants.ACTION, WSHandlerConstants.USERNAME_TOKEN + " " + WSHandlerConstants.TIMESTAMP);
outProps.put(WSHandlerConstants.USER, "joe");
outProps.put(WSHandlerConstants.PASSWORD_TYPE, WSConstants.PW_TEXT);
// Callback used to retrieve password for given user.
outProps.put(WSHandlerConstants.PW_CALLBACK_CLASS, ServiceTest.class.getName());
WSS4JOutInterceptor wssOut = new WSS4JOutInterceptor(outProps);
cxfEndpoint.getOutInterceptors().add(wssOut);
try
{
List<Patient> list = service.getInpatientCensus();
for(Patient p : list){
System.out.println(p.getFirstName() + " " + p.getLastName());
}
}
catch (Exception e)
{
// TODO Auto-generated catch block
e.printStackTrace();
}
}
}
A: *
*Apache Axis can generate proxy code from WSDL http://ws.apache.org/axis/java/user-guide.html#UsingWSDLWithAxis
*NetBeans with the RESTful Web Services plug-in can generate code for you. Instructions for an example client for the eBay shopping web service are at http://ebay.custhelp.com/cgi-bin/ebay.cfg/php/enduser/std_adp.php?p_faqid=1230.
A: WS-Security specifications are not typically contained in a WSDL (never in a WSE WSDL). So wsdl2java does not know that WS-Security is even required for this service. The fact that security constraints are not present in a WSE WSDL is a big disappointment to me (WCF will include WS-Trust information in a WSDL).
On the client end, you'll need to use Rampart to add the necessary WS-Security headers to your outgoing client message. Since the WSDL does not report what WS-Security settings are necessary, you're best off by asking the service provider what is required. WS-Security requirements may be simple plaintext password, or might be X509 certificates, or might be encrypted message..... Rampart should be able to handle most of these scenarios.
Apache Rampart is "turned on" by engaging the module in your axis2.xml file. You'll need to download the Rampart module and put it in a specific place in your axis2 directory, then modify the xml file. You can also engage Rampart programmatically (please edit your original question if this is a requirement and I'll edit this response).
Depending on how you configure rampart (through other XML files or programmatically), it will intercept any outgoing messages and add the necessary WS-Security information to them. I've personally used axis2 with rampart to call a WSE3 service that is secured with UsernameToken in plaintext and it worked great. Similar, but more advanced scenarios should also work. There are more details on how to set up and get started with Rampart on the site linked above. If you have problems with the specifics of Rampart or how to use Rampart with your particular WSE setup, then edit your question and I'll try my best to answer.
A: This seems to be a popular question so I'll provide an overview of what we did in our situation.
It seems that services built in .NET are following an older ws-addressing standard (http://schemas.xmlsoap.org/ws/2004/03/addressing/) and axis2 only understands the newer standard (http://schemas.xmlsoap.org/ws/2004/08/addressing/).
In addition, the policyCache.config file provided is in a form that the axis2 rampart module can't understand.
So the steps we had to do, in a nutshell:
*
*Read the policyCache.config and try to understand it. Then rewrite it into a policy that rampart could understand. (Some updated docs helped.)
*Configure rampart with this policy.
*Take the keys that were provided in the .pfx file and convert them to a java key store. There is a utility that comes with Jetty that can do that.
*Configure rampart with that key store.
*Write a custom axis2 handler that backward-converts the newer ws-addressing stuff that comes out of axis2 into the older stuff expected by the service.
*Configure axis2 to use the handler on outgoing messages.
In the end it was a lot of configuration and code for something that is supposed to be an open standard supported by the vendors.
Although I'm not sure what the alternative is...can you wait for the vendors (or in this case, the one vendor) to make sure that everything will inter-op?
As a postscript I'll add that I didn't end up doing the work, it was someone else on my team, but I think I got the salient details correct. The other option that I was considering (before my teammate took over) was to call the WSS4J API directly to construct the SOAP envelope as the .NET service expected it. I think that would have worked too.
A: CXF - I'd look into CXF. I've used it to create a web service and client in java using ws-secuirty. I also connected a .net web service to it.
They have pretty good documentation too. I had more luck with it than axis.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/16451",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "23"
} |
Q: Dynamic Alphabetical Navigation I'm using ColdFusion to return a result set from a SQL database and turn it into a list.
I need some way to generate an alphabetical navigation bar for that list. I have ColdFusion and the jQuery library available.
I'm looking to generate something like this:
A | B | C | ...
- A
- A
- B
- B
- B
- C
- D
Where clicking on one of the letters drops you down the page to the first item for that letter. Not all 26 letters of the alphabet are necessarily in use.
A: To generate the navigation bar, you could do something like this:
<cfoutput>
<cfloop from="#asc('A')#" to="#asc('Z')#" index="i">
<a href="###chr(i)#">#chr(i)#</a>
<cfif asc('Z') neq i>|</cfif>
</cfloop>
</cfoutput>
(CFLOOP doesn't work on characters, so you have to convert to ascii codes and back.)
To display the items in your query you could do something like this.
<cfset currentLetter = "">
<cfoutput query="data">
<cfif currentLetter neq left(data.name, 1)>
<h3><a name="#ucase(left(data.name, 1))#">#ucase(left(data.name, 1))#</a></h3>
</cfif>
<cfset currentLetter = left(data.name, 1)>
#name#<br>
</cfoutput>
A: You could use the query grouping function on your query of records. You will obviously have to change the query fields according to your data, and the left() function may have different syntax depending on your database engine. The query below works on MSSQL.
<cfquery datasource="#application.dsn#" name="qMembers">
SELECT firstname,lastname, left(lastname,1) as indexLetter
FROM member
ORDER BY indexLetter,lastName
</cfquery>
<p id="indexLetter">
<cfoutput query="qMembers" group="indexLetter">
<a href="###qMembers.indexLetter#">#UCase(qMembers.indexLetter)#</a>
</cfoutput>
</p>
<cfif qMembers.recordCount>
<table>
<cfoutput query="qMembers" group="indexLetter">
<tr>
<th colspan="99" style="background-color:##324E7C;">
<a name="#qMembers.indexLetter#" style="float:left;">#UCase(qMembers.indexLetter)#</a>
<a href="##indexLetter" style="color:##fff;float:right;">index</a>
</th>
</tr>
<cfoutput>
<tr>
<td><strong>#qMembers.lastName#</strong> #qMembers.firstName#</td>
</tr>
</cfoutput>
</cfoutput>
</table>
<cfelse>
<p>No Members were found</p>
</cfif>
A: I would get the SQL result set to return the list in the first place; you can easily just take the first letter of the required item, and a count. The quickest way would be to do a join on a table of 26 characters (less string manipulation that way).
In CF, use the count value to ensure that if there is no result you either only display the letter (as standard text) or don't display it at all.
How many rows are you going to be working on? There may be better ways of doing this. For example, storing the first letter of your required link field in a separate column on insert would reduce the overhead when selecting.
A: So, there were plenty of good suggestions, but none did exactly what I wanted. Fortunately I was able to use them to figure out what I really wanted to do. The only thing the following doesn't do is print the last few unused letters (if there are any). That's why I have that cfif statement checking for 'W' as that's the last letter I use, otherwise it should check for Z.
<cfquery datasource="#application.dsn#" name="qTitles">
SELECT title, url, substr(title,1,1) as indexLetter
FROM list
ORDER BY indexLetter,title
</cfquery>
<cfset linkLetter = "#asc('A')#">
<cfoutput query="qTitles" group="indexletter">
<cfif chr(linkLetter) eq #qTitles.indexletter#>
<a href="###ucase(qTitles.indexletter)#">#ucase(qTitles.indexletter)#</a>
<cfif asc('W') neq linkLetter>|</cfif>
<cfset linkLetter = ++LinkLetter>
<cfelse>
<cfscript>
while(chr(linkLetter) != qTitles.indexletter)
{
WriteOutput(" " & chr(linkLetter) & " ");
IF(linkLetter != asc('W')){WriteOutput("|");};
++LinkLetter;
}
</cfscript>
<a href="###ucase(qTitles.indexletter)#">#ucase(qTitles.indexletter)#</a>
<cfif asc('W') neq linkLetter>|</cfif>
<cfset linkLetter = ++LinkLetter>
</cfif>
</cfoutput>
<ul>
<cfset currentLetter = "">
<cfoutput query="qTitles" group="title">
<cfif currentLetter neq #qTitles.indexletter#>
<li><a name="#ucase(qTitles.indexletter)#">#ucase(qTitles.indexletter)#</a></li>
</cfif>
<cfset currentLetter = #qTitles.indexletter#>
<li><a href="#url#">#title#</a></li>
</cfoutput>
</ul>
A: This question was posted quite a long time ago, but there is now an open source vanilla JavaScript plugin available that will alphabetically filter any HTML list with alphabetical navigation.
It's called AlphaListNav.js
Just output your HTML list (in your case, your list generated with Coldfusion:
<ul id="myList">
<li>Eggplant</li>
<li>Apples</li>
<li>Carrots</li>
<li>Blueberries</li>
</ul>
Add the CSS in the <head> of your page:
<link rel="stylesheet" href="alphaListNav.css">
<!-- note: you can edit/override the css to customize how you want it to look -->
Add the JavaScript file just before the closing </body> tag:
<script src="alphaListNav.js"></script>
And then Initialize the AlphaListNav library on your list by passing it the id of your list. Like so:
<script>
new AlphaListNav('myList');
</script>
It has all kinds of different options for customizing the behavior you may want:
For example:
<script>
new AlphaListNav('myList', {
initLetter: 'A',
includeAll: false,
includeNums: false,
removeDisabled: true,
//and many other options available..
});
</script>
The GitHub project is here
And a CodePen example is here
The AlphaListNav.js website & documentation is here
| {
"language": "en",
"url": "https://stackoverflow.com/questions/16458",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9"
} |
Q: Removing N items from a list conditionally I was writing some ASP.NET control when I came to the scenario where I needed to remove items from a list, only when they matched a certain condition.
The RemoveAll method of the generic List class does a good job, but removes all items that match the condition, specified by the predicate.
What if I want to only remove a certain number of items specifying the condition? What do you think is the best way to do this?
A: If you want to specify both a limit for number of items to remove and a condition to select the items to remove, you can use this approach:
int limit = 30; // Suppose you want to remove 30 items at most
list.RemoveAll(item => ShouldIRemoveThis(item) && limit-- > 0);
A: @buyutec
Instead of
list.RemoveAll(item => ShouldIRemoveThis(item));
you can use:
list.RemoveAll(ShouldIRemoveThis);
The lambda has the same signature as the method, so they are equivalent and you can just pass the method directly.
A: Unless the method provides a "limit" parameter (which it doesn't) your best option is to go with a simple loop that removes the items that match, breaking when your incremented "match counter" hits your limit.
That's pretty much how the internal function works anyway, but in a more optimized way.
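For example, a sketch of that loop (ShouldIRemoveThis stands in for whatever condition you're matching, and limit is the maximum number of removals):
int removed = 0;
for (int i = 0; i < list.Count && removed < limit; )
{
    if (ShouldIRemoveThis(list[i]))
    {
        list.RemoveAt(i); // don't advance i; the next item shifts into this slot
        removed++;
    }
    else
    {
        i++;
    }
}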
A: In framework 3.5, RemoveAll method takes a predicate as a parameter. So you may use
list.RemoveAll(item => ShouldIRemoveThis(item));
where ShouldIRemoveThis is a method that returns a boolean indicating whether the item must be removed from the list.
A: Can you use LINQ? If so, you can just use the .Take() method and specify how many records you want (maybe as total - N).
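A rough sketch of that idea (ShouldIRemoveThis is a hypothetical predicate; note that Remove drops the first equal element, which matters if the list contains duplicates):
using System.Collections.Generic;
using System.Linq;
// materialize the first n matches, then remove them from the source list
var toRemove = list.Where(ShouldIRemoveThis).Take(n).ToList();
foreach (var item in toRemove)
    list.Remove(item);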
A: Anonymous delegates are useful here. A simple example to remove the first limit even numbers from a list.
List<int> myList = new List<int>();
for (int i = 0; i < 20; i++) myList.Add(i);
int total = 0;
int limit = 5;
myList.RemoveAll(delegate(int i) { if (i % 2 == 0 && total < limit) { total++; return true; } return false; });
myList.ForEach(i => Console.Write(i + " "));
Gives 1 3 5 7 9 10 11 12 13 14 15 16 17 18 19, as we want. Easy enough to wrap that up in a function, suitable for use as a lambda expression, taking the real test as a parameter.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/16460",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Can using lambdas as event handlers cause a memory leak? Say we have the following method:
private MyObject foo = new MyObject();
// and later in the class
public void PotentialMemoryLeaker(){
int firedCount = 0;
foo.AnEvent += (o,e) => { firedCount++;Console.Write(firedCount);};
foo.MethodThatFiresAnEvent();
}
If the class with this method is instantiated and the PotentialMemoryLeaker method is called multiple times, do we leak memory?
Is there any way to unhook that lambda event handler after we're done calling MethodThatFiresAnEvent?
A: You won't just leak memory, you will also get your lambda called multiple times. Each call of 'PotentialMemoryLeaker' will add another copy of the lambda to the event list, and every copy will be called when 'AnEvent' is fired.
A: Well you can extend what has been done here to make delegates safer to use (no memory leaks)
A: Your example just compiles to a compiler-named private inner class (with field firedCount and a compiler-named method). Each call to PotentialMemoryLeaker creates a new instance of the closure class, to which foo keeps a reference by way of a delegate to the single method.
If you don't reference the whole object that owns PotentialMemoryLeaker, then that will all be garbage collected. Otherwise, you can either set foo to null or empty foo's event handler list by writing this:
foreach (EventHandler handler in AnEvent.GetInvocationList()) AnEvent -= handler; // assuming AnEvent is declared as an EventHandler
Of course, you'd need access to the MyObject class's private members.
A: Yes, save it to a variable and unhook it.
DelegateType evt = (o, e) => { firedCount++; Console.Write(firedCount); };
foo.AnEvent += evt;
foo.MethodThatFiresAnEvent();
foo.AnEvent -= evt;
And yes, if you don't, you'll leak memory, as you'll hook up a new delegate object each time. You'll also notice this because each time you call this method, it'll dump to the console an increasing number of lines (not just an increasing number, but for one call to MethodThatFiresAnEvent it'll dump any number of items, once for each hooked up anonymous method).
A: Yes, in the same way that normal event handlers can cause leaks. That's because the lambda is actually changed to:
someobject.SomeEvent += () => ...;
someobject.SomeEvent += delegate () {
...
};
// unhook
Action del = () => ...;
someobject.SomeEvent += del;
someobject.SomeEvent -= del;
So basically it is just shorthand for what we have been using in 2.0 all these years.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/16473",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "20"
} |
Q: Convince Firefox to send an If-Modified-Since header over HTTPS How can I convince Firefox (3.0.1, if it matters) to send an If-Modified-Since header in an HTTPS request? It sends the header if the request uses plain HTTP and my server dutifully honors it. But when I request the same resource from the same server using HTTPS instead (i.e., simply changing the http:// in the URL to https://) then Firefox does not send an If-Modified-Since header at all. Is this behavior mandated by the SSL spec or something?
Here are some example HTTP and HTTPS request/response pairs, pulled using the Live HTTP Headers Firefox extension, with some differences in bold:
HTTP request/response:
http://myserver.com:30000/scripts/site.js
GET /scripts/site.js HTTP/1.1
Host: myserver.com:30000
User-Agent: Mozilla/5.0 (...) Gecko/2008070206 Firefox/3.0.1
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-us,en;q=0.5
Accept-Encoding: gzip,deflate
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
Keep-Alive: 300
Connection: keep-alive
If-Modified-Since: Tue, 19 Aug 2008 15:57:30 GMT
If-None-Match: "a0501d1-300a-454d22526ae80"-gzip
Cache-Control: max-age=0
HTTP/1.x 304 Not Modified
Date: Tue, 19 Aug 2008 15:59:23 GMT
Server: Apache/2.2.8 (Unix) mod_ssl/2.2.8 OpenSSL/0.9.8
Connection: Keep-Alive
Keep-Alive: timeout=5, max=99
Etag: "a0501d1-300a-454d22526ae80"-gzip
HTTPS request/response:
https://myserver.com:30001/scripts/site.js
GET /scripts/site.js HTTP/1.1
Host: myserver.com:30001
User-Agent: Mozilla/5.0 (...) Gecko/2008070206 Firefox/3.0.1
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-us,en;q=0.5
Accept-Encoding: gzip,deflate
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
Keep-Alive: 300
Connection: keep-alive
HTTP/1.x 200 OK
Date: Tue, 19 Aug 2008 16:00:14 GMT
Server: Apache/2.2.8 (Unix) mod_ssl/2.2.8 OpenSSL/0.9.8
Last-Modified: Tue, 19 Aug 2008 15:57:30 GMT
Etag: "a0501d1-300a-454d22526ae80"-gzip
Accept-Ranges: bytes
Content-Encoding: gzip
Content-Length: 3766
Keep-Alive: timeout=5, max=100
Connection: Keep-Alive
Content-Type: text/javascript
UPDATE: Setting browser.cache.disk_cache_ssl to true did the trick (which is odd because, as Nickolay points out, there's still the memory cache). Adding a "Cache-control: public" header to the response also worked. Thanks!
A: HTTPS requests are not cached, so sending an If-Modified-Since doesn't make any sense. Not caching is a security precaution.
A:
HTTPS requests are not cached, so sending an If-Modified-Since doesn't make any sense. Not caching is a security precaution.
Not caching on disk is a security precaution, but it seems it does indeed affect the If-Modified-Since behavior (glancing over the code).
Try setting the Firefox preference (in about:config) browser.cache.disk_cache_ssl to true. If that helps, try sending Cache-Control: public header in your response.
UPDATE: Firefox behavior was changed for Gecko 2.0 (Firefox 4) -- HTTPS content is now cached.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/16483",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14"
} |
Q: How can I deploy artifacts from a Maven build to the SourceForge File Release System? I am using SourceForge for some Open Source projects and I want to automate the deployment of releases to the SourceForge File Release System. I use Maven for my builds and the standard SFTP deployment mechanism doesn't seem to work unless you do some manual preparation work. I have come across some old postings on other forums suggesting that the only approach is to write a Wagon specifically for SourceForge.
Has anybody had any recent experience with this?
A: I have uploaded an example to sourceforge.net at: http://sf-mvn-plugins.sourceforge.net/example-1jar-thinlet/
You can check out it via svn - so you can see how to use plugins for upload and download of and to sourceforge.net file system area and web site.
The main points to upload are to use sftp:
Add this similar code to your pom.xml
<distributionManagement>
<!-- use the following if you're not using a snapshot version. -->
<repository>
<id>sourceforge-sf-mvn-plugins</id>
<name>FRS Area</name>
<uniqueVersion>false</uniqueVersion>
<url>sftp://web.sourceforge.net/home/frs/project/s/sf/sf-mvn-plugins/m2-repo</url>
</repository>
<site>
<id>sourceforge-sf-mvn-plugins</id>
<name>Web Area</name>
<url>
sftp://web.sourceforge.net/home/groups/s/sf/sf-mvn-plugins/htdocs/${artifactId}
</url>
</site>
</distributionManagement>
Add similar code to settings.xml
<server>
<id>sourceforge-sf-mvn-plugins-svn</id>
<username>tmichel,sf-mvn-plugins</username>
<password>secret</password>
</server>
<server>
<id>sourceforge-sf-mvn-plugins</id>
<username>user,project</username>
<password>secret</password>
</server>
The main point for downloading is to use the wagon-http-sourceforge Maven plugin - please see: sf-mvn-plugins.sourceforge.net/wagon-http-sourceforge/FAQ.html
Please add the following code to your pom.xml:
<repositories>
<repository>
<id>sourceforge-svn</id>
<name>SF Maven Plugin SVN Repository</name>
<url>http://sf-mvn-plugins.svn.sourceforge.net/svnroot/sf-mvn-plugins/_m2-repo/trunk</url>
</repository>
</repositories>
<pluginRepositories>
<pluginRepository>
<id>sourceforge-frs</id>
<name>SF Maven Plugin Repository</name>
<url>http://sourceforge.net/projects/sf-mvn-plugins/files/m2-repo</url>
</pluginRepository>
</pluginRepositories>
<build>
<extensions>
<extension>
<groupId>net.sf.maven.plugins</groupId>
<artifactId>wagon-http-sourceforge</artifactId>
<version>0.4</version>
</extension>
</extensions>
:
</build>
A: It looks like I am going to have to write this myself.
https://sourceforge.net/projects/wagon-sf/
A: I'm not able to test this to confirm, but I believe it is possible without writing any plugins.
You can deploy to SourceForge using SCP, and the maven-deploy-plugin can be configured to use SCP so it should work. You can also deploy your site to SourceForge via SCP.
You would configure the SourceForge server in your settings.xml to use a "combined" username with a comma separator. With these credentials:
SourceForge username: foo
SourceForge user password: secret
SourceForge project name: bar
Path: /home/frs/project/P/PR/PROJECT_UNIX_NAME/
- Substitute your project UNIX name data for /P/PR/PROJECT_UNIX_NAME
The server element would look like this:
<server>
<id>sourceforge</id>
<username>foo,bar</username>
<password>secret</password>
</server>
And the distributionManagement section in your POM would look like this:
<!-- Enabling the use of FTP -->
<distributionManagement>
<repository>
<id>ssh-repository</id>
<url>
scpexe://frs.sourceforge.net:/home/frs/project/P/PR/PROJECT_UNIX_NAME</url>
</repository>
</distributionManagement>
Finally declare that ssh-external is to be used:
<build>
<extensions>
<extension>
<groupId>org.apache.maven.wagon</groupId>
<artifactId>wagon-ssh-external</artifactId>
<version>1.0-alpha-5</version>
</extension>
</extensions>
</build>
If this doesn't work, you may be able to use the recommended approach in the site reference above, i.e. create a shell on shell.sourceforge.net with your username and project group:
ssh -t <username>,<project name>@shell.sf.net create
Then use shell.sourceforge.net (instead of web.sourceforge.net) in your site URL in the distributionManagement section:
<url>scp://shell.sourceforge.net/home/frs/project/P/PR/PROJECT_UNIX_NAME/</url>
A: After trying this a number of times, I finally got it to work -- with sftp not scp. This should work from a unix box (or Mac) -- I'm not sure about sftp clients for Windoze. I am using mvn version 2.2.0 and I don't think I have any special plugins installed. This deploys the various mvn packages to the Files section of my project page.
You'll need to change the following in your settings to get it to work:
*
*user -- replace with your SourceForge username
*secret -- replace with your password
*ormlite -- replace with your project name
*/o/or/ -- replace with the first char and first 2 chars of your project name
In my $HOME/.m2/settings.xml file I have the following for the SF server:
<server>
<id>sourceforge</id>
<password>secret</password>
<filePermissions>775</filePermissions>
<directoryPermissions>775</directoryPermissions>
</server>
I don't specify the username in the settings.xml file because it needs to be username,project and I want to deploy multiple packages to SF. Then, in my pom.xml file for the ormlite package I have the following:
<distributionManagement>
<repository>
<id>sourceforge</id>
<name>SourceForge</name>
<url>sftp://user,ormlite@frs.sourceforge.net:/home/frs/project/o/or/ormlite/releases
</url>
</repository>
<snapshotRepository>
<id>sourceforge</id>
<name>SourceForge</name>
<url>sftp://user,ormlite@frs.sourceforge.net:/home/frs/project/o/or/ormlite/snapshots
</url>
</snapshotRepository>
</distributionManagement>
Obviously the /releases and /snapshots directory suffixes can be changed depending on your file hierarchy.
A: Where timp = user and webmacro = project
scp url does not work:
scp://timp,webmacro@shell.sourceforge.net:/home/groups/w/we/webmacro/htdocs/maven2/
sftp url works:
sftp://timp,webmacro@web.sourceforge.net:/home/groups/w/we/webmacro/htdocs/maven2
or for project release artifacts:
sftp://timp,webmacro@web.sourceforge.net:/home/frs/project/w/we/webmacro/releases
scp will work to shell.sourceforge.net, but you have to create the shell before use with
ssh -t timp,webmacro@shell.sourceforge.net create
A: This really did not turn out to be that hard. First up I had the mvn site:deploy working following the instructions at this sourceforge site. Basically you start the sourceforge shell with
ssh -t user,project@shell.sourceforge.net create
That will create the shell at their end with a folder mounted to your project on a path such as (depending on your project's name):
/home/groups/c/ch/chex4j/
In that shell on the sourceforge server I created a folder for my repo under the project apache folder "htdocs" with
mkdir /home/groups/c/ch/chex4j/htdocs/maven2
In my settings.xml I set the username and password to that shell server so that maven can login:
<settings xmlns="http://maven.apache.org/SETTINGS/1.0.0"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/SETTINGS/1.0.0
http://maven.apache.org/xsd/settings-1.0.0.xsd/">
<servers>
<server>
<id>chex4j.sf.net</id>
<username>me,myproject</username>
<password>password</password>
<filePermissions>775</filePermissions>
<directoryPermissions>775</directoryPermissions>
</server>
</servers>
</settings>
In the pom.xml you just need your distributionManagement section set up to name the server by the ID that you set the password for in your settings file:
<distributionManagement>
<site>
<id>chex4j.sf.net</id>
<url>scp://shell.sourceforge.net/home/groups/c/ch/chex4j/htdocs/
</url>
</site>
<repository>
<id>chex4j.sf.net</id>
<name>SourceForge shell repo</name>
<url>scp://shell.sourceforge.net/home/groups/c/ch/chex4j/htdocs/maven2</url>
</repository>
</distributionManagement>
There the repository entry is the one for the mvn deploy command and the site entry is for the mvn site:deploy command. Then all I have to do is start the shell connection to bring up the server side then on my local side just run:
mvn deploy
which uploads the jar, pom and sources and the like onto my sourceforge project's website. If you try to hit the /maven2 folder on your project website, sourceforge kindly tells you that directory listing is off by default and how to fix it. To do this, on the server shell you create a .htaccess file in your htdocs/maven2 folder containing the following apache options
Options +Indexes
Then bingo, you have a maven repo which looks like:
http://chex4j.sourceforge.net/maven2/net/sf/chex4j/chex4j-core/1.0.0/
Your sf.net shell shuts down after a number of hours so as not to hog resources, so you run the "ssh -t ... create" again when you want to deploy the site or your build artifacts.
You can browse all my maven project code under sourceforge to see my working settings:
http://chex4j.svn.sourceforge.net/viewvc/chex4j/branches/1.0.x/chex4j-core/
A: The SCP URL does work, but do not use ":" after the server name - MVN tries to read the following text as an integer (the port number).
You do not need to establish tunnels as simbo1905 did.
A: The Maven SourceForge plug-in does not work with Maven 2. Also I believe this plug-in uses FTP which is no longer supported.
A: I found that CruiseControl can upload releases to SFEE and also works with Maven and Maven2
| {
"language": "en",
"url": "https://stackoverflow.com/questions/16487",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12"
} |
Q: Restore database backup over the network How do you restore a database backup using SQL Server 2005 over the network? I recall doing this before but there was something odd about the way you had to do it.
A: You have a few options to use a network file as a backup source:
*
*Map the network drive/path hosting the file under the SAME user as the MS-SQL Server.
*Use the xp_cmdshell extended stored procedure to map the network drive from inside MS SQL (that way, the mapped drive is created under the account running the SQL Server service, which is the account xp_cmdshell executes under)
-- allow changes to advanced options
EXEC sp_configure 'show advanced options', 1
GO
-- Update currently configured values for advanced options.
RECONFIGURE
GO
-- To enable xp_cmdshell
EXEC sp_configure 'xp_cmdshell', 1
GO
-- Update currently configured values for advanced options.
RECONFIGURE
GO
EXEC xp_cmdshell 'NET USE Z: \\Srv\Path password1 /USER:Domain\UserName'
Afterwards drive Z: will be visible in Server Management Studio, or just
RESTORE DATABASE DataBaseNameHere FROM DISK = 'Z:\BackNameHere.BAK'
GO
A: You can use the SP xp_cmdshell to map the network drive for sql server; after that it will show up in the file browsing window.
EXEC xp_cmdshell 'NET USE Z: SERVERLOCATION PASSWORD /USER:DOMAIN\USERNAME'
more info here: DB Restore from Network Drive
Worked for me!
A: Make sure that the user running your SQL services in "Services.msc" is an Active Directory "Domain User"; this will fix the issue.
A: The database is often running as a service under an account with no network access. If this is the case, then you wouldn't be able to restore directly over the network. Either the backup needs to be copied to the local machine or the database service needs to run as a user with the proper network access.
A: You cannot do this through the SSMS GUI, but you can do it using a script: RESTORE DATABASE ... FROM DISK = '\\server\share\filename'. If you need this process automated, the best way is to set up a SQL Server Job and run it as a user with access to the file location.
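For illustration, a fuller version of that script might look like the following (the server, share, database and logical file names are placeholders; you can list the actual logical names with RESTORE FILELISTONLY):
-- a sketch only: substitute your own UNC path and logical file names
RESTORE DATABASE MyDatabase
FROM DISK = '\\fileserver\backups\MyDatabase.bak'
WITH MOVE 'MyDatabase_Data' TO 'C:\Data\MyDatabase.mdf',
     MOVE 'MyDatabase_Log' TO 'C:\Data\MyDatabase_log.ldf',
     REPLACE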
A: I've had to do this a few times, and there are only two options that I know of. Copy the file locally to the SQL Server, or on the SQL server create a mapped network drive to the share that contains the backup file.
A: Also, you need to make sure that the SQL Server Service is running as a user that has network access - and permissions to the share where the backup file lives. 'Local System' won't have permissions to access the network.
A: As a side note, if you happen to be running SQL on a Virtual Machine it's often less hassle to just temporarily set up a new drive on the VM with enough space to copy your backup files to, do the restore from that new local copy, and then delete the temp drive.
This can be useful if stopping/starting the SQL service to change its account is an issue.
A: Create a shared drive on the machine that has the backups; say server1 has the backups in folder "Backups". Grant full control to the account running the SQL Server. On the server that you want to restore to, launch SSMS, go to restore database and select "From Device". In the "Locate Backup File" dialog, remove anything in the "Selected Path" field and supply the full path in the "File Name" field, so "\\server1\Backups\db.bak". At least it worked for me when migrating from 05 to 08. Not the preferred method, because any network hiccup can cause an issue with the restore.
A: EXEC sp_configure 'show advanced options', 1
GO
-- Update currently configured values for advanced options.
RECONFIGURE
GO
-- To enable xp_cmdshell
EXEC sp_configure 'xp_cmdshell', 1
GO
-- Update currently configured values for advanced options.
RECONFIGURE
GO
--This should be run on command prompt (cmd)
NET USE Z: \\172.100.1.100\Shared Password /USER:administrator /Persistent:no
then on SQL Server
EXEC xp_cmdshell 'NET USE Z: \\172.100.1.100\Shared Password /USER:administrator /Persistent:no'
--Afterwards drive Z: will be visible in Server Management studio, or just
RESTORE DATABASE DB FROM DISK = 'Z:\DB.BAK'
WITH REPLACE
| {
"language": "en",
"url": "https://stackoverflow.com/questions/16491",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "50"
} |
Q: What is a lambda (function)? For a person without a comp-sci background, what is a lambda in the world of Computer Science?
A: I like the explanation of Lambdas in this article: The Evolution Of LINQ And Its Impact On The Design Of C#. It made a lot of sense to me, as it shows a real-world use for Lambdas and builds it out as a practical example.
Their quick explanation: Lambdas are a way to treat code (functions) as data.
A: The name "lambda" is just a historical artifact. All we're talking about is an expression whose value is a function.
A simple example (using Scala for the next line) is:
args.foreach(arg => println(arg))
where the argument to the foreach method is an expression for an anonymous function. The above line is more or less the same as writing something like this (not quite real code, but you'll get the idea):
void printThat(Object that) {
println(that)
}
...
args.foreach(printThat)
except that you don't need to bother with:
*
*Declaring the function somewhere else (and having to look for it when you revisit the code later).
*Naming something that you're only using once.
Once you're used to function values, having to do without them seems as silly as being required to name every expression, such as:
int tempVar = 2 * a + b
...
println(tempVar)
instead of just writing the expression where you need it:
println(2 * a + b)
The exact notation varies from language to language; Greek isn't always required! ;-)
A:
A Lambda Function, or a Small Anonymous Function, is a self-contained block of functionality that can be passed around and used in your code. Lambda has different names in different programming languages – Lambda in Python and Kotlin, Closure in Swift, or Block in C and Objective-C. Although lambda's meaning is quite similar across these languages, it sometimes has slight distinctions.
Let's see how Closure (Lambda) works in Swift:
let coffee: [String] = ["Cappuccino", "Espresso", "Latte", "Ristretto"]
1. Regular Function
func backward(_ n1: String, _ n2: String) -> Bool {
return n1 > n2
}
var reverseOrder = coffee.sorted(by: backward)
// RESULT: ["Ristretto", "Latte", "Espresso", "Cappuccino"]
2. Closure Expression
reverseOrder = coffee.sorted(by: { (n1: String, n2: String) -> Bool in
return n1 > n2
})
3. Inline Closure Expression
reverseOrder = coffee.sorted(by: { (n1: String, n2: String) -> Bool in return n1 > n2 })
4. Inferring Type From Context
reverseOrder = coffee.sorted(by: { n1, n2 in return n1 > n2 } )
5. Implicit Returns from Single-Expression Closures
reverseOrder = coffee.sorted(by: { n1, n2 in n1 > n2 } )
6. Shorthand Argument Names
reverseOrder = coffee.sorted(by: { $0 > $1 } )
// $0 and $1 are closure’s first and second String arguments.
7. Operator Methods
reverseOrder = coffee.sorted(by: >)
// RESULT: ["Ristretto", "Latte", "Espresso", "Cappuccino"]
A: An example of a lambda in Ruby is as follows:
hello = lambda do
puts('Hello')
puts('I am inside a proc')
end
hello.call
This will generate the following output:
Hello
I am inside a proc
A: @Brian I use lambdas all the time in C#, in LINQ and non-LINQ operators. Example:
string[] GetCustomerNames(IEnumerable<Customer> customers)
{
    return customers.Select(c => c.Name).ToArray();
}
Before C#, I used anonymous functions in JavaScript for callbacks to AJAX functions, before the term Ajax was even coined:
getXmlFromServer(function(result) {/*success*/}, function(error){/*fail*/});
The interesting thing with C#'s lambda syntax, though, is that on their own their type cannot be inferred (i.e., you can't write var foo = (x, y) => x * y), but depending on which type they're assigned to, they'll be compiled as delegates or as abstract syntax trees representing the expression (which is how LINQ object mappers do their "language-integrated" magic).
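To make that distinction concrete, here is a minimal sketch (requires the System and System.Linq.Expressions namespaces; the variable names are illustrative):
// compiled to IL you can invoke, but not inspect
Func<int, int, int> asDelegate = (x, y) => x * y;
// compiled to an expression tree you can traverse or translate (e.g. to SQL)
Expression<Func<int, int, int>> asTree = (x, y) => x * y;
Console.WriteLine(asDelegate(3, 4)); // 12
Console.WriteLine(asTree.Body);      // (x * y)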
Lambdas in LISP can also be passed to a quotation operator and then traversed as a list of lists. Some powerful macros are made this way.
A: It refers to lambda calculus, which is a formal system that just has lambda expressions, which represent a function that takes a function for its sole argument and returns a function. All functions in the lambda calculus are of that type, i.e., λ : λ → λ.
Lisp used the lambda concept to name its anonymous function literals. This lambda represents a function that takes two arguments, x and y, and returns their product:
(lambda (x y) (* x y))
It can be applied in-line like this (evaluates to 50):
((lambda (x y) (* x y)) 5 10)
A: Just because I can't see a C++11 example here, I'll go ahead and post this nice example from here. After searching, it is the clearest language-specific example that I could find.
Hello, Lambdas, version 1
template<typename F>
void Eval( const F& f ) {
f();
}
void foo() {
Eval( []{ printf("Hello, Lambdas\n"); } );
}
Hello, Lambdas, version 2:
void bar() {
auto f = []{ printf("Hello, Lambdas\n"); };
f();
}
A: The lambda calculus is a consistent mathematical theory of substitution. In school mathematics one sees for example x+y=5 paired with x−y=1. Along with ways to manipulate individual equations it's also possible to put the information from these two together, provided cross-equation substitutions are done logically. Lambda calculus codifies the correct way to do these substitutions.
Given that y = x−1 is a valid rearrangement of the second equation, this: λ y = x−1 means a function substituting the symbols x−1 for the symbol y. Now imagine applying λ y to each term in the first equation. If a term is y then perform the substitution; otherwise do nothing. If you do this out on paper you'll see how applying that λ y will make the first equation solvable.
That's an answer without any computer science or programming.
The simplest programming example I can think of comes from http://en.wikipedia.org/wiki/Joy_(programming_language)#How_it_works:
here is how the square function might be defined in an imperative
programming language (C):
int square(int x)
{
return x * x;
}
The variable x is a formal parameter which is replaced by the actual
value to be squared when the function is called. In a functional
language (Scheme) the same function would be defined:
(define square
(lambda (x)
(* x x)))
This is different in many ways, but it still uses the formal parameter
x in the same way.
Added: http://imgur.com/a/XBHub
A:
For a person without a comp-sci background, what is a lambda in the world of Computer Science?
I will illustrate it intuitively step by step in simple and readable python codes.
In short, a lambda is just an anonymous and inline function.
Let's start from assignment to understand lambdas as a freshman with background of basic arithmetic.
The blueprint of assignment is 'name = value'; see:
In [1]: x = 1
...: y = 'value'
In [2]: x
Out[2]: 1
In [3]: y
Out[3]: 'value'
'x', 'y' are names and 1, 'value' are values.
Try a function in mathematics
In [4]: m = n**2 + 2*n + 1
NameError: name 'n' is not defined
An error is reported:
you cannot write mathematics directly as code; 'n' should be defined or assigned a value.
In [8]: n = 3.14
In [9]: m = n**2 + 2*n + 1
In [10]: m
Out[10]: 17.1396
It works now. What if you insist on combining the two separate lines into one?
There comes lambda
In [13]: j = lambda i: i**2 + 2*i + 1
In [14]: j
Out[14]: <function __main__.<lambda>>
No errors reported.
This is a first glance at lambda: it enables you to type a function into the computer in a single line, just as you would in mathematics.
We will see it later.
Let's continue digging deeper into 'assignment'.
As illustrated above, the equals symbol = works for simple data types (1 and 'value') and simple expressions (n**2 + 2*n + 1).
Try this:
In [15]: x = print('This is a x')
This is a x
In [16]: x
In [17]: x = input('Enter a x: ')
Enter a x: x
It works for simple statements; there are 11 types of them in Python, see 7. Simple statements — Python 3.6.3 documentation
How about a compound statement?
In [18]: m = n**2 + 2*n + 1 if n > 0
SyntaxError: invalid syntax
#or
In [19]: m = n**2 + 2*n + 1, if n > 0
SyntaxError: invalid syntax
Here comes def, which enables it to work:
In [23]: def m(n):
...: if n > 0:
...: return n**2 + 2*n + 1
...:
In [24]: m(2)
Out[24]: 9
Ta-da. Analyse it: 'm' is the name, 'n**2 + 2*n + 1' is the value, and ':' is a variant of '='.
Notice that, if just for understanding, everything starts from assignment and everything is assignment.
Now return to lambda; we have a function named 'm'.
Try:
In [28]: m = m(3)
In [29]: m
Out[29]: 16
There are two names for 'm' here; function m already has a name, so it is duplicated.
It's as if you had formatted it like:
In [27]: m = def m(n):
...: if n > 0:
...: return n**2 + 2*n + 1
SyntaxError: invalid syntax
It's not a smart strategy, so an error is reported.
We have to delete one of them and set a function without a name:
m = lambda n:n**2 + 2*n + 1
It's called an 'anonymous function'.
In conclusion,
*
*lambda is an inline function which enables you to write a function in one straight line, as you do in mathematics
*lambda is anonymous
Hope this helps.
A: Lambda explained for everyone:
Lambda is an anonymous function. This means a lambda is a function object in Python that doesn't require a name beforehand. Let's consider this bit of code here:
def name_of_func():
#command/instruction
print('hello')
print(type(name_of_func)) #the name of the function is a reference
#the reference contains a function Object with command/instruction
To prove my proposition I print out the type of name_of_func, which returns:
<class 'function'>
A function must have an interface, but an interface doesn't need to contain anything. What does this mean? Let's look a little bit closer at our function, and we may notice that beyond the name of the function there are some more details we need to explain to understand what a function is.
A regular function is defined with the syntax "def"; then we type in the name, settle the interface with "()" and end our definition with the syntax ":". Now we enter the function's body with our instructions/commands.
So let's consider this bit of code here:
def print_my_argument(x):
print(x)
print_my_argument('Hello')
In this case we run our function, named "print_my_argument", passing a parameter/argument through the interface. The output will be:
Hello
So now that we know what a function is and how the architecture works for a function, we can take a look at an anonymous function. Let's consider this bit of code here:
def name_of_func():
print('Hello')
lambda: print('Hello')
these function objects are pretty much the same, except for the fact that the upper, regular function has a name and the other function is an anonymous one. Let's take a closer look at our anonymous function to understand how to use it.
So let's consider this bit of code here:
def delete_last_char(arg1=None):
print(arg1[:-1])
string = 'Hello World'
delete_last_char(string)
f = lambda arg1=None: print(arg1[:-1])
f(string)
So what we have done in the above code is to write, once again, a regular function and an anonymous function. We assigned our anonymous function to a var, which is pretty much the same as giving this function a name. Anyway, the output will be:
Hello Worl
Hello Worl
To fully prove that lambda is a function object and doesn't just mimic a function, we run this bit of code here:
string = 'Hello World'
f = lambda arg1=string: print(arg1[:-1])
f()
print(type(f))
and the Output will be:
Hello Worl
<class 'function'>
Last but not least, you should know that every function in Python needs to return something. If nothing is defined in the body of the function, None will be returned by default. Look at this bit of code here:
def delete_last_char(arg1):
print(arg1[:-1])
string = 'Hello World'
x = delete_last_char(string)
f = lambda arg1=string: print(arg1[:-1])
x2 = f()
print(x)
print(x2)
Output will be:
Hello Worl
Hello Worl
None
None
A: You can think of it as an anonymous function - here's some more info: Wikipedia - Anonymous Function
A: I have trouble wrapping my head around lambda expressions because I work in Visual FoxPro, which has macro substitution and the ExecScript() and Evaluate() functions, which seem to serve much the same purpose.
? Calculator(10, 23, "a + b")
? Calculator(10, 23, "a - b");
FUNCTION Calculator(a, b, op)
RETURN Evaluate(op)
One definite benefit to using formal lambdas is (I assume) compile-time checking: Fox won't know if you typo the text string above until it tries to run it.
This is also useful for data-driven code: you can store entire routines in memo fields in the database and then just evaluate them at run-time. This lets you tweak part of the application without actually having access to the source. (But that's another topic altogether.)
A: It is a function that has no name. For example, in C# you can use
numberCollection.GetMatchingItems<int>(number => number > 5);
to return the numbers that are greater than 5.
number => number > 5
is the lambda part here. It represents a function which takes a parameter (number) and returns a boolean value (number > 5). The GetMatchingItems method uses this lambda on all the items in the collection and returns the matching items.
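For context, a method like GetMatchingItems could be implemented roughly like this (a sketch only; GetMatchingItems is not a standard BCL method, and in practice you would just use Enumerable.Where):
// a hypothetical implementation, declared inside a static class;
// Func<T, bool> is the delegate type the lambda compiles to
static IEnumerable<T> GetMatchingItems<T>(this IEnumerable<T> source, Func<T, bool> predicate)
{
    foreach (var item in source)
        if (predicate(item))
            yield return item;
}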
A: In JavaScript, for example, functions are treated as the same mixed type as everything else (int, string, float, bool). As such, you can create functions on the fly, assign them to things, and call them back later. It's useful, but not something you want to overuse or you'll confuse everyone who has to maintain your code after you...
This is some code I was playing with to see how deep this rabbit hole goes:
var x = new Object;
x.thingy = new Array();
x.thingy[0] = function(){ return function(){ return function(){ alert('index 0 pressed'); }; }; }
x.thingy[1] = function(){ return function(){ return function(){ alert('index 1 pressed'); }; }; }
x.thingy[2] = function(){ return function(){ return function(){ alert('index 2 pressed'); }; }; }
for(var i=0 ;i<3; i++)
x.thingy[i]()()();
A: In the context of CS, a lambda function is an abstract mathematical concept that tackles the problem of symbolic evaluation of mathematical expressions. In that context a lambda function is the same as a lambda term.
But in programming languages it's something different. It's a piece of code that is declared "in place", and that can be passed around as a "first-class citizen". This concept appeared to be so useful that it came into almost all popular modern programming languages (see the lambda functions everywhere post).
A: In computer programming, a lambda is a piece of code (a statement, an expression or a group of them) which takes some arguments from an external source. It need not always be an anonymous function - we have many ways to implement them.
We have clear separation between expressions, statements and functions, which mathematicians do not have.
The word "function" in programming is also different - we have "function is a series of steps to do" (from Latin "perform"). In math it is something about correlation between variables.
Functional languages try to be as similar to math formulas as possible, and their words mean almost the same. But in other programming languages it is different.
A: Slightly oversimplified: a lambda function is one that can be passed around to other functions and have its logic accessed.
In C#, lambda syntax is often compiled to simple methods in the same way as anonymous delegates, but it can also be broken down and its logic read.
For instance (in C#3):
LinqToSqlContext.Where(
row => row.FieldName > 15 );
LinqToSql can read that function (x > 15) and convert it to the actual SQL to execute using expression trees.
The statement above becomes:
select ... from [tablename]
where [FieldName] > 15 --this line was 'read' from the lambda function
This is different from normal methods or anonymous delegates (which are just compiler magic really) because they cannot be read.
Not all methods in C# that use lambda syntax can be compiled to expression trees (i.e. actual lambda functions). For instance:
LinqToSqlContext.Where(
row => SomeComplexCheck( row.FieldName ) );
Now the expression tree cannot be read - SomeComplexCheck cannot be broken down. The SQL statement will execute without the where, and every row in the data will be put through SomeComplexCheck.
Lambda functions should not be confused with anonymous methods. For instance:
LinqToSqlContext.Where(
delegate ( DataRow row ) {
return row.FieldName > 15;
} );
This also has an 'inline' function, but this time it's just compiler magic - the C# compiler will split this out to a new instance method with an autogenerated name.
Anonymous methods can't be read, and so the logic can't be translated out as it can for lambda functions.
A: Lambda comes from the Lambda Calculus and refers to anonymous functions in programming.
Why is this cool? It allows you to write quick throw away functions without naming them. It also provides a nice way to write closures. With that power you can do things like this.
Python
def adder(x):
return lambda y: x + y
add5 = adder(5)
add5(1)
6
As you can see from the snippet of Python, the function adder takes in an argument x, and returns an anonymous function, or lambda, that takes another argument y. That anonymous function allows you to create functions from functions. This is a simple example, but it should convey the power lambdas and closures have.
Examples in other languages
Perl 5
sub adder {
my ($x) = @_;
return sub {
my ($y) = @_;
$x + $y
}
}
my $add5 = adder(5);
print &$add5(1) == 6 ? "ok\n" : "not ok\n";
JavaScript
var adder = function (x) {
return function (y) {
return x + y;
};
};
add5 = adder(5);
add5(1) == 6
JavaScript (ES6)
const adder = x => y => x + y;
add5 = adder(5);
add5(1) == 6
Scheme
(define adder
(lambda (x)
(lambda (y)
(+ x y))))
(define add5
(adder 5))
(add5 1)
6
C# 3.5 or higher
Func<int, Func<int, int>> adder =
(int x) => (int y) => x + y; // `int` declarations optional
Func<int, int> add5 = adder(5);
var add6 = adder(6); // Using implicit typing
Debug.Assert(add5(1) == 6);
Debug.Assert(add6(-1) == 5);
// Closure example
int yEnclosed = 1;
Func<int, int> addWithClosure =
(x) => x + yEnclosed;
Debug.Assert(addWithClosure(2) == 3);
Swift
func adder(x: Int) -> (Int) -> Int{
return { y in x + y }
}
let add5 = adder(5)
add5(1)
6
PHP
$a = 1;
$b = 2;
$lambda = fn () => $a + $b;
echo $lambda();
Haskell
(\x y -> x + y)
Java see this post
// The following is an example of Predicate :
// a functional interface that takes an argument
// and returns a boolean primitive type.
Predicate<Integer> pred = x -> x % 2 == 0; // Tests if the parameter is even.
boolean result = pred.test(4); // true
Lua
adder = function(x)
return function(y)
return x + y
end
end
add5 = adder(5)
add5(1) == 6 -- true
Kotlin
val pred = { x: Int -> x % 2 == 0 }
val result = pred(4) // true
Ruby
Ruby is slightly different in that you cannot call a lambda using the exact same syntax as calling a function, but it still has lambdas.
def adder(x)
lambda { |y| x + y }
end
add5 = adder(5)
add5[1] == 6
Ruby being Ruby, there is a shorthand for lambdas, so you can define adder this way:
def adder(x)
-> y { x + y }
end
R
adder <- function(x) {
function(y) x + y
}
add5 <- adder(5)
add5(1)
#> [1] 6
A: A lambda is a type of function, defined inline. Along with a lambda you also usually have some kind of variable type that can hold a reference to a function, lambda or otherwise.
For instance, here's a C# piece of code that doesn't use a lambda:
public Int32 Add(Int32 a, Int32 b)
{
return a + b;
}
public Int32 Sub(Int32 a, Int32 b)
{
return a - b;
}
public delegate Int32 Op(Int32 a, Int32 b);
public void Calculator(Int32 a, Int32 b, Op op)
{
Console.WriteLine("Calculator: op(" + a + ", " + b + ") = " + op(a, b));
}
public void Test()
{
Calculator(10, 23, Add);
Calculator(10, 23, Sub);
}
This calls Calculator, passing along not just two numbers, but which method to call inside Calculator to obtain the results of the calculation.
In C# 2.0 we got anonymous methods, which shortens the above code to:
public delegate Int32 Op(Int32 a, Int32 b);
public void Calculator(Int32 a, Int32 b, Op op)
{
Console.WriteLine("Calculator: op(" + a + ", " + b + ") = " + op(a, b));
}
public void Test()
{
Calculator(10, 23, delegate(Int32 a, Int32 b)
{
return a + b;
});
Calculator(10, 23, delegate(Int32 a, Int32 b)
{
return a - b;
});
}
And then in C# 3.0 we got lambdas which makes the code even shorter:
public delegate Int32 Op(Int32 a, Int32 b);
public void Calculator(Int32 a, Int32 b, Op op)
{
Console.WriteLine("Calculator: op(" + a + ", " + b + ") = " + op(a, b));
}
public void Test()
{
Calculator(10, 23, (a, b) => a + b);
Calculator(10, 23, (a, b) => a - b);
}
A: The question has already been answered formally and well, so I will not try to add more on this.
In very simple, informal words, to someone who knows very little or nothing about math or programming, I would explain it as a small "machine" or "box" that takes some input, does some work and produces some output; it has no particular name, but we know where it is, and by just this knowledge we use it.
Practically speaking, for a person who knows what a function is, I would tell them that it is a function that has no name, usually placed at a point in memory that can be used just by referencing that memory (usually via a variable; if they have heard of the concept of function pointers, I would use them as a similar concept). This answer covers the pretty basic stuff (no mention of closures etc.), but one can get the point easily.
A: The question has been answered fully, so I don't want to go into details. I just want to share how lambdas are used when writing numerical computations in Rust.
Here is an example of a lambda (anonymous function):
let f = |x: f32| -> f32 { x * x - 2.0 };
let df = |x: f32| -> f32 { 2.0 * x };
When I was writing a module for the Newton–Raphson method, they were used as the first- and second-order derivatives. (If you want to know what the Newton–Raphson method is, please visit "https://en.wikipedia.org/wiki/Newton%27s_method".)
The output is the following:
println!("f={:.6} df={:.6}", f(10.0), df(10.0))
f=98.000000 df=20.000000
A: Imagine that you have a restaurant with a delivery option, and you have an order that needs to be done in under 30 minutes. The point is, clients usually don't care whether you send their food by bike, by car or barefoot, as long as you keep the meal warm and tied up. So let's convert this idiom to Javascript with anonymous and defined transportation functions.
Below we define the way of our delivering, aka we give a name to a function:
// ES5
var food = function withBike(kebap, coke) {
return (kebap + coke);
};
What if we used arrow/lambda functions to accomplish this transfer:
// ES6
const food = (kebap, coke) => { return kebap + coke };
You see there is no difference for the client, and no time wasted thinking about how to send the food. Just send it.
Btw, I don't recommend the kebap with coke; this is why the code above will give you errors. Have fun.
A: A lambda function can take any number of arguments, but it contains only a single expression. ...
Lambda functions can be used to return function objects.
Syntactically, lambda functions are restricted to only a single expression.
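A short Python sketch illustrating both points:
# single expression only, and it can return another function object
make_adder = lambda n: (lambda x: x + n)
add3 = make_adder(3)
print(add3(4))  # 7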
| {
"language": "en",
"url": "https://stackoverflow.com/questions/16501",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "840"
} |
Q: Image size for BannerBitmap property in Windows Installer I'm working on a quick setup program in Visual Studio and wanted to change the banner bitmap. Anyone know off-hand what the ideal (or the required) dimensions are for the new banner image? Thanks.
A: Found it on MSDN docs for BannerBitmap Property:
For best results, you should use a bitmap with dimensions of 500 pixels wide by 70 pixels high.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/16517",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "45"
} |
Q: Communication between Javascript and the server I've been developing a "Form Builder" in Javascript, and coming up to the part where I'll be sending the spec for the form back to the server to be stored. The builder maintains an internal data structure that represents the fields, label, options (for select/checkbox/radio), mandatory status, and the general sorting order of the fields.
When I want to send this structure back to the server, which format should I communicate it with?
Also, when restoring a server-saved form back into my Javascript builder, should I load in the data in the same format it sent it with, or should I rebuild the fields using the builder's createField() functions?
A: When making and processing requests with JavaScript, I live and breathe JSON. It's easy to build on the client side and there are tons of parsers for the server side, so both ends get to use their native tongue as much as possible.
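For example, a form spec like the one described in the question could be serialized in one call (the field layout below is just a placeholder; older browsers may need the json2.js library for JSON.stringify):
var formSpec = {
    fields: [
        { type: "text", label: "Name", mandatory: true },
        { type: "select", label: "Color", options: ["Red", "Blue"] }
    ]
};
var payload = JSON.stringify(formSpec); // POST this string to the server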
A: This seems like a perfect scenario for using JSON as a serialization format for the server. If you study a few examples it is not too difficult to understand.
A: Best practice on this dictates that if you are not planning to use the stored data for anything other than recreating the form, then the best method is to send it back in some sort of native format (as mentioned above). You can then just load the data back in, which requires the least processing of any method.
A: I'd implement some sort of custom text serialization and transmit plain text. As you say, you can rebuild the information by doing the reverse process.
A: There are a lot of people who will push JSON. It's a lot lighter weight than XML. Personally, I find XML to be a little more standard, though. You'll have trouble finding a server-side technology that doesn't support XML. And JavaScript supports it just fine also.
You could also go a completely different route. Since you'll only be sending information back when the form design is complete, you could do it with a form submit, for a bunch of hidden fields. Create your hidden fields using JavaScript and set the values as needed.
This would probably be the best solution if didn't want to deal with JSON/XML at all.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/16529",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Best .NET build tool
Possible Duplicate:
NAnt or MSBuild, which one to choose and when?
What is the best build tool for .NET?
I currently use NAnt but only because I have experience with Ant. Is MSBuild preferred?
A: I use MSBuild completely for building. Here's my generic MSBuild script that searches the tree for .csproj files and builds them:
<Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003" DefaultTargets="Build">
<UsingTask AssemblyFile="$(MSBuildProjectDirectory)\bin\xUnit\xunitext.runner.msbuild.dll" TaskName="XunitExt.Runner.MSBuild.xunit"/>
<PropertyGroup>
<Configuration Condition="'$(Configuration)'==''">Debug</Configuration>
<DeployDir>$(MSBuildProjectDirectory)\Build\$(Configuration)</DeployDir>
<ProjectMask>$(MSBuildProjectDirectory)\**\*.csproj</ProjectMask>
<ProjectExcludeMask></ProjectExcludeMask>
<TestAssembliesIncludeMask>$(DeployDir)\*.Test.dll</TestAssembliesIncludeMask>
</PropertyGroup>
<ItemGroup>
<ProjectFiles Include="$(ProjectMask)" Exclude="$(ProjectExcludeMask)"/>
</ItemGroup>
<Target Name="Build" DependsOnTargets="__Compile;__Deploy;__Test"/>
<Target Name="Clean">
<MSBuild Projects="@(ProjectFiles)" Targets="Clean"/>
<RemoveDir Directories="$(DeployDir)"/>
</Target>
<Target Name="Rebuild" DependsOnTargets="Clean;Build"/>
<!--
===== Targets that are meant for use only by MSBuild =====
-->
<Target Name="__Compile">
<MSBuild Projects="@(ProjectFiles)" Targets="Build">
<Output TaskParameter="TargetOutputs" ItemName="AssembliesBuilt"/>
</MSBuild>
<CreateItem Include="@(AssembliesBuilt -> '%(RootDir)%(Directory)*')">
<Output TaskParameter="Include" ItemName="DeployFiles"/>
</CreateItem>
</Target>
<Target Name="__Deploy">
<MakeDir Directories="$(DeployDir)"/>
<Copy SourceFiles="@(DeployFiles)" DestinationFolder="$(DeployDir)"/>
<CreateItem Include="$(TestAssembliesIncludeMask)">
<Output TaskParameter="Include" ItemName="TestAssemblies"/>
</CreateItem>
</Target>
<Target Name="__Test">
<xunit Assembly="@(TestAssemblies)"/>
</Target>
</Project>
(Sorry if it's a little dense. Markdown seems to be stripping out the blank lines.)
It's pretty simple, though, once you understand the concepts, and all the dependencies are handled automatically. I should note that we use Visual Studio project files, which have a lot of logic built into them, but this system allows people to build almost identically either within the Visual Studio IDE or at the command line, and it still gives you the flexibility of adding things to the canonical build, like the xUnit testing you see in the script above.
The one PropertyGroup is where all the configuration happens and things can be customized, like excluding certain projects from the build or adding new test assembly masks.
The ItemGroup is where the logic happens that finds all the .csproj files in the tree.
Then there are the targets, which most people familiar with make, nAnt or MSBuild should be able to follow. If you call the Build target, it calls __Compile, __Deploy and __Test. The Clean target calls MSBuild on all the project files for them to clean up their directories and then the global deployment directory is deleted. Rebuild calls Clean and then Build.
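Invoking the script from the command line is then just (assuming you saved it as Build.proj; use whatever file name you chose):
msbuild Build.proj /t:Rebuild /p:Configuration=Release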
A: There is another new build tool (a very intelligent wrapper) called NUBuild. It's lightweight, open source and extremely easy to set up, and it provides almost no-touch maintenance. I really like this new tool, and we have made it a standard tool for our continuous build and integration of our projects (we have about 400 projects across 75 developers). Try it out.
http://nubuild.codeplex.com/
*
*Easy to use command line interface
*Ability to target all .NET Framework
versions, that is, 1.1, 2.0, 3.0 and 3.5
*Supports XML based configuration
*Supports both project and file
references
*Automatically generates the “complete
ordered build list” for a given
project – No touch maintenance.
*Ability to detect and display
circular dependencies
*Perform parallel build -
automatically decides which of the
projects in the generated build list
can be built independently.
*Ability to handle proxy assemblies
*Provides a visual clue to the build
process, for example, showing “% completed”,
“current status”, etc.
*Generates detailed execution log both
in XML and text format
*Easily integrated with
CruiseControl.NET continuous
integration system
*Can use custom logger like XMLLogger
when targeting 2.0 + version
*Ability to parse error logs
*Ability to deploy built assemblies to
user specified location
*Ability to synchronize source code
with source-control system
*Version management capability
A: Rake and Albacore is an excellent combination. The power of Ruby and no XML.
.NET Open Source 5 - .NET Automation with Rake and Albacore by Liam McLennan [Tekpub.com]
A: We use MSBuild, because we started with Visual Studio 2005 (now Visual Studio 2008), and MSBuild was already "built in" to the SDK - there is less maintenance on the build server. It's a NAnt clone, really - both tools are infinitely flexible in that they let you create custom build tasks in code, and both have a decent set of community build tasks already created.
*
*MSBuild Community Tasks
*NAntContrib
A: We're using Bounce, a framework for cleaner build scripts in C#.
A: We actually use a combination of NAnt and MSBuild with CruiseControl. NAnt is used for script flow control and calls MSBuild to compile projects. After the physical build is triggered, NAnt is used to publish the individual project build outputs to a shared location.
I am not sure this is the best process. I think many of us are still looking for a great build tool. One promising thing I heard recently on .NET Rocks, episode 362, is James Kovacs' psake, a build system based entirely on PowerShell. It sounds really promising, since what you can do with PowerShell is fairly limitless in theory.
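For a flavour of what that looks like, a tiny psake-style script might be the following (a sketch only; the task, solution and property names are illustrative):
properties { $config = 'Release' }
task default -depends Build
task Build {
    exec { msbuild .\MySolution.sln /p:Configuration=$config }
}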
A: I use a commercial product, Automated Build Studio, for builds.
A: I'd just like to throw FinalBuilder in to the mix. It's not free, but if you're fed up with editing XML files and want a somewhat nicer (IMO) environment to work in I would give it a go.
I've worked with all of them and have always gone back to FinalBuilder.
A: I've used both and prefer NAnt. It's really hard for me to say one is "better" than the other.
A: I have used both MSBuild and NAnt, and I much prefer MSBuild, mainly because it requires a lot less configuration by default. Although you can over-complicate things and load MSBuild down with a lot of configuration junk too, at its simplest, you can just point it at a solution/project file and have it go which, most of the time, for most cases, is enough.
A: It also depends on what you're building. The MSBuild SDC Task library has a couple of special tasks. For example, for AD, BizTalk, etc.
There are over 300 tasks included in
this library including tasks for:
creating websites, creating
application pools, creating
ActiveDirectory users, running FxCop,
configuring virtual servers, creating
zip files, configuring COM+, creating
folder shares, installing into the
GAC, configuring SQL Server,
configuring BizTalk 2004 and BizTalk
2006, etc.
A: Using a dynamic scripting language like Python, BOO, Ruby, etc. to create and maintain build scripts might be a good alternative to an XML based one like NAnt. (They tend to be cleaner to read than XML.)
A: Generally speaking, I get the impression that NAnt offers more flexibility compared to MSBuild, whereas (with my relatively simple needs) I've been fine with the latter so far.
A: UppercuT uses NAnt to build and it is the insanely easy to use Build Framework.
Automated Builds as easy as (1) solution name, (2) source control path, (3) company name for most projects!
http://projectuppercut.org/
Some good explanations here: UppercuT
| {
"language": "en",
"url": "https://stackoverflow.com/questions/16550",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "41"
} |
Q: VS.NET Application Diagrams Have you used VS.NET Architect Edition's Application and System diagrams to start designing a solution?
If so, did you find it useful?
Did the "automatic implementation" feature work ok?
A: I used to use it a lot. This designer worked well for stubbing out prototype projects, but ultimately I found myself wasting a lot of time moving the mouse around when I could have been typing. It seemed like an awesome idea to be able to print out the class diagrams to show APIs to other developers while I was prototyping, but it proved quite limiting, and it looks awful on a non-color printer.
Now I just use the text editor and some AutoHotkey macros to get everything done.
A: Yes, and no, it's not very useful in my opinion. It's not very stable, it's easy to get out of sync, and the "look how fast I generate this" advantage is virtually nil when compared to more mundane things such as code snippets.
Then again, I am a total "Architect" luddite, so take this with a grain of salt.
A: I agree with Stu, and I don't consider myself an Architect luddite :-). Kind of like a lot of MS frameworks over the years, you are tied to their particular way of thinking, which doesn't always gel with the ideas that come out of the rest of the architecture community at large. Generating stubs, in my opinion, doesn't really add that much value, and the round-trip half of the equation has messed up some of my project files and made me have to rewrite things manually.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/16556",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: Overriding the equals method vs creating a new method I have always thought that the .equals() method in java should be overridden to be made specific to the class you have created. In other words to look for equivalence of two different instances rather than two references to the same instance. However I have encountered other programmers who seem to think that the default object behavior should be left alone and a new method created for testing equivalence of two objects of the same class.
What are the arguments for and against overriding the equals method?
A: I would highly recommend picking up a copy of Effective Java and reading through Item 7 on obeying the equals contract. You need to be careful if you are overriding equals for mutable objects, as many of the collections such as Maps and Sets use equals to determine equivalence, and mutating an object contained in a collection could lead to unexpected results. Brian Goetz also has a pretty good overview of implementing equals and hashCode.
A: You should "never" override equals & getHashCode for mutable objects - this goes for .net and Java both. If you do, and use such an object as the key in f.ex a dictionary and then change that object, you'll be in trouble because the dictionary relies on the hashcode to find the object.
Here's a good article on the topic: http://weblogs.asp.net/bleroy/archive/2004/12/15/316601.aspx
A: @David Schlosnagle mentions Josh Bloch's Effective Java -- this is a must-read for any Java developer.
There is a related issue: for immutable value objects, you should also consider overriding compare_to. The standard wording for if they differ is in the Comparable API:
It is generally the case, but not strictly required that (compare(x, y)==0) == (x.equals(y)). Generally speaking, any comparator that violates this condition should clearly indicate this fact. The recommended language is "Note: this comparator imposes orderings that are inconsistent with equals."
A: Overriding the equals method is necessary if you want to test equivalence in standard library classes (for example, ensuring a java.util.Set contains unique elements or using objects as keys in java.util.Map objects).
Note, if you override equals, ensure you honour the API contract as described in the documentation. For example, ensure you also override Object.hashCode:
If two objects are equal according to
the equals(Object) method, then
calling the hashCode method on each of
the two objects must produce the same
integer result.
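For illustration, a minimal sketch of an immutable class that honours that contract (both methods derive from the same fields, so equal objects always share a hash code):
public final class Point {
    private final int x, y;

    public Point(int x, int y) { this.x = x; this.y = y; }

    @Override public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof Point)) return false;
        Point p = (Point) o;
        return x == p.x && y == p.y;
    }

    @Override public int hashCode() {
        return 31 * x + y;
    }
}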
EDIT: I didn't post this as a complete answer on the subject, so I'll echo Fredrik Kalseth's statement that overriding equals works best for immutable objects. To quote the API for Map:
Note: great care must be exercised if
mutable objects are used as map keys.
The behavior of a map is not specified
if the value of an object is changed
in a manner that affects equals
comparisons while the object is a key
in the map.
A: The Equals method is intended to compare references, so it should not be overridden to change its behaviour.
You should create a new method to test for equivalence in different instances if you need to (or use the CompareTo method in some .NET classes)
A: You should only need to override the equals() method if you want specific behaviour when adding objects to sorted data structures (SortedSet etc.)
When you do that you should also override hashCode().
See here for a complete explanation.
A: To be honest, in Java there is not really an argument against overriding equals. If you need to compare instances for equality, then that is what you do.
As mentioned above, you need to be aware of the contract with hashCode, and similarly, watch out for the gotchas around the Comparable interface - in almost all situations you want the natural ordering as defined by Comparable to be consistent with equals (see the BigDecimal api doc for the canonical counter example)
Creating a new method for deciding equality, quite apart from not working with the existing library classes, flies in the face of Java convention somewhat.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/16557",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "21"
} |
Q: How to write regex to return only certain parts of this string? So I'm working on a project that will allow users to enter poker hand histories from sites like PokerStars and then display the hand to them.
It seems that regex would be a great tool for this, however I rank my regex knowledge at "slim to none".
So I'm using PHP and looping through this block of text line by line and on lines like this:
Seat 1: fabulous29 (835 in chips)
Seat 2: Nioreh_21 (6465 in chips)
Seat 3: Big Loads (3465 in chips)
Seat 4: Sauchie (2060 in chips)
I want to extract seat number, name, & chip count so the format is
Seat [number]: [letters&numbers&characters] ([number] in chips)
I have NO IDEA where to start or what commands I should even be using to optimize this.
Any advice is greatly appreciated - even if it is just a link to a tutorial on PHP regex or the name of the command(s) I should be using.
A: I'm not entirely sure what exactly to use for that without trying it, but a great tool I use all the time to validate my RegEx is RegExr which gives a great flash interface for trying out your regex, including real time matching and a library of predefined snippets to use. Definitely a great time saver :)
A: Something like this might do the trick:
/Seat (\d+): ([^\(]+) \((\d+) in chips\)/
And some basic explanation on how Regex works:
*
*\d = digit.
*\<character> = escapes the character, if it is not part of any character class or subexpression. For example:
\t
would render a tab, while \\t would render "\t" (since the backslash is escaped).
*+ = one or more of the preceding element.
** = zero or more of the preceding element.
*[ ] = bracket expression. Matches any of the characters within the bracket. Also works with ranges (ex. A-Z).
*[^ ] = Matches any character that is NOT within the bracket.
*( ) = Marked subexpression. The data matched within this can be recalled later.
Anyway, I chose to use
([^\(]+)
since the example provides a name containing spaces (Seat 3 in the example). What this does is match any character up to the point where it encounters an opening parenthesis.
This will leave you with a blank space at the end of the subexpression (using the data provided in the example). However, this can easily be stripped away using the trim() command in PHP.
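In PHP that is a one-liner (assuming the name is the second capture group, as in the expression above):
$name = trim($matches[2]);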
If you do not want to match spaces, only alphanumerical characters, you could do something like this:
([A-Za-z0-9_-]+)
Which would match any letter (within A-Z, both upper- & lower-case), number as well as hyphens and underscores.
Or the same variant, with spaces:
([A-Za-z0-9_\s-]+)
Where "\s" is evaluated into a space.
Hope this helps :)
A: Look at the PCRE section in the PHP Manual. Also, http://www.regular-expressions.info/ is a great site for learning regex. Disclaimer: Regex is very addictive once you learn it.
A: I always use the preg_ set of functions for regex in PHP because the Perl-compatible expressions have much more capability. That extra capability doesn't necessarily come into play here, but they are also supposed to be faster, so why not use them anyway, right?
For an expression, try this:
/Seat (\d+): ([^ ]+) \((\d+)/
You can use preg_match() on each line, storing the results in an array. You can then get at those results and manipulate them as you like.
EDIT:
Btw, you could also run preg_match_all on the entire block of text (instead of looping through line-by-line) and get the results that way, too.
A: Check out preg_match.
Probably looking for something like...
<?php
$str = 'Seat 1: fabulous29 (835 in chips)';
preg_match('/Seat (?<seatNo>\d+): (?<name>\w+) \((?<chipCnt>\d+) in chips\)/', $str, $matches);
print_r($matches);
?>
*It's been a while since I did php, so this could be a little or a lot off.*
A: Maybe it is a very late answer, but I am interested in answering:
Seat\s(\d):\s([\w\s]+)\s\((\d+).*\)
http://regex101.com/r/cU7yD7/1
A: Here's what I'm currently using:
preg_match("/(Seat \d+: [A-Za-z0-9 _-]+) \((\d+) in chips\)/",$line)
A: To process the whole input string (here, $input) at once, use preg_match_all():
preg_match_all('/Seat (\d+): \w+ \((\d+) in chips\)/', $input, $matches);
For your input string, var_dump of $matches will look like this:
array
0 =>
array
0 => string 'Seat 1: fabulous29 (835 in chips)' (length=33)
1 => string 'Seat 2: Nioreh_21 (6465 in chips)' (length=33)
2 => string 'Seat 4: Sauchie (2060 in chips)' (length=31)
1 =>
array
0 => string '1' (length=1)
1 => string '2' (length=1)
2 => string '4' (length=1)
2 =>
array
0 => string '835' (length=3)
1 => string '6465' (length=4)
2 => string '2060' (length=4)
On learning regex: Get Mastering Regular Expressions, 3rd Edition. Nothing else comes close to the this book if you really want to learn regex. Despite being the definitive guide to regex, the book is very beginner friendly.
A: Try this code. It works for me
Let say that you have below lines of strings
$string1 = "Seat 1: fabulous29 (835 in chips)";
$string2 = "Seat 2: Nioreh_21 (6465 in chips)";
$string3 = "Seat 3: Big Loads (3465 in chips)";
$string4 = "Seat 4: Sauchie (2060 in chips)";
Add to array
$lines = array($string1,$string2,$string3,$string4);
foreach ($lines as $line) {
    $seatArray = explode(":", $line);
    $seat = explode(" ", $seatArray[0]);
    $seatNumber = $seat[1];
    $usernameArray = explode("(", $seatArray[1]);
    $username = trim($usernameArray[0]);
    $chipArray = explode(" ", $usernameArray[1]);
    $chipNumber = $chipArray[0];
    echo "<br>" . "Seat [" . $seatNumber . "]: [" . $username . "] ([" . $chipNumber . "] in chips)";
}
A: Seat [number]: [letters&numbers&characters] ([number] in chips)
Your Regex should look something like this
Seat (\d+): ([a-zA-Z0-9]+) \((\d+) in chips\)
The brackets will let you capture the seat number, name and number of chips in groups.
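As an illustration (the variable names here are mine, not from the question), applying it with preg_match might look like:
$line = 'Seat 1: fabulous29 (835 in chips)';
if (preg_match('/Seat (\d+): ([a-zA-Z0-9]+) \((\d+) in chips\)/', $line, $m)) {
    // $m[1] = seat, $m[2] = name, $m[3] = chips
    // Note: names containing spaces or underscores would need a wider
    // character class than the one shown above, e.g. ([a-zA-Z0-9_ ]+).
    echo $m[2] . " is at seat " . $m[1] . " with " . $m[3] . " chips";
}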
A: You'll have to split the file by line breaks, then loop through each line and apply the following logic:
// $matches[0] holds the full match, so the capture groups start at index 1.
$seat = 1;
$name = 2;
$chips = 3;
foreach ($file as $string) {
    if (preg_match('/Seat ([0-9]+): ([A-Za-z_0-9]*) \(([0-9]+) in chips\)/', $string, $matches)) {
        echo "Seat: " . $matches[$seat] . "<br>";
        echo "Name: " . $matches[$name] . "<br>";
        echo "Chips: " . $matches[$chips] . "<br>";
    }
}
I haven't run this code, so you may have to fix some errors...
| {
"language": "en",
"url": "https://stackoverflow.com/questions/16563",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: How to select the nth row in a SQL database table? I'm interested in learning some (ideally) database agnostic ways of selecting the nth row from a database table. It would also be interesting to see how this can be achieved using the native functionality of the following databases:
*
*SQL Server
*MySQL
*PostgreSQL
*SQLite
*Oracle
I am currently doing something like the following in SQL Server 2005, but I'd be interested in seeing others' more agnostic approaches:
WITH Ordered AS (
SELECT ROW_NUMBER() OVER (ORDER BY OrderID) AS RowNumber, OrderID, OrderDate
FROM Orders)
SELECT *
FROM Ordered
WHERE RowNumber = 1000000
Credit for the above SQL: Firoz Ansari's Weblog
Update: See Troels Arvin's answer regarding the SQL standard. Troels, have you got any links we can cite?
A:
SQL SERVER
Select the n'th record from the top:
SELECT * FROM (
SELECT
ID, NAME, ROW_NUMBER() OVER(ORDER BY ID) AS ROW
FROM TABLE
) AS TMP
WHERE ROW = n
Select the n'th record from the bottom:
SELECT * FROM (
SELECT
ID, NAME, ROW_NUMBER() OVER(ORDER BY ID DESC) AS ROW
FROM TABLE
) AS TMP
WHERE ROW = n
A: When we used to work in MSSQL 2000, we did what we called the "triple-flip":
EDITED
DECLARE @InnerPageSize int
DECLARE @OuterPageSize int
DECLARE @Count int
SELECT @Count = COUNT(<column>) FROM <TABLE>
SET @InnerPageSize = @PageNum * @PageSize
SET @OuterPageSize = @Count - ((@PageNum - 1) * @PageSize)
IF (@OuterPageSize < 0)
SET @OuterPageSize = 0
ELSE IF (@OuterPageSize > @PageSize)
SET @OuterPageSize = @PageSize
DECLARE @sql NVARCHAR(8000)
SET @sql = 'SELECT * FROM
(
SELECT TOP ' + CAST(@OuterPageSize AS nvarchar(5)) + ' * FROM
(
SELECT TOP ' + CAST(@InnerPageSize AS nvarchar(5)) + ' * FROM <TABLE> ORDER BY <column> ASC
) AS t1 ORDER BY <column> DESC
) AS t2 ORDER BY <column> ASC'
PRINT @sql
EXECUTE sp_executesql @sql
It wasn't elegant, and it wasn't fast, but it worked.
A: Here is a quick solution to your problem.
SELECT * FROM table ORDER BY `id` DESC LIMIT N, 1
Here you may get the last row with N=0, the second-to-last with N=1, the fourth-to-last with N=3, and so on.
This is a very common interview question, and this is a very simple answer to it.
Further, if you want an amount, ID, or some other numeric sorting order, you may use the CAST function in MySQL.
SELECT DISTINCT (`amount`)
FROM cart
ORDER BY CAST( `amount` AS SIGNED ) DESC
LIMIT 4 , 1
Here, by setting N = 4, you will be able to get the fifth-to-last record by highest amount from the CART table. You can fill in your own field and table names and come up with a solution.
A: In Oracle 12c, you may use the OFFSET..FETCH..ROWS option with ORDER BY.
For example, to get the 3rd record from top:
SELECT *
FROM sometable
ORDER BY column_name
OFFSET 2 ROWS FETCH NEXT 1 ROWS ONLY;
A: ADD:
LIMIT n,1
That will limit the results to one row, skipping the first n rows (so you get the (n+1)th row overall).
A: Oracle (note that WHERE ROWNUM = x only ever matches when x is 1, so the row number has to be captured in a subquery first):
select * from (select foo, rownum rn from (select foo from bar order by foo)) where rn = x
A: There are ways of doing this in optional parts of the standard, but a lot of databases support their own way of doing it.
A really good site that talks about this and other things is http://troels.arvin.dk/db/rdbms/#select-limit.
Basically, PostgreSQL and MySQL supports the non-standard:
SELECT...
LIMIT y OFFSET x
Oracle, DB2 and MSSQL supports the standard windowing functions:
SELECT * FROM (
SELECT
ROW_NUMBER() OVER (ORDER BY key ASC) AS rownumber,
columns
FROM tablename
) AS foo
WHERE rownumber <= n
(which I just copied from the site linked above since I never use those DBs)
Update: As of PostgreSQL 8.4 the standard windowing functions are supported, so expect the second example to work for PostgreSQL as well.
Update: SQLite added window functions support in version 3.25.0 on 2018-09-15 so both forms also work in SQLite.
A: I'm not sure about any of the rest, but I know SQLite and MySQL don't have any "default" row ordering. In those two dialects, at least, the following snippet grabs the 15th entry from the_table, sorting by the date/time it was added:
SELECT *
FROM the_table
ORDER BY added DESC
LIMIT 14,1
(LIMIT takes the offset first and the row count second, so skipping 14 rows returns the 15th.)
(of course, you'd need to have an added DATETIME field, and set it to the date/time that entry was added...)
A: For SQL Server, a generic way to go by row number is as such:
SET ROWCOUNT @row --@row = the row number you wish to work on.
For Example:
set rowcount 20 --sets row to 20th row
select meat, cheese from dbo.sandwich --select columns from table at 20th row
set rowcount 0 --sets rowcount back to all rows
This caps the result set at 20 rows, so the 20th row is the last one returned. Be sure to set the rowcount back to 0 afterward.
A: For example, if you want to select every 10th row in MSSQL, you can use:
SELECT * FROM (
SELECT
ROW_NUMBER() OVER (ORDER BY ColumnName1 ASC) AS rownumber, ColumnName1, ColumnName2
FROM TableName
) AS foo
WHERE rownumber % 10 = 0
Just take the MOD and change the number 10 here to any number you want.
A: SQL 2005 and above has this feature built-in. Use the ROW_NUMBER() function. It is excellent for web-pages with a << Prev and Next >> style browsing:
Syntax:
SELECT
*
FROM
(
SELECT
ROW_NUMBER () OVER (ORDER BY MyColumnToOrderBy) AS RowNum,
*
FROM
Table_1
) sub
WHERE
RowNum = 23
A: Here's a generic version of a sproc I recently wrote for Oracle that allows for dynamic paging/sorting - HTH
-- p_LowerBound = first row # in the returned set; if second page of 10 rows,
-- this would be 11 (-1 for unbounded/not set)
-- p_UpperBound = last row # in the returned set; if second page of 10 rows,
-- this would be 20 (-1 for unbounded/not set)
OPEN o_Cursor FOR
SELECT * FROM (
SELECT
Column1,
Column2,
rownum AS rn
FROM
(
SELECT
tbl.Column1,
tbl.column2
FROM MyTable tbl
WHERE
tbl.Column1 = p_PKParam OR
tbl.Column1 = -1
ORDER BY
DECODE(p_sortOrder, 'A', DECODE(p_sortColumn, 1, Column1, 'X'),'X'),
DECODE(p_sortOrder, 'D', DECODE(p_sortColumn, 1, Column1, 'X'),'X') DESC,
DECODE(p_sortOrder, 'A', DECODE(p_sortColumn, 2, Column2, sysdate),sysdate),
DECODE(p_sortOrder, 'D', DECODE(p_sortColumn, 2, Column2, sysdate),sysdate) DESC
))
WHERE
(rn >= p_lowerBound OR p_lowerBound = -1) AND
(rn <= p_upperBound OR p_upperBound = -1);
A: But really, isn't all this really just parlor tricks for good database design in the first place? The few times I needed functionality like this it was for a simple one off query to make a quick report. For any real work, using tricks like these is inviting trouble. If selecting a particular row is needed then just have a column with a sequential value and be done with it.
A: Nothing fancy, no special functions, in case you use Caché like I do...
SELECT TOP 1 * FROM (
SELECT TOP n * FROM <table>
ORDER BY ID Desc
)
ORDER BY ID ASC
Given that you have an ID column or a datestamp column you can trust.
A: For SQL Server, the following will return the first row from a given table.
declare @rowNumber int = 1;
select TOP(@rowNumber) * from [dbo].[someTable];
EXCEPT
select TOP(@rowNumber - 1) * from [dbo].[someTable];
You can loop through the values with something like this:
WHILE @constVar > 0
BEGIN
declare @rowNumber int = @constVar;
select TOP(@rowNumber) * from [dbo].[someTable];
EXCEPT
select TOP(@rowNumber - 1) * from [dbo].[someTable];
SET @constVar = @constVar - 1;
END;
A: I suspect this is wildly inefficient, but it is quite a simple approach that worked on a small dataset I tried it on.
select top 1 field
from table
where field in (select top 5 field from table order by field asc)
order by field desc
This would get the 5th item; change the second TOP number to get a different nth item.
SQL Server only (I think), but it should work on older versions that do not support ROW_NUMBER().
A: Contrary to what some of the answers claim, the SQL standard is not silent regarding this subject.
Since SQL:2003, you have been able to use "window functions" to skip rows and limit result sets.
And in SQL:2008, a slightly simpler approach had been added, using
OFFSET skip ROWS
FETCH FIRST n ROWS ONLY
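For example, fetching the millionth row from the question's Orders table might look like this (an untested sketch):
SELECT OrderID, OrderDate
FROM Orders
ORDER BY OrderID
OFFSET 999999 ROWS
FETCH FIRST 1 ROWS ONLY;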
Personally, I don't think that SQL:2008's addition was really needed, so if I were ISO, I would have kept it out of an already rather large standard.
A: Verify it on SQL Server:
Select top 10 * From emp
EXCEPT
Select top 9 * From emp
This will give you the 10th row of the emp table!
A: 1 small change: n-1 instead of n.
select *
from thetable
limit n-1, 1
A: PostgreSQL supports windowing functions as defined by the SQL standard, but they're awkward, so most people use (the non-standard) LIMIT / OFFSET:
SELECT
*
FROM
mytable
ORDER BY
somefield
LIMIT 1 OFFSET 20;
This example selects the 21st row. OFFSET 20 is telling Postgres to skip the first 20 records. If you don't specify an ORDER BY clause, there's no guarantee which record you will get back, which is rarely useful.
A: LIMIT n,1 doesn't work in MS SQL Server. I think it's just about the only major database that doesn't support that syntax. To be fair, it isn't part of the SQL standard, although it is so widely supported that it should be. In everything except SQL server LIMIT works great. For SQL server, I haven't been able to find an elegant solution.
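If you can rely on SQL Server 2012 or later, though, the standard OFFSET/FETCH clause should fill the gap (a sketch with placeholder names):
SELECT *
FROM TableName
ORDER BY ColumnName
OFFSET 9 ROWS           -- skip the first 9 rows
FETCH NEXT 1 ROWS ONLY; -- return the 10th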
A: In Sybase SQL Anywhere:
SELECT TOP 1 START AT n * from table ORDER BY whatever
Don't forget the ORDER BY or it's meaningless.
A: SELECT * FROM emp a
WHERE n = (
    SELECT COUNT(_rowid)
    FROM emp b
    WHERE a._rowid >= b._rowid
);
A: T-SQL - Selecting N'th RecordNumber from a Table
select * from
(select row_number() over (order by Rand() desc) as Rno,* from TableName) T where T.Rno = RecordNumber
Where RecordNumber --> Record Number to Select
TableName --> To be Replaced with your Table Name
For example, to select the 5th record from an Employee table, your query should be
select * from
(select row_number() over (order by Rand() desc) as Rno,* from Employee) T where T.Rno = 5
A: SELECT
top 1 *
FROM
table_name
WHERE
column_name IN (
SELECT
top N column_name
FROM
TABLE
ORDER BY
column_name
)
ORDER BY
column_name DESC
I've written this query for finding Nth row.
Example with this query would be
SELECT
top 1 *
FROM
Employee
WHERE
emp_id IN (
SELECT
top 7 emp_id
FROM
Employee
ORDER BY
emp_id
)
ORDER BY
emp_id DESC
A: I'm a bit late to the party here but I have done this without the need for windowing or using
WHERE x IN (...)
SELECT TOP 1
--select the value needed from t1
[col2]
FROM
(
SELECT TOP 2 --the Nth row, alter this to taste
UE2.[col1],
UE2.[col2],
UE2.[date],
UE2.[time],
UE2.[UID]
FROM
[table1] AS UE2
WHERE
UE2.[col1] = ID --this is a subquery
AND
UE2.[col2] IS NOT NULL
ORDER BY
UE2.[date] DESC, UE2.[time] DESC --sorting by date and time newest first
) AS t1
ORDER BY t1.[date] ASC, t1.[time] ASC --this reverses the order of the sort in t1
It seems to work fairly fast, although to be fair I only have around 500 rows of data.
This works in MSSQL.
A: Unbelievable that you can find a SQL engine executing this one ...
WITH sentence AS
(SELECT
stuff,
row = ROW_NUMBER() OVER (ORDER BY Id)
FROM
SentenceType
)
SELECT
sen.stuff
FROM sentence sen
WHERE sen.row = (ABS(CHECKSUM(NEWID())) % 100) + 1
A: This is how I'd do it within DB2 SQL; I believe the RRN (relative record number) is stored within the table by the O/S:
SELECT * FROM (
SELECT RRN(FOO) AS RRN, FOO.*
FROM FOO
ORDER BY RRN(FOO)) BAR
WHERE BAR.RRN = recordnumber
A: select * from
(select * from ordered order by order_id limit 100) x order by
x.order_id desc limit 1;
First select the top 100 rows by ordering ascending, and then select the last row by ordering descending and limiting to 1. However, this is a very expensive statement as it accesses the data twice.
A: It seems to me that, to be efficient, you need to 1) generate a random number between 0 and one less than the number of database records, and 2) be able to select the row at that position. Unfortunately, different databases have different random number generators and different ways to select a row at a position in a result set - usually you specify how many rows to skip and how many rows you want, but it's done differently for different databases. Here is something that works for me in SQLite:
select *
from Table
limit abs(random()) % (select count(*) from Table), 1;
It does depend on being able to use a subquery in the limit clause (which in SQLite is LIMIT <recs to skip>,<recs to take>) Selecting the number of records in a table should be particularly efficient, being part of the database's meta data, but that depends on the database's implementation. Also, I don't know if the query will actually build the result set before retrieving the Nth record, but I would hope that it doesn't need to. Note that I'm not specifying an "order by" clause. It might be better to "order by" something like the primary key, which will have an index - getting the Nth record from an index might be faster if the database can't get the Nth record from the database itself without building the result set.
A: The most suitable answer I have seen in this thread for SQL Server:
WITH myTableWithRows AS (
SELECT (ROW_NUMBER() OVER (ORDER BY myTable.SomeField)) as row,*
FROM myTable)
SELECT * FROM myTableWithRows WHERE row = 3
A: If you want to look at native functionalities:
In MySQL, PostgreSQL, SQLite, and Oracle (basically everything except SQL Server, which doesn't seem to have this function) you could actually use the NTH_VALUE window function.
Oracle Source: Oracle Functions: NTH_VALUE
I've actually experimented with this in our Oracle DB to do some comparing of the first row (after ordering) to the second row (again, after ordering).
The code would look similar to this (in case you don't want to go to the link):
SELECT DISTINCT dept_id
, NTH_VALUE(salary,2) OVER (PARTITION BY dept_id ORDER BY salary DESC
RANGE BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING)
AS "SECOND HIGHEST"
, NTH_VALUE(salary,3) OVER (PARTITION BY dept_id ORDER BY salary DESC
RANGE BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING)
AS "THIRD HIGHEST"
FROM employees
WHERE dept_id in (10,20)
ORDER
BY dept_id;
I've found it quite interesting and I wish they'd let me use it.
A: WITH r AS (
SELECT TOP 1000 * FROM emp
)
SELECT * FROM r
EXCEPT
SELECT TOP 999 * FROM r
This will give the 1000th row in SQL Server.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/16568",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "480"
} |
Q: How do you deal with transport-level errors in SqlConnection? Every now and then in a high volume .NET application, you might see this exception when you try to execute a query:
System.Data.SqlClient.SqlException: A transport-level error has
occurred when sending the request to the server.
According to my research, this is something that "just happens" and not much can be done to prevent it. It does not happen as a result of a bad query, and generally cannot be duplicated. It just crops up maybe once every few days in a busy OLTP system when the TCP connection to the database goes bad for some reason.
I am forced to detect this error by parsing the exception message, and then retrying the entire operation from scratch, to include using a new connection. None of that is pretty.
Anybody have any alternate solutions?
A: I posted an answer on another question on another topic that might have some use here. That answer involved SMB connections, not SQL. However it was identical in that it involved a low-level transport error.
What we found was that in a heavy load situation, it was fairly easy for the remote server to time out connections at the TCP layer simply because the server was busy. Part of the reason was the defaults for how many times TCP will retransmit data on Windows weren't appropriate for our situation.
Take a look at the registry settings for tuning TCP/IP on Windows. In particular you want to look at TcpMaxDataRetransmissions and maybe TcpMaxConnectRetransmissions. These default to 5 and 2 respectively, try upping them a little bit on the client system and duplicate the load situation.
Don't go crazy! TCP doubles the timeout with each successive retransmission, so the timeout behavior for bad connections can go exponential on you if you increase these too much. As I recall upping TcpMaxDataRetransmissions to 6 or 7 solved our problem in the vast majority of cases.
A: This blog post by Michael Aspengren explains the error message "A transport-level error has occurred when sending the request to the server."
A: To answer your original question:
A more elegant way to detect this particular error, without parsing the error message, is to inspect the Number property of the SqlException.
(This actually returns the error number from the first SqlError in the Errors collection, but in your case the transport error should be the only one in the collection.)
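A rough sketch of what that could look like (the set of error numbers treated as transient is an assumption; verify it against the errors you actually see):
using System;
using System.Data.SqlClient;
static bool LooksLikeTransportError(SqlException ex)
{
    // Assumed candidates: Win32 socket error codes that SqlClient surfaces
    // in SqlException.Number (e.g. 10054 = connection reset by peer).
    int[] candidates = { 64, 233, 10053, 10054 };
    return Array.IndexOf(candidates, ex.Number) >= 0;
}
// usage:
try
{
    // execute the command here
}
catch (SqlException ex)
{
    if (!LooksLikeTransportError(ex)) throw;
    // retry the whole operation from scratch on a brand new connection
}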
A: I have seen this happen in my own environment a number of times. The client application in this case is installed on many machines. Some of those machines happen to be laptops; people were leaving the application open, disconnecting, then plugging back in and attempting to use it again. This will then cause the error you have mentioned.
My first point would be to look at the network and ensure that servers aren't on DHCP, renewing IP addresses and causing this error. If that isn't the case then you have to start trawling through your event logs looking for other network-related issues.
Unfortunately it is as stated above a network error. The main thing you can do is just monitor the connections using a tool like netmon and work back from there.
Good Luck.
A: I had the same problem albeit it was with service requests to a SQL DB.
This is what I had in my service error log:
System.Data.SqlClient.SqlException: A transport-level error has occurred when sending the request to the server. (provider: TCP Provider, error: 0 - An existing connection was forcibly closed by the remote host.)
I have a C# test suite that tests a service. The service and DB were both on external servers so I thought that might be the issue. So I deployed the service and DB locally to no avail. The issue continued. The test suite isn't even a hard pressing performance test at all, so I had no idea what was happening. The same test was failing each time, but when I disabled that test, another one would fail continuously.
I tried other methods suggested on the Internet that didn't work either:
*
*Increase the registry values of TcpMaxDataRetransmissions and TcpMaxConnectRetransmissions.
*Disable the "Shared Memory" option within SQL Server Configuration Manager under "Client Protocols" and sort TCP/IP to 1st in the list.
*This might occur when you are testing scalability with a large number of client connection attempts. To resolve this issue, use the regedit.exe utility to add a new DWORD value named SynAttackProtect to the registry key HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\ with value data of 00000000.
My last resort was the old adage "try and try again". So I have nested try-catch statements to ensure that if the TCP/IP connection is lost in the lower communications protocol, it doesn't just give up there but tries again. This is now working for me, however it's not a very elegant solution.
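In outline, the retry wrapper looks something like this (a simplified sketch, not the actual code):
int attemptsLeft = 3; // assumed retry budget
while (true)
{
    try
    {
        // open a fresh connection and run the command here
        break; // success, stop retrying
    }
    catch (SqlException)
    {
        attemptsLeft--;
        if (attemptsLeft == 0) throw; // give up after the last attempt
        System.Threading.Thread.Sleep(500); // brief pause before retrying
    }
}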
A: Use Enterprise Services with transactional components.
A: You should also check hardware connectivity to the database.
Perhaps this thread will be helpful:
http://channel9.msdn.com/forums/TechOff/234271-Conenction-forcibly-closed-SQL-2005/
A: I'm using a reliability layer around my DB commands (abstracted away in the repository interface). Basically that's just code that intercepts any expected exception (DbException, and also InvalidOperationException, which happens to get thrown on connectivity issues), logs it, captures statistics, and retries everything again.
With that reliability layer present, the service has been able to survive stress-testing gracefully (constant deadlocks, network failures, etc.). Production is far less hostile than that.
PS: There is more on that here (along with a simple way to define reliability with the interception DSL)
A: I had the same problem. I asked my network geek friends, and all said what people have replied here: it's the connection between the computer and the database server. In my case it was my Internet Service Provider, or their router, that was the problem. After a router update, the problem went away. But do you have any other drop-outs of internet connection from your computer or server? I had...
A: I experienced the transport error this morning in SSMS while connected to SQL 2008 R2 Express.
I was trying to import a CSV with \r\n. I coded my row terminator for 0x0d0x0a. When I changed it to 0x0a, the error stopped. I can change it back and forth and watch it happen/not happen.
BULK INSERT #t1 FROM 'C:\123\Import123.csv' WITH
( FIRSTROW = 1, FIELDTERMINATOR = ',', ROWTERMINATOR = '0x0d0x0a' )
I suspect I am not writing my row terminator correctly, because SQL parses it one character at a time while I'm trying to pass two characters.
Anyhow, this error is 4 years old now, but it may provide a bit of information for the next user.
A: I just wanted to post a fix here that worked for our company on new software we've installed. We were getting the following error since day 1 on the client log file: Server was unable to process request. ---> A transport-level error has occurred when receiving results from the server. (provider: TCP Provider, error: 0 - The semaphore timeout period has expired.) ---> The semaphore timeout period has expired.
What completely fixed the problem was to set up a link aggregate (LAG) on our switch. Our Dell FX1 server has redundant fiber lines coming out of the back of it. We did not realize that the switch they're plugged into needed to have a LAG configured on those two ports. See details here: https://docs.meraki.com/display/MS/Switch+Ports#SwitchPorts-LinkAggregation
| {
"language": "en",
"url": "https://stackoverflow.com/questions/16610",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "33"
} |
Q: Validation Patterns for Custom XML Documents I have a web application that generates a medium sized XML dataset to be consumed by a third party.
I thought it would be a good idea to provide some form of schema document for the XML that I generate so I pasted the XML into Visual Studio and got it to generate an XSD.
The annoying thing is that my XML doesn't validate to the XSD that was generated!
Is it better to roll your own XSD?
What about different schema docs like DTDs, Relax NG, or Schematron?
The key is that I would like to be able to validate my document using C#.
What are your XML validation strategies?
A: Whether you choose XSD and/or Schematron depends on what you are trying to validate. XSD is probably the most common validation strategy, but there are limits on what it can validate. If all you want to do is ensure that the right type of data is in each field, XSD should work for you. If you need to assert, for example, that the value of the <small> element is less than the value of the <big> element, or even more complex business rules involving multiple fields, you probably want Schematron or a hybrid approach.
A: You will be able to validate your XML with either an XML Schema or a DTD using C#. DTDs are an older standard compared to XML Schemas.
So, I recommend an XML Schema approach.
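For reference, the XSD route in C# can be done with XmlReaderSettings; a minimal sketch (the file names are placeholders):
using System;
using System.Xml;
using System.Xml.Schema;
class SchemaValidator
{
    static void Main()
    {
        XmlReaderSettings settings = new XmlReaderSettings();
        settings.Schemas.Add(null, "dataset.xsd"); // null = use the schema's target namespace
        settings.ValidationType = ValidationType.Schema;
        settings.ValidationEventHandler += (sender, e) => Console.WriteLine(e.Message);
        using (XmlReader reader = XmlReader.Create("dataset.xml", settings))
        {
            while (reader.Read()) { } // validation errors are reported through the handler
        }
    }
}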
| {
"language": "en",
"url": "https://stackoverflow.com/questions/16611",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Is FindFirstChangeNotification the best API to use for file system change notification on windows? I'm new to windows programming and I'm trying to get notified of all changes to the file system (similar to the information that FileMon from SysInternals displays, but via an API). Is a FindFirstChangeNotification for each (non-network, non-substed) drive my best bet or are there other more suitable C/C++ APIs?
A: FindFirstChangeNotification is fine, but for slightly more ultimate power you should be using ReadDirectoryChangesW. (In fact, it's even recommended in the documentation!)
It doesn't require a function pointer, it does require you to manually decode a raw buffer, it uses Unicode file names, but it is generally better and more flexible.
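To give a flavour of it, here is a bare-bones synchronous sketch in C++ (minimal error handling; the watched path is a placeholder):
#include <windows.h>
#include <stdio.h>
int main()
{
    // Open the directory itself; FILE_FLAG_BACKUP_SEMANTICS is required for directories.
    HANDLE hDir = CreateFileW(L"C:\\watched", FILE_LIST_DIRECTORY,
        FILE_SHARE_READ | FILE_SHARE_WRITE | FILE_SHARE_DELETE,
        NULL, OPEN_EXISTING, FILE_FLAG_BACKUP_SEMANTICS, NULL);
    if (hDir == INVALID_HANDLE_VALUE) return 1;
    DWORD buffer[16 * 1024]; // DWORD-aligned, as the API requires
    DWORD bytes;
    // Block until something changes, then walk the packed records.
    while (ReadDirectoryChangesW(hDir, buffer, sizeof(buffer), TRUE,
           FILE_NOTIFY_CHANGE_FILE_NAME | FILE_NOTIFY_CHANGE_LAST_WRITE,
           &bytes, NULL, NULL))
    {
        FILE_NOTIFY_INFORMATION* fni = (FILE_NOTIFY_INFORMATION*)buffer;
        for (;;)
        {
            // FileName is not null-terminated; FileNameLength is in bytes.
            wprintf(L"action %lu: %.*s\n", fni->Action,
                    (int)(fni->FileNameLength / sizeof(WCHAR)), fni->FileName);
            if (fni->NextEntryOffset == 0) break;
            fni = (FILE_NOTIFY_INFORMATION*)((BYTE*)fni + fni->NextEntryOffset);
        }
    }
    CloseHandle(hDir);
    return 0;
}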
On the other hand, if you want to do what FileMon does, you should probably do what FileMon does and use IFS to create and install a file system filter.
A: There are other ways to do it, but most of them involve effort on your part (or take performance from your app, or you have to block a thread to use them, etc). FindFirstChangeNotification is a bit complicated if you're not used to dealing with function pointers, etc, but it has the virtue of getting the OS to do the bulk of the work for you.
A: Actually FileSystemWatcher works perfectly with shared network drives. I am using it right now in an application which, among other things, monitors the file system for changes. (www.tabbles.net).
A: You can use the FileSystemWatcher class. Very efficient, but it cannot work with network shared drives.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/16615",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "18"
} |
Q: ArgumentNullException for Integer In .NET, is it more appropriate to throw an argument null exception for an Integer if the value is Integer.MinValue or Integer = 0 (assuming that 0 is not a valid value)?
A: Throwing an ArgumentNullException isn't appropriate unless the argument is actually null. Throw an ArgumentOutOfRangeException instead (preferably with a message informing the user what values of int are actually acceptable).
ArgumentOutOfRangeException is thrown when a method is invoked and at least one of the arguments passed to the method is not a null reference (Nothing in Visual Basic) and does not contain a valid value.
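For example, a guard clause along these lines (the method and parameter names are invented for illustration):
void SetQuantity(int quantity)
{
    if (quantity <= 0) // assuming 0 is not a valid value, per the question
    {
        throw new ArgumentOutOfRangeException("quantity", quantity,
            "Quantity must be a positive integer.");
    }
    // ... proceed with a known-good value
}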
A: Well, I think if you are using an int, it would be better to throw an ArgumentException.
Alternatively, you could make your ints nullable by declaring them as int? (especially if you expect null values for your int).
A: If the argument is not null, don't throw an ArgumentNullException. It would probably be more reasonable to throw an ArgumentException, explained here.
edit: ArgumentOutOfRangeException is probably even better, as suggested above by Avenger546.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/16616",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |