Q: PHP performance What can I do to increase the performance/speed of my PHP scripts without installing software on my servers?
A: One reasonable technique that can easily be pulled off the shelf is caching. A vast amount of time tends to go into generating resources for clients that are common between requests (and even across clients); eliminating this runtime work can lead to dramatic speed increases. You can dump the generated resource (or resource fragment) into a file outside the web tree, and then read it back in when needed. Obviously, some profiling will be needed to ensure this is actually faster than regeneration - forcing the web server back to disk regularly can be detrimental, so the resource really does need to have heavy reuse.
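For instance, a minimal sketch of such a file cache (PHP 7+; the cache directory, TTL, and build_sidebar_html() are illustrative assumptions, not part of the original answer):
<?php
// Cache a generated fragment in a file outside the web tree.
function cached_fragment(string $key, int $ttl, callable $generate): string
{
    $cacheDir = '/var/cache/myapp';                  // outside the web tree
    $file = $cacheDir . '/' . md5($key) . '.cache';
    // Serve from the cache while the file is fresh enough.
    if (is_file($file) && (time() - filemtime($file)) < $ttl) {
        return file_get_contents($file);
    }
    // Regenerate, then write atomically via a temp file + rename.
    $content = $generate();
    $tmp = $file . '.' . uniqid('', true);
    file_put_contents($tmp, $content);
    rename($tmp, $file);
    return $content;
}
// Usage: the expensive generation only runs on a cache miss.
echo cached_fragment('sidebar', 600, function () {
    return build_sidebar_html();                     // hypothetical expensive call
});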
You might also be surprised how much time is spent inside badly written database queries; time common generated queries and see if they can be rewritten. The amount of time spent executing actual PHP code is generally pretty limited, unless you're using some sub-optimal algorithms.
Neither of these is limited to PHP, though some of the PHP "magic" approaches/functions can over-protect one from thinking about these concerns. For example, I recently updated a script that was using array_search to use a binary search over a sorted array, and gained the expected dramatic (linear-to-logarithmic) speedup.
A: Really consider using the XDebug profiler: it helps you check how much a certain function is actually executed against what you would have expected.
I try to decrease the instruction count while improving code readability by replacing logic with array lookups when appropriate (see the sketch at the end of this answer).
It's what Jeff Atwood wrote in [The Best Code is No Code At All][1].
*Also, avoid loops inside another loop, and nested if/else statements.
*Short functions. Sometimes a lot of code does not need to be executed when the result value is already known.
*Unnecessary testing:
if (count($array) === 0) return;
can also be written as:
if (! $array) return;
Another function call eliminated!
[1]: http://www.codinghorror.com/blog/archives/000878.html "The Best Code is No Code At All"
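A small illustration of the array-lookup idea mentioned above (a sketch; the status codes and labels are made up for the example):
<?php
// Branchy version: one comparison per case.
function status_label_branchy(int $code): string
{
    if ($code === 200)     { return 'OK'; }
    elseif ($code === 404) { return 'Not Found'; }
    elseif ($code === 500) { return 'Server Error'; }
    return 'Unknown';
}
// Lookup version (PHP 7+ for ??): fewer instructions, easier to extend.
const STATUS_LABELS = [200 => 'OK', 404 => 'Not Found', 500 => 'Server Error'];
function status_label(int $code): string
{
    return STATUS_LABELS[$code] ?? 'Unknown';
}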
A: You can optimize the code with two basic things:
Optimizing the PHP-associated libraries and the server
Go through https://www.digitalocean.com/community/articles/how-to-optimize-apache-web-server-performance or
you can use a profiling tool like XHProf to view what part of your code can be optimized; here is the link to follow: http://michaelsanford.com/compiling-xhprof-for-php-5-4/
Optimizing your code using a code profiler and code analyzer
You need to install NetBeans in order to use this plugin.
Here are the steps you need to follow:
1) Open NetBeans, then select from the menu bar Tools > Plugins. Search for the plugin named "phpcsmd" in the Available Plugins tab and install it from there.
2) Now open the terminal and become the super user by typing the command "sudo su".
3) Install the PEAR library (if it is not installed) on your system by running the following commands in your terminal:
a) wget http://pear.php.net/go-pear.phar
b) php go-pear.phar
We need this for the installation of further add-ons.
4) Then run the command
"pear config-set auto_discover 1"
This sets auto-discovery of the path to "true" for the required plugins, so they get installed to the desired location automatically.
5) Then run the command below to install PHP CodeSniffer:
"pear install --alldeps pear/PHP_CodeSniffer"
6) Now install the PHP Mess Detector by running the following command:
"pear install --alldeps phpmd/PHP_PMD"
If you get an error like "invalid package name/package file "phpmd/PHP_PMD"" while installing this module, run "pear channel-discover pear.phpmd.org" to get rid of it. After this command you can run the above command again to install Mess Detector.
7) Now install PHP Depend by running the following command:
"pear install --alldeps pdepend/PHP_Depend"
8) Now install the PHP Copy/Paste Detector by running the following command:
"pear install --alldeps phpunit/phpcpd"
9) Then run the command
"pear config-set auto_discover 0"
This sets auto-discovery of the path back to "false".
10) Then open NetBeans and follow the path Tools > Options > PHP > PHPCSMD
A: There is no magic solution, and attempting to provide generic solutions could well just be a waste of time.
Where are your bottlenecks? For example are your scripts processor/database/memory intensive?
Have you performed any profiling?
A: Profile. Profile. Profile. I'm not sure if there is anything out there for PHP, but it should be simple to write a little tool to insert profiling information in your code. You will want to profile function times and SQL query times.
So where you have a function:
function foo($stuff) {
...
return ...;
}
I would change it to:
function foo($stuff) {
trace_push_fn('foo');
...
trace_pop_fn('foo');
return ...;
}
(This is one of those cases where multiple returns in a function become a hindrance.)
And SQL:
function bar($stuff) {
trace_push_fn('bar');
$query = ...;
trace_push_sql($query);
mysql_query($query);
trace_pop_sql($query);
trace_pop_fn('bar');
return ...;
}
In the end, you can generate a full trace of the program execution and use all sorts of techniques to identify your bottlenecks.
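The trace_* helpers are left to the reader; a minimal sketch of how they might look (only the function names come from the answer above, the implementation is an assumption):
<?php
// Accumulate wall-clock time per function name in globals (PHP 7+ for ??).
$GLOBALS['trace_stack']  = [];
$GLOBALS['trace_totals'] = [];
function trace_push_fn($name) {
    $GLOBALS['trace_stack'][] = [$name, microtime(true)];
}
function trace_pop_fn($name) {
    list(, $start) = array_pop($GLOBALS['trace_stack']);
    $GLOBALS['trace_totals'][$name] =
        ($GLOBALS['trace_totals'][$name] ?? 0.0) + (microtime(true) - $start);
}
function trace_report() {
    arsort($GLOBALS['trace_totals']);                // slowest first
    foreach ($GLOBALS['trace_totals'] as $fn => $secs) {
        printf("%-30s %.4fs\n", $fn, $secs);
    }
}
A real tool would also handle nesting and recursion more carefully, and write the report somewhere more durable than stdout, but this is enough to find the worst offenders.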
A: Including files is slow, and requiring them is even slower. If you use __autoload to include every class, for example, that will add up.
I'm always a bit wary of trying to be too clever in terms of code optimisation if it sacrifices code clarity. If you need to make code obscure to make it fast, would it not be cheaper to upgrade hardware instead of wasting your time trying to tweak code? Processor cycles are cheaper than programmer cycles, after all.
A: The ones I can think of...
*Loop invariants are always a good one to watch.
*Write E_STRICT and E_NOTICE compliant code, particularly if you are logging errors.
*Avoid the @ operator.
*Absolute paths for requires and includes.
*Use strpos, str_replace etc. instead of regular expressions whenever possible (see the example below).
Then there's a bunch of other methods that might work, but probably won't give you much benefit.
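To illustrate the last point, a small hedged example of fixed-string work with strpos/str_replace instead of the regex engine:
<?php
$haystack = 'user@example.com';
// Instead of preg_match('/@/', $haystack):
if (strpos($haystack, '@') !== false) {   // note !== so a match at position 0 isn't treated as false
    $masked = str_replace('@', ' [at] ', $haystack);
}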
A: Whenever I look at performance problems, I think the best thing to do is time how long your pages take to run, and then look at the slowest ones. When you get these real metrics, you can often improve performance on the slowest ones by orders of magnitude, either by fixing a slow SQL query or perhaps tightening up the code a bit.
This of course requires no new hardware or special software, just a critical eye on the existing code.
That said, this will only work for so long... if you really are getting enough traffic to hit the limits of your hardware, and/or there is some code that is just inherently slow and really required, you will have to look at other possibilities.
A: I'm responsible for a large reporting system and we track the slowest reports in a similar way. I fire a unique key into the db when the report starts, and when it finishes I can determine how long it took. I'm using the database because that way I can detect when pages time out (which happens a lot more than I'd like).
A: Follow some of the other advice first like profiling and making good resource allocation decisions, e.g. caching.
Also, take into account the performance of outside resources like your database. In MySQL you can check the slow query log, for example. In addition, make sure you didn't design your database and then forget about it. Optimizing your queries (again, for MySQL) against real data can pay off big.
A: Rasmus Lerdorf gave some good tips in his recent presentation "Simple is Hard" at FrOSCon '08. If you are using a bytecode cache (and you really should be using one), include path misses hurt a lot, so optimize your require/require_once.
A: You can use a profiling tool like XHProf to see what part of your code can be optimized!
A: 1) Use latest version of PHP
The core team is working hard on improving the performance of PHP in every version.
2) Use a bytecode cache
Since PHP 5.5, a bytecode cache has been added to PHP, named OPcache. Using OPcache can make a huge difference, especially since PHP 7. It receives improvements in every PHP version and might even get a JIT implementation in the future (see the sample php.ini settings after this list).
3) Profiling
While developing, profiling gives you great insight into what exactly is happening. This helps a lot in finding bottlenecks in your code.
One of the most used tools is XHProf, but it is not officially supported anymore and has issues with PHP >= 7. An alternative when you want to profile PHP >= 7 is Tideways, which is a fork of XHProf.
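For point 2, a sample set of php.ini OPcache settings (a sketch; the values are illustrative starting points, not official recommendations):
; illustrative OPcache settings for php.ini; tune for your own workload
opcache.enable=1
opcache.memory_consumption=128        ; MB of shared memory for cached bytecode
opcache.max_accelerated_files=10000   ; cap on the number of cached scripts
; in production you can skip timestamp checks and reset the cache on deploy:
opcache.validate_timestamps=0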
| {
"language": "en",
"url": "https://stackoverflow.com/questions/12576",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "16"
} |
Q: No trace info during processing of a cube in SSAS When I process a cube in Visual Studio 2005 I get following message:
Process succeeded. Trace information
is still being transferred. If you do
not want to wait for all of the
information to arrive press Stop.
and no trace info is displayed. The cube is processed OK, but it is a little bit annoying. Any ideas? I access cubes via a web server.
A: I get the same message when processing through the GUI. I've never encountered this before, even on Dev or at other companies. I can't find a workaround and there seems to be a real issue here (Connect issue: http://connect.microsoft.com/SQLServer/feedback/details/536543/ssas-process-progress-window-blank-no-details)
Any workarounds would be great...
A: I get the same message when I process a cube, but if I wait for a few seconds the trace information arrives. Are you dealing with a very large quantity of data or a very complex cube? Maybe this is a silly question, but have you tried waiting a few minutes?
A: How are you processing the cube? Through XMLA or through the GUI? If you do it in XMLA then you should see the results as they come in the output window.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/12578",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: Using an XML catalog with Python's lxml? Is there a way, when I parse an XML document using lxml, to validate that document against its DTD using an external catalog file? I need to be able to work with the fixed attributes defined in a document's DTD.
A: You can add the catalog to the XML_CATALOG_FILES environment variable:
os.environ['XML_CATALOG_FILES'] = 'file:///to/my/catalog.xml'
See this thread. Note that entries in XML_CATALOG_FILES are space-separated URLs. You can use Python's pathname2url and urljoin (with file:) to generate the URL from a pathname.
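A short sketch of that construction (Python 3 imports shown; the catalog paths are made up):
import os
from urllib.request import pathname2url
from urllib.parse import urljoin

# Build space-separated file: URLs for XML_CATALOG_FILES from local paths.
catalogs = ['/etc/xml/catalog', '/home/me/dtds/catalog.xml']
urls = [urljoin('file:', pathname2url(p)) for p in catalogs]
os.environ['XML_CATALOG_FILES'] = ' '.join(urls)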
A: Can you give an example? According to the lxml validation docs, lxml can handle DTD validation (specified in the XML doc or externally in code) and system catalogs, which covers most cases I can think of.
from io import StringIO
from lxml import etree

f = StringIO("<!ELEMENT b EMPTY>")
dtd = etree.DTD(f)
dtd = etree.DTD(external_id = "-//OASIS//DTD DocBook XML V4.2//EN")
A: It seems that lxml does not expose this libxml2 feature, grepping the source only turns up some #defines for the error handling:
C:\Dev>grep -ir --include=*.px[id] catalog lxml-2.1.1/src | sed -r "s/\s+/ /g"
lxml-2.1.1/src/lxml/dtd.pxi: catalog.
lxml-2.1.1/src/lxml/xmlerror.pxd: XML_FROM_CATALOG = 20 # The Catalog module
lxml-2.1.1/src/lxml/xmlerror.pxd: XML_WAR_CATALOG_PI = 93 # 93
lxml-2.1.1/src/lxml/xmlerror.pxd: XML_CATALOG_MISSING_ATTR = 1650
lxml-2.1.1/src/lxml/xmlerror.pxd: XML_CATALOG_ENTRY_BROKEN = 1651 # 1651
lxml-2.1.1/src/lxml/xmlerror.pxd: XML_CATALOG_PREFER_VALUE = 1652 # 1652
lxml-2.1.1/src/lxml/xmlerror.pxd: XML_CATALOG_NOT_CATALOG = 1653 # 1653
lxml-2.1.1/src/lxml/xmlerror.pxd: XML_CATALOG_RECURSION = 1654 # 1654
lxml-2.1.1/src/lxml/xmlerror.pxi:CATALOG=20
lxml-2.1.1/src/lxml/xmlerror.pxi:WAR_CATALOG_PI=93
lxml-2.1.1/src/lxml/xmlerror.pxi:CATALOG_MISSING_ATTR=1650
lxml-2.1.1/src/lxml/xmlerror.pxi:CATALOG_ENTRY_BROKEN=1651
lxml-2.1.1/src/lxml/xmlerror.pxi:CATALOG_PREFER_VALUE=1652
lxml-2.1.1/src/lxml/xmlerror.pxi:CATALOG_NOT_CATALOG=1653
lxml-2.1.1/src/lxml/xmlerror.pxi:CATALOG_RECURSION=1654
From the catalog implementation page for libxml2, it seems possible that the 'transparent' handling through installation in /etc/xml/catalog may still work in lxml, but if you need more than that you can always abandon lxml and use the default libxml2 Python bindings, which do expose the catalog functions.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/12591",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10"
} |
Q: Can you check that an exception is thrown with doctest in Python? Is it possible to write a doctest unit test that will check that an exception is raised?
For example, if I have a function foo(x) that is supposed to raise an exception if x < 0, how would I write the doctest for that?
A: >>> import math
>>> math.log(-2)
Traceback (most recent call last):
...
ValueError: math domain error
The ellipsis flag # doctest: +ELLIPSIS is not required to use ... in a traceback doctest.
A: >>> scope # doctest: +IGNORE_EXCEPTION_DETAIL
Traceback (most recent call last):
NameError: name 'scope' is not defined
I don't know why the previous answers don't include IGNORE_EXCEPTION_DETAIL; I needed it for this to work. Python version: 3.7.3.
A: Yes, you can do it. The doctest module documentation and Wikipedia both have examples of it.
>>> x
Traceback (most recent call last):
...
NameError: name 'x' is not defined
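Putting that together for the foo(x) from the question, a complete module might look like this (the function body and error message are illustrative):
def foo(x):
    """Return x unchanged, raising ValueError for negative input.

    >>> foo(4)
    4
    >>> foo(-1)
    Traceback (most recent call last):
        ...
    ValueError: x must be non-negative
    """
    if x < 0:
        raise ValueError("x must be non-negative")
    return x

if __name__ == "__main__":
    import doctest
    doctest.testmod()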
| {
"language": "en",
"url": "https://stackoverflow.com/questions/12592",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "84"
} |
Q: Does System.Xml use MSXML? I'm developing a C# application that uses a handful of XML files and some classes in System.Xml. A coworker insists on adding the MSXML6 redistributable to our install, along with the .NET framework but I don't think the .NET framework uses or needs MSXML in anyway. I am well aware that using MSXML from .NET is not supported but I suppose its theoretically possible for System.Xml itself to wrap MSXML at a low level. I haven't found anything definitive that .NET has its own implementation but neither can I find anything to suggest it needs MSXML.
Help me settle the debate. Does System.Xml use MSXML?
A: System.Xml doesn't use MSXML6. They are separate XML processing engines. See the post here: MSXML 6.0 vs. System.Xml: Schema handling differences
A: System.Xml is in the core framework and not dependent on MSXML 6.0, but the two share a few similar APIs (DOM parser, SAX parser, XPath node selection).
A: There is no need for this sort of stuff to be the subject of tedious workplace debates, because the source code for the framework is available, and with a minuscule amount of work you can download the whole lot onto your machine. http://www.codeplex.com/NetMassDownloader
With another tiny bit of work, you can make a VS project which contains all the framework source, which makes it even easier to look through.
A: The .NET Framework setup installs MSXML6 only on WinXP SP2 and W2K3 SP2, where MSXML6 is not in the box. System.Xml is a different product from MSXML6, though some APIs share similar signatures.
A: I think it's needed for some MS SQL XML functionality, but System.Xml is in the core framework.
You should test your installer on a fresh machine anyway, just to be sure.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/12593",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11"
} |
Q: Windows/C++: How do I determine the share name associated with a shared drive? Let's say I have a drive such as C:\, and I want to find out if it's shared and what it's share name (e.g. C$) is.
To find out if it's shared, I can use NetShareCheck.
How do I then map the drive to its share name? I thought that NetShareGetInfo would do it, but it looks like that takes the share name, not the local drive name, as an input.
A: If all else fails, you could always use NetShareEnum and call NetShareGetInfo on each.
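A minimal C++ sketch of that approach (assumes Windows, linking against Netapi32.lib, and admin rights for info level 2; error handling kept to a bare minimum):
#include <windows.h>
#include <lm.h>
#include <cstdio>
#pragma comment(lib, "Netapi32.lib")

int main() {
    const wchar_t* drivePath = L"C:\\";   // local path we want the share name for
    PSHARE_INFO_2 buf = nullptr;
    DWORD read = 0, total = 0, resume = 0;
    NET_API_STATUS status;
    do {
        status = NetShareEnum(nullptr, 2, (LPBYTE*)&buf,
                              MAX_PREFERRED_LENGTH, &read, &total, &resume);
        if (status == NERR_Success || status == ERROR_MORE_DATA) {
            for (DWORD i = 0; i < read; ++i) {
                // shi2_path is the local path backing the share (e.g. C:\)
                if (buf[i].shi2_path && _wcsicmp(buf[i].shi2_path, drivePath) == 0)
                    wprintf(L"%s is shared as %s\n", drivePath, buf[i].shi2_netname);
            }
            NetApiBufferFree(buf);
        }
    } while (status == ERROR_MORE_DATA);
    return 0;
}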
A: I believe you're looking for WNetGetConnectionA or WNetGetConnectionW.
A: Use
SHGetFileInfo with SHGFI_ATTRIBUTES
and upon return check the dwAttributes flags for SFGAO_SHARE.
I'm not sure how to find the actual path, though.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/12594",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Why was the Profile provider not built into Web Apps? If you create an ASP.NET web file project you have direct access to the Profile information in the web.config file. If you convert that to a Web App and have been using ProfileCommon etc. then you have to jump through a whole bunch of hoops to get your web app to work.
Why wasn't the Profile provider built into the ASP.NET web app projects like it was with the web file projects?
A: Actually, Microsoft does have a solution for this known issue.
It's the "Web Profiler Builder". I used it for my Web App and it works great.
http://code.msdn.microsoft.com/WebProfileBuilder/Release/ProjectReleases.aspx?ReleaseId=980
A: The profile provider uses the ASP.NET Build Provider system, which doesn't work with Web Application Projects.
Adding a customized BuildProvider class to the Web.config file works in an ASP.NET Web site but does not work in an ASP.NET Web application project. In a Web application project, the code that is generated by the BuildProvider class cannot be included in the application.
source: MSDN Build Provider documentation
| {
"language": "en",
"url": "https://stackoverflow.com/questions/12603",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Designing a Calendar system like Google Calendar I have to create something similar to Google Calendar, so I created an events table that contains all the events for a user.
The hard part is handling recurring events; the row in the events table has an event_type field that tells you what kind of event it is, since an event can be for a single date only, OR a recurring event every x days.
The main design challenge is handling recurring events.
When a user views the calendar, using the month view, how can I display all the events for the given month? The query is going to be tricky, so I thought it would be easier to create another table and create a row for each and every event, including the recurring events.
What do you guys think?
A: As previously stated, don't reinvent the wheel, just enhance it.
Check out VCalendar; it is open source, and comes in PHP, ASP, and ASP.Net (C#) flavors!
Also you could check out Day Pilot which offers a calendar written in Asp.Net 2.0. They offer a lite version that you could check out, and if it works for you, you could purchase a license.
Update (9/30/09):
Unless of course the wheel is broken! Also, you can put a shiny new coat of paint if you like (ie: make a better UI). But at least try to find some foundation to build off of, since the calendar system can be tricky (with Repeating events), and it's been done thousands of times.
A: I'm tackling exactly this problem, and I had completely spaced iCalendar (RFC 2445) up until reading this thread, so I have no idea how well this will or won't integrate with that. Anyway, the design I've come up with so far looks something like this:
*You can't possibly store all the instances of a recurring event, at least not before they occur, so I simply have one table that stores the first instance of the event as an actual date, an optional expiration, and nullable repeat_unit and repeat_increment fields to describe the repetition. For single instances the repetition fields are null; otherwise the units will be 'day', 'week', 'month', 'year' and the increment is simply the multiple of units to add to the start date for the next occurrence.
*Storing past events only seems advantageous if you need to establish relationships with other entities in your model, and even then it's not necessary to have an explicit "event instance" table in every case. If the other entities already have date/time "instance" data then a foreign key to the event (or a join table for a many-to-many) would most likely be sufficient.
*To do "change this instance"/"change all future instances", I was planning on just duplicating the events and expiring the stale ones. So to change a single instance, you'd expire the old one at its last occurrence, make a copy for the new, unique occurrence with the changes and without any repetition, and another copy of the original at the following occurrence that repeats into the future. Changing all future instances is similar: you would just expire the original and make a new copy with the changes and repetition details.
The two problems I see with this design so far are:
*It makes MWF-type events hard to represent. It's possible, but forces the user to create three separate events that repeat weekly on M, W, F individually, and any changes they want to make will have to be done on each one separately as well. These kinds of events aren't particularly useful in my app, but it does leave a wart on the model that makes it less universal than I'd like.
*By copying the events to make changes, you break the association between them, which could be useful in some scenarios (or maybe it would just be occasionally problematic). The event table could theoretically contain a "copied_from" id field to track where an event originated, but I haven't fully thought through how useful something like that would be. For one thing, parent/child hierarchical relationships are a pain to query from SQL, so the benefits would need to be pretty heavy to outweigh the cost of querying that data. You could use a nested set instead, I suppose.
Lastly, I think it's possible to compute events for a given timespan using straight SQL, but I haven't worked out the exact details and I think the queries usually end up being too cumbersome to be worthwhile. However, for the sake of argument, you can use the following expression to compute the difference in months between a given month/year and an event:
(:month + (:year * 12)) - (MONTH(occursOn) + (YEAR(occursOn) * 12))
Building on the last example, you could use MOD to determine whether difference in months is the correct multiple:
MOD((:month + (:year * 12)) - (MONTH(occursOn) + (YEAR(occursOn) * 12)), repeatIncrement) = 0
Anyway this isn't perfect (it doesn't ignore expired events, doesn't factor in start / end times for the event, etc), so it's only meant as a motivating example. Generally speaking though I think most queries will end up being too complicated. You're probably better off querying for events that occur during a given range, or don't expire before the range, and computing the instances themselves in code rather than SQL. If you really want the database to do the processing then a stored procedure would probably make your life a lot easier.
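For what it's worth, a hedged sketch of a full query built from that expression (MySQL-flavored; repeatUnit, repeatIncrement, and expiresOn are assumed column names following the design above):
-- Find events with a monthly-repeating instance in a given :month/:year,
-- skipping events that expired before the start of that month.
SELECT *
FROM events
WHERE repeatUnit = 'month'
  AND MOD((:month + (:year * 12)) - (MONTH(occursOn) + (YEAR(occursOn) * 12)),
          repeatIncrement) = 0
  AND (expiresOn IS NULL OR expiresOn >= :rangeStart);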
A: Attempting to store each instance of every event seems like it would be really problematic and, well, impossible. If someone creates an event that occurs "every thursday, forever", you clearly cannot store all the future events.
You could try to generate the future events on demand, and populate the future events only when necessary to display them or to send notification about them. However, if you are going to build the "on-demand" generation code anyway, why not use it all the time? Instead of pulling from the event table, and then having to use on-demand event generation to fill in any new events that haven't been added to the table yet, just use the on-demand event generation exclusively. The end result will be the same. With this scheme, you only need to store the start and end dates and the event frequency.
I don't see any way that you can avoid having on-demand event generation, so I can't see the utility in the event table. If you want it for the sake of caching, then I think you're taking the wrong approach. First, it's a poor cache because you can't avoid on-demand event generation anyway. Second, you should probably be caching at a higher level anyway. If you want to cache, then cache generated pages, not events.
As far as making your polling more efficient, if you are only polling every 15 minutes, and your database and/or server can't handle the load, then you are already doomed. There's no way your database will be able to handle users if it can't handle much, much more frequent polling without breaking a sweat.
A: I would say start with the iCal standard. If you use it as your model, then you'll be able to do everything that Google Calendar, Outlook, and Mac iCal (the program) do, and get virtually instant integration with them.
From there, it's time to bone up on your Ajax and JavaScript, because you can't have a flashy web UI with drag and drop and multiple calendars without a ton of Ajax and JavaScript.
A: You should have a start date, end date, and expiration date. Single-day events would have the same start date and end date, which also allows you to do partial-day events. As for recurring events, the start and end date would be for the same day but have different times; then you have an enumeration or table that specifies the repeat frequency (daily, weekly, monthly, etc.).
This allows you to say "this event appears every day" for daily, "this event appears on the 2nd day of every week" for weekly, "this event appears on the 5th day of every month" for monthly, "this event appears on the 215th day of every year" for yearly as long as the date is less than the expiration date.
A: Darren,
That is how I have designed my events table actually, but when thinking about it, say I have 100K users who have created events; this table will be hit pretty hard, especially if I am going to be sending out emails to remind people of their events (events can be for specific times of the day also!), so I could be polling this table every 15 minutes potentially!
This is why I wanted to create another table that would expand out all the events/recurring events. This way I can simply hit that table and get a user's month view of events without doing any complicated querying and business logic, AND it will make polling much more efficient also.
The question is, should this secondary table be for the next day or month? What makes more sense? Since the maximum a user can view is a month's view, I am leaning towards a table that writes out all the events for a given month.
(Of course I will have to maintain this secondary table for any edits the user might make to the original events table.)
ChanChan,
I have designed it with the same sort of functionality actually, but I am just referring to how I will go about storing events, specifically how to handle re-occurring events.
A: The brute-force-ish but still reasonable way would be to create a new row in your single events table for every instance of the recurring event, all pointing not to the event preceding it in the series but to the first event in the series. This simplifies selecting and/or deleting all elements in a particular series, since you can select based on parent id. It also allows users to delete individual items from a series without affecting the rest of them.
This query gets you the series that starts on element 3:
SELECT * FROM events WHERE id = 3 OR parentid = 3
To get all items for this month, assuming you'd have a start date and an end date in your events table, all you'd have to do is:
SELECT * FROM events WHERE startdate >= '2008-08-01' AND enddate <= '2008-08-31'
Handling the creation/modification of series programmatically wouldn't be very difficult, but it really would depend on the feature set you want to provide and how you think it'll be used. If you want to differentiate between series and events, you could have a separate series table and a nullable series_id on your events, allowing you the freedom to muck about with individual events while still retaining control over your series.
A: From past experience I would create a new record for each occurring event and then have a column which references the previous event so you can track all events in a series.
This has two advantages:
*No complicated routines to work out the next event date
*Individual occurrences can be edited without affecting the rest
Hope this gives you some food for thought :)
A: I have to agree with @ChanChan on reading the ical spec for how to store these things. There is no easy way to handle recurrences, especially ones that have exceptions. I've built and rebuilt and rebuilt a database to handle this, and I keep coming back to ical.
It's not a bad idea to create a subordinate table, depending on use cases. The algorithm for calculating exactly when occurrences . . . um, occur . . . can indeed be quite complex. There's no getting away from running it, but it's worth considering caching the results.
A: @GateKiller
I hadn't thought of the case where you edit individual occurrences. It makes sense you would store the occurrences separately in that case.
When you do that, though, how far in the future do you store events? Do you pick an arbitrary date? Do you auto-generate the new occurrences the first time a user browses out into future months/years?
Also, how do you handle the case where the user wants to edit the whole series. "We've had a meeting every Tuesday morning at 10:30 but we're going to start meeting on Wednesday at 8"
A: I think I understand your second paragraph to mean you are considering a second events table that has a row for each occurrence of an event. I would avoid that.
Recurring events should have a start date and a stop date (which could be null for events that continue every X days "forever"). You'll have to decide what kinds of frequency you want to allow -- every X days, the Nth day of each month, every given weekday, etc.
I'd probably tend toward two tables - one for one time events and a second for recurring events. You'll have to query and display the two separately.
If I were going to tackle this (and I'd try as hard as I can to avoid reinventing this wheel) I'd look for open-source libraries or, at the very least, open source projects with Calendars that you can look at. Any recommendations guys?
A:
undefined wrote:
…this table will be hit pretty hard, especially if I am going to be sending out emails to remind people of their events (events can be for specific times of the day also!), so I could be polling this table every 15 minutes potentially!
Create a table for the notifications. Poll only it.
Update the notification table when events (recurring or otherwise) are updated.
EDIT: A database View might not violate normal forms, if that's a concern. But, you'll probably want to track which notifications were sent and which have not yet been sent somewhere anyway.
A: Derek Park,
I would be creating each and every instance of an event in a table, and this table would be regenerated every month (so any event that was set to recur 'forever' would be regenerated one month in advance using a Windows service, or maybe at the SQL Server level).
The polling won't only be done every 15 minutes; that might only be for polls related to email notifications. When someone wants to view their events for a month, I will have to fetch all their events and recurring events and figure out which events to display (since a recurring event might have been created 6 months ago, but relates to a month the user is viewing).
Zack, I'm not too concerned with having a perfectly normalized database; the fact that I'm thinking of creating a secondary table is already breaking one of the rules, hehe. My core database tables are following 'the rules', but I don't mind creating secondary tables/columns at times when it benefits things performance-wise.
A:
That is how I have designed my events table actually, but when thinking about it, say I have 100K users who have create events, this table will be hit pretty hard, especially if I am going to be sending out emails to remind people of their events (events can be for specific times of the day also!), so I could be polling this table every 15 minutes potentially!
Databases do exceptional jobs of handling sets of data, so I wouldn't be too worried about that. What you should do is use this as your primary table, and then as events expire move them into another table (like an archive).
The next thing to try is to query the db as little as possible, so move the information into a caching tier (like Velocity) and just persist data to the database.
Then you can partition the information across multiple databases for scaling purposes, i.e. users 1-10000's calendars exist on server 1, 10001-20000 exist on server 2, etc.
That's how I would scale a solution like this, but I still think the original solution I proposed is the way to go; it's just how you scale it that becomes the question.
A: The Ra-Ajax Calendar starter-kit features a sample of handling the RenderDate event, which lets you modify how specific dates are rendered. The "recurring events" part, though, is more of an algorithmic thing, and I doubt many calendars will help you much there...
A: If anyone is doing Ruby there's a great library Runt that does this kind of thing. Worth checking out. http://runt.rubyforge.org/
| {
"language": "en",
"url": "https://stackoverflow.com/questions/12611",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "15"
} |
Q: Access Control Lists & Access Control Objects, good tutorial? We're developing a web app to cover all aspects of a printing company, from finances to payroll to job costing. It's important to be able to control who can access which parts of these applications. We don't want a line employee giving himself a raise, etc...
I've heard of the concept of ACL & ACO, but haven't found a good example that we could adapt to our project.
Anyone know where I can find good information to work from?
A: A brief rundown on ACLs, where they should be used and how they should be structured and implemented for various applications and user levels can be found here:
LINK
A: I've had to implement that type of security a couple of times. Unfortunately I don't know of any really good articles that provide examples. My implementations were mainly piecing together the parts through trial and error.
However, I did come across this link on MSDN:
http://msdn.microsoft.com/en-us/library/52kd59t0(VS.71).aspx
It has some of the concepts.
After my original post, I did some more research. I found this article:
http://www.aspfree.com/c/a/C-Sharp/Implementing-Role-Based-Security-using-CSharp/
It seems pretty promising. I didn't go through all the details, but it at least guides you through the high-level topics.
A: If you're using .NET/Windows you might want to look into Windows Authorization Manager (AzMan). There is support for AzMan in Enterprise Library, but there are other ways of using it as well.
http://msdn.microsoft.com/en-us/library/ms998336.aspx
http://alt.pluralsight.com/wiki/default.aspx/Keith.GuideBook/WhatIsAuthorizationManager.html
| {
"language": "en",
"url": "https://stackoverflow.com/questions/12612",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Specify a Port Number in Emacs sql-mysql I've been using Emacs's sql interactive mode to talk to the MySQL db server and gotten to enjoy it. A developer has set up another db on a new non-default port number but I don't know how to access it using sql-mysql.
How do I specify a port number when I'm trying to connect to a database?
It would be even better if Emacs can prompt me for a port number and just use the default if I don't specify. Any chances of that?
A: (setq sql-mysql-options (list "-P <port number>"))
A: I found the option using:
M-x customize-group
SQL
That included a setting labeled:
Mysql Options:
If you set the option and save it, there will be a new line added to your .emacs:
(custom-set-variables
'(sql-mysql-options (quote ("-P ???"))))
(Obviously, you ought to use the actual port number.)
I use XEmacs, so your mileage may vary.
A: After digging through the sql.el file, I found a variable that allows me to specify a port when I try to create a connection.
This option was added in GNU Emacs 24.1.
sql-mysql-login-params
List of login parameters needed to connect to MySQL.
I added this to my Emacs init file:
(setq sql-mysql-login-params (append sql-mysql-login-params '(port)))
The default port is 0. If you'd like to set that to the default MySQL port you can customize sql-port
(setq sql-port 3306) ;; default MySQL port
There is a sql-*-login-params variable for all the popular RDBMS systems in GNU Emacs 24.1. sql-port is used for both MySQL and PostgreSQL.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/12613",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "19"
} |
Q: What is the easiest way to parse an INI File in C++? I'm trying to parse an INI file using C++. Any tips on what is the best way to achieve this? Should I use the Windows API tools for INI file processing (with which I am totally unfamiliar), an open-source solution or attempt to parse it manually?
A: This question is a bit old, but I will post my answer. I have tested various INI classes (you can see them on my website) and I also use SimpleIni because I want to work with INI files on both Windows and WinCE.
Windows' GetPrivateProfileString() works only with the registry on WinCE.
It is very easy to read with simpleIni. Here is an example:
#include "SimpleIni\SimpleIni.h"
CSimpleIniA ini;
ini.SetUnicode();
ini.LoadFile(FileName);
const char * pVal = ini.GetValue(section, entry, DefaultStr);
A: inih is a simple INI parser written in C; it comes with a C++ wrapper too. Example usage:
#include "INIReader.h"
INIReader reader("test.ini");
std::cout << "version="
<< reader.GetInteger("protocol", "version", -1) << ", name="
<< reader.Get("user", "name", "UNKNOWN") << ", active="
<< reader.GetBoolean("user", "active", true) << "\n";
The author also has a list of existing libraries here.
A: Have you tried libconfig? It has a very JSON-like syntax. I prefer it over XML configuration files.
A: I ended up using inipp which is not mentioned in this thread.
https://github.com/mcmtroffaes/inipp
It is an MIT-licensed, header-only implementation which was simple enough to add to a project, and it takes 4 lines to use.
A: If you are interested in platform portability, you can also try Boost.PropertyTree. It supports INI as a persistence format, though the property tree may only be 1 level deep.
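A minimal sketch of reading an INI file with it (the file name and section/key names are made up):
#include <iostream>
#include <string>
#include <boost/property_tree/ptree.hpp>
#include <boost/property_tree/ini_parser.hpp>

int main() {
    boost::property_tree::ptree pt;
    boost::property_tree::ini_parser::read_ini("settings.ini", pt);
    // INI sections become one tree level, addressed as "Section.Key".
    std::string host = pt.get<std::string>("Server.Host", "localhost");
    int port = pt.get<int>("Server.Port", 3306);
    std::cout << host << ":" << port << "\n";
}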
A: I have never parsed ini files, so I can't be too specific on this issue.
But I have one piece of advice:
Don't reinvent the wheel as long as an existing one meets your requirements
http://en.wikipedia.org/wiki/INI_file#Accessing_INI_files
http://sdl-cfg.sourceforge.net/
http://sourceforge.net/projects/libini/
http://www.codeproject.com/KB/files/config-file-parser.aspx
Good luck :)
A: If you are already using Qt
QSettings my_settings("filename.ini", QSettings::IniFormat);
Then read a value
my_settings.value("GroupName/ValueName", <<DEFAULT_VAL>>).toInt()
There are a bunch of other converters that convert your INI values into both standard types and Qt types. See the Qt documentation on QSettings for more information.
A: Unless you plan on making the app cross-platform, using the Windows API calls would be the best way to go. Just ignore the note in the API documentation about being provided only for 16-bit app compatibility.
A: I know this question is very old, but I came upon it because I needed something cross-platform for Linux and Win32... I wrote the function below; it is a single function that can parse INI files. Hopefully others will find it useful.
rules & caveats:
buf to parse must be a NULL-terminated string. Load your INI file into a char array string and call this function to parse it.
Section names must have [] brackets around them, such as [MySection]; values and sections must begin on a line without leading spaces. It will parse files with Windows \r\n or with Linux \n line endings. Comments should use # or // and begin at the top of the file; no comments should be mixed with INI entry data. Quotes and ticks are trimmed from both ends of the returned string. Spaces are only trimmed if they are outside of the quotes. Strings are not required to have quotes, and whitespace is trimmed if quotes are missing. You can also extract numbers or other data; for example, if you have a float just perform an atof(ret) on the ret buffer.
// -----note: no escape is necessary for inner quotes or ticks-----
// -----------------------------example----------------------------
// [Entry2]
// Alignment = 1
// LightLvl=128
// Library = 5555
// StrValA = Inner "quoted" or 'quoted' strings are ok to use
// StrValB = "This a "quoted" or 'quoted' String Value"
// StrValC = 'This a "tick" or 'tick' String Value'
// StrValD = "Missing quote at end will still work
// StrValE = This is another "quote" example
// StrValF = " Spaces inside the quote are preserved "
// StrValG = This works too and spaces are trimmed away
// StrValH =
// ----------------------------------------------------------------
//12oClocker super lean and mean INI file parser (with section support)
//set section to 0 to disable section support
//returns TRUE if we were able to extract a string into ret value
//NextSection is a char* pointer, will be set to zero if no next section is found
//will be set to pointer of next section if it was found.
//use it like this... char* NextSection = 0; GrabIniValue(X,X,X,X,X,&NextSection);
//buf is data to parse, ret is the user supplied return buffer
BOOL GrabIniValue(char* buf, const char* section, const char* valname, char* ret, int retbuflen, char** NextSection)
{
if(!buf){*ret=0; return FALSE;}
char* s = buf; //search starts at "s" pointer
char* e = 0; //end of section pointer
//find section
if(section)
{
int L = strlen(section);
SearchAgain1:
s = strstr(s,section); if(!s){*ret=0; return FALSE;} //find section
if(s > buf && (*(s-1))!='\n'){s+=L; goto SearchAgain1;} //section must be at begining of a line!
s+=L; //found section, skip past section name
while(*s!='\n'){s++;} s++; //spin until next line, s is now begining of section data
e = strstr(s,"\n["); //find begining of next section or end of file
if(e){*e=0;} //if we found begining of next section, null the \n so we don't search past section
if(NextSection) //user passed in a NextSection pointer
{ if(e){*NextSection=(e+1);}else{*NextSection=0;} } //set pointer to next section
}
//restore char at end of section, ret=empty_string, return FALSE
#define RESTORE_E if(e){*e='\n';}
#define SAFE_RETURN RESTORE_E; (*ret)=0; return FALSE
//find valname
int L = strlen(valname);
SearchAgain2:
s = strstr(s,valname); if(!s){SAFE_RETURN;} //find valname
if(s > buf && (*(s-1))!='\n'){s+=L; goto SearchAgain2;} //valname must be at begining of a line!
s+=L; //found valname match, skip past it
while(*s==' ' || *s == '\t'){s++;} //skip spaces and tabs
if(!(*s)){SAFE_RETURN;} //if NULL encounted do safe return
if(*s != '='){goto SearchAgain2;} //no equal sign found after valname, search again
s++; //skip past the equal sign
while(*s==' ' || *s=='\t'){s++;} //skip spaces and tabs
while(*s=='\"' || *s=='\''){s++;} //skip past quotes and ticks
if(!(*s)){SAFE_RETURN;} //if NULL encounted do safe return
char* E = s; //s is now the begining of the valname data
while(*E!='\r' && *E!='\n' && *E!=0){E++;} E--; //find end of line or end of string, then backup 1 char
while(E > s && (*E==' ' || *E=='\t')){E--;} //move backwards past spaces and tabs
while(E > s && (*E=='\"' || *E=='\'')){E--;} //move backwards past quotes and ticks
L = E-s+1; //length of string to extract NOT including NULL
if(L<1 || L+1 > retbuflen){SAFE_RETURN;} //empty string or buffer size too small
strncpy(ret,s,L); //copy the string
ret[L]=0; //null last char on return buffer
RESTORE_E;
return TRUE;
#undef RESTORE_E
#undef SAFE_RETURN
}
How to use... example....
char sFileData[] = "[MySection]\r\n"
"MyValue1 = 123\r\n"
"MyValue2 = 456\r\n"
"MyValue3 = 789\r\n"
"\r\n"
"[MySection]\r\n"
"MyValue1 = Hello1\r\n"
"MyValue2 = Hello2\r\n"
"MyValue3 = Hello3\r\n"
"\r\n";
char str[256];
char* sSec = sFileData;
char secName[] = "[MySection]"; //we support sections with same name
while(sSec)//while we have a valid sNextSec
{
//print values of the sections
char* next=0;//in case we dont have any sucessful grabs
if(GrabIniValue(sSec,secName,"MyValue1",str,sizeof(str),&next)) { printf("MyValue1 = [%s]\n",str); }
if(GrabIniValue(sSec,secName,"MyValue2",str,sizeof(str),0)) { printf("MyValue2 = [%s]\n",str); }
if(GrabIniValue(sSec,secName,"MyValue3",str,sizeof(str),0)) { printf("MyValue3 = [%s]\n",str); }
printf("\n");
sSec = next; //parse next section, next will be null if no more sections to parse
}
A: I use SimpleIni. It's cross-platform.
A: You can use the Windows API functions, such as GetPrivateProfileString() and GetPrivateProfileInt().
A: If you need a cross-platform solution, try Boost's Program Options library.
A: Maybe a late answer, but it's worth knowing the options. If you need a cross-platform solution, you can definitely try GLib's key-value file parser; it's interesting. (https://developer.gnome.org/glib/stable/glib-Key-value-file-parser.html)
| {
"language": "en",
"url": "https://stackoverflow.com/questions/12633",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "92"
} |
Q: Building an auditing system; MS Access frontend on SQL Server backend So basically I'm building an app for my company and it NEEDS to be built using MS Access and it needs to be built on SQL Server.
I've drawn up most of the plans but am having a hard time figuring out a way to handle the auditing system.
Since it is being used internally only and you won't even be able to touch the db from outside the building we are not using a login system as the program will only be used once a user has already logged in to our internal network via Active Directory. Knowing this, we're using a system to detect automatically the name of the Active Directory user and with their permissions in one of the DB tables, deciding what they can or cannot do.
So the actual audit table will have 3 columns (this design may change but for this question it doesn't matter); who (Active Directory User), when (time of addition/deletion/edit), what (what was changed)
My question is how I should be handling this. Ideally I know I should be using a trigger, so that it is impossible for the database to be updated without an audit being logged; however, I don't know how I could grab the Active Directory user that way. An alternative would be to code it directly into the Access source so that whenever something changes I run an INSERT statement. Obviously that is flawed, because if something happens to Access, or the database is touched by something else, then it will not log the audit.
Any advice, examples or articles that may help me would be greatly appreciated!
A: Does this work for you?
select user_name(),suser_sname()
Doh! I forgot to escape my code.
A: OK, it's working here. I'm seeing my Windows credentials when I update my tables. So I bet we missed a step. Let me put together a 1-2-3 sequence of what I did, and maybe we can track down where this is breaking for you.
*Create a new MSAccess database (empty)
*Click on the tables section
*Select external data
*Pick ODBC database
*Pick Link to the datasource by creating a linked table
*Select Machine datasource
*Pick New...
*System Datasource
*Pick SQL Server from the list and click Next, Finish.
*Give the new datasource a name and description, and select (local) for the server. Click Next.
*Pick "With Windows NT authentication using the network login ID". Click Next.
*Check Change the default database to, and pick the DB. Click Next. Click Finish.
*Test the datasource.
*Pick the table that the Trigger is associated with and click OK.
*Open the table in Access and modify one of the entries (the trigger doesn't fire on Insert, just Update)
*Select * from your audit table
A: If you specify SSPI in your connection string to Sql, I think your Windows credentials are provided.
A: I tried playing with Access a bit to see if I could find a way for you. I think you can specify a new datasource to your SQL table, and select Windows NT Authentication as your connection type.
A: Sure :)
There should be a section in Access called "External Data" (I'm running a new version of Access, so the menu choice might be different).
From this there should be an option to specify an ODBC connection.
I get an option to Link to the datasource by creating a linked table.
I then created a Machine datasource. I selected SqlServer from the drop down list. Then when I click Next, I'm prompted for how I want to authenticate.
A: CREATE TRIGGER testtrigger1
ON testdatatable
AFTER update
AS
BEGIN
INSERT INTO testtable (datecol,usercol1,usercol2) VALUES (getdate(),user_name(),suser_sname());
END
GO
A: We also have a database system that is used exclusively within the organisation and uses Windows NT logins. This function returns the current user's login name:
CREATE FUNCTION dbo.UserName() RETURNS varchar(50)
AS
BEGIN
RETURN (SELECT nt_username FROM master.dbo.sysprocesses WHERE spid = @@SPID)
END
You can use this function in your triggers.
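For example, a sketch of the earlier audit trigger rewritten to use this helper (the trigger name and table names reuse the earlier example and are illustrative):
CREATE TRIGGER testtrigger2
ON testdatatable
AFTER update
AS
BEGIN
    -- dbo.UserName() resolves the Windows NT login of the current connection
    INSERT INTO testtable (datecol, usercol1) VALUES (getdate(), dbo.UserName());
END
GO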
A: How many users of the app will there be? Is there a possibility of using Windows integrated authentication for SQL authentication?
Updated: If you can give each user a SQL login (Windows integrated) then you can pick up the logged-on user using the SYSTEM_USER function.
A: It should be
select user_name(), suser_sname()
(an earlier rendering lost the underscores to formatting).
A: You need to connect with integrated security, aka a trusted connection; see http://www.connectionstrings.com/?carrier=sqlserver
A: My solution would be not to let Access modify the data with linked tables.
I would only create the UI in Access and create an ADO connection to the server using Windows authentication in the connection string. Compile your Access application as an MDE to protect the VB code.
I would not issue SQL statements directly; I would call stored procedures to perform the changes in the database, and create the audit log entry in an atomic transaction.
The UI (Access) does not need to know the inner workings of the server. All it needs to do is request an update/insert/delete using the stored procedures you would create for this purpose. The server should handle the work.
Retrieve a recordset with ADO using a view with the NOLOCK hint implemented on the server and cache this data in Access for local display. Or retrieve a single record and lock only that row for editing.
Using linked tables, your users will be locking each other.
With ADO connections you will not have the trouble of setting up ODBC on every single client.
Create a table to store the server status. Your application will check it before any action; you can use it to close the server to the application in case you need to perform changes or maintenance.
Access is a great tool. But it should only handle its local data and not be allowed to mess with the precious server.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/12638",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: Upload binary data with Silverlight 2b2 I am trying to upload a file or stream of data to our web server and I cant find a decent way of doing this. I have tried both WebClient and WebRequest both have their problems.
WebClient
Nice and easy, but you do not get any notification that the asynchronous upload has completed, and the UploadProgressChanged event doesn't get called back with anything useful. The alternative is to convert your binary data to a string and use UploadStringAsync, because then at least you get an UploadStringCompleted; the problem is you need a lot of RAM for big files, as it's encoding all the data and uploading it in one go.
HttpWebRequest
A bit more complicated, but it still does what is needed. The problem I am getting is that even though it is called on a background thread (supposedly), it still seems to be blocking my UI and the whole browser until the upload has completed, which doesn't seem quite right.
Normal .NET does have some appropriate WebClient methods for OnUploadDataCompleted and progress, but these aren't available in Silverlight .NET ... a big omission, I think!
Does anyone have any solutions? I need to upload multiple binary files, preferably with progress, and I need to perform some actions when the files have completed their upload.
Look forward to some help with this.
A: The way I get around it is through INotifyPropertyChanged and event notification.
The essentials:
public void DoIt(){
this.IsUploading = true;
WebRequest postRequest = WebRequest.Create(new Uri(ServiceURL));
postRequest.BeginGetRequestStream(new AsyncCallback(RequestOpened), postRequest);
}
private void RequestOpened(IAsyncResult result){
WebRequest req = result.AsyncState as WebRequest;
req.BeginGetResponse(new AsyncCallback(GetResponse), req);
}
private void GetResponse(IAsyncResult result)
{
WebRequest req = result.AsyncState as WebRequest;
string serverresult = string.Empty;
WebResponse postResponse = req.EndGetResponse(result);
StreamReader responseReader = new StreamReader(postResponse.GetResponseStream());
this.IsUploading = false;
}
private bool _IsUploading;
public bool IsUploading
{
get { return _IsUploading; }
private set
{
_IsUploading = value;
OnPropertyChanged("IsUploading");
}
}
Right now Silverlight is a PITA because of the double and triple async calls.
A: Matt Berseth had some thoughts in this, might help:
http://mattberseth.com/blog/2008/07/aspnet_file_upload_with_realti_1.html
@Dan - Apologies mate, I coulda sworn Matt's article was about Silverlight, but it's quite clearly not. Blame it on those two big glasses of Chilean red I just downed. :-)
A: Thanks. The problem I can see with the article is that it's not talking about Silverlight, and Silverlight has limited functionality; they have removed some necessary events and methods for binary transfers for no apparent reason.
I need to use Silverlight as I need/want to upload multiple files, and HTML does not allow for a multiple file upload.
A: This was pretty much what I was doing; the problem I was getting was that my UI was getting locked up.
As you suggested what I was doing already, I presumed the problem was somewhere else, so I used the old divide and conquer to narrow it down. It wasn't the actual upload code; it was my attempt to dispatch a request to update my progress bar during the upload stream code.
Thanks for the advice.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/12642",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: How do I tell if a variable has a numeric value in Perl? Is there a simple way in Perl that will allow me to determine if a given variable is numeric? Something along the lines of:
if (is_number($x))
{ ... }
would be ideal. A technique that won't throw warnings when the -w switch is being used is certainly preferred.
A: Not perfect, but you can use a regex:
sub isnumber
{
shift =~ /^-?\d+\.?\d*$/;
}
A: The original question was how to tell if a variable was numeric, not if it "has a numeric value".
There are a few operators that have separate modes of operation for numeric and string operands, where "numeric" means anything that was originally a number or was ever used in a numeric context (e.g. in $x = "123"; 0+$x, before the addition, $x is a string, afterwards it is considered numeric).
One way to tell is this:
if ( length( do { no warnings "numeric"; $x & "" } ) ) {
print "$x is numeric\n";
}
If the bitwise feature is enabled, that makes & only a numeric operator and adds a separate string &. operator, you must disable it:
if ( length( do { no if $] >= 5.022, "feature", "bitwise"; no warnings "numeric"; $x & "" } ) ) {
print "$x is numeric\n";
}
(The bitwise feature is available in Perl 5.22 and above, and is enabled by default under "use 5.028;" or above.)
A: Check out the CPAN module Regexp::Common. I think it does exactly what you need and handles all the edge cases (e.g. real numbers, scientific notation, etc). e.g.
use Regexp::Common;
if ($var =~ /$RE{num}{real}/) { print q{a number}; }
A: A slightly more robust regex can be found in Regexp::Common.
It sounds like you want to know if Perl thinks a variable is numeric. Here's a function that traps that warning:
sub is_number{
my $n = shift;
my $ret = 1;
$SIG{"__WARN__"} = sub {$ret = 0};
eval { my $x = $n + 1 };
return $ret
}
Another option is to turn off the warning locally:
{
no warnings "numeric"; # Ignore "isn't numeric" warning
... # Use a variable that might not be numeric
}
Note that non-numeric variables will be silently converted to 0, which is probably what you wanted anyway.
A: A regexp is not perfect... this is:
use Try::Tiny;
sub is_numeric {
my ($x) = @_;
my $numeric = 1;
try {
use warnings FATAL => qw/numeric/;
0 + $x;
}
catch {
$numeric = 0;
};
return $numeric;
}
A: Use Scalar::Util::looks_like_number() which uses the internal Perl C API's looks_like_number() function, which is probably the most efficient way to do this.
Note that the strings "inf" and "infinity" are treated as numbers.
Example:
#!/usr/bin/perl
use warnings;
use strict;
use Scalar::Util qw(looks_like_number);
my @exprs = qw(1 5.25 0.001 1.3e8 foo bar 1dd inf infinity);
foreach my $expr (@exprs) {
print "$expr is", looks_like_number($expr) ? '' : ' not', " a number\n";
}
Gives this output:
1 is a number
5.25 is a number
0.001 is a number
1.3e8 is a number
foo is not a number
bar is not a number
1dd is not a number
inf is a number
infinity is a number
See also:
*
*perldoc Scalar::Util
*perldoc perlapi for looks_like_number
A: Usually number validation is done with regular expressions. This code will determine if something is numeric, as well as check for undefined variables so as not to throw warnings:
sub is_integer {
defined $_[0] && $_[0] =~ /^[+-]?\d+$/;
}
sub is_float {
defined $_[0] && $_[0] =~ /^[+-]?\d+(\.\d+)?$/;
}
Here's some reading material you should look at.
A: A simple (and maybe simplistic) answer to the question "is the content of $x numeric?" is the following:
if ($x eq $x+0) { .... }
It does a textual comparison of the original $x with the $x converted to a numeric value.
A: Try this:
if (($x !~ /\D/) && ($x ne "")) { ... }
A: I found this interesting though
if ( $value + 0 eq $value) {
# A number
push @args, $value;
} else {
# A string
push @args, "'$value'";
}
A: Personally I think that the way to go is to rely on Perl's internal context to make the solution bullet-proof. A good regexp could match all the valid numeric values and none of the non-numeric ones (or vice versa), but as there is a way of employing the same logic the interpreter is using it should be safer to rely on that directly.
As I tend to run my scripts with -w, I had to combine the idea of comparing the result of "value plus zero" to the original value with the no warnings based approach of @ysth:
do {
no warnings "numeric";
if ($x + 0 ne $x) { return "not numeric"; } else { return "numeric"; }
}
A: You can use Regular Expressions to determine if $foo is a number (or not).
Take a look here:
How do I determine whether a scalar is a number
A: There is a highly upvoted accepted answer around using a library function, but it includes the caveat that "inf" and "infinity" are accepted as numbers. I see some regex stuff for answers too, but they seem to have issues. I tried my hand at writing some regex that would work better (I'm sorry it's long)...
/^0$|^[+-]?[1-9][0-9]*$|^[+-]?[1-9][0-9]*(\.[0-9]+)?([eE]-?[1-9][0-9]*)?$|^[+-]?[0-9]?\.[0-9]+$|^[+-]?[1-9][0-9]*\.[0-9]+$/
That's really 5 patterns separated by "or"...
Zero: ^0$
It's a kind of special case. It's the only integer that can start with 0.
Integers: ^[+-]?[1-9][0-9]*$
That makes sure the first digit is 1 to 9 and allows 0 to 9 for any of the following digits.
Scientific Numbers: ^[+-]?[1-9][0-9]*(\.[0-9]+)?([eE]-?[1-9][0-9]*)?$
Uses the same idea that the base number can't start with zero, since in proper scientific notation you start with the highest significant digit (meaning the first digit won't be zero). However, my pattern allows for multiple digits left of the decimal point. That's incorrect, but I've already spent too much time on this... you could replace the [1-9][0-9]* with just [0-9] to force a single digit before the decimal point and allow for zeroes.
Short Float Numbers: ^[+-]?[0-9]?\.[0-9]+$
This is like a zero integer. It's special in that it can start with 0 if there is only one digit left of the decimal point. It does overlap the next pattern though...
Long Float Numbers: ^[+-]?[1-9][0-9]*\.[0-9]+$
This handles most float numbers and allows more than one digit left of the decimal point while still enforcing that the higher number of digits can't start with 0.
The simple function...
sub is_number {
my $testVal = shift;
return $testVal =~ /^0$|^[+-]?[1-9][0-9]*$|^[+-]?[1-9][0-9]*(\.[0-9]+)?([eE]-?[1-9][0-9]*)?$|^[+-]?[0-9]?\.[0-9]+$|^[+-]?[1-9][0-9]*\.[0-9]+$/;
}
A: if ( defined $x && $x !~ m/\D/ ) {}
or
$x = 0 if ! $x;
if ( $x !~ m/\D/) {}
This is a slight variation on Veekay's answer but let me explain my reasoning for the change.
Performing a regex match on an undefined value will cause error spew and will cause the code to exit in many, if not most, environments. Testing whether the value is defined, or setting a default case like I did in the alternative example, before running the expression will, at a minimum, save your error log.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/12647",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "97"
} |
Q: Variable parameter/result binding with prepared statements In a project that I'm about to wrap up, I've written and implemented an object-relational mapping solution for PHP. Before the doubters and dreamers cry out "how on earth?", relax -- I haven't found a way to make late static binding work -- I'm just working around it in the best way that I possibly can.
Anyway, I'm not currently using prepared statements for querying, because I couldn't come up with a way to pass a variable number of arguments to the bind_params() or bind_result() methods.
Why do I need to support a variable number of arguments, you ask? Because the superclass of my models (think of my solution as a hacked-up PHP ActiveRecord wannabe) is where the querying is defined, and so the find() method, for example, doesn't know how many parameters it would need to bind.
Now, I've already thought of building an argument list and passing a string to eval(), but I don't like that solution very much -- I'd rather just implement my own security checks and pass on statements.
Does anyone have any suggestions (or success stories) about how to get this done? If you can help me solve this first problem, perhaps we can tackle binding the result set (something I suspect will be more difficult, or at least more resource-intensive if it involves an initial query to determine table structure).
A: The more modern way to bind parameters dynamically is via the splat/spread operator (...).
Assuming:
*
*you have a non-empty array of values to bind to your query and
*your array values are suitably processed as string type values in the context of the query and
*your input array is called $values
Code for PHP5.6 and higher:
$stmt->bind_param(str_repeat('s', count($values)), ...$values);
In fact, all of the arguments fed to bind_param() can be unpacked with the splat/spread operator if you wish -- the data-type string just needs to be the first element of the array.
array_unshift($values, str_repeat('s', count($values)));
$stmt->bind_param(...$values);
A: In PHP you can pass a variable number of arguments to a function or method by using call_user_func_array. An example for a method would be:
call_user_func_array(array(&$stmt, 'bindparams'), $array_of_params);
The function will be called with each member in the array passed as its own argument.
A: You've got to make sure that $array_of_params is an array of references to variables, not the values themselves. It should be:
$array_of_params[0] = &$param_string; //reference to the variable that stores the types
And then...
$param_string .= "i";
$user_id_var = $_GET['user_id'];
$array_of_params[] = &$user_id_var; //reference to the variable that stores the value
Otherwise (if it is an array of values) you'll get:
PHP Warning: Parameter 2 to mysqli_stmt::bind_param() expected to be a reference
One more example:
$bind_names[] = implode($types); //putting types of parameters in a string
for ($i = 0; $i < count($params); $i++)
{
$bind_name = 'bind'.$i; //generate a name for variable bind1, bind2, bind3...
$$bind_name = $params[$i]; //create a variable with this name and put value in it
$bind_names[] = & $$bind_name; //put a link to this variable in array
}
and BOOOOOM:
call_user_func_array( array ($stmt, 'bind_param'), $bind_names);
A: I am not allowed to edit, but I believe in the code
call_user_func_array(array(&$stmt, 'bindparams'), $array_of_params);
The reference in front of $stmt is not necessary: since $stmt is the object and bindparams is the method of that object, passing it by reference adds nothing. It should be:
call_user_func_array(array($stmt, 'bindparams'), $array_of_params);
For more information, see the PHP manual on Callback Functions.
A: call_user_func_array(array(&$stmt, 'bindparams'), $array_of_params);
Didn't work for me in my environment but this answer set me on the right track. What actually worked was:
$sitesql = '';
$array_of_params = array();
foreach($_POST['multiselect'] as $value){
if($sitesql!=''){
$sitesql .= "OR siteID=? ";
$array_of_params[0] .= 'i';
$array_of_params[] = $value;
}else{
$sitesql = " siteID=? ";
$array_of_params[0] .= 'i';
$array_of_params[] = $value;
}
}
$stmt = $linki->prepare("SELECT IFNULL(SUM(hours),0) FROM table WHERE ".$sitesql." AND week!='0000-00-00'");
call_user_func_array(array(&$stmt, 'bind_param'), $array_of_params);
$stmt->execute();
| {
"language": "en",
"url": "https://stackoverflow.com/questions/12656",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9"
} |
Q: Can I create a ListView with dynamic GroupItemCount? I'm using the new ASP.Net ListView control to list database items that will be grouped together in sections based on one of their columns like so:
region1
store1
store2
store3
region2
store4
region3
store5
store6
Is this possible to do with the ListView's GroupItemTemplate? Every example I have seen uses a static number of items per group, which won't work for me. Am I misunderstanding the purpose of the GroupItem?
A: I haven't used GroupItemCount, but I have taken this example written up by Matt Berseth titled Building a Grouping Grid with the ASP.NET 3.5 LinqDataSource and ListView Controls and have grouped items by a key just like you want.
It involves using an outer and inner ListView control. Works great, give it a try.
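The markup ends up shaped roughly like this (a sketch only -- the control IDs, field names and the Stores child collection are hypothetical):
<asp:ListView ID="RegionList" runat="server">
    <LayoutTemplate>
        <ul><asp:PlaceHolder ID="itemPlaceholder" runat="server" /></ul>
    </LayoutTemplate>
    <ItemTemplate>
        <li><%# Eval("RegionName") %>
            <asp:ListView ID="StoreList" runat="server" DataSource='<%# Eval("Stores") %>'>
                <LayoutTemplate>
                    <ul><asp:PlaceHolder ID="itemPlaceholder" runat="server" /></ul>
                </LayoutTemplate>
                <ItemTemplate>
                    <li><%# Eval("StoreName") %></li>
                </ItemTemplate>
            </asp:ListView>
        </li>
    </ItemTemplate>
</asp:ListView>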
A: Make sure you're doing a DataBind AFTER setting the GroupItemCount property. I had the same problem and that's what I did to resolve it.
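In other words, roughly (the control and data-source names here are hypothetical):
ListView1.GroupItemCount = groupSize;  // set the dynamic count first
ListView1.DataSource = GetStores();    // then assign the data
ListView1.DataBind();                  // and bind last, so the new count takes effect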
A: I tried using GroupItemCount programmatically but it didn't give me the expected results.
I followed Otto's suggestion and implemented an outer and inner ListView control. This seems to be the best available solution.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/12657",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Efficient JPEG Image Resizing in PHP What's the most efficient way to resize large images in PHP?
I'm currently using the GD function imagecopyresampled to take high resolution images, and cleanly resize them down to a size for web viewing (roughly 700 pixels wide by 700 pixels tall).
This works great on small (under 2 MB) photos and the entire resize operation takes less than a second on the server. However, the site will eventually service photographers who may be uploading images up to 10 MB in size (or images up to 5000x4000 pixels in size).
Doing this kind of resize operation with large images tends to increase the memory usage by a very large margin (larger images can spike the memory usage for the script past 80 MB). Is there any way to make this resize operation more efficient? Should I be using an alternate image library such as ImageMagick?
Right now, the resize code looks something like this
function makeThumbnail($sourcefile, $endfile, $thumbwidth, $thumbheight, $quality) {
// Takes the sourcefile (path/to/image.jpg) and makes a thumbnail from it
// and places it at endfile (path/to/thumb.jpg).
// Load image and get image size.
$img = imagecreatefromjpeg($sourcefile);
$width = imagesx( $img );
$height = imagesy( $img );
if ($width > $height) {
$newwidth = $thumbwidth;
$divisor = $width / $thumbwidth;
$newheight = floor( $height / $divisor);
} else {
$newheight = $thumbheight;
$divisor = $height / $thumbheight;
$newwidth = floor( $width / $divisor );
}
// Create a new temporary image.
$tmpimg = imagecreatetruecolor( $newwidth, $newheight );
// Copy and resize old image into new image.
imagecopyresampled( $tmpimg, $img, 0, 0, 0, 0, $newwidth, $newheight, $width, $height );
// Save thumbnail into a file.
imagejpeg( $tmpimg, $endfile, $quality);
// release the memory
imagedestroy($tmpimg);
imagedestroy($img);
A: From your question, it seems you are kinda new to GD, so I will share some experience of mine;
maybe this is a bit off topic, but I think it will be helpful to someone new to GD like you:
Step 1, validate file. Use the following function to check if the $_FILES['image']['tmp_name'] file is a valid file:
function getContentsFromImage($image) {
if (@is_file($image) == true) {
return file_get_contents($image);
} else {
throw new \Exception('Invalid image');
}
}
$contents = getContentsFromImage($_FILES['image']['tmp_name']);
Step 2, get file format. Try the following function, which uses the finfo extension, to check the file format of the contents. You might ask: why not just use $_FILES["image"]["type"] to check the file format? Because it ONLY checks the file extension, not the file contents; if someone renames a file originally called world.png to world.jpg, $_FILES["image"]["type"] will return jpeg, not png, so it may return the wrong result.
function getFormatFromContents($contents) {
$finfo = new \finfo();
$mimetype = $finfo->buffer($contents, FILEINFO_MIME_TYPE);
switch ($mimetype) {
case 'image/jpeg':
return 'jpeg';
break;
case 'image/png':
return 'png';
break;
case 'image/gif':
return 'gif';
break;
default:
throw new \Exception('Unknown or unsupported image format');
}
}
$format = getFormatFromContents($contents);
Step 3, get GD resource. Get a GD resource from the contents we obtained before:
function getGDResourceFromContents($contents) {
$resource = @imagecreatefromstring($contents);
if ($resource == false) {
throw new \Exception('Cannot process image');
}
return $resource;
}
$resource = getGDResourceFromContents($contents);
Step 4, get image dimensions. Now you can get the image dimensions with the following simple code:
$width = imagesx($resource);
$height = imagesy($resource);
Now, let's see what variables we've got from the original image:
$contents, $format, $resource, $width, $height
OK, let's move on.
Step 5, calculate resize arguments. This step is related to your question; the purpose of the following function is to get resize arguments for the GD function imagecopyresampled(). The code is kinda long, but it works great, and it even has three options: stretch, shrink, and fill.
stretch: output image's dimension is the same as the new dimension you set. Won't keep height/width ratio.
shrink: output image's dimension won't exceed the new dimension you give, and keep image height/width ratio.
fill: output image's dimension will be the same as new dimension you give, it will crop & resize image if needed, and keep image height/width ratio. This option is what you need in your question.
function getResizeArgs($width, $height, $newwidth, $newheight, $option) {
if ($option === 'stretch') {
if ($width === $newwidth && $height === $newheight) {
return false;
}
$dst_w = $newwidth;
$dst_h = $newheight;
$src_w = $width;
$src_h = $height;
$src_x = 0;
$src_y = 0;
} else if ($option === 'shrink') {
if ($width <= $newwidth && $height <= $newheight) {
return false;
} else if ($width / $height >= $newwidth / $newheight) {
$dst_w = $newwidth;
$dst_h = (int) round(($newwidth * $height) / $width);
} else {
$dst_w = (int) round(($newheight * $width) / $height);
$dst_h = $newheight;
}
$src_x = 0;
$src_y = 0;
$src_w = $width;
$src_h = $height;
} else if ($option === 'fill') {
if ($width === $newwidth && $height === $newheight) {
return false;
}
if ($width / $height >= $newwidth / $newheight) {
$src_w = (int) round(($newwidth * $height) / $newheight);
$src_h = $height;
$src_x = (int) round(($width - $src_w) / 2);
$src_y = 0;
} else {
$src_w = $width;
$src_h = (int) round(($width * $newheight) / $newwidth);
$src_x = 0;
$src_y = (int) round(($height - $src_h) / 2);
}
$dst_w = $newwidth;
$dst_h = $newheight;
}
if ($src_w < 1 || $src_h < 1) {
throw new \Exception('Image width or height is too small');
}
return array(
'dst_x' => 0,
'dst_y' => 0,
'src_x' => $src_x,
'src_y' => $src_y,
'dst_w' => $dst_w,
'dst_h' => $dst_h,
'src_w' => $src_w,
'src_h' => $src_h
);
}
$args = getResizeArgs($width, $height, 150, 170, 'fill');
Step 6, resize the image. Pass the $args, $width, $height, $format and $resource we got above into the following function and get the new resource for the resized image:
function runResize($width, $height, $format, $resource, $args) {
if ($args === false) {
return; //if $args equal to false, this means no resize occurs;
}
$newimage = imagecreatetruecolor($args['dst_w'], $args['dst_h']);
if ($format === 'png') {
imagealphablending($newimage, false);
imagesavealpha($newimage, true);
$transparentindex = imagecolorallocatealpha($newimage, 255, 255, 255, 127);
imagefill($newimage, 0, 0, $transparentindex);
} else if ($format === 'gif') {
$transparentindex = imagecolorallocatealpha($newimage, 255, 255, 255, 127);
imagefill($newimage, 0, 0, $transparentindex);
imagecolortransparent($newimage, $transparentindex);
}
imagecopyresampled($newimage, $resource, $args['dst_x'], $args['dst_y'], $args['src_x'], $args['src_y'], $args['dst_w'], $args['dst_h'], $args['src_w'], $args['src_h']);
imagedestroy($resource);
return $newimage;
}
$newresource = runResize($width, $height, $format, $resource, $args);
Step 7, get new contents. Use the following function to get the contents from the new GD resource:
function getContentsFromGDResource($resource, $format) {
ob_start();
switch ($format) {
case 'gif':
imagegif($resource);
break;
case 'jpeg':
imagejpeg($resource, NULL, 100);
break;
case 'png':
imagepng($resource, NULL, 9);
}
$contents = ob_get_contents();
ob_end_clean();
return $contents;
}
$newcontents = getContentsFromGDResource($newresource, $format);
Step 8, get the extension. Use the following function to get the extension from the image format (note: image format is not the same as image extension):
function getExtensionFromFormat($format) {
switch ($format) {
case 'gif':
return 'gif';
break;
case 'jpeg':
return 'jpg';
break;
case 'png':
return 'png';
}
}
$extension = getExtensionFromFormat($format);
Step 9, save the image. If we have a user named mike, you can do the following; it will save to the same folder as this PHP script:
$user_name = 'mike';
$filename = $user_name . '.' . $extension;
file_put_contents($filename, $newcontents);
Step 10, destroy the resource. Don't forget to destroy the GD resource!
imagedestroy($newresource);
or you can write all your code into a class, and simply use the following:
public function __destruct() {
@imagedestroy($this->resource);
}
TIPS
I recommend not converting the file format that the user uploads; you will run into many problems.
A: People say that ImageMagick is much faster. The best thing is to compare both libraries yourself and measure it.
*
*Prepare 1000 typical images.
*Write two scripts -- one for GD, one
for ImageMagick (a timing sketch for the GD side follows below).
*Run both of them a few times.
*Compare results (total execution
time, CPU and I/O usage, result
image quality).
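For the GD script, a crude timing harness could reuse the makeThumbnail() function from the question (the paths and sizes are placeholders):
$start = microtime(true);
foreach (glob('/tmp/samples/*.jpg') as $file) {
    makeThumbnail($file, '/tmp/out/' . basename($file), 700, 700, 80);
}
printf("GD took %.2f seconds\n", microtime(true) - $start);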
What is best for everyone else might not be the best for you.
Also, in my opinion, ImageMagick has a much better API.
A: I suggest that you work something along these lines:
*
*Perform a getimagesize( ) on the uploaded file to check image type and size
*Save any uploaded JPEG image smaller than 700x700px in to the destination folder "as-is"
*Use GD library for medium size images (see this article for code sample: Resize Images Using PHP and GD Library)
*Use ImageMagick for large images. You can use ImageMagick in background if you prefer.
To use ImageMagick in background, move the uploaded files to a temporary folder and schedule a CRON job that "convert"s all files to jpeg and resizes them accordingly. See command syntax at: imagemagick-command line processing
You can prompt the user that file is uploaded and scheduled to be processed. The CRON job could be scheduled to run daily at a specific interval. The source image could be deleted after processing to assure that an image is not processed twice.
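The background job itself can be a small PHP script that shells out to ImageMagick's convert; a sketch, assuming convert is on the server's PATH and using placeholder folder names:
$pending = glob('/var/uploads/pending/*.jpg');
foreach ($pending as $src) {
    $dst = '/var/uploads/processed/' . basename($src);
    // -resize 700x700 shrinks the image to fit a 700x700 box, keeping the aspect ratio
    exec('convert ' . escapeshellarg($src) . ' -resize 700x700 ' . escapeshellarg($dst), $output, $status);
    if ($status === 0) {
        unlink($src); // remove the source so it is not processed twice
    }
}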
A: I've heard big things about the Imagick library; unfortunately I couldn't install it on my work computer, nor at home (and trust me, I spent hours and hours on all kinds of forums).
Afterwards, I decided to try this PHP class:
http://www.verot.net/php_class_upload.htm
It's pretty cool and I can resize all kinds of images (I can convert them to JPG also).
A: ImageMagick is multithreaded, so it appears to be faster, but actually uses a lot more resources than GD. If you ran several PHP scripts in parallel all using GD then they'd beat ImageMagick in speed for simple operations. ExactImage is less powerful than ImageMagick but a lot faster, though not available through PHP, you'll have to install it on the server and run it through exec.
A: Here's a snippet from the php.net docs that I've used in a project and works fine:
<?php
function fastimagecopyresampled (&$dst_image, $src_image, $dst_x, $dst_y, $src_x, $src_y, $dst_w, $dst_h, $src_w, $src_h, $quality = 3) {
// Plug-and-Play fastimagecopyresampled function replaces much slower imagecopyresampled.
// Just include this function and change all "imagecopyresampled" references to "fastimagecopyresampled".
// Typically from 30 to 60 times faster when reducing high resolution images down to thumbnail size using the default quality setting.
// Author: Tim Eckel - Date: 09/07/07 - Version: 1.1 - Project: FreeRingers.net - Freely distributable - These comments must remain.
//
// Optional "quality" parameter (defaults is 3). Fractional values are allowed, for example 1.5. Must be greater than zero.
// Between 0 and 1 = Fast, but mosaic results, closer to 0 increases the mosaic effect.
// 1 = Up to 350 times faster. Poor results, looks very similar to imagecopyresized.
// 2 = Up to 95 times faster. Images appear a little sharp, some prefer this over a quality of 3.
// 3 = Up to 60 times faster. Will give high quality smooth results very close to imagecopyresampled, just faster.
// 4 = Up to 25 times faster. Almost identical to imagecopyresampled for most images.
// 5 = No speedup. Just uses imagecopyresampled, no advantage over imagecopyresampled.
if (empty($src_image) || empty($dst_image) || $quality <= 0) { return false; }
if ($quality < 5 && (($dst_w * $quality) < $src_w || ($dst_h * $quality) < $src_h)) {
$temp = imagecreatetruecolor ($dst_w * $quality + 1, $dst_h * $quality + 1);
imagecopyresized ($temp, $src_image, 0, 0, $src_x, $src_y, $dst_w * $quality + 1, $dst_h * $quality + 1, $src_w, $src_h);
imagecopyresampled ($dst_image, $temp, $dst_x, $dst_y, 0, 0, $dst_w, $dst_h, $dst_w * $quality, $dst_h * $quality);
imagedestroy ($temp);
} else imagecopyresampled ($dst_image, $src_image, $dst_x, $dst_y, $src_x, $src_y, $dst_w, $dst_h, $src_w, $src_h);
return true;
}
?>
http://us.php.net/manual/en/function.imagecopyresampled.php#77679
A: For larger images use phpThumb(). Here is how to use it: http://abcoder.com/php/problem-with-resizing-corrupted-images-using-php-image-functions/. It also works for large corrupted images.
A: phpThumb uses ImageMagick whenever possible for speed (falling back to GD if necessary) and seems to cache pretty well to reduce the load on the server. It's pretty lightweight to try out (to resize an image, just call phpThumb.php with a GET query that includes the graphic filename and output dimensions), so you might give it a shot to see if it meets your needs.
A: For larger images, use libjpeg to resize on image load in ImageMagick, thereby significantly reducing memory usage and improving performance; this is not possible with GD.
$im = new Imagick();
try {
$im->pingImage($file_name);
} catch (ImagickException $e) {
throw new Exception(_('Invalid or corrupted image file, please try uploading another image.'));
}
$width = $im->getImageWidth();
$height = $im->getImageHeight();
if ($width > $config['width_threshold'] || $height > $config['height_threshold'])
{
try {
/* send thumbnail parameters to Imagick so that libjpeg can resize images
* as they are loaded instead of consuming additional resources to pass back
* to PHP.
*/
$fitbyWidth = ($config['width_threshold'] / $width) > ($config['height_threshold'] / $height);
$aspectRatio = $height / $width;
if ($fitbyWidth) {
$im->setSize($config['width_threshold'], abs($width * $aspectRatio));
} else {
$im->setSize(abs($height / $aspectRatio), $config['height_threshold']);
}
$im->readImage($file_name);
/* Imagick::thumbnailImage(fit = true) has a bug: it does not fit both dimensions correctly
*/
// $im->thumbnailImage($config['width_threshold'], $config['height_threshold'], true);
// workaround:
if ($fitbyWidth) {
$im->thumbnailImage($config['width_threshold'], 0, false);
} else {
$im->thumbnailImage(0, $config['height_threshold'], false);
}
$im->setImageFileName($thumbnail_name);
$im->writeImage();
}
catch (ImagickException $e)
{
header('HTTP/1.1 500 Internal Server Error');
throw new Exception(_('An error occurred resizing the image.'));
}
}
/* cleanup Imagick
*/
$im->destroy();
| {
"language": "en",
"url": "https://stackoverflow.com/questions/12661",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "84"
} |
Q: Resources for getting started with web development? Let's say I woke up today and wanted to create a clone of StackOverflow.com, and reap the financial windfall of millions $0.02 ad clicks. Where do I start?
My understanding of web technologies are:
*
*HTML is what is ultimately displayed
*CSS is a mechanism for making HTML look pleasing
*ASP.NET lets you add functionality using .NET(?)
*JavaScript does stuff
*AJAX does asyncronous stuff
*... and the list goes on!
To write a good website, do I just need to buy seven books and read them all? Are Web 2.0 sites really the synergy of all these technologies?
Where does someone go to get started down the path to creating professional-looking web sites, and what steps are there along the way?
A: I think that this series of Opera Articles will give you a good idea of web standards and basic concepts of web development.
2014 update: the Opera docs were relocated in 2012 to this section of webplatform.org:
http://docs.webplatform.org/wiki/Main_Page
A: Ian's answer has a lot of weight. You could buy all those books and read them all and know nothing about web development. What you really need to do is start with something that is not nearly as big as Stack Overflow. Start with your personal site. Read some web dev/CSS articles on A List Apart. Learn about doctypes and why to use them. Add some CSS and change the colors around. Go over to quirksmode and peruse the site. Add some JS. Follow some links on Crockford's site. You will probably stumble across his awesome video lectures, which you should watch. Then after that go back to all the JS that you wrote and rewrite it. Then pick a server-side language that you want to learn. Python is pretty easy, but it really doesn't matter what you pick. Then come back and integrate all those together in your site. At this point you will at least be getting started with web development and will have worked with several different technologies.
EDIT: I forgot to mention. READ BOOKS.
Many developers that I have worked with in the past have gotten through their career without really advancing after a certain point. I could be totally wrong, but I attribute it to not reading enough books and relying on using their same bad code over and over.
A: You could go out and buy a bunch of books and start reading them and quickly get overwhelmed in the seemingly massive learning curve it takes to go from nowhere, which is where it appears you are, to a rich internet entrepreneur, which is where you want to be.
Alternatively, and what I would suggest is, you could define a problem you want to solve, and then go about finding the solution to that problem. Start with something small. "I have a problem: I don't have a web site about myself.". Define what you need to do to solve that problem, learn the basics, and do it. Then, define a new problem, which probably relies on the solution to the first problem, find what you need to do, and do it.
This is how all technology professionals evolve. My first website was a personal site with nothing but text. Then I added some jokes and some movie quotes. Then I got tired of man-handling all the updates, so I learned how to put them into a database and retrieve them from the database for display. It goes on and on.
Call me when you've got more money from your financial windfall than you know what to do with.
A: If you really just want to jump in with both feet, I would suggest looking at ColdFusion from Adobe. The developer edition is free and runs on windows, os x and linux. The documentation is authoritative and extensive, there is a very active developer community and only a few books you might want to dig into. The definitive guide is a series of books that can be found on Amazon
The nice thing about ColdFusion is that you can use it as a stepping stone to other languages and remain productive along the way. You can even mix it together with Java since it is itself written in java. There are also lots of goodies built in that you would have to scour the web for or pay more for in other languages. Things like full text indexing, graphing, server monitoring, ajax based controls, flash/flex integration, asynch os calls, etc.
You even have the choice of building object oriented code or procedural code, although some people would not count that as a benefit. Those people rarely agree on which style should win, though.
Cheers!
A: I think sitepoint is the best resource for learning best practices in web development. They have great articles, good references, and probably one of the best forums. However the people there can be a bit grumpy. ;)
If you are a real nerd, reading the specs for HTML 5 and CSS is also a good way to learn.
A: While I have built my knowledge largely based on using the internet to search out what I want to know (w3schools.com helped a lot, as did A List Apart), a few good books have helped me along the way, though they have been platform/language-specific, so I'll avoid mentioning them unless someone is curious. For me, at least, having a book open so that I don't have to resize windows or switch between them is very valuable.
The first part of your list is ok, but the last few items need tweaking. ASP.NET adds server-side functionality (for the most part) to your application. This lives outside of the browser and is thus quite powerful and easily shared with a variety of end-users.
The problem (some say) with server-side processing is that your application must make a new HTTP request when you ask for an action to be performed. So if you click on a link to a page that yields a new set of data, you don't get instant results. The page reloads, or loads a separate page.
Javascript solves this to a degree--it allows you to respond to user input instantaneously. Do you want to display the sum of two numbers when the user clicks a button? You can do it with Javascript.
The problem with Javascript is that it can't talk directly to databases, or explore your server's file system, or other stuff like that. It lives in the browser--period.
AJAX bridges the gap between your user's browser and your server. With AJAX, Javascript makes the HTTP request without refreshing your page or loading a new one. Javascript talks to a server-side script (not necessarily ASP, either--works with PHP, Rails, Coldfusion, etc.) and sends and receives information. And because Javascript isn't dependent on page loads, a quick, snappy AJAX script can almost give the feeling of a common desktop application, in which you don't have to wait for HTTP requests when performing simple actions on your application's data.
A: I'm with Ian on this one. Reading books is all well and good, but nothing beats getting stuck in. I actually started with a Dummies Guide to ASP (that'd be "classic" ASP), back in 1999.
If I was going to start from scratch today I'd be looking at something that covered a full stack solution, whether Apache/PHP/MySQL, RoR or whatever.
ATM I have no experience of Rails, but it might be a pretty good place to start as it includes a lot of stuff that you'd have to figure out early on otherwise (integration with a Scriptaculous, a JS framework) - you can always learn what going on under the hood at a later date.
.NET is always an option, and if you're comfortable with Visual Studio it may be the way to go, but it's not the easiest thing to pick up otherwise.
If you know a bit of HTML but are basically new to server-side programming you might look at ColdFusion. It's actually extremely powerful and like Rails includes lots of "out of the box" benefits. There's a Swiss company called Railo who are currently in the process of releasing an Open Source ColdFusion engine that is affiliated with JBoss.
Last and not least - don't forget databases! Sooner or later you'll need to get to grips with some pretty serious SQL...
A: CFML (aka "ColdFusion" even though that's really an Adobe product, not the language) is definitely easy to learn, and if you want FOSS for CFML, in addition to Railo you can use Open BlueDragon which is a GPL CFML engine.
A: Designing with Web Standards is a great first read!
http://www.zeldman.com/dwws/
A: I would recommend this book:
http://www.amazon.com/MCTS-Self-Paced-Training-Exam-70-528/dp/0735623341/ref=sr_1_1?ie=UTF8&s=books&qid=1218830714&sr=8-1
I have just read it to take the exam, and although I knew the web theory part, I found it to be of great value.
This of course is a ASP.NET specific book, but that is what I would recommend learning anyways.
After you learn all the ASP.NET stuff, I would suggest reading up on JQuery.
Happy coding :)
| {
"language": "en",
"url": "https://stackoverflow.com/questions/12669",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14"
} |
Q: How can I pass data from an aspx page to an ascx modal popup? I'm fairly new to ASP.NET and trying to learn how things are done. I come from a C# background so the code-behind portion is easy, but thinking like a web developer is unfamiliar.
I have an aspx page that contains a grid of checkboxes. I have a button that is coded via a Button_Click event to collect a list of which rows are checked and create a session variable out of that list. The same button is referenced (via TargetControlID) by my ascx page's ModalPopupExtender which controls the panel on the ascx page.
When the button is clicked, the modal popup opens but the Button_Click event is never fired, so the modal doesn't get its session data.
Since the two pages are separate, I can't call the ModalPopupExtender from the aspx.cs code, I can't reach the list of checkboxes from the ascx.cs code, and I don't see a way to populate my session variable and then programmatically activate some other hidden button or control which will then open my modal popup.
Any thoughts?
A: All a user control (.ascx) file is is a set of controls that you have grouped together to provide some reusable functionality. The controls defined in it are still added to the page's control collection (.aspx) during the page lifecycle. The ModalPopupExtender uses JavaScript and DHTML to show and hide the controls in the user control client-side. What you are seeing is that the click event is being handled client-side by the ModalPopupExtender and it is canceling the post-back to the server. This is the default behavior by design. You certainly can access the page's control collection from the code-behind of your user control, though, because it is all part of the same control tree. Just use the FindControl(xxx) method of any control to search for the child of it you need.
A: After some research following DancesWithBamboo's answer, I figured out how to make it work.
An example reference to my ascx page within my aspx page:
<uc1:ChildPage ID="MyModalPage" runat="server" />
The aspx code-behind to grab and open the ModalPopupExtender (named modalPopup) would look like this:
AjaxControlToolkit.ModalPopupExtender mpe =
(AjaxControlToolkit.ModalPopupExtender)
MyModalPage.FindControl("modalPopup");
mpe.Show();
A: Sorry, but I'm confused. You can't call an ascx directly, so...
Is your modal code that you are calling from within the same page, like a hidden panel, etc;
Or is it another aspx page that you are calling on a click event?
| {
"language": "en",
"url": "https://stackoverflow.com/questions/12671",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
} |
Q: IronPython and ASP.NET Has anyone built a website with IronPython and ASP.NET. What were your experiences and is the combination ready for prime-time?
A: The current version of ASP.NET integration for IronPython is not very up-to-date and is more of a "proof-of-concept." I don't think I'd build a production website based on it.
Edit: I have a very high level of expectation for how things like this should work, and might be setting the bar a little high. Maybe you should take what's in "ASP.NET Futures", write a test application for it and see how it works for you. If you're successful, I'd like to hear about it. Otherwise, I think there should be a newer CTP of this in the next six months.
(I'm a developer on IronPython and IronRuby.)
Edit 2: Since I originally posted this, a newer version has been released.
A: Check out the Dynamic Languages in ASP.NET page on Codeplex. This has the newest IronPython bits. It doesn't give you any Visual Studio integration, other than the sample website project, but that's coming.
A: Keep a look out for ASP.NET MVC
The IronRuby guys have got some internal builds of MVC to work with IronRuby, and IronPython 2 and IronRuby have a lot of code in common with the DLR.
I'm not sure if they'll support IronPython/IronRuby when MVC is released, but it's definitely worth keeping your eye on anyway - The old ASP.NET forms-based development model is old, busted, and the sooner it goes away the better.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/12692",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
} |
Q: Returning DataTables in WCF/.NET I have a WCF service from which I want to return a DataTable. I know that this is often a highly-debated topic, as far as whether or not returning DataTables is a good practice. Let's put that aside for a moment.
When I create a DataTable from scratch, as below, there are no problems whatsoever. The table is created, populated, and returned to the client, and all is well:
[DataContract]
public DataTable GetTbl()
{
DataTable tbl = new DataTable("testTbl");
for(int i=0;i<100;i++)
{
tbl.Columns.Add(i.ToString());
tbl.Rows.Add(new string[]{"testValue"});
}
return tbl;
}
However, as soon as I go out and hit the database to create the table, as below, I get a CommunicationException "The underlying connection was closed: The connection was closed unexpectedly."
[DataContract]
public DataTable GetTbl()
{
DataTable tbl = new DataTable("testTbl");
//Populate table with SQL query
return tbl;
}
The table is being populated correctly on the server side. It is significantly smaller than the test table that I looped through and returned, and the query is small and fast - there is no issue here with timeouts or large data transfer. The same exact functions and DataContracts/ServiceContracts/BehaviorContracts are being used.
Why would the way that the table is being populated have any bearing on the table returning successfully?
A: For anyone having similar problems, I have solved my issue. It was several-fold.
*
*As Darren suggested and Paul backed up, the Max..Size properties in the configuration needed to be enlarged. The SvcTraceViewer utility helped in determining this, but it still does not always give the most helpful error messages.
*It also appears that when the Service Reference is updated on the client side, the configuration will sometimes not update properly (e.g. Changing config values on the server will not always properly update on the client. I had to go in and change the Max..Size properties multiple times on both the client and server sides in the course of my debugging)
*For a DataTable to be serializable, it needs to be given a name. The default constructor does not give the table a name, so:
return new DataTable();
will not be serializable, while:
return new DataTable("someName");
will name the table whatever is passed as the parameter.
Note that a table can be given a name at any time by assigning a string to the TableName property of the DataTable.
var table = new DataTable();
table.TableName = "someName";
Hopefully that will help someone.
A: Other than setting maximum values for all binding attributes, make sure each table you are passing to or returning from the web service has a table name, meaning the table.TableName property should not be blank.
A: I added the DataTable to a DataSet and returned the table like so...
DataTable result = new DataTable("result");
//linq to populate the table
DataSet ds = new DataSet();
ds.Tables.Add(result);
return ds.Tables[0];
Hope it helps
:)
A: The attribute you want is OperationContract (on interface) / Operation Behavior (on method):
[ServiceContract]
public interface ITableProvider
{
[OperationContract]
DataTable GetTbl();
}
[OperationBehavior]
public DataTable GetTbl(){
DataTable tbl = new DataTable("testTbl");
//Populate table with SQL query
return tbl;
}
Also, in the... I think service configuration... you want to specify that errors can be sent. You might be hitting an error that is something like the message size is to big, etc. You can fix that by fudging with the reader quotas and such.
By default wsHttpBinding has a receive size quota of like 65 KB, so if the serialized data table's XML is more than that, it would throw an error (and I'm 95% sure the data table is more than 65 KB with data in it).
You can change the settings for the reader quotas and such in the web.config / app.config or you can set it on a binding instance in code. But yeah, that's probably what your problem is, if you haven't changed it by default.
WSHttpBindingBase Members - Look at ReaderQuotas property as well as the MaxReceivedMessageSize property.
A: You probably blew your quota - the datatable is larger than the allowed maximum packet size for your connection.
You probably need to set MaxReceivedMessageSize and MaxBufferSize to higher values on your connection.
A: The best way to diagnose these kinds of WCF errors (the ones that really don't tell you much) is to enable tracing. In your web.config file, add the following:
<system.diagnostics>
<sources>
<source name="System.ServiceModel"
switchValue="Information"
propagateActivity="true">
<listeners>
<add name="ServiceModelTraceListener"
type="System.Diagnostics.XmlWriterTraceListener, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089"
initializeData="wcf-traces.svclog"/>
</listeners>
</source>
</sources>
</system.diagnostics>
You can then open the resulting file in the SvcTraceViewer.exe utility which comes in the .NET Framework SDK (or with Visual Studio). On my machine, it can be found at %PROGRAMFILES%\Microsoft SDKs\Windows\v6.0A\Bin\SvcTraceViewer.exe.
Just look for an error message (in bold red) and that will tell you specifically what your problem is.
A: There are 3 reasons a DataTable return type can fail in WCF services:
*
*You have to specify the DataTable name, like:
MyTable = new DataTable("tableName");
*When you are adding the reference on the client side of the WCF service, select the reusable DLL System.Data
*Specify the attribute on the DataTable member variable, like:
[DataMember]
public DataTable MyTable{ get; set; }
A: I think Darren is most likely correct - the default values provided for WCF are laughably small and if you bump into them you end up with errors that can be difficult to track down. They seem to appear as soon as you attempt to do anything beyond a simple test case. I wasted more time than I'd like to admit debugging problems that turned out to be related to the various configuration (size) settings on both the client and server. I think I ended up modifying almost all of them, ex. MaxBufferPoolSize, MaxBufferSize, MaxConnections, MaxReceivedMessageSize, etc.
Having said that, the SvcTraceViewer utility also mentioned is great. I did run into a few cases where it wasn't as helpful as I would have liked, but overall it's a nice tool for analyzing the communications flow and errors.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/12702",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "51"
} |
Q: Best way to custom edit records in ASP.NET? I'm coming from a Rails background and doing some work on a ASP.NET project (not ASP MVC). Newbie question: what's the easiest way to make a custom editor for a table of records?
For example: I have a bunch of data rows and want to change the "category" field on each -- maybe a dropdown, maybe a link, maybe the user types it in.
In Rails, I'd iterate over the rows to build a table, and would have a form for each row. The form would have an input box or dropdown, and submit the data to a controller like "/item/edit/15?category=foo" where 15 was the itemID and the new category was "foo".
I'm new to the ASP.NET model and am not sure of the "right" way to do this -- just the simplest way to get back the new data & save it off. Would I make a custom control and append it to each row? Any help appreciated.
A: You can REALLY cheat nowadays and take a peek at the new Dynamic Data that comes with .NET 3.5 SP1. Scott Guthrie has a blog entry demoing how quickly and easily it'll flow for you here:
http://weblogs.asp.net/scottgu/archive/2007/12/14/new-asp-net-dynamic-data-support.aspx
Without getting THAT cutting edge, I'd use the XSD generator to generate a strongly typed DataSet that coincides with the table in question. This will also generate the TableAdapter you can use to do all your CRUD statements.
From there, bind it to a DataGrid and leverage all the standard templates/events involved with that, such as EditIndex, SelectedIndex, RowEditing, RowUpdated, etc.
I've been doing this since the early 1.0 days of .NET and this kind of functionality has only gotten more and more streamlined with every update of the Framework.
EDIT: I want to give a quick nod to the Matt Berseth blog as well. I've been following a lot of his stuff for a while now and it is great!
A: There are a few controls that will do this for you, with varying levels of complexity depending on their relative flexibility.
The traditional way to do this would be the DataGrid control, which gives you a table layout. If you want something with more flexibility in appearance, the DataList and ListView controls also have built-in support for editing, inserting or deleting fields as well.
Check out Matt Berseth's blog for some excellent examples of asp.net controls in action.
A: Thanks for the answers guys. It looks like customizing the DataGrid is the way to go. For any ASP.NET newbies, here's what I'm doing
<asp:DataGrid ID="GridView1" runat="server" AutoGenerateColumns="False">
<Columns>
<asp:BoundColumn DataField="RuleID" Visible="False" HeaderText="RuleID"></asp:BoundColumn>
<asp:TemplateColumn HeaderText="Category">
<ItemTemplate>
<!-- in case we want to display an image -->
<asp:Literal ID="litImage" runat="server">
</asp:Literal>
<asp:DropDownList ID="categoryListDropdown" runat="server"></asp:DropDownList>
</ItemTemplate>
</asp:TemplateColumn>
</Columns>
</asp:DataGrid>
This creates a datagrid. We can then bind it to a data source (DataTable in my case) and use things like
foreach (DataGridItem item in this.GridView1.Items)
{
DropDownList categoryListDropdown = ((DropDownList)item.FindControl("categoryListDropdown"));
categoryListDropdown.Items.AddRange(listItems.ToArray());
}
to populate the initial dropdown in the data grid. You can also access item.Cells[0].Text to get the RuleID in this case.
Notes for myself: The ASP.NET model does everything in the codebehind file. At a high level you can always iterate through GridView1.Items to get each row, and item.findControl("ControlID") to query the value stored at each item, such as after pressing an "Update" button.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/12706",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Load an XmlNodeList into an XmlDocument without looping? I originally asked this question on RefactorMyCode, but got no responses there...
Basically I'm just trying to load an XmlNodeList into an XmlDocument and I was wondering if there's a more efficient method than looping.
Private Function GetPreviousMonthsXml(ByVal months As Integer, ByVal startDate As Date, ByVal xDoc As XmlDocument, ByVal path As String, ByVal nodeName As String) As XmlDocument
'' build xpath string with list of months to return
Dim xp As New StringBuilder("//")
xp.Append(nodeName)
xp.Append("[")
For i As Integer = 0 To (months - 1)
'' get year and month portion of date for datestring
xp.Append("starts-with(@Id, '")
xp.Append(startDate.AddMonths(-i).ToString("yyyy-MM"))
If i < (months - 1) Then
xp.Append("') or ")
Else
xp.Append("')]")
End If
Next
'' *** This is the block that needs to be refactored ***
'' import nodelist into an xmldocument
Dim xnl As XmlNodeList = xDoc.SelectNodes(xp.ToString())
Dim returnXDoc As New XmlDocument(xDoc.NameTable)
returnXDoc = xDoc.Clone()
Dim nodeParents As XmlNodeList = returnXDoc.SelectNodes(path)
For Each nodeParent As XmlNode In nodeParents
For Each nodeToDelete As XmlNode In nodeParent.SelectNodes(nodeName)
nodeParent.RemoveChild(nodeToDelete)
Next
Next
For Each node As XmlNode In xnl
Dim newNode As XmlNode = returnXDoc.ImportNode(node, True)
returnXDoc.DocumentElement.SelectSingleNode("//" & node.ParentNode.Name & "[@Id='" & newNode.Attributes("Id").Value.Split("-")(0) & "']").AppendChild(newNode)
Next
'' *** end ***
Return returnXDoc
End Function
A: Dim returnXDoc As New XmlDocument(xDoc.NameTable)
returnXDoc = xDoc.Clone()
The first line here is redundant - you are creating an instance of an XmlDocument, then reassigning the variable:
Dim returnXDoc As XmlDocument = xDoc.Clone()
This does the same.
Seeing as you appear to be inserting each XmlNode from your node list into a different place in the new XmlDocument then I can't see how you could possibly do this any other way.
There may be faster XPath expressions you could write, for example prepending an XPath expression with "//" is almost always the slowest way to do something, especially if your XML is well structured. You haven't shown your XML so I couldn't really comment on this further however.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/12709",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: Problems with #import of .NET out-of-proc server In C++ program, I am trying to #import TLB of .NET out-of-proc server.
I get errors like:
z:\server.tlh(111) : error C2146: syntax error : missing ';' before identifier 'GetType'
z:\server.tlh(111) : error C2501: '_TypePtr' : missing storage-class or type specifiers
z:\server.tli(74) : error C2143: syntax error : missing ';' before 'tag::id'
z:\server.tli(74) : error C2433: '_TypePtr' : 'inline' not permitted on data declarations
z:\server.tli(74) : error C2501: '_TypePtr' : missing storage-class or type specifiers
z:\server.tli(74) : fatal error C1004: unexpected end of file found
The TLH looks like:
_bstr_t GetToString();
VARIANT_BOOL Equals (const _variant_t & obj);
long GetHashCode();
_TypePtr GetType();
long Open();
I am not really interested in the having the base object .NET object methods like GetType(), Equals(), etc. But GetType() seems to be causing problems.
Some google research indicates I could #import mscorlib.tlb (or put it in path), but I can't get that to compile either.
Any tips?
A: #import "C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\mscorlib.tlb"
Was the solution for me.
A: Added no_namespace and raw_interfaces_only to my #import:
#import "server.tlb" no_namespace named_guids
Also using TLBEXP.EXE instead of REGASM.EXE seems to help this issue.
A: Often, when MSVC compile COM source to a TLB, there'll be hints left behind like:
#import "C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\mscorlib.tlb"
You should add that to stdafx.h before the line
#import "your_own.tlb"
After that, basic types like _Type, _ObjRef will be added to your project for the prototypes generated.
I hope that solves your problem.
But the bigger problem is that after everything is done, there might be runtime errors when you call a Ptr in your program.
Can anyone help?
A: It seems that you need to use
[ClassInterface(ClassInterfaceType.None)]
Here is another discussion about a similar problem.
A: Also, make sure your C# class doesn't have this attribute:
[ClassInterface(ClassInterfaceType.AutoDual)] <-- Seems to cause errors in C++ with _TypePtr
| {
"language": "en",
"url": "https://stackoverflow.com/questions/12716",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: Setting up Subversion on Windows as a service When installing subversion as a service, I used this command:
c:\>svnservice -install --daemon --root "c:\documents and settings\my_repository"
And then I got this error:
Could not create service in service control manager.
After looking at some MSDN docs on the service control manager, I tried granting full control to everyone in the permissions on the registry key at HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services, but that hasn't had any effect.
Anybody know what I did wrong, or how to overcome this?
Note #1: I am running as an administrator on this box
Note #2: I was following the instructions given here, so maybe my choice of directory is misguided. And my repository is not actually called "my_repository"; I used the name of an actual project which is currently under source control in (gasp) VSS.
A: VisualSVN Server installs as a Windows service. It is free, includes Apache, OpenSSL, and a repository / permission management tool. It can also integrate with Active Directory for user authentication. I highly recommend it for hosting SVN on Windows.
A: I've followed the instructions given at the Collabnet site:
http://svn.apache.org/repos/asf/subversion/trunk/notes/windows-service.txt
They use the windows SC to create the service (which runs svnserve). This has worked for me without any problems (using svn 1.4 and 1.5)
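From those notes, the command looks roughly like this (the paths and names are placeholders; note that SC requires a space after each =):
sc create svnserve binpath= "\"C:\Program Files\Subversion\bin\svnserve.exe\" --service -r \"C:\repositories\"" displayname= "Subversion Server" depend= Tcpip start= auto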
A: I think svnservice is obsolete, because since 1.4, svnserve itself has been able to run as a Windows service. (svnserve comes as a part of the normal SVN binary distribution)
http://svn.apache.org/repos/asf/subversion/trunk/notes/windows-service.txt contains the details of how to set it up.
And the binaries you want are here: http://subversion.tigris.org/servlets/ProjectDocumentList?folderID=91
But as others have said, there are now more friendly packages containing the svn stuff - VisualSVN Server (so badly named it makes me weep) and the Collabnet distribution - the later is Apache only, and is hand rolled on the thighs of virgins, which means that it always seems to appear about three weeks later than everyone else.
A: I've never used the command line installer for this. I assume you are downloading the latest from:
http://svnservice.tigris.org/
I run the installer, and then use the configuration tool (in the Start Menu, SVN Service, SVN Service Administration) to set it up.
A: The only thing I can currently think of, is the following: make sure you're running under an administrator account. That's absolutely necessary to install a service, AFAIK.
Have fun with Subversion, btw :)
A: I'd suggest you move your repository to somewhere a little safer, maybe "c:\SVNRepo".
I'd hesitate to put the Repository in "Documents and Settings". Is your repository actually called "my_repository"?
A: I recommend you use VisualSVN Server. It is very easy to install.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/12718",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Add .NET 2.0 SP1 as a prerequisite for deployment project I have a .NET 2.0 application that has recently had contributions that are Service Pack 1 dependent. The deployment project has detected .NET 2.0 as a prerequisite, but NOT SP1. How do I include SP1 as a dependency/prerequisite in my deployment project?
A: You'll want to setup launch condition in your deployment project to make sure version 2.0 SP1 is installed. You'll want to set a requirement based off the MsiNetAssemblySupport variable, tied to the version number of .NET 2.0 SP1 (2.0.50727.1433 and above according to this page.)
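As a rough sketch, the launch condition's Condition property would compare that variable against the SP1 build number from the page above, something like this (MSI treats these as string comparisons, so verify it against the exact versions you care about):
MsiNetAssemblySupport >= "2.0.50727.1433"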
Bootstrapping the project to actually download the framework if it isn't installed is a different matter, and there are plenty of articles out there on how to do that.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/12720",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Strange characters in PHP This is driving me crazy.
I have this one PHP file on a test server at work which does not work... I kept deleting stuff from it till it became
<?
print 'Hello';
?>
it outputs
Hello
If I create a new file and copy/paste the same script into it, it works!
Why does this one file give me the strange characters all the time?
A: Found it: File -> Encoding -> UTF-8 with BOM, changed it to UTF-8 without BOM :-)
I should have asked before wasting time trying to figure it out :-)
A: Just in case, here is a list of bytes for BOM
Encoding      Representation (hexadecimal)
UTF-8         EF BB BF
UTF-16 (BE)   FE FF
UTF-16 (LE)   FF FE
UTF-32 (BE)   00 00 FE FF
UTF-32 (LE)   FF FE 00 00
UTF-7         2B 2F 76, followed by one of: 38, 39, 2B, 2F
UTF-1         F7 64 4C
UTF-EBCDIC    DD 73 66 73
SCSU          0E FE FF
BOCU-1        FB EE 28, optionally followed by FF
A: That's the BOM (Byte Order Mark) you are seeing.
In your editor, there should be a way to force saving without BOM which will remove the problem.
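If your editor can't do that, a minimal PHP sketch can strip the UTF-8 BOM from a file (the path is illustrative):
<?php
$path = 'script.php'; // illustrative path
$contents = file_get_contents($path);
// EF BB BF is the UTF-8 BOM (see the table above)
if (substr($contents, 0, 3) === "\xEF\xBB\xBF") {
    file_put_contents($path, substr($contents, 3));
}
?>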
| {
"language": "en",
"url": "https://stackoverflow.com/questions/12765",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10"
} |
Q: How is the HTML on this site so clean? I work with C# at work but dislike how webforms spew out a lot of JavaScript, not to mention the many lines of viewstate they create.
That's why I like coding with PHP as I have full control.
But I was just wondering how this site's HTML is so clean and elegant?
Does using MVC have something to do with it? I see that jQuery is used, but surely you still use asp:required validators? If you do, where is all the hideous code that they normally produce?
And if they aren't using required field validators, why not? Surely it's quicker to develop with than using jQuery?
One of the main reasons I code my personal sites in PHP was due to the more elegant HTML that it produces but if I can produce code like this site then I will go full time .net!
A: One of the goals of ASP.NET MVC is to give you control of your markup. However, there have always been choices with ASP.NET which would allow you to generate relatively clean HTML.
For instance, ASP.NET has always offered a choice with validator controls. Do you value development speed over markup? Use validators. Value markup over development speed? Pick another validation mechanism. Your comments on validators are kind of contradictory there - it's possible to use ASP.NET and still make choices for markup purity over development speed.
Also, with webforms, we've had the CSS Friendly Control Adapters for a few years which will modify the controls to render more semantic markup. ASP.NET 3.5 included the ListView, which makes it really easy to write repeater type controls which emit semantic HTML. We used ASP.NET webforms on the Microsoft PDC site and have kept the HTML pretty clean: http://microsoftpdc.com/Agenda/Speakers.aspx - the Viewstate could probably be disabled on most pages, although in reality it's only a few dozen bytes.
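For example, turning viewstate off is a one-attribute change in the page directive (a minimal sketch of a typical web forms page; individual controls expose the same EnableViewState property if you only want to trim it selectively):
<%@ Page Language="C#" EnableViewState="false" %>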
A: You were on the right track. It is the fact that they are using the ASP.NET MVC web framework. It allows you to have full control of your output html.
A: The ASP.NET MVC Framework is an alternative to the normal "web forms" way of doing ASP.NET development. With it you lose a lot of abstraction, but gain a lot of control.
A: Yes - MVC doesn't utilize the ASP.NET view state junk.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/12768",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10"
} |
Q: Attaching VisualSVN Server to an existing repository All the recent VisualSVN Server posts made me want to check it out. I have SVN running right now through Apache, but I'd like to try out VisualSVN Server, mostly for the Active Directory integration. Their docs don't describe whether you can easily migrate an existing repository.
Anyone done this before?
A: There is an option on the VisualSVN Server Manager console to import an existing repository. You just give it the existing repository location and a name for the imported repository. Pretty simple.
A: VisualSVN Server will use your existing SVN repositories with no problems. I have successfully migrated repositories from SVN + Apache to VisualSVN Server on multiple occasions.
A: An SVN server doesn't really 'attach' to a repository, it just needs to be able to see its files. The repository itself doesn't know or care if it's being accessed via svnserve, Apache mod_svn or direct file:// URLs
A: There is a VisualSVN Server Knowledge Base article about the case:
How can I import my existing repository into newly installed VisualSVN Server?
| {
"language": "en",
"url": "https://stackoverflow.com/questions/12774",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: How do I add a pre tag inside a code tag with jQuery? I'm trying to use jQuery to format code blocks, specifically to add a <pre> tag inside the <code> tag:
$(document).ready(function() {
$("code").wrapInner("<pre></pre>");
});
Firefox applies the formatting correctly, but IE puts the entire code block on one line. If I add an alert
alert($("code").html());
I see that IE has inserted some additional text into the pre tag:
<PRE jQuery1218834632572="null">
If I reload the page, the number following jQuery changes.
If I use wrap() instead of wrapInner(), to wrap the <pre> outside the <code> tag, both IE and Firefox handle it correctly. But shouldn't <pre> work inside <code> as well?
I'd prefer to use wrapInner() because I can then add a CSS class to the <pre> tag to handle all formatting, but if I use wrap(), I have to put page formatting CSS in the <pre> tag and text/font formatting in the <code> tag, or Firefox and IE both choke. Not a huge deal, but I'd like to keep it as simple as possible.
Has anyone else encountered this? Am I missing something?
A: BTW, I don't know if it's related, but pre tags inside code tags will not validate in strict mode.
A: That's the difference between block and inline elements. pre is a block level element. It's not legal to put it inside a code tag, which can only contain inline content.
Because browsers have to support whatever godawful tag soup they might find on the real web, Firefox tries to do what you mean. IE happens to handle it differently, which is fine by the spec; behavior in that case is unspecified, because it should never happen.
*
*Could you instead replace the code element with the pre? (Because of the block/inline issue, technically that should only work if the elements are inside an element with "flow" content, but the browsers might do what you want anyway.)
*Why is it a code element in the first place, if you want pre's behavior?
*You could also give the code element pre's whitespace preserving power with the CSS white-space: pre, but apparently IE 6 only honors that in Strict Mode.
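For that last CSS option, a minimal sketch (the class name is illustrative):
code.block {
    white-space: pre; /* preserve line breaks and spacing */
    display: block;   /* optional: render the inline code element as a block */
}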
A: Are you using the latest jQuery?
What if you try
$("code").wrapInner(document.createElement("pre"));
Is it any better, or do you get the same result?
A: As markpasc stated, a PRE element inside a CODE element is not allowed in HTML. The best solution is to use <pre><code> (which means a preformatted block that contains code) directly in your HTML for code blocks.
A: You could use html() to wrap it:
$('code').each(function(i,e)
{
var self = $(e);
self.html('<pre>' + self.html() + '</pre>');
});
As mentioned above, you'd be better off changing your html. But this solution should work.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/12794",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13"
} |
Q: How to register COM from VS Setup project? I have marked my DLL as vsdraCOM, and I can see it in the registry after installing, but my application does not see the COM interface until I call RegAsm on it manually. Why could this be?
The COM registration does not work on Vista (confirmed myself) and on XP (confirmed by a colleague).
Using Visual Studio 2005 on XP.
A: Well, I have found a solution:
*
*Run RegAsm.exe with the /regfile option to generate the registry entries.
*Manually import the .reg file into the VS Setup project by viewing the registry, right clicking, and choosing "Import..."
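For step 1, the command looks like this (the assembly and file names are illustrative):
RegAsm.exe MyComLibrary.dll /regfile:MyComLibrary.reg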
| {
"language": "en",
"url": "https://stackoverflow.com/questions/12807",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Anyone Using Executable Requirements? In my limited experience with them, executable requirements (i.e. specifying all requirements as broken automated tests) have proven to be amazingly successful. I've worked on one project in which we placed a heavy emphasis on creating high-level automated tests which exercised all the functionality of a given use case/user story. It was really amazing to me how much easier development became after we began this practice. Implementing features became so much easier after writing a test, and we were able to make major architectural changes to the system with all the confidence in the world that everything still worked the same as it did yesterday.
The biggest problem we ran into was that the tools for managing these types of tests aren't very good. We used Fitnesse quite a bit and as a result I now hate the Fit framework.
I'd like to know 1) if anyone else has experience developing using this type of test-driven requirement definition and 2) what tools you all used to facilitate this.
A: The primary tool I've also used was FitNesse. I've used it at several companies, with very good results. We did have test cases numbering in the many thousands, and we had to be very disciplined in how we organized and used them.
I've tried some other tools, including writing my own DSL (domain-specific language) and using things like RSpec. I really like RSpec, but it is certainly more of a developer tool than a business one.
I know Rick Mugridge has been working on a tool called ZiBreve (http://www.zibreve.com/visit.php?page=index) which is supposed to have stronger refactoring support. I haven't used it myself, but I know Rick and have talked to him several times. I know there was discussion at Agile 2008 on some different ways to deal with the Fitnesse tests in general.
Other than that, I haven't seen a lot of good tools out there. Even tools like WinRunner are fine for QA type tests, but for exploratory testing of requirements by the business, FitNesse or a custom DSL seem to be the ways to go right now.
A: You might want to take a look at Robot Framework (http://robotframework.org). It's FIT-like but hopefully easier to integrate with different testing tools, version control, and continuous integration. Different abstraction levels in the test data also make it easier to maintain the data, and when the separate test data editor gets more mature, maintenance gets even easier. The quick start guide introduces the most important features of the framework and also acts as an executable demo.
A: I've had to use, test and set up both Fitnesse and one of its competitors, GreenPepper, for my work, and what I can say is:
GreenPepper is a Confluence plugin (Confluence is an enterprise wiki from Atlassian) and has many of the things you need in an "enterprise" level tool with little to no additional work required:
*
*Better, user-friendly rich-text wiki syntax (makes it easier to work with for non-technical people)
*It integrates very well with many development tools: Eclipse, VB, maven2 and NAnt plugins; I tested most and was very pleased.
*User and access rights are managed by Confluence, which is to say it's good and makes use of the database of your liking (which might be mandatory depending on where you work)
*Many other functionalities that might or might not be required: SSL support, remote execution (install the wiki on Unix, execute on Windows if you are working on a C# project, or the reverse)
*Looks way better :D
Big downsides for GreenPepper are: configuration is quite hard and documentation is poor (although they seem to be working on it and they answer quite fast on their forum), and it is not free; you have to pay for both Confluence and GreenPepper, which might add up to quite a lot.
Fitnesse is very basic in my opinion: very easy to set up, it works, but that's it. You can use some of the Fitnesse plugins developed by the open source community, and even some Fit plugins, such as the Eclipse plugin (builds the skeleton of the fixture from a Fitnesse test file, provided it has a .fit extension; very useful). Integration is not ideal, and authentication and access rights management are poor, but it's FREE, and if you need something, you can do it yourself because it's open source.
A: My experience is limited to personal projects, and I found much the same advantages you mentioned. I recommend http://metacpan.org/pod/Test::Simple::Tutorial, which was my inspiration for trying out testing-based development. The perl testing modules seem pretty useful and flexible, though I have nothing to compare them to.
I also believe tests are vital for the maintenance period of a project. If you have good tests to begin with, it saves a lot of time and mistakes later on. I wish I had put more work into tests on my current project.
A: I've found that using contracts is a great approach. Metaprogramming contracts are generally lower-level than the types of integration tests you describe, but the two are certainly not mutually exclusive. I find contracts help keep documentation, implementation, and testing all in sync -- this is a major problem of TDD (not that it isn't a problem in non-TDD).
A: I've tried Fitnesse and it's really awful (particularly its integration with SVN).
And our company develops a similar open-source tool with a Fit engine: FitPro
Another brilliant tool I've used is Concordion. Its only disadvantage is that requirements are written in HTML format.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/12815",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13"
} |
Q: Filter by zip code, or other location based data retrieval strategies My little site should be pulling a list of items from a table, using the active user's location as a filter. Think Craigslist, where you search for "dvd" but the results are not from the whole DB; they are filtered by a location you select. My question has 2 levels:
*
*should I go a-la-craigslist, and ask users to use a city-level location? My problem with this is that you need to generate what seems to me a hard-coded, hand-made list of locations.
*should I go a-la-zip-code? The idea is to just ask the user to type his zip code, and then pull all items that are in the same zip code or within a certain distance of it.
I seem to prefer the zip code way as it seems the more elegant solution, but how on earth does one go about creating a DB of all zip codes and implementing the function that, given zip code 12345, gets all zip codes within a 1 mile distance?
This should be a fairly common "task", as many sites have a need similar to mine, so I am hoping not to re-invent the wheel here.
A: Getting a Zip Code database is no problem. You can try this free one:
http://zips.sourceforge.net/
Although I don't know how current it is, or you can use one of many providers. We have an annual subscription to ZipCodeDownload.com, and for maybe $100 we get monthly updates with the latest Zip Code data complete with Lat/Longs of the centroid of the zip code.
As for querying for all zips within a certain radius, you are going to need a spatial library of some sort. If you just have a table of zips with lats/longs, you will need a database-oriented mechanism. SQL Server 2008 has the capability built in, and there are open source libraries and commercial libraries that will add such capabilities to SQL Server 2005. The open source database PostgreSQL has a project, PostGIS that adds this capability to that database. It is here: http://postgis.refractions.net/
Other database platforms probably have similar projects, but those are the ones I am aware of. With one of these DB based libraries you should be able to directly query for any zip codes (or any rows of any kind that have lat/long columns) within a given radius.
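If you'd rather avoid a spatial library altogether, a plain-SQL great-circle filter over lat/long columns works for simple radius searches. A minimal sketch, assuming a hypothetical zips(zip, lat, lng) table with coordinates in degrees, @lat/@lng holding the centroid of the user's zip code, and 3959 as the Earth's radius in miles:
SELECT zip
FROM zips
WHERE 3959 * ACOS(
        COS(RADIANS(@lat)) * COS(RADIANS(lat))
      * COS(RADIANS(lng) - RADIANS(@lng))
      + SIN(RADIANS(@lat)) * SIN(RADIANS(lat))
      ) <= 1.0; -- radius in miles
This is the spherical law of cosines; it can't use an index as written, so bounding the query with a simple lat/lng box first lets an ordinary index cut down the candidate rows.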
If you want to go a different route you can use spatial tools with a mapping library. There are open source options here as well, such as SharpMap and many others (Google can help out) that can use the free Tiger maps for the united states as the data source. However, this route is somewhat more complicated and possibly less performant if all you need is a radius search.
Finally, you may want to look into a web service. This, as you say, is a common need, and I imagine there are any number of web services that you can subscribe to that can provide all zip codes in a given radius from a provided zip code. A quick Google search turned up this:
http://www.zip-codes.com/free-zip-code-tools.asp#radius
But there are MANY resources to be had for the searching on this subject.
A:
how on earth does one [...] implement the function that, given zip code 12345, gets all zip codes within a 1 mile distance?
Here is a sample on how to do that:
http://www.codeproject.com/KB/cs/zipcodeutil.aspx
A: Just to be technical... PostGIS isn't a project of the Postgres community; it's a stand-alone project that is built on top of Postgres. If you want help or support with PostGIS, you'll want to go to its community instead of Postgres's.
A: You can use PostGIS. Additionally, I've used deCarta's mapping libraries. They have technology which allows you to geokey any arbitrary data type. Then you can query these spatially.
disclaimer: I work for deCarta
A: Wouldn't it be more efficient to just figure out which cities are within a 1 mile radius and store that information in a table? Then you don't have to do calculations in the database all the time.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/12823",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: C# Database Access: DBNull vs null We have our own ORM we use here, and provide strongly typed wrappers for all of our db tables. We also allow weakly typed ad-hoc SQL to be executed, but these queries still go through the same class for getting values out of a data reader.
In tweaking that class to work with Oracle, we've come across an interesting question. Is it better to use DBNull.Value, or null? Are there any benefits to using DBNull.Value? It seems more "correct" to use null, since we've separated ourselves from the DB world, but there are implications (you can't just blindly ToString() when a value is null, for example), so it's definitely something we need to make a conscious decision about.
A: If you've written your own ORM, then I would say just use null, since you can use it however you want. I believe DBNull was originally used only to get around the fact that value types (int, DateTime, etc.) could not be null, so rather than return some value like zero or DateTime.Min, which would imply a null (bad, bad), they created DBNull to indicate this. Maybe there was more to it, but I always assumed that was the reason. However, now that we have nullable types in C# 3.0, DBNull is no longer necessary. In fact, LINQ to SQL just uses null all over the place. No problem at all. Embrace the future... use null. ;-)
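To make that painless at the data-access boundary, here is a minimal sketch of a hypothetical helper that collapses DBNull into a nullable type when reading (the class and method names are made up for illustration):
using System;
using System.Data;

static class DataRecordExtensions
{
    // Maps DBNull in the record to a C# null; T is a value type such as int or DateTime.
    public static T? GetNullable<T>(this IDataRecord record, int ordinal) where T : struct
    {
        return record.IsDBNull(ordinal) ? (T?)null : (T)record.GetValue(ordinal);
    }
}
Usage would look like int? score = reader.GetNullable<int>(2); and the rest of the code never sees DBNull.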
A: From the experience I've had, the .NET DataTables and TableAdapters work better with DBNull. It also opens up a few special methods on strongly typed rows, such as DataRow.IsFirstNameNull.
I wish I could give you a better technical answer than that, but for me the bottom line is: use DBNull when working with database-related objects, and a "standard" null when dealing with plain objects and .NET related code.
A: I find it better to use null, instead of DB null.
The reason is because, as you said, you're separating yourself from the DB world.
It is generally good practice to check reference types to ensure they aren't null anyway. You're going to be checking for null for things other than DB data, and I find it is best to keep consistency across the system, and use null, not DBNull.
In the long run, architecturally I find it to be the better solution.
A: Use DBNull.
We encountered some problems when using null.
If I recall correctly, you cannot INSERT a null value into a field, only DBNull.
It could be Oracle-related only; sorry, I don't remember the details anymore.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/12836",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13"
} |
Q: How to combine two projects in Mercurial? I have two separate mercurial repositories. At this point it makes sense that they "become one" because I want to work on the two projects simultaneously.
I'd really like the two projects to each be a subdirectory in the new repository.
*
*How do I merge the two projects?
*Is this a good idea, or should I keep them separate?
It seems I ought to be able to push from one repository to the other... Maybe this is really straightforward?
A: hg has had subrepos since 1.3 (2009-07-01). The early versions were incomplete and shaky, but now it's pretty usable.
A: I was able to combine my two repositories in this way:
*
*Use hg clone first_repository to clone one of the repositories.
*Use hg pull -f other_repository to pull the code in from the other repository.
The -f (force) flag on the pull is the key -- it says to ignore the fact that the two repositories are not from the same source.
Here are the docs for this feature.
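Putting it together, the whole sequence looks roughly like this (the paths and commit message are illustrative); after the forced pull you will have two unrelated heads, so a merge ties them together:
hg clone first_repository combined
cd combined
hg pull -f ../other_repository
hg merge
hg commit -m "Merged two repositories"
To end up with each project in its own subdirectory, as you wanted, hg rename the files into a subdirectory in each repository before doing the pull and merge.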
A: If you aren't using the same code across the projects, keep them separate. You can set your personal repository of each of those projects to be just a directory apart. Why mix all the branches, merges, and commit comments when you don't have to?
About your edit, pushing from one repository to another: you can always use the transplant command. All this is really sidestepping your desire to combine the two, though, so you might feel uncomfortable using my suggestions. Alternatively, you can use the forest extension, or something similar.
hg transplant -s REPOSITORY lower_rev:high_rev
| {
"language": "en",
"url": "https://stackoverflow.com/questions/12843",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "84"
} |
Q: Database Case Insensitive Index? I have a query where I am searching against a string:
SELECT county FROM city WHERE UPPER(name) = 'SAN FRANCISCO';
Now, this works fine, but it doesn't scale well, and I need to optimize it. I have found an option along the lines of creating a generated view, or something like that, but I was hoping for a simpler solution using an index.
We are using DB2, and I really want to use an expression in an index, but this option seems to only be available on z/OS, however we are running Linux. I tried the expression index anyways:
CREATE INDEX city_upper_name_idx
ON city (UPPER(name)) ALLOW REVERSE SCANS;
But of course, it chokes on the UPPER(name).
Is there another way I can create an index or something similar in this manner such that I don't have to restructure my existing queries to use a new generated view, or alter my existing columns, or any other such intrusive change?
EDIT: I'm open to hearing solutions for other databases... it might carry over to DB2...
A: You could add an indexed column holding a numerical hash key of the city name. (With duplicates allowed).
Then you could do a multi-clause WHERE:
hash = [compute hash key for 'SAN FRANCISCO']
SELECT county
FROM city
WHERE cityHash = hash
AND UPPER(name) = 'SAN FRANCISCO' ;
Alternatively, go through your db manual and look at the options for creating table indexes. There might be something helpful.
A: Short answer, no.
Long answer, yes if you're running on the mainframe, but you're not, so you have to use other trickery.
DB2 (as of DB2/LUW v8) now has generated columns so you can:
CREATE TABLE tbl (
lname VARCHAR(20),
fname VARCHAR(20),
ulname VARCHAR(20) GENERATED ALWAYS AS (UPPER(lname))
);
and then create an index on ulname. I'm not sure you're going to get it simpler than that.
Before that, you used to have to use a combination of insert and update triggers to ensure the ulname column was kept in sync, and this was a nightmare to maintain. Also, now that this functionality is part of the core DBMS, it's been highly optimized (it's much faster than the trigger-based solution) and doesn't get in the way of real user triggers, so no extra DB objects to maintain.
See here for details.
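For completeness, the follow-up index and a query that can use it are plain DDL/SQL (the index name is illustrative):
CREATE INDEX tbl_ulname_idx ON tbl (ulname);
SELECT fname FROM tbl WHERE ulname = 'SMITH';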
A: I don't know whether this would work in DB2, but I'll tell you how I'd do this in SQL Server. I think the way MSSQL does this is ANSI standard, though the specific collation strings may differ. Anyway, if you can do this without trashing the rest of your application -- are there other places where the "name" column needs to be case-sensitive? -- try making that whole column case-insensitive by changing the collation, then index the column.
ALTER TABLE city ALTER COLUMN name nvarchar(200)
COLLATE SQL_Latin1_General_CP1_CI_AS
...where "nvarchar(200)" stands in for whatever's your current column data type. The "CI" part of the collation string is what marks it as case-insensitive in MSSQL.
To explain... my understanding is that the index will store values in the order of the indexed column's collation. Making the column's collation be case-insensitive would make the index store 'San Francisco', 'SAN FRANCISCO', and 'san francisco' all together. Then you should just have to remove the "UPPER()" from your query, and DB2 should know that it can use your index.
Again, this is based solely on what I know about SQL Server, plus a couple minutes looking at the SQL-92 spec; it may or may not work for DB2.
A: Oracle supports function-based indexes. Their canonical example:
create index emp_upper_idx on emp(upper(ename));
A: PostgreSQL also supports indexing the results of a function:
CREATE INDEX mytable_lower_col1_idx ON mytable (lower(col1));
The only other option I can think of is to de-normalize your data a bit by creating another column to hold the upper-case version (updated by triggers) and index that. Blech!
A: DB2 isn't strong regarding collation. And it doesn't have function-based indexes.
Niek Sanders's suggestion would work, if you can accept that the hashing has to happen in your application (as DB2 doesn't have SHA or MD5 functions, as far as I know).
However, if I were you, I'd create a materialized view (MQT == Materialized Query Table, in db2 parlance) using CREATE TABLE AS, adding a column with a pre-computed upper-case variant of the name. Note: You may add indexes to materialized views in DB2.
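For illustration only, an MQT along those lines might be sketched like this; the exact options (refresh policy, SET INTEGRITY steps) vary by DB2 version, so treat it as an outline rather than tested DDL:
CREATE TABLE city_upper AS
    (SELECT county, name, UPPER(name) AS uname FROM city)
    DATA INITIALLY DEFERRED REFRESH DEFERRED;
CREATE INDEX city_upper_uname_idx ON city_upper (uname);
Queries then filter on uname = 'SAN FRANCISCO' and pick up the index.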
| {
"language": "en",
"url": "https://stackoverflow.com/questions/12855",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
} |
Q: Mercurial stuck "waiting for lock" Got a bluescreen in Windows while cloning a Mercurial repository.
After reboot, I now get this message for almost all hg commands:
c:\src\>hg commit
waiting for lock on repository c:\src\McVrsServer held by '\x00\x00\x00\x00\x00\
x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00'
interrupted!
Google is no help.
Any tips?
A: I had the same problem. Got the following message when I tried to commit:
waiting for lock on working directory of <MyProject> held by '...'
hg debuglock showed this:
lock: free
wlock: (66722s)
So I did the following command, and that fixed the problem for me:
hg debuglocks -W
Using Win7 and TortoiseHg 4.8.7.
A: I had the same problem on Win 7.
The solution was to remove the following files:
*
*.hg/store/phaseroots
*.hg/wlock
As for .hg/store/lock - there was no such file.
A: I do not expect this to be a winning answer, but it is a fairly unusual situation.
Mentioning in case someone other than me runs into it.
Today I got the "waiting for lock on repository" on an hg push command.
When I killed the hung hg command I could see no .hg/store/lock
When I looked for .hg/store/lock while the command was hung, it existed. But the lockfile was deleted when the hg command was killed.
When I went to the target of the push, and executed hg pull, no problem.
Eventually I realized that the process ID in the hg push's "waiting for lock" message was changing each time. It turns out that the "hg push" was hanging waiting for a lock held by itself (or possibly a subprocess; I did not investigate further).
It turns out that the two workspaces, let's call them A and B, had .hg trees shared by symlink:
A/.hg --symlinked-to--> B/.hg
This is NOT a good thing to do with Mercurial. Mercurial does not understand the concept of two workspaces sharing the same repository. I do understand, however, how somebody coming to Mercurial from another VCS might want this (Perforce does it, although it's not a DVCS; the Bazaar DVCS reportedly can do so). I am surprised that a symlinked REP-ROOT/.hg works at all, although it seems to, except for this push.
A: When "waiting for lock on repository", delete the repository file: .hg/wlock (or it may be in .hg/store/lock)
When deleting the lock file, you must make sure nothing else is accessing the repository. (If the lock is a string of zeros or blank, this is almost certainly true).
A: I had this problem with no detectable lock files. I found the solution here: http://schooner.uwaterloo.ca/twiki/bin/view/MAG/HgLockError
Here is a transcript from the TortoiseHg Workbench console:
% hg debuglocks
lock: user None, process 7168, host HPv32 (114213199s)
wlock: free
[command returned code 1 Sat Jan 07 18:00:18 2017]
% hg debuglocks --force-lock
[command completed successfully Sat Jan 07 18:03:15 2017]
cmdserver: Process crashed
PaniniDev% hg debuglocks
% hg debuglocks
lock: free
wlock: free
[command completed successfully Sat Jan 07 18:03:30 2017]
After this the aborted pull ran sucessfully.
The lock had been set more than 2 years ago, by a process on a machine that is no longer on the LAN. Shame on the hg developers for a) not documenting locks adequately; b) not timestamping them for automatic removal when they get stale.
A: When waiting for lock on working directory, delete .hg/wlock.
A: Coworker had this exact problem today, after a BSoD while trying to push. He had to:
*
*delete the file .hg/store/lock (as per the accepted answer)
*delete the file .hg/store/phaseroots (as per this TortoiseHG bug report)
Then his repo worked again.
EDIT: As per @Marmoute's comment - when dealing with lock-related issues, using hg debuglock is a safer alternative to blindly deleting the .hg/store/lock file.
A: If the locked repo was the original, the clone shouldn't have been modifying it; the lock was only preventing you from changing it mid-clone and messing up the clone. It should be fine after removing the lock.
The new cloned copy (if it was a local clone) could be in any sort of malformed state, though, so you should throw it out and start over. (If it was a remote clone, I would hope it failed and already threw out the incomplete copy.)
A: I encountered this problem on Mac OS X 10.7.5 and Mercurial 2.6.2 when trying to push. After upgrading to Mercurial 3.2.1, I got "no changes found" instead of "waiting for lock on repository". I found out that somehow the default path had gotten set to point to the same repository, so it's not too surprising that Mercurial would get confused.
A: I am very familiar with Mercurial's locking code (as of 1.9.1). The above advice is good, but I'd add that:
*
*I've seen this in the wild, but rarely, and only on Windows machines.
*Deleting lock files is the easiest fix, BUT you have to make sure nothing else is accessing the repository. (If the lock is a string of zeros, this is almost certainly true).
(For the curious: I haven't yet been able to catch the cause of this problem, but suspect it's either an older version of Mercurial accessing the repository or a problem in Python's socket.gethostname() call on certain versions of Windows.)
A: If it only happens on mapped drives it might be bug https://bitbucket.org/tortoisehg/thg/issue/889/cant-commit-file-over-network-share. Using UNC path instead of drive letter seems to sidestep the issue.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/12865",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "360"
} |
Q: Arrays of Arrays in Java This is a nasty one for me... I'm a PHP guy working in Java on a JSP project. I know how to do what I'm attempting through too much code and a complete lack of finesse.
I'd prefer to do it right. Here is the situation:
I'm writing a small display to show customers what days they can water their lawns based on their watering group (ABCDE) and what time of year it is. Our seasons look like this:
Summer (5-1 to 8-31)
Spring (3-1 to 4-30)
Fall (9-1 to 10-31)
Winter (11-1 to 2-28)
An example might be:
If I'm in group A, here would be my allowed times:
Winter: Mondays only
Spring: Tues, Thurs, Sat
Summer: Any Day
Fall: Tues, Thurs, Sat
If I was writing this in PHP I would use arrays like this:
//M=Monday,t=Tuesday,T=Thursday.... etc
$schedule["A"]["Winter"]='M';
$schedule["A"]["Spring"]='tTS';
$schedule["A"]["Summer"]='Any';
$schedule["A"]["Fall"]='tTS';
$schedule["B"]["Winter"]='t';
I COULD make the days arrays (array("Tuesday","Thursday","Saturday")) etc, but it is not necessary for what I'm really trying to accomplish.
I will also need to setup arrays to determine what season I'm in:
$seasons["Summer"]["start"]=0501;
$seasons["Summer"]["end"]=0801;
Can anyone suggest a really cool way to do this? I will have today's date and the group letter. I will need to get out of my function a day (M) or a series of days (tTS), (Any).
A: Don't try to be as dynamic as PHP is. You could try to first define what you need.
interface Season
{
public String getDays();
}
interface User
{
public Season getWinter();
public Season getSpring();
public Season getSummer();
public Season getFall();
}
interface UserMap
{
public User getUser(String name);
}
And please, read the documentation of Hashtable before using it. This class is synchronized, which means that each call is protected against multithreading; that really slows down access when you don't need the extra protection. Please use a Map implementation instead, like HashMap or TreeMap.
A: It seems like everyone is trying to find the Java way to do it like you're doing it in PHP, instead of the way it ought to be done in Java. Just consider each piece of your array an object, or, at the very least, the first level of the array as an object and each sub-level as variables inside the object. Then build a data structure that you populate with said objects and access the objects through the data structure's given accessors.
Something like:
class Schedule
{
private String group;
private String season;
private String rundays;
public Schedule() { this.group = null; this.season = null; this.rundays= null; }
public void setGroup(String g) { this.group = g; }
public String getGroup() { return this.group; }
...
}
public ArrayList<Schedule> schedules = new ArrayList<Schedule>();
Schedule s = new Schedule();
s.setGroup(...);
...
schedules.add(s);
...
Of course that probably isn't right either. I'd make each season an object, and maybe each weekday list as an object too. Anyway, its more easily reused, understood, and extensible than a hobbled-together Hashtable that tries to imitate your PHP code. Of course, PHP has objects too, and you should use them in a similar fashion instead of your uber-arrays, wherever possible. I do understand the temptation to cheat, though. PHP makes it so easy, and so fun!
A: Here's one way it could look; you can figure the rest out:
A = new Group();
A.getSeason(Seasons.WINTER).addDay(Days.MONDAY);
A.getSeason(Seasons.SPRING).addDay(Days.TUESDAY).addDay(Days.THURSDAY);
A.getSeason(Seasons.SPRING).addDays(Days.MONDAY, Days.TUESDAY, ...);
schedule = new Schedule();
schedule.addWateringGroup( A );
A: I'm not a Java programmer, but getting away from Java and just thinking in terms that are more language agnostic - a cleaner way to do it might be to use either constants or enumerated types. This should work in any langauge that supports multi-dimensional arrays.
If using named constants, where, for example:
int A = 0;
int B = 1;
int C = 2;
int D = 3;
int Spring = 0;
int Summer = 1;
int Winter = 2;
int Fall = 3;
...
Then the constants serve as more readable array subscripts:
schedule[A][Winter]="M";
schedule[A][Spring]="tTS";
schedule[A][Summer]="Any";
schedule[A][Fall]="tTS";
schedule[B][Winter]="t";
Using enumerated types:
enum groups
{
A = 0,
B = 1,
C = 2,
D = 3
}
enum seasons
{
Spring = 0,
Summer = 1,
Fall = 2,
Winter = 3
}
...
schedule[groups.A][seasons.Winter]="M";
schedule[groups.A][seasons.Spring]="tTS";
schedule[groups.A][seasons.Summer]="Any";
schedule[groups.A][seasons.Fall]="tTS";
schedule[groups.B][seasons.Winter]="t";
A: I'm totally at a loss as to why some of you seem to think that throwing gobs of objects at the code is the way to go. For example, there are exactly four seasons, and they don't do or store anything. How does it simplify anything to make them objects? Wing is quite right that these should probably be constants (or maybe enums).
What Bruce needs, at its heart, is simply a lookup table. He doesn't need a hierarchy of objects and interfaces; he needs a way to look up a schedule based on a season and a group identifier. Turning things into objects only makes sense if they have responsibilities or state. If they have neither, then they are simply identifiers, and building special objects for them just makes the codebase larger.
You could build, e.g., Group objects that each contain a set of schedule strings (one for each season), but if all the Group object does is provide lookup functionality, then you've reinvented the lookup table in a much less intuitive fashion. If he has to look up the group, and then lookup the schedule, all he has is a two-step lookup table that took longer to code, is more likely to be buggy, and will be harder to maintain.
A: I'm with those that suggest encapsulating function in objects.
import java.util.Date;
import java.util.Map;
import java.util.Set;
public class Group {
private String groupName;
private Map<Season, Set<Day>> schedule;
public String getGroupName() {
return groupName;
}
public void setGroupName(String groupName) {
this.groupName = groupName;
}
public Map<Season, Set<Day>> getSchedule() {
return schedule;
}
public void setSchedule(Map<Season, Set<Day>> schedule) {
this.schedule = schedule;
}
public String getScheduleFor(Date date) {
Season now = Season.getSeason(date);
Set<Day> days = schedule.get(now);
return Day.getDaysForDisplay(days);
}
}
EDIT: Also, your date ranges don't take leap years into account:
Our seasons look like this:
Summer (5-1 to 8-31)
Spring (3-1 to 4-30)
Fall (9-1 to 10-31)
Winter (11-1 to 2-28)
A: You could do essentially the same code with Hashtables (or some other Map):
Hashtable<String, Hashtable<String, String>> schedule
= new Hashtable<String, Hashtable<String, String>>();
schedule.put("A", new Hashtable<String, String>());
schedule.put("B", new Hashtable<String, String>());
schedule.put("C", new Hashtable<String, String>());
schedule.put("D", new Hashtable<String, String>());
schedule.put("E", new Hashtable<String, String>());
schedule.get("A").put("Winter", "M");
schedule.get("A").put("Spring", "tTS");
// Etc...
Not as elegant, but then again, Java isn't a dynamic language, and it doesn't have hashes on the language level.
Note: You might be able to do a better solution, this just popped in my head as I read your question.
A: I agree that you should definitely put this logic behind the clean interface of:
public String lookupDays(String group, String date);
but maybe you should stick the data in a properties file. I'm not against hardcoding this data in your source files but, as you noticed, Java can be pretty wordy when it comes to nested Collections. Your file might look like:
A.Summer=M
A.Spring=tTS
B.Summer=T
Usually I don't like to move static data like this to an external file because it increases the "distance" between the data and the code that uses it. However, whenever you're dealing with nested Collections, especially maps, things can get real ugly, real fast.
If you don't like this idea, maybe you can do something like this:
public class WaterScheduler
{
private static final Map<String, String> GROUP2SEASON = new HashMap<String, String>();
static
{
addEntry("A", "Summer", "M");
addEntry("A", "Spring", "tTS");
addEntry("B", "Summer", "T");
}
private static void addEntry(String group, String season, String value)
{
GROUP2SEASON.put(group + "." + season, value);
}
}
You lose some readability but at least the data is closer to where it's going to be used.
A: A better solution would perhaps be to put all that data into a database, instead of hard-coding it in your sources or using properties files.
Using a database will be much easier to maintain, and there are a variety of free database engines to choose from.
Two of these database engines are implemented entirely in Java and can be embedded in an application just by including a jar file. It's a little heavyweight, sure, but it's a lot more scalable and easier to maintain. Just because there are 20 records today doesn't mean there won't be more later due to changing requirements or feature creep.
If, in a few weeks or months, you decide you want to add, say, time of day watering restrictions, it will be much easier to add that functionality if you're already using a database. Even if that never happens, then you've spent a few hours learning how to embed a database in an application.
A: Does the "date" have to be a parameter? If you're just showing the current watering schedule the WateringSchedule class itself can figure out what day it is, and therefore what season it is. Then just have a method which returns a map where the Key is the group letter. Something like:
public Map<String,List<String>> getGroupToScheduledDaysMap() {
// instantiate a date or whatever to decide what Map to return
}
Then in the JSP page
<c:forEach var="day" items="${scheduler.groupToScheduledDaysMap["A"]}">
${day}
</c:forEach>
If you need to show the schedules for more than one season, you should have a method in the WateringSchedule class that returns a map where Seasons are the keys, and then Maps of groupToScheduledDays are the values.
A: There is no pretty solution. Java just doesn't do things like this well. Mike's solution is pretty much the way to do it if you want strings as the indices (keys). Another option if the hash-of-hashes setup is too ugly is to append the strings together (shamelessly stolen from Mike and modified):
Hashtable<String, String> schedule = new Hashtable<String, String>();
schedule.put("A-Winter", "M");
schedule.put("A-Spring", "tTS");
and then lookup:
String val = schedule.get(group + "-" + season);
If you're unhappy with the general ugliness (and I don't blame you), put it all behind a method call:
String whenCanIWater(String group, Date date) { /* ugliness here */ }
A: tl;dr
Using modern Java language features and classes, define your own enums to represent the seasons and groups.
Schedule.daysForGroupOnDate(
Group.D ,
LocalDate.now()
)
That method yields a Set of DayOfWeek enum objects (not mere text!), such as DayOfWeek.TUESDAY & DayOfWeek.THURSDAY.
Modern Java
Can anyone suggest a really cool way to do this?
Yes.
Modern Java has built-in classes, collections, and enums to help you with this problem.
The java.time framework built into Java offers the Month enum and DayOfWeek enum.
The EnumSet and EnumMap provide implementations of Set and Map that are optimized for use with enums for fast execution in very little memory.
You can define your own enums to represent your season and your groups (A, B, and so on). The enum facility in Java is far more useful and powerful than seen in other languages. If not familiar, see the Oracle Tutorial.
The simple syntax of defining your own enums actually provides much of the functionality needed to solve this Question, eliminating some complicated coding. New literals syntax for sets and maps in the collections factory methods in Java 9 (JEP 269) makes the code even simpler now.
Here is a complete working app. Note how little code there is for algorithms. Defining your own custom enums does most of the heavy-lifting.
One caveat with this app code: It assumes nothing changes in your business definitions, or at least if there is a change you only care about the current rules, “latest is greatest”. If your rules change over time and you need to represent all the past, present, and future versions, I would build a very different app, probably with a database to store the rules. But this app here solves the Question as asked.
Season
Represent your season as an enum Season. Each season object holds a List of Month enum objects that define the length of that particular season. The new List.of syntax added to Java 9 defines an immutable list in a literals syntax via static factory methods.
package com.basilbourque.watering;
import java.time.LocalDate;
import java.time.Month;
import java.util.EnumSet;
import java.util.List;
import java.util.Set;
public enum Season
{
SPRING( List.of( Month.MARCH , Month.APRIL ) ),
SUMMER( List.of( Month.MAY , Month.JUNE, Month.JULY , Month.AUGUST ) ),
FALL( List.of( Month.SEPTEMBER , Month.OCTOBER ) ),
WINTER( List.of( Month.NOVEMBER , Month.DECEMBER , Month.JANUARY , Month.FEBRUARY ) );
private List< Month > months;
// Constructor
Season ( List < Month > monthsArg )
{
this.months = monthsArg;
}
public List < Month > getMonths ( )
{
return this.months;
}
// For any given month, determine the season.
static public Season ofLocalMonth ( Month monthArg )
{
Season s = null;
for ( Season season : EnumSet.allOf( Season.class ) )
{
if ( season.getMonths().contains( monthArg ) )
{
s = season;
break; // Bail out of this FOR loop.
}
}
return s;
}
// For any given date, determine the season.
static public Season ofLocalDate ( LocalDate localDateArg )
{
Month month = localDateArg.getMonth();
Season s = Season.ofLocalMonth( month );
return s;
}
// Run `main` for demo/testing.
public static void main ( String[] args )
{
// Dump all these enum objects to console.
for ( Season season : EnumSet.allOf( Season.class ) )
{
System.out.println( "Season: " + season.toString() + " = " + season.getMonths() );
}
}
}
Group
Represent each grouping of customers’ lawns/yards, (A, B, C, D, E) as an enum named Group. Each of these enum objects holds a Map, mapping a Season enum object to a Set of DayOfWeek enum objects. For example, Group.A in Season.SPRING allows watering on two days, DayOfWeek.TUESDAY & DayOfWeek.THURSDAY.
package com.basilbourque.watering;
import java.time.DayOfWeek;
import java.util.EnumMap;
import java.util.EnumSet;
import java.util.Map;
import java.util.Set;
public enum Group
{
A(
Map.of(
Season.SPRING , EnumSet.of( DayOfWeek.TUESDAY , DayOfWeek.THURSDAY ) ,
Season.SUMMER , EnumSet.allOf( DayOfWeek.class ) ,
Season.FALL , EnumSet.of( DayOfWeek.TUESDAY , DayOfWeek.THURSDAY ) ,
Season.WINTER , EnumSet.of( DayOfWeek.TUESDAY )
)
),
B(
Map.of(
Season.SPRING , EnumSet.of( DayOfWeek.FRIDAY ) ,
Season.SUMMER , EnumSet.allOf( DayOfWeek.class ) ,
Season.FALL , EnumSet.of( DayOfWeek.TUESDAY , DayOfWeek.FRIDAY ) ,
Season.WINTER , EnumSet.of( DayOfWeek.FRIDAY )
)
),
C(
Map.of(
Season.SPRING , EnumSet.of( DayOfWeek.MONDAY ) ,
Season.SUMMER , EnumSet.allOf( DayOfWeek.class ) ,
Season.FALL , EnumSet.of( DayOfWeek.MONDAY , DayOfWeek.TUESDAY ) ,
Season.WINTER , EnumSet.of( DayOfWeek.MONDAY )
)
),
D(
Map.of(
Season.SPRING , EnumSet.of( DayOfWeek.WEDNESDAY , DayOfWeek.FRIDAY ) ,
Season.SUMMER , EnumSet.allOf( DayOfWeek.class ) ,
Season.FALL , EnumSet.of( DayOfWeek.FRIDAY ) ,
Season.WINTER , EnumSet.of( DayOfWeek.WEDNESDAY )
)
),
E(
Map.of(
Season.SPRING , EnumSet.of( DayOfWeek.TUESDAY ) ,
Season.SUMMER , EnumSet.allOf( DayOfWeek.class ) ,
Season.FALL , EnumSet.of( DayOfWeek.TUESDAY , DayOfWeek.WEDNESDAY ) ,
Season.WINTER , EnumSet.of( DayOfWeek.WEDNESDAY )
)
);
private Map < Season, Set < DayOfWeek > > map;
// Constructor
Group ( Map < Season, Set < DayOfWeek > > mapArg )
{
this.map = mapArg;
}
// Getter
private Map < Season, Set < DayOfWeek > > getMapOfSeasonToDaysOfWeek() {
return this.map ;
}
// Retrieve the DayOfWeek set for this particular Group.
public Set<DayOfWeek> daysForSeason (Season season ) {
Set<DayOfWeek> days = this.map.get( season ) ; // Retrieve the value (set of days) for this key (a season) for this particular grouping of lawns/yards.
return days;
}
// Run `main` for demo/testing.
public static void main ( String[] args )
{
// Dump all these enum objects to console.
for ( Group group : EnumSet.allOf( Group.class ) )
{
System.out.println( "Group: " + group.toString() + " = " + group.getMapOfSeasonToDaysOfWeek() );
}
}
}
Schedule
Pull it all together in this Schedule class.
This class makes use of the two enums defined above to get useful work done. The only method implemented so far tells you which days of the week are allowed for a particular group on a particular date. The method determines which Season applies for that date.
Run the main method here to dump the contents of our two enums, and report on the days of the week allowing for watering in each group for a particular hard-coded date.
package com.basilbourque.watering;
import java.time.DayOfWeek;
import java.time.LocalDate;
import java.time.ZoneId;
import java.time.format.DateTimeFormatter;
import java.time.temporal.ChronoField;
import java.time.temporal.IsoFields;
import java.util.EnumSet;
import java.util.Set;
public class Schedule
{
static private DateTimeFormatter isoWeekFormatter = DateTimeFormatter.ofPattern( "uuuu-'W'ww" ) ;
static public Set < DayOfWeek > daysForGroupOnDate ( Group group , LocalDate localDate )
{
Season season = Season.ofLocalDate( localDate );
Set < DayOfWeek > days = group.daysForSeason( season );
return days;
}
// Run `main` for demo/testing.
public static void main ( String[] args )
{
Season.main( null );
Group.main( null );
// Dump all these enum objects to console.
for ( Group group : EnumSet.allOf( Group.class ) )
{
LocalDate localDate = LocalDate.now( ZoneId.of( "Africa/Tunis" ) );
Set < DayOfWeek > days = Schedule.daysForGroupOnDate( group , localDate );
String week = localDate.format( Schedule.isoWeekFormatter ) ; // Standard ISO 8601 week, where week number one has the first Thursday of the calendar year, and week starts on Monday, so year is either 52 or 53 weeks long.
String message = "Group " + group + " – Watering days on " + localDate + " week # " + week + " is: " + days;
System.out.println( message );
}
}
}
Console
When running Schedule.main, we see this dumped to the console.
Season: SPRING = [MARCH, APRIL]
Season: SUMMER = [MAY, JUNE, JULY, AUGUST]
Season: FALL = [SEPTEMBER, OCTOBER]
Season: WINTER = [NOVEMBER, DECEMBER, JANUARY, FEBRUARY]
Group: A = {SPRING=[TUESDAY, THURSDAY], FALL=[TUESDAY, THURSDAY], SUMMER=[MONDAY, TUESDAY, WEDNESDAY, THURSDAY, FRIDAY, SATURDAY, SUNDAY], WINTER=[TUESDAY]}
Group: B = {SPRING=[FRIDAY], FALL=[TUESDAY, FRIDAY], SUMMER=[MONDAY, TUESDAY, WEDNESDAY, THURSDAY, FRIDAY, SATURDAY, SUNDAY], WINTER=[FRIDAY]}
Group: C = {SPRING=[MONDAY], FALL=[MONDAY, TUESDAY], SUMMER=[MONDAY, TUESDAY, WEDNESDAY, THURSDAY, FRIDAY, SATURDAY, SUNDAY], WINTER=[MONDAY]}
Group: D = {SPRING=[WEDNESDAY, FRIDAY], FALL=[FRIDAY], SUMMER=[MONDAY, TUESDAY, WEDNESDAY, THURSDAY, FRIDAY, SATURDAY, SUNDAY], WINTER=[WEDNESDAY]}
Group: E = {SPRING=[TUESDAY], FALL=[TUESDAY, WEDNESDAY], SUMMER=[MONDAY, TUESDAY, WEDNESDAY, THURSDAY, FRIDAY, SATURDAY, SUNDAY], WINTER=[WEDNESDAY]}
Group A – Watering days on 2018-01-30 week # 2018-W05 is: [TUESDAY]
Group B – Watering days on 2018-01-30 week # 2018-W05 is: [FRIDAY]
Group C – Watering days on 2018-01-30 week # 2018-W05 is: [MONDAY]
Group D – Watering days on 2018-01-30 week # 2018-W05 is: [WEDNESDAY]
Group E – Watering days on 2018-01-30 week # 2018-W05 is: [WEDNESDAY]
ISO 8601 week
You may find it helpful to learn about the ISO 8601 standard for a definition of week. The standard gives a specific meaning to “week” and defines a textual format for representing a particular week or a particular day within that week.
For working with such weeks within Java, consider adding the ThreeTen-Extra library to your project to make use of the YearWeek class.
LocalDate
The LocalDate class represents a date-only value without time-of-day and without time zone.
A time zone is crucial in determining a date. For any given moment, the date varies around the globe by zone. For example, a few minutes after midnight in Paris France is a new day while still “yesterday” in Montréal Québec.
If no time zone is specified, the JVM implicitly applies its current default time zone. That default may change at any moment, so your results may vary. Better to specify your desired/expected time zone explicitly as an argument.
Specify a proper time zone name in the format of continent/region, such as America/Montreal, Africa/Casablanca, or Pacific/Auckland. Never use the 3-4 letter abbreviation such as EST or IST as they are not true time zones, not standardized, and not even unique(!).
ZoneId z = ZoneId.of( "America/Montreal" ) ;
LocalDate today = LocalDate.now( z ) ;
If you want to use the JVM’s current default time zone, ask for it and pass as an argument. If omitted, the JVM’s current default is applied implicitly. Better to be explicit.
ZoneId z = ZoneId.systemDefault() ; // Get JVM’s current default time zone.
Or specify a date. You may set the month by a number, with sane numbering 1-12 for January-December.
LocalDate ld = LocalDate.of( 1986 , 2 , 23 ) ; // Years use sane direct numbering (1986 means year 1986). Months use sane numbering, 1-12 for January-December.
Or, better, use the Month enum objects pre-defined, one for each month of the year. Tip: Use these Month objects throughout your codebase rather than a mere integer number to make your code more self-documenting, ensure valid values, and provide type-safety.
LocalDate ld = LocalDate.of( 1986 , Month.FEBRUARY , 23 ) ;
Immutable collections
The lists, sets, and maps seen above should ideally be immutable collections, as changing the membership of those collections is likely to be confounding and erroneous.
The new Java 9 syntax List.of and Map.of are already promised to be immutable. However, in our case the Map should ideally be an EnumMap for efficiency of performance and memory. The current implementations of Map.of and Set.of apparently do not detect the use of enums as members and automatically optimize with an internal use of EnumMap and EnumSet. There is an OpenJDK issue open to consider such enhancements: consider enhancements to EnumMap and EnumSet.
One way to obtain an immutable EnumSet and immutable EnumMap is through the Google Guava library:
*
*Sets.immutableEnumSet( … )
*Maps.immutableEnumMap( … )
The results of each use an underlying EnumSet/EnumMap. Operations such as get and put throw an exception. So you get the enum-related optimizations along with immutability.
Here are the Season and Group classes seen above modified to use the Google Guava 23.6 library.
Season with immutability
package com.basilbourque.watering;
import java.time.LocalDate;
import java.time.Month;
import java.util.EnumSet;
import java.util.List;
public enum Season
{
    SPRING( List.of( Month.MARCH , Month.APRIL ) ), // `List.of` provides literals-style syntax, and returns an immutable `List`. New in Java 9.
    SUMMER( List.of( Month.MAY , Month.JUNE , Month.JULY , Month.AUGUST ) ),
    FALL( List.of( Month.SEPTEMBER , Month.OCTOBER ) ),
    WINTER( List.of( Month.NOVEMBER , Month.DECEMBER , Month.JANUARY , Month.FEBRUARY ) );

    private final List< Month > months;

    // Constructor
    Season ( List< Month > monthsArg )
    {
        this.months = monthsArg;
    }

    public List< Month > getMonths ( )
    {
        return this.months;
    }

    // For any given month, determine the season.
    static public Season ofLocalMonth ( Month monthArg )
    {
        Season s = null;
        for ( Season season : EnumSet.allOf( Season.class ) )
        {
            if ( season.getMonths().contains( monthArg ) )
            {
                s = season;
                break; // Bail out of this FOR loop.
            }
        }
        return s;
    }

    // For any given date, determine the season.
    static public Season ofLocalDate ( LocalDate localDateArg )
    {
        Month month = localDateArg.getMonth();
        Season s = Season.ofLocalMonth( month );
        return s;
    }

    // Run `main` for demo/testing.
    public static void main ( String[] args )
    {
        // Dump all these enum objects to console.
        for ( Season season : EnumSet.allOf( Season.class ) )
        {
            System.out.println( "Season: " + season.toString() + " = " + season.getMonths() );
        }
    }
}
Group with immutability
package com.basilbourque.watering;
import com.google.common.collect.Maps;
import com.google.common.collect.Sets;
import java.time.DayOfWeek;
import java.util.EnumSet;
import java.util.Map;
import java.util.Set;
public enum Group
{
    A(
        Maps.immutableEnumMap(
            Map.of( // `Map.of` provides literals-style syntax, and returns an immutable `Map`. New in Java 9.
                Season.SPRING , Sets.immutableEnumSet( DayOfWeek.TUESDAY , DayOfWeek.THURSDAY ) ,
                Season.SUMMER , Sets.immutableEnumSet( EnumSet.allOf( DayOfWeek.class ) ) ,
                Season.FALL , Sets.immutableEnumSet( DayOfWeek.TUESDAY , DayOfWeek.THURSDAY ) ,
                Season.WINTER , Sets.immutableEnumSet( DayOfWeek.TUESDAY )
            )
        )
    ),
    B(
        Maps.immutableEnumMap(
            Map.of(
                Season.SPRING , Sets.immutableEnumSet( DayOfWeek.FRIDAY ) ,
                Season.SUMMER , Sets.immutableEnumSet( EnumSet.allOf( DayOfWeek.class ) ) ,
                Season.FALL , Sets.immutableEnumSet( DayOfWeek.TUESDAY , DayOfWeek.FRIDAY ) ,
                Season.WINTER , Sets.immutableEnumSet( DayOfWeek.FRIDAY )
            )
        )
    ),
    C(
        Maps.immutableEnumMap(
            Map.of(
                Season.SPRING , Sets.immutableEnumSet( DayOfWeek.MONDAY ) ,
                Season.SUMMER , Sets.immutableEnumSet( EnumSet.allOf( DayOfWeek.class ) ) ,
                Season.FALL , Sets.immutableEnumSet( DayOfWeek.MONDAY , DayOfWeek.TUESDAY ) ,
                Season.WINTER , Sets.immutableEnumSet( DayOfWeek.MONDAY )
            )
        )
    ),
    D(
        Maps.immutableEnumMap(
            Map.of(
                Season.SPRING , Sets.immutableEnumSet( DayOfWeek.WEDNESDAY , DayOfWeek.FRIDAY ) ,
                Season.SUMMER , Sets.immutableEnumSet( EnumSet.allOf( DayOfWeek.class ) ) ,
                Season.FALL , Sets.immutableEnumSet( DayOfWeek.FRIDAY ) ,
                Season.WINTER , Sets.immutableEnumSet( DayOfWeek.WEDNESDAY )
            )
        )
    ),
    E(
        Maps.immutableEnumMap(
            Map.of(
                Season.SPRING , Sets.immutableEnumSet( DayOfWeek.TUESDAY ) ,
                Season.SUMMER , Sets.immutableEnumSet( EnumSet.allOf( DayOfWeek.class ) ) ,
                Season.FALL , Sets.immutableEnumSet( EnumSet.of( DayOfWeek.TUESDAY , DayOfWeek.WEDNESDAY ) ) ,
                Season.WINTER , Sets.immutableEnumSet( DayOfWeek.WEDNESDAY )
            )
        )
    );

    private final Map< Season , Set< DayOfWeek > > map;

    // Constructor
    Group ( Map< Season , Set< DayOfWeek > > mapArg )
    {
        this.map = mapArg;
    }

    // Getter
    private Map< Season , Set< DayOfWeek > > getMapOfSeasonToDaysOfWeek ( )
    {
        return this.map;
    }

    // Retrieve the DayOfWeek set for this particular Group.
    public Set< DayOfWeek > daysForSeason ( Season season )
    {
        Set< DayOfWeek > days = this.map.get( season ); // Retrieve the value (set of days) for this key (a season) for this particular grouping of lawns/yards.
        return days;
    }

    // Run `main` for demo/testing.
    public static void main ( String[] args )
    {
        // Dump all these enum objects to console.
        for ( Group group : EnumSet.allOf( Group.class ) )
        {
            System.out.println( "Group: " + group.toString() + " = " + group.getMapOfSeasonToDaysOfWeek() );
        }
    }
}
About java.time
The java.time framework is built into Java 8 and later. These classes supplant the troublesome old legacy date-time classes such as java.util.Date, Calendar, & SimpleDateFormat.
The Joda-Time project, now in maintenance mode, advises migration to the java.time classes.
To learn more, see the Oracle Tutorial. And search Stack Overflow for many examples and explanations. Specification is JSR 310.
Where to obtain the java.time classes?
*
*Java SE 8, Java SE 9, and later
*
*Built-in.
*Part of the standard Java API with a bundled implementation.
*Java 9 adds some minor features and fixes.
*Java SE 6 and Java SE 7
*
*Much of the java.time functionality is back-ported to Java 6 & 7 in ThreeTen-Backport.
*Android
*
*Later versions of Android bundle implementations of the java.time classes.
*For earlier Android, the ThreeTenABP project adapts ThreeTen-Backport (mentioned above). See How to use ThreeTenABP….
The ThreeTen-Extra project extends java.time with additional classes. This project is a proving ground for possible future additions to java.time. You may find some useful classes here such as Interval, YearWeek, YearQuarter, and more.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/12870",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "17"
} |
Q: Oracle SQL Developer not responsive when trying to view tables (or suggest an Oracle Mac client) I just get the beach ball all day long (it's been doing nothing for hours). It's not taking CPU, not reading from disk, not using the network.
I'm using Java 1.6 on Mac OS X 10.5.4. It worked once, now even restarts of the computer won't help. Activity Monitor says it's "(Not Responding)". Only thing that I can do is kill -9 that sucker.
When I sample the process I see this:
mach_msg_trap 16620
read 831
semaphore_wait_trap 831
An acceptable answer that doesn't fix this would include a url for a decent free Oracle client for the Mac.
Edit:
@Mark Harrison sadly this happens every time I start it up, it's not an old connection. I'd like to avoid running Windows on my laptop. I'm giving some plugins for my IDE a whirl, but still no solution for me.
@Matthew Schinckel Navicat seems to only have a non-commercial Oracle product...I need a commercial friendly one (even if it costs money).
A: I get the same problem after there's been an active connection sitting idle for a while. I solve it by restarting sql developer every once in a while.
I also have Toad for Oracle running on a vmware XP session, and it works great. If you don't mind the money, try that.
A: The company Navicat has released an Oracle client for Mac (and they do a Windows version too).
It's not free, but I think you can get a 30 day demo.
A: Have you looked at http://www.aquafold.com/? They have a very Mac-friendly Java/JDBC utility, Aqua Data Studio (ADS), that you can try for (I think) 30 days. It's not free, but...
Excellent support via Yahoo groups. VERY responsive re bugs or enhancement requests.
No affiliation with them - just a fan.
A: SQuirreL is a nice database-agnostic application development client. It has no Oracle-specific features, but it runs well on the Mac.
A: I use SQLDeveloper on the Mac and have had problems where it becomes unresponsive. Usually, I can fix this by going into the Activity Monitor and killing the process. However, this doesn't always work to end the process. When that happens, I go to the Terminal and find the process id and send it a SIGKILL and then the next time it will work correctly.
However, more importantly I evaluated SQLGrinder at one point. I didn't end up buying the software, largely because I have a Mac laptop and a windows desktop. Therefore, I more often use Toad on the windows desktop and it wasn't worth purchasing SQLGrinder for me.
A: Use RazorSQL. Do yourself a favor and spend the 60 bucks. It will pay for itself in the first hour or two of use. You may even be able to get 60 days for free out of it.
A: The latest version of SQL Developer is very good and I have experienced no problems with it on my Mac Pro. DB Solo 3 is also quite good.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/12877",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: What is a good way to denormalize a mysql database? I have a large database of normalized order data that is becoming very slow to query for reporting. Many of the queries that I use in reports join five or six tables and are having to examine tens or hundreds of thousands of lines.
There are lots of queries and most have been optimized as much as possible to reduce server load and increase speed. I think it's time to start keeping a copy of the data in a denormalized format.
Any ideas on an approach? Should I start with a couple of my worst queries and go from there?
A: MySQL 5 does support views, which may be helpful in this scenario. It sounds like you've already done a lot of optimizing, but if not you can use MySQL's EXPLAIN syntax to see what indexes are actually being used and what is slowing down your queries.
As far as going about normalizing data (whether you're using views or just duplicating data in a more efficient manner), I think starting with the slowest queries and working your way through is a good approach to take.
A: I know more about mssql than mysql, but I don't think the number of joins or number of rows you are talking about should cause you too many problems with the correct indexes in place. Have you analyzed the query plan to see if you are missing any?
http://dev.mysql.com/doc/refman/5.0/en/explain.html
That being said, once you are satisfied with your indexes and have exhausted all other avenues, de-normalization might be the right answer. If you just have one or two queries that are problems, a manual approach is probably appropriate, whereas some sort of data warehousing tool might be better for creating a platform to develop data cubes.
Here's a site I found that touches on the subject:
http://www.meansandends.com/mysql-data-warehouse/?link_body%2Fbody=%7Bincl%3AAggregation%7D
Here's a simple technique that you can use to keep denormalizing queries simple, if you're just doing a few at a time (and I'm not replacing your OLTP tables, just creating a new one for reporting purposes). Let's say you have this query in your application:
select a.name, b.address from tbla a
join tblb b on b.fk_a_id = a.id where a.id=1
You could create a denormalized table and populate with almost the same query:
create table tbl_ab (a_id, a_name, b_address);
-- (types elided)
Notice the underscores match the table aliases you use
insert tbl_ab select a.id, a.name, b.address from tbla a
join tblb b on b.fk_a_id = a.id
-- no where clause because you want everything
Then to fix your app to use the new denormalized table, switch the dots for underscores.
select a_name as name, b_address as address
from tbl_ab where a_id = 1;
For huge queries this can save a lot of time and makes it clear where the data came from, and you can re-use the queries you already have.
Remember, I'm only advocating this as the last resort. I bet there's a few indexes that would help you. And when you de-normalize, don't forget to account for the extra space on your disks, and figure out when you will run the query to populate the new tables. This should probably be at night, or whenever activity is low. And the data in that table, of course, will never exactly be up to date.
[Yet another edit] Don't forget that the new tables you create need to be indexed too! The good part is that you can index to your heart's content and not worry about update lock contention, since aside from your bulk insert the table will only see selects.
A: I know this is a bit tangential, but have you tried seeing if there are more indexes you can add?
I don't have a lot of DB background, but I am working with databases a lot recently, and I've been finding that a lot of the queries can be improved just by adding indexes.
We are using DB2, and there is a command called db2expln and db2advis, the first will indicate whether table scans vs index scans are being used, and the second will recommend indexes you can add to improve performance. I'm sure MySQL has similar tools...
Anyways, if this is something you haven't considered yet, it has been helping a lot with me... but if you've already gone this route, then I guess it's not what you are looking for.
Another possibility is a "materialized view" (or as they call it in DB2), which lets you specify a table that is essentially built of parts from multiple tables. Thus, rather than normalizing the actual columns, you could provide this view to access the data... but I don't know if this has severe performance impacts on inserts/updates/deletes (but if it is "materialized", then it should help with selects since the values are physically stored separately).
A: In line with some of the other comments, I would definitely have a look at your indexing.
One thing I discovered earlier this year on our MySQL databases was the power of composite indexes. For example, if you are reporting on order numbers over date ranges, a composite index on the order number and order date columns could help (see the sketch just below). I believe MySQL can only use one index for the query, so if you just had separate indexes on the order number and order date it would have to decide on just one of them to use. Using the EXPLAIN command can help determine this.
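As a minimal sketch (the orders table and its columns are hypothetical):
-- Composite index covering the typical reporting filter.
CREATE INDEX idx_number_date ON orders (order_number, order_date);
-- Verify that the optimizer actually picks it up.
EXPLAIN SELECT * FROM orders
WHERE order_number = 12345
AND order_date BETWEEN '2008-01-01' AND '2008-06-30';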
To give an indication of the performance with good indexes (including numerous composite indexes), I can run queries joining 3 tables in our database and get almost instant results in most cases. For more complex reporting most of the queries run in under 10 seconds. These 3 tables have 33 million, 110 million and 140 million rows respectively. Note that we had also already normalised these slightly to speed up our most common query on the database.
More information regarding your tables and the types of reporting queries may allow further suggestions.
A: For MySQL I like this talk: Real World Web: Performance & Scalability, MySQL Edition. This contains a lot of different pieces of advice for getting more speed out of MySQL.
A: You might also want to consider selecting into a temporary table and then performing queries on that temporary table. This would avoid the need to rejoin your tables for every single query you issue (assuming that you can use the temporary table for numerous queries, of course). This basically gives you denormalized data, but if you are only doing select calls, there's no concern about data consistency.
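A rough sketch of that idea (the schema is hypothetical): join once into a temporary table, then run all the report queries against it.
CREATE TEMPORARY TABLE tmp_report AS
SELECT o.id, o.order_date, c.name, SUM(i.qty * i.price) AS total
FROM orders o
JOIN customers c ON c.id = o.customer_id
JOIN order_items i ON i.order_id = o.id
GROUP BY o.id, o.order_date, c.name;
-- Several reports, no re-joining:
SELECT name, SUM(total) FROM tmp_report GROUP BY name;
SELECT order_date, SUM(total) FROM tmp_report GROUP BY order_date;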
A: Further to my previous answer, another approach we have taken in some situations is to store key reporting data in separate summary tables. There are certain reporting queries which are just going to be slow even after denormalising and optimisations and we found that creating a table and storing running totals or summary information throughout the month as it came in made the end of month reporting much quicker as well.
We found this approach easy to implement as it didn't break anything that was already working - it's just additional database inserts at certain points.
A: I've been toying with composite indexes and have seen some real benefits... maybe I'll set up some tests to see if that can save me here... at least for a little longer.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/12890",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "22"
} |
Q: parsing raw email in php I'm looking for good/working/simple to use PHP code for parsing raw email into parts.
I've written a couple of brute force solutions, but every time, one small change/header/space/something comes along and my whole parser fails and the project falls apart.
And before I get pointed at PEAR/PECL, I need actual code. My host has some screwy config or something, I can never seem to get the .so's to build right. If I do get the .so made, some difference in path/environment/php.ini doesn't always make it available (apache vs cron vs CLI).
Oh, and one last thing, I'm parsing the raw email text, NOT POP3, and NOT IMAP. It's being piped into the PHP script via a .qmail email redirect.
I'm not expecting SOF to write it for me, I'm looking for some tips/starting points on doing it "right". This is one of those "wheel" problems that I know has already been solved.
A: I cobbled this together, some code isn't mine but I don't know where it came from... I later adopted the more robust "MimeMailParser" but this works fine, I pipe my default email to it using cPanel and it works great.
#!/usr/bin/php -q
<?php
// Config
$dbuser = 'emlusr';
$dbpass = 'pass';
$dbname = 'email';
$dbhost = 'localhost';
$notify= 'services@.com'; // an email address required in case of errors
function mailRead($iKlimit = "")
{
// Purpose:
// Reads piped mail from STDIN
//
// Arguments:
// $iKlimit (integer, optional): specifies after how many kilobytes reading of mail should stop
// Defaults to 1024k if no value is specified
// A value of -1 will cause reading to continue until the entire message has been read
//
// Return value:
// A string containing the entire email, headers, body and all.
// Variable preparation
// Set default limit of 1024k if no limit has been specified
if ($iKlimit == "") {
$iKlimit = 1024;
}
// Error strings
$sErrorSTDINFail = "Error - failed to read mail from STDIN!";
// Attempt to connect to STDIN
$fp = fopen("php://stdin", "r");
// Failed to connect to STDIN? (shouldn't really happen)
if (!$fp) {
echo $sErrorSTDINFail;
exit();
}
// Create empty string for storing message
$sEmail = "";
// Read message up until limit (if any)
if ($iKlimit == -1) {
while (!feof($fp)) {
$sEmail .= fread($fp, 1024);
}
} else {
$i_limit = 0; // kilobyte counter (each fread below reads up to 1 KB)
while (!feof($fp) && $i_limit < $iKlimit) {
$sEmail .= fread($fp, 1024);
$i_limit++;
}
}
// Close connection to STDIN
fclose($fp);
// Return message
return $sEmail;
}
$email = mailRead();
// handle email
$lines = explode("\n", $email);
// empty vars
$from = "";
$subject = "";
$headers = "";
$message = "";
$splittingheaders = true;
for ($i=0; $i < count($lines); $i++) {
if ($splittingheaders) {
// this is a header
$headers .= $lines[$i]."\n";
// look out for special headers
if (preg_match("/^Subject: (.*)/", $lines[$i], $matches)) {
$subject = $matches[1];
}
if (preg_match("/^From: (.*)/", $lines[$i], $matches)) {
$from = $matches[1];
}
if (preg_match("/^To: (.*)/", $lines[$i], $matches)) {
$to = $matches[1];
}
} else {
// not a header, but message
$message .= $lines[$i]."\n";
}
if (trim($lines[$i])=="") {
// empty line, header section has ended
$splittingheaders = false;
}
}
if ($conn = @mysql_connect($dbhost,$dbuser,$dbpass)) {
if(!@mysql_select_db($dbname,$conn))
mail($notify,'Email Logger Error',"There was an error selecting the email logger database.\n\n".mysql_error());
$from = mysql_real_escape_string($from);
$to = mysql_real_escape_string($to);
$subject = mysql_real_escape_string($subject);
$headers = mysql_real_escape_string($headers);
$message = mysql_real_escape_string($message);
$email = mysql_real_escape_string($email);
$result = @mysql_query("INSERT INTO email_log (`to`,`from`,`subject`,`headers`,`message`,`source`) VALUES('$to','$from','$subject','$headers','$message','$email')");
if (mysql_affected_rows() == 0)
mail($notify,'Email Logger Error',"There was an error inserting into the email logger database.\n\n".mysql_error());
} else {
mail($notify,'Email Logger Error',"There was an error connecting the email logger database.\n\n".mysql_error());
}
?>
A: There are the Mailparse functions you could try: http://php.net/manual/en/book.mailparse.php (not in the default PHP configuration, however).
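If the extension is available, a minimal sketch looks like this (assuming $raw holds the raw message):
$mime = mailparse_msg_create();
mailparse_msg_parse($mime, $raw);
$structure = mailparse_msg_get_structure($mime); // part ids, e.g. array('1', '1.1', '1.2')
foreach ($structure as $partId) {
    $part = mailparse_msg_get_part($mime, $partId);
    $info = mailparse_msg_get_part_data($part); // headers, content-type, charset, body offsets
    // mailparse_msg_extract_part($part, $raw) would decode this part's body (to stdout by default).
}
mailparse_msg_free($mime);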
A: What are you hoping to end up with at the end? The body, the subject, the sender, an attachment? You should spend some time with RFC2822 to understand the format of the mail, but here's the simplest rules for well formed email:
HEADERS\n
\n
BODY
That is, the first blank line (double newline) is the separator between the HEADERS and the BODY. A HEADER looks like this:
HSTRING:HTEXT
HSTRING always starts at the beginning of a line and doesn't contain any white space or colons. HTEXT can contain a wide variety of text, including newlines as long as the newline char is followed by whitespace.
The "BODY" is really just any data that follows the first double newline. (There are different rules if you are transmitting mail via SMTP, but processing it over a pipe you don't have to worry about that).
So, in really simple, circa-1982 RFC822 terms, an email looks like this:
HEADER: HEADER TEXT
HEADER: MORE HEADER TEXT
INCLUDING A LINE CONTINUATION
HEADER: LAST HEADER
THIS IS ANY
ARBITRARY DATA
(FOR THE MOST PART)
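A minimal PHP sketch of that split rule (assuming $raw holds the full piped message with well-formed headers):
// Headers end at the first blank line; keep the rest as the body.
list($rawHeaders, $body) = preg_split("/\r?\n\r?\n/", $raw, 2);
// Unfold continuation lines (a newline followed by whitespace), per the rule above.
$rawHeaders = preg_replace("/\r?\n[ \t]+/", " ", $rawHeaders);
$headers = array();
foreach (preg_split("/\r?\n/", $rawHeaders) as $line) {
    list($name, $value) = explode(":", $line, 2);
    $headers[strtolower($name)] = trim($value);
}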
Most modern email is more complex than that, though. Headers can be encoded for charsets or as RFC2047 MIME words, or a ton of other stuff I'm not thinking of right now. The bodies are really hard to roll your own code for these days, too, if you want them to be meaningful. Almost all email that's generated by an MUA will be MIME encoded. That might be uuencoded text, it might be HTML, it might be a uuencoded Excel spreadsheet.
I hope this helps provide a framework for understanding some of the very elemental buckets of email. If you provide more background on what you are trying to do with the data I (or someone else) might be able to provide better direction.
A: Try the Plancake PHP Email parser:
https://github.com/plancake/official-library-php-email-parser
I have used it for my projects. It works great, it is just one class and it is open source.
A: The Pear lib Mail_mimeDecode is written in plain PHP that you can see here: Mail_mimeDecode source
A: There is a library for parsing raw email message into php array - http://flourishlib.com/api/fMailbox#parseMessage.
The static method parseMessage() can be used to parse a full MIME
email message into the same format that fetchMessage() returns, minus
the uid key.
$parsed_message =
fMailbox::parseMessage(file_get_contents('/path/to/email'));
Here is an example of a parsed message:
array(
'received' => '28 Apr 2010 22:00:38 -0400',
'headers' => array(
'received' => array(
0 => '(qmail 25838 invoked from network); 28 Apr 2010 22:00:38 -0400',
1 => 'from example.com (HELO ?192.168.10.2?) (example) by example.com with (DHE-RSA-AES256-SHA encrypted) SMTP; 28 Apr 2010 22:00:38 -0400'
),
'message-id' => '<4BD8E815.1050209@flourishlib.com>',
'date' => 'Wed, 28 Apr 2010 21:59:49 -0400',
'from' => array(
'personal' => 'Will Bond',
'mailbox' => 'tests',
'host' => 'flourishlib.com'
),
'user-agent' => 'Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.1.9) Gecko/20100317 Thunderbird/3.0.4',
'mime-version' => '1.0',
'to' => array(
0 => array(
'mailbox' => 'tests',
'host' => 'flourishlib.com'
)
),
'subject' => 'This message is encrypted'
),
'text' => 'This message is encrypted',
'decrypted' => TRUE,
'uid' => 15
);
A: This https://github.com/zbateson/MailMimeParser works for me, and don't need mailparse extension.
<?php
echo $message->getHeaderValue('from'); // user@example.com
echo $message
->getHeader('from')
->getPersonName(); // Person Name
echo $message->getHeaderValue('subject'); // The email's subject
echo $message->getTextContent(); // or getHtmlContent
A: You're probably not going to have much fun writing your own MIME parser. The reason you are finding "overdeveloped mail handling packages" is because MIME is a really complex set of rules/formats/encodings. MIME parts can be recursive, which is part of the fun. I think your best bet is to write the best MIME handler you can, parse a message, throw away everything that's not text/plain or text/html, and then force the command in the incoming string to be prefixed with COMMAND: or something similar so that you can find it in the muck. If you start with rules like that you have a decent chance of handling new providers, but you should be ready to tweak if a new provider comes along (or heck, if your current provider chooses to change their messaging architecture).
A: I'm not sure if this will be of help to you - hope so - but it will surely help others interested in finding out more about email. Marcus Bointon did one of the best presentations entitled "Mail() and life after Mail()" at the PHP London conference in March this year and the slides and MP3 are online. He speaks with some authority, having worked extensively with email and PHP at a deep level.
My perception is that you are in for a world of pain trying to write a truly generic parser.
EDIT - The files seem to have been removed on the PHP London site; I found the slides on Marcus' own site: Part 1, Part 2. Couldn't see the MP3 anywhere, though.
A: Parsing email in PHP isn't an impossible task. What I mean is, you don't need a team of engineers to do it; it is attainable as an individual. Really the hardest part I found was creating the FSM for parsing an IMAP BODYSTRUCTURE result. Nowhere on the Internet had I seen this so I wrote my own. My routine basically creates an array of nested arrays from the command output, and the depth one is at in the array roughly corresponds to the part number(s) needed to perform the lookups. So it handles the nested MIME structures quite gracefully.
The problem is that PHP's default imap_* functions don't provide much granularity...so I had to open a socket to the IMAP port and write the functions to send and retrieve the necessary information (IMAP FETCH 1 BODY.PEEK[1.2] for example), and that involves looking at the RFC documentation.
The encoding of the data (quoted-printable, base64, 7bit, 8bit, etc.), length of the message, content-type, etc. is all provided to you; for attachments, text, html, etc. You may have to figure out the nuances of your mail server as well since not all fields are always implemented 100%.
The gem is the FSM... if you have a background in Comp Sci it can be really, really fun to make this (the key is that brackets are not a regular grammar ;)); otherwise it will be a struggle and/or result in ugly code, using traditional methods. Also you need some time!
Hope this helps!
A: Yeah, I've been able to write a basic parser, based off that RFC and some other basic tutorials. But it's the multipart MIME nested boundaries that keep messing me up.
I found out that MMS (not SMS) messages sent from my phone are just standard emails, so I have a system that reads the incoming email, checks the From (to only allow messages from my phone), and uses the body part to run different commands on my server. It's sort of like a remote control by email.
Because the system is designed to send pictures, it's got a bunch of differently encoded parts: an mms.smil.txt part, a text/plain part (which is useless, it just says 'this is a html message'), an application/smil part (which is the part that phones would pick up on), a text/html part with an advertisement for my carrier, then my message, but all wrapped in HTML, then finally a text file attachment with my plain message (which is the part I use). (If I shove an image as an attachment in the message, it's put at attachment 1, base64 encoded, then my text portion is attached as attachment 2.)
I had it working with the exact mail format from my carrier, but when I ran a message from someone else's phone through it, it failed in a whole bunch of miserable ways.
I have other projects I'd like to extend this phone->mail->parse->command system to, but I need a stable/solid/generic parser to get the different parts out of the mail to use it.
My end goal would be a function that I could feed the raw piped mail into, and get back a big array with associative sub-arrays of header var:val pairs, and one for the body text as a whole string.
The more I search on this, the more I find the same thing: giant overdeveloped mail handling packages that do everything under the sun that's related to mail, or tutorials that are useless to me in this project.
I think I'm going to have to bite the bullet and just carefully write something myself.
A: This library work very well:
http://www.phpclasses.org/package/3169-PHP-Decode-MIME-e-mail-messages.html
A: I met the same problem so I wrote the following class: Email_Parser. It takes in a raw email and turns it into a nice object.
It requires PEAR Mail_mimeDecode but that should be easy to install via WHM or straight from command line.
Get it here : https://github.com/optimumweb/php-email-reader-parser
A: Simple PhpMimeParser https://github.com/breakermind/PhpMimeParser You can cut MIME messages from files or strings. Get files, HTML and inline images.
$str = file_get_contents('mime-mixed-related-alternative.eml');
// MimeParser
$m = new PhpMimeParser($str);
// Emails
print_r($m->mTo);
print_r($m->mFrom);
// Message
echo $m->mSubject;
echo $m->mHtml;
echo $m->mText;
// Attachments and inline images
print_r($m->mFiles);
print_r($m->mInlineList);
A: If you're trying to do this from a Docker container, use PEAR to install Mail and Mail_mimeDecode at build.
FROM php:7.4-apache
WORKDIR /var/www/html
EXPOSE 80
WORKDIR /var/www
RUN chown -R www-data html
RUN docker-php-ext-install mysqli
RUN pear install --alldeps mail
RUN pear install Mail_mimeDecode
Then in your PHP code, something like this:
<?php
require_once "/usr/local/lib/php/Mail.php";
require_once "/usr/local/lib/php/Mail/mimeDecode.php";
$mailfiles = ['/var/www/mail/mailFile1','/var/www/mail/mailFile2'];
foreach($mailfiles as $filename){
$theFile = fopen($filename, "r") or die("Unable to open file!");
$rawEmail = fread($theFile, filesize($filename));
fclose($theFile);
$args = [];
$args['include_bodies'] = true;
$args['decode_bodies'] = FALSE;
$args['decode_headers'] = FALSE;
$objMail = new Mail_mimeDecode($rawEmail);
$return = $objMail->decode($args);
if (PEAR::isError($return)) {
echo("<p>" . $return->getMessage() . "</p>");
var_dump($return);
} else {
//echo("No error in PEAR::isError(return)");
}
if($return->body){
$decoded = base64_decode($return->body, true);
var_dump($decoded);
}//end if(body)
}//end foreach(mailfiles as file)
?>
| {
"language": "en",
"url": "https://stackoverflow.com/questions/12896",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "35"
} |
Q: Creating Infopath 2007 addins that manipulate the design-time form I'm experimenting with creating an add-in for Infopath 2007. The documentation is very skimpy. What I'm trying to determine is what kind of actions an add-in can take while designing a form. Most of the discussion and samples are for when the user is filling out the form. Can I, for example, add a new field to the form in the designer? Add a new item to the schema? Move a form field on the design surface? It doesn't appear so, but I can't find anything definitive.
A: There is no Object Model for the InfoPath designer.
I believe the closest that you can get is the exposed API for the Visual Studio hosting that InfoPath supports; but I don't believe that this will give you the programatic control of the designer that you'd like.
http://msdn.microsoft.com/en-us/library/aa813327.aspx#office2007infopathVSTO_InfoPathDesignerAPIIntegratingInfoPath2007VisualStudio
Sorry Kevin.
A: Unfortunately, Bryan is probably right.
And I have tried to make a VS plugin for use with InfoPath development. It is very restrictive and hard to use. Not very effective for quick scripting work.
I have found AutoHotKey to be the best ad hoc scripting tool for use with InfoPath. It doesn't integrate directly with InfoPath, but I have found key+mouse automation to accomplish most of what I have needed.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/12905",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Find out complete SQL Server database size I need to know how much space all the databases inside a SQL Server 2000 instance occupy. I did some research but could not find any script to help me out.
A: Source: http://searchsqlserver.techtarget.com/tip/0,289483,sid87_gci1313431,00.html
Works with SQL 2000, 2005 and 2008
USE master;
GO
IF OBJECT_ID('dbo.sp_SDS', 'P') IS NOT NULL
DROP PROCEDURE dbo.sp_SDS;
GO
CREATE PROCEDURE dbo.sp_SDS
@TargetDatabase sysname = NULL, -- NULL: all dbs
@Level varchar(10) = 'Database', -- or "File"
@UpdateUsage bit = 0, -- default no update
@Unit char(2) = 'MB' -- Megabytes, Kilobytes or Gigabytes
AS
/**************************************************************************************************
**
** author: Richard Ding
** date: 4/8/2008
** usage: list db size AND path w/o SUMmary
** test code: sp_SDS -- default behavior
** sp_SDS 'maAster'
** sp_SDS NULL, NULL, 0
** sp_SDS NULL, 'file', 1, 'GB'
** sp_SDS 'Test_snapshot', 'Database', 1
** sp_SDS 'Test', 'File', 0, 'kb'
** sp_SDS 'pfaids', 'Database', 0, 'gb'
** sp_SDS 'tempdb', NULL, 1, 'kb'
**
**************************************************************************************************/
SET NOCOUNT ON;
IF @TargetDatabase IS NOT NULL AND DB_ID(@TargetDatabase) IS NULL
BEGIN
RAISERROR(15010, -1, -1, @TargetDatabase);
RETURN (-1)
END
IF OBJECT_ID('tempdb.dbo.##Tbl_CombinedInfo', 'U') IS NOT NULL
DROP TABLE dbo.##Tbl_CombinedInfo;
IF OBJECT_ID('tempdb.dbo.##Tbl_DbFileStats', 'U') IS NOT NULL
DROP TABLE dbo.##Tbl_DbFileStats;
IF OBJECT_ID('tempdb.dbo.##Tbl_ValidDbs', 'U') IS NOT NULL
DROP TABLE dbo.##Tbl_ValidDbs;
IF OBJECT_ID('tempdb.dbo.##Tbl_Logs', 'U') IS NOT NULL
DROP TABLE dbo.##Tbl_Logs;
CREATE TABLE dbo.##Tbl_CombinedInfo (
DatabaseName sysname NULL,
[type] VARCHAR(10) NULL,
LogicalName sysname NULL,
T dec(10, 2) NULL,
U dec(10, 2) NULL,
[U(%)] dec(5, 2) NULL,
F dec(10, 2) NULL,
[F(%)] dec(5, 2) NULL,
PhysicalName sysname NULL );
CREATE TABLE dbo.##Tbl_DbFileStats (
Id int identity,
DatabaseName sysname NULL,
FileId int NULL,
FileGroup int NULL,
TotalExtents bigint NULL,
UsedExtents bigint NULL,
Name sysname NULL,
FileName varchar(255) NULL );
CREATE TABLE dbo.##Tbl_ValidDbs (
Id int identity,
Dbname sysname NULL );
CREATE TABLE dbo.##Tbl_Logs (
DatabaseName sysname NULL,
LogSize dec (10, 2) NULL,
LogSpaceUsedPercent dec (5, 2) NULL,
Status int NULL );
DECLARE @Ver varchar(10),
@DatabaseName sysname,
@Ident_last int,
@String varchar(2000),
@BaseString varchar(2000);
SELECT @DatabaseName = '',
@Ident_last = 0,
@String = '',
@Ver = CASE WHEN @@VERSION LIKE '%9.0%' THEN 'SQL 2005'
WHEN @@VERSION LIKE '%8.0%' THEN 'SQL 2000'
WHEN @@VERSION LIKE '%10.0%' THEN 'SQL 2008'
END;
SELECT @BaseString =
' SELECT DB_NAME(), ' +
CASE WHEN @Ver = 'SQL 2000' THEN 'CASE WHEN status & 0x40 = 0x40 THEN ''Log'' ELSE ''Data'' END'
ELSE ' CASE type WHEN 0 THEN ''Data'' WHEN 1 THEN ''Log'' WHEN 4 THEN ''Full-text'' ELSE ''reserved'' END' END +
', name, ' +
CASE WHEN @Ver = 'SQL 2000' THEN 'filename' ELSE 'physical_name' END +
', size*8.0/1024.0 FROM ' +
CASE WHEN @Ver = 'SQL 2000' THEN 'sysfiles' ELSE 'sys.database_files' END +
' WHERE '
+ CASE WHEN @Ver = 'SQL 2000' THEN ' HAS_DBACCESS(DB_NAME()) = 1' ELSE 'state_desc = ''ONLINE''' END + '';
SELECT @String = 'INSERT INTO dbo.##Tbl_ValidDbs SELECT name FROM ' +
CASE WHEN @Ver = 'SQL 2000' THEN 'master.dbo.sysdatabases'
WHEN @Ver IN ('SQL 2005', 'SQL 2008') THEN 'master.sys.databases'
END + ' WHERE HAS_DBACCESS(name) = 1 ORDER BY name ASC';
EXEC (@String);
INSERT INTO dbo.##Tbl_Logs EXEC ('DBCC SQLPERF (LOGSPACE) WITH NO_INFOMSGS');
-- For data part
IF @TargetDatabase IS NOT NULL
BEGIN
SELECT @DatabaseName = @TargetDatabase;
IF @UpdateUsage <> 0 AND DATABASEPROPERTYEX (@DatabaseName,'Status') = 'ONLINE'
AND DATABASEPROPERTYEX (@DatabaseName, 'Updateability') <> 'READ_ONLY'
BEGIN
SELECT @String = 'USE [' + @DatabaseName + '] DBCC UPDATEUSAGE (0)';
PRINT '*** ' + @String + ' *** ';
EXEC (@String);
PRINT '';
END
SELECT @String = 'INSERT INTO dbo.##Tbl_CombinedInfo (DatabaseName, type, LogicalName, PhysicalName, T) ' + @BaseString;
INSERT INTO dbo.##Tbl_DbFileStats (FileId, FileGroup, TotalExtents, UsedExtents, Name, FileName)
EXEC ('USE [' + @DatabaseName + '] DBCC SHOWFILESTATS WITH NO_INFOMSGS');
EXEC ('USE [' + @DatabaseName + '] ' + @String);
UPDATE dbo.##Tbl_DbFileStats SET DatabaseName = @DatabaseName;
END
ELSE
BEGIN
WHILE 1 = 1
BEGIN
SELECT TOP 1 @DatabaseName = Dbname FROM dbo.##Tbl_ValidDbs WHERE Dbname > @DatabaseName ORDER BY Dbname ASC;
IF @@ROWCOUNT = 0
BREAK;
IF @UpdateUsage <> 0 AND DATABASEPROPERTYEX (@DatabaseName, 'Status') = 'ONLINE'
AND DATABASEPROPERTYEX (@DatabaseName, 'Updateability') <> 'READ_ONLY'
BEGIN
SELECT @String = 'DBCC UPDATEUSAGE (''' + @DatabaseName + ''') ';
PRINT '*** ' + @String + '*** ';
EXEC (@String);
PRINT '';
END
SELECT @Ident_last = ISNULL(MAX(Id), 0) FROM dbo.##Tbl_DbFileStats;
SELECT @String = 'INSERT INTO dbo.##Tbl_CombinedInfo (DatabaseName, type, LogicalName, PhysicalName, T) ' + @BaseString;
EXEC ('USE [' + @DatabaseName + '] ' + @String);
INSERT INTO dbo.##Tbl_DbFileStats (FileId, FileGroup, TotalExtents, UsedExtents, Name, FileName)
EXEC ('USE [' + @DatabaseName + '] DBCC SHOWFILESTATS WITH NO_INFOMSGS');
UPDATE dbo.##Tbl_DbFileStats SET DatabaseName = @DatabaseName WHERE Id BETWEEN @Ident_last + 1 AND @@IDENTITY;
END
END
-- set used size for data files, do not change total obtained from sys.database_files as it has for log files
UPDATE dbo.##Tbl_CombinedInfo
SET U = s.UsedExtents*8*8/1024.0
FROM dbo.##Tbl_CombinedInfo t JOIN dbo.##Tbl_DbFileStats s
ON t.LogicalName = s.Name AND s.DatabaseName = t.DatabaseName;
-- set used size and % values for log files:
UPDATE dbo.##Tbl_CombinedInfo
SET [U(%)] = LogSpaceUsedPercent,
U = T * LogSpaceUsedPercent/100.0
FROM dbo.##Tbl_CombinedInfo t JOIN dbo.##Tbl_Logs l
ON l.DatabaseName = t.DatabaseName
WHERE t.type = 'Log';
UPDATE dbo.##Tbl_CombinedInfo SET F = T - U, [U(%)] = U*100.0/T;
UPDATE dbo.##Tbl_CombinedInfo SET [F(%)] = F*100.0/T;
IF UPPER(ISNULL(@Level, 'DATABASE')) = 'FILE'
BEGIN
IF @Unit = 'KB'
UPDATE dbo.##Tbl_CombinedInfo
SET T = T * 1024, U = U * 1024, F = F * 1024;
IF @Unit = 'GB'
UPDATE dbo.##Tbl_CombinedInfo
SET T = T / 1024, U = U / 1024, F = F / 1024;
SELECT DatabaseName AS 'Database',
type AS 'Type',
LogicalName,
T AS 'Total',
U AS 'Used',
[U(%)] AS 'Used (%)',
F AS 'Free',
[F(%)] AS 'Free (%)',
PhysicalName
FROM dbo.##Tbl_CombinedInfo
WHERE DatabaseName LIKE ISNULL(@TargetDatabase, '%')
ORDER BY DatabaseName ASC, type ASC;
SELECT CASE WHEN @Unit = 'GB' THEN 'GB' WHEN @Unit = 'KB' THEN 'KB' ELSE 'MB' END AS 'SUM',
SUM (T) AS 'TOTAL', SUM (U) AS 'USED', SUM (F) AS 'FREE' FROM dbo.##Tbl_CombinedInfo;
END
IF UPPER(ISNULL(@Level, 'DATABASE')) = 'DATABASE'
BEGIN
DECLARE @Tbl_Final TABLE (
DatabaseName sysname NULL,
TOTAL dec (10, 2),
[=] char(1),
used dec (10, 2),
[used (%)] dec (5, 2),
[+] char(1),
free dec (10, 2),
[free (%)] dec (5, 2),
[==] char(2),
Data dec (10, 2),
Data_Used dec (10, 2),
[Data_Used (%)] dec (5, 2),
Data_Free dec (10, 2),
[Data_Free (%)] dec (5, 2),
[++] char(2),
Log dec (10, 2),
Log_Used dec (10, 2),
[Log_Used (%)] dec (5, 2),
Log_Free dec (10, 2),
[Log_Free (%)] dec (5, 2) );
INSERT INTO @Tbl_Final
SELECT x.DatabaseName,
x.Data + y.Log AS 'TOTAL',
'=' AS '=',
x.Data_Used + y.Log_Used AS 'U',
(x.Data_Used + y.Log_Used)*100.0 / (x.Data + y.Log) AS 'U(%)',
'+' AS '+',
x.Data_Free + y.Log_Free AS 'F',
(x.Data_Free + y.Log_Free)*100.0 / (x.Data + y.Log) AS 'F(%)',
'==' AS '==',
x.Data,
x.Data_Used,
x.Data_Used*100/x.Data AS 'D_U(%)',
x.Data_Free,
x.Data_Free*100/x.Data AS 'D_F(%)',
'++' AS '++',
y.Log,
y.Log_Used,
y.Log_Used*100/y.Log AS 'L_U(%)',
y.Log_Free,
y.Log_Free*100/y.Log AS 'L_F(%)'
FROM
( SELECT d.DatabaseName,
SUM(d.T) AS 'Data',
SUM(d.U) AS 'Data_Used',
SUM(d.F) AS 'Data_Free'
FROM dbo.##Tbl_CombinedInfo d WHERE d.type = 'Data' GROUP BY d.DatabaseName ) AS x
JOIN
( SELECT l.DatabaseName,
SUM(l.T) AS 'Log',
SUM(l.U) AS 'Log_Used',
SUM(l.F) AS 'Log_Free'
FROM dbo.##Tbl_CombinedInfo l WHERE l.type = 'Log' GROUP BY l.DatabaseName ) AS y
ON x.DatabaseName = y.DatabaseName;
IF @Unit = 'KB'
UPDATE @Tbl_Final SET TOTAL = TOTAL * 1024,
used = used * 1024,
free = free * 1024,
Data = Data * 1024,
Data_Used = Data_Used * 1024,
Data_Free = Data_Free * 1024,
Log = Log * 1024,
Log_Used = Log_Used * 1024,
Log_Free = Log_Free * 1024;
IF @Unit = 'GB'
UPDATE @Tbl_Final SET TOTAL = TOTAL / 1024,
used = used / 1024,
free = free / 1024,
Data = Data / 1024,
Data_Used = Data_Used / 1024,
Data_Free = Data_Free / 1024,
Log = Log / 1024,
Log_Used = Log_Used / 1024,
Log_Free = Log_Free / 1024;
DECLARE @GrantTotal dec(11, 2);
SELECT @GrantTotal = SUM(TOTAL) FROM @Tbl_Final;
SELECT
CONVERT(dec(10, 2), TOTAL*100.0/@GrantTotal) AS 'WEIGHT (%)',
DatabaseName AS 'DATABASE',
CONVERT(VARCHAR(12), used) + ' (' + CONVERT(VARCHAR(12), [used (%)]) + ' %)' AS 'USED (%)',
[+],
CONVERT(VARCHAR(12), free) + ' (' + CONVERT(VARCHAR(12), [free (%)]) + ' %)' AS 'FREE (%)',
[=],
TOTAL,
[=],
CONVERT(VARCHAR(12), Data) + ' (' + CONVERT(VARCHAR(12), Data_Used) + ', ' +
CONVERT(VARCHAR(12), [Data_Used (%)]) + '%)' AS 'DATA (used, %)',
[+],
CONVERT(VARCHAR(12), Log) + ' (' + CONVERT(VARCHAR(12), Log_Used) + ', ' +
CONVERT(VARCHAR(12), [Log_Used (%)]) + '%)' AS 'LOG (used, %)'
FROM @Tbl_Final
WHERE DatabaseName LIKE ISNULL(@TargetDatabase, '%')
ORDER BY DatabaseName ASC;
IF @TargetDatabase IS NULL
SELECT CASE WHEN @Unit = 'GB' THEN 'GB' WHEN @Unit = 'KB' THEN 'KB' ELSE 'MB' END AS 'SUM',
SUM (used) AS 'USED',
SUM (free) AS 'FREE',
SUM (TOTAL) AS 'TOTAL',
SUM (Data) AS 'DATA',
SUM (Log) AS 'LOG'
FROM @Tbl_Final;
END
RETURN (0)
GO
A: I know this might sound a little arcane, but why not just stat the directory that contains the database files?
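If you'd rather stay inside SQL Server 2000, a rough equivalent is to total the file sizes recorded in master..sysaltfiles (the size column is in 8 KB pages):
SELECT DB_NAME(dbid) AS database_name,
       SUM(size) * 8 / 1024 AS size_mb
FROM master.dbo.sysaltfiles
GROUP BY dbid
ORDER BY size_mb DESC;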
A: if you use SQL Server Management Studio (from SQL 2005 or newer) you can just right-click properties on the database itself, and then click on the storage tab.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/12906",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
} |
Q: If you have a Java application that is consuming CPU when it isn't doing anything, how do you determine what it is doing? I am calling a vendor's Java API, and on some servers it appears that the JVM goes into a low priority polling loop after logging into the API (CPU at 100% usage). The same app on other servers does not exhibit this behavior. This happens on WebSphere and Tomcat. The environment is tricky to set up so it is difficult to try to do something like profiling within Eclipse.
Is there a way to profile (or some other method of inspecting) an existing Java app running in Tomcat to find out what methods are being executed while it's in this spinwait kind of state? The app is only executing one method when it gets in this state (vendor's method). Vendor can't replicate the behavior (of course).
Update:
Using JConsole I was able to determine who was running and what they were doing. It took me a few hours to then figure out why it was doing it. The problem ended up being that the vendor's API jar that was being used did not match exactly the database configuration that it was using. It was defaulting to having tracing and performance monitoring enabled on the servers that had the slight mis-match in configuration. I used a different jar and all is well.
So thanks, Joshua, for your answer. JConsole was extremely easy to setup and use to monitor an existing application.
@Cringe - I did some experimenting with some of the options you suggested. I had some problems with getting JProfiler set up, it looks good (but pricey). Going forward I went ahead and added the Eclipse Profiler plugin and I'll be looking over the different open source profilers to compare functionality.
A: Facing the same problem I used the YourKit profiler. Its loader doesn't activate unless you actually connect to it (though it does open a port to listen for connections). The profiler itself has a nice "get amount of time spent in each method" view while working in its less obtrusive mode.
Another way is to detect CPU load (via JNI, so you'd need an external library for this) in a "watchdog" thread with highest priority and start logging all threads when the CPU is high enough for a long enough time. You might find this article enlightening.
A: If it's for professional purpose and you have some money to spend, try to get your hands on JProfiler. If you just want to get some insights, try out the Eclipse Profiler Plugin. I used it several times, but I don't know the current state.
A new(?) project from the eclipse project itself is available too: http://www.eclipse.org/tptp/ (See this article). Never used it, so I can't tell if it is worth the effort.
There's also a very good list of open source profilers available at http://www.manageability.org/blog/stuff/open-source-profilers-for-java
A: If JConsole can't be used you can
*
*press CTRL+BREAK under Windows
*send kill -3 <process id> under Linux
to get a full Thread Dump. This doesn't affect performance and can always be run in production.
A: JRockit Mission Control Latency Analyzer.
The Latency Analyzer that comes with JRockit shows you what the JVM is "doing" when it's not doing anything. In the latest version you can see latencies for:
*
*Java wait/blocked/sleep/parked.
*File I/O
*Network I/O
*Memory allocation
*GC pauses
*JVM latencies, e.g code generation and class loading
*Thread suspension
The tool will give you the stack trace when the latency occurred. You can view the latency data in many different ways (aggregated traces, as a histogram, in a thread graph etc.). The tool also allows you to see transitions between threads, for instance when one thread notifies another.
(Screenshot: latency graph. http://blogs.oracle.com/hirt/WindowsLiveWriter/The.0LatencyAnalyserMigratedfromtheoldBE_7246/latency_graph_2.png)
The overhead is negligible and unlike many other tools it can be used in a production environment.
This blog post gives you a brief introduction and the program can be downloaded here.
It's free to use for development!
A: If you are using Java 5 or later, you can connect to your application using jconsole to view all running threads. jstack also will do a stack dump. I think this should still work even inside a container like Tomcat.
Both of these tools are included with JDK5 and later (I assume the process needs to be at least Java 5, though I could be wrong)
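For example (assuming the JDK's bin directory is on your PATH):
jps -l # list running JVMs and their pids
jstack <pid> # print a full thread dump for that process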
Update:
It's also worth noting that starting with JDK 1.6 update 7 there is now a bundled profiler called VisualVM which can be launched with 'jvisualvm'. It looks like it is a java.net project, so additional info may be available at that page. I haven't used this yet but it looks useful for more serious analysis.
Hope that helps
A: Use a profiler. Yes they cost money, and using them can occasionally be a bit awkward, but they do provide you with a great deal more real evidence rather than guesswork.
Human beings are universally bad at guessing where performance bottlenecks are. It just seems to be something our brains aren't built to do very well. It may seem obvious, you may have great ideas about what the problem is, but the real world often turns out to be doing something different. And optimising the wrong part of code means, at best, lots of work for minimal benefit. More often it makes things slower, and sometimes it breaks things entirely. So before you make any changes for the sake of optimisation, you should always have real evidence from a profiler or other accurate tool.
As mentioned, both JProfiler and YourKit are both fairly good and not prohibitively expensive. Last time I looked, they both had free demos too.
A: For completeness sake: even though my company more or less standardizes on Eclipse we use Netbeans (6 and up) with its included, free profiler on a daily basis. It works better than the Eclipse TPTP plugin (last checked 3 months ago) and for us it removes any need for a commercial profiler such as JProfiler, which is excellent, but fast becoming unnecessary.
A: VisualVM is essentially the profiler from NetBeans as a standalone tool. I tried TPTP for Eclipse, but VisualVM seems a much nicer option!
| {
"language": "en",
"url": "https://stackoverflow.com/questions/12927",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "16"
} |
Q: Is a Homogeneous development platform good for the industry? Is it in best interests of the software development industry for one framework, browser or language to win the war and become the de facto standard? On one side it takes away the challenges of cross platform, but it opens it up for a single point of failure. Would it also result in a stagnation of innovation, or would it allow the industry to focus on more important things (whatever those might be).
A: I believe whenever there is only 1 option, it will definitely stagnate innovation. If all we had was 1 language, then we wouldn't be able to solve anything but what that language was designed to solve.
Imperative languages like Java and C# solve a certain set of problems pretty well, but it also helps to think in a functional manner sometimes, such as with Haskell and Lisp.
Furthermore, cross platform issues are not an issue if you are talking about a web application, because you control the hardware and software (note, I am talking of the server side code of course, the browser cross platform issue is separate).
Paul Graham wrote a great essay on how the Web lets you as a developer use the tool you think will solve the problem best.
A: Defacto standards are bad because they are usually controlled by a single party. What is best for the industry is for there to be a foundation of open standards on top of which everyone can compete.
The web is a perfect example. When IE won the browser war, it stagnated for years, and is only just now starting to improve because it's hemorrhaging marketshare. The Netscape years prior to that weren't much better. The CSS 2.1 standard was released ten years ago and still isn't supported well. As a consequence, web development is a Black Art of hacks and work-arounds to get websites to render consistently.
My job would be a hundred times easier if I could build a website according to web standards and be confident it would display correctly. Just think of all the cool things we could have been working on instead of fixing IE's rendering errors.
A: No. Competition is good. It may make a web developers job easier, but I think it's bad for the industry. I personally prefer having choices.
I believe Joel Spolsky's technique of creating his own language (Wasabi) to insulate his company from being platform specific is a good one. I also believe it is a good idea to use products that accomplish similar things that are more targeted at specific problems like JQuery.
A: I'm gonna have to agree with Mike on this one and say that without competition there is very little incentive to innovate.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/12929",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: Is using PHP accelerators such as MMCache or Zend Accelerator making PHP faster? Does anybody have experience working with PHP accelerators such as MMCache or Zend Accelerator? I'd like to know if using either of these makes PHP comparable to faster web-technologies. Also, are there trade offs for using these?
A: MMCache has been deprecated. I recommend either http://pecl.php.net/package/APC or http://xcache.lighttpd.net/, both of which also give you variable storage (like Memcache).
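Beyond opcode caching, the variable store looks roughly like this (load_user() is a hypothetical expensive lookup):
$user = apc_fetch('user_' . $id, $hit); // $hit is set to true on a cache hit
if (!$hit) {
    $user = load_user($id); // hypothetical: slow DB work
    apc_store('user_' . $id, $user, 300); // cache for 5 minutes
}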
A: Both are interesting and will provide speed boost since they compile source code into binary representation which is then executed by the PHP engine.
Any huge web site running with PHP (Facebook for example) is running some sort of opcode cache system like MMCache.
The problem is that they are not very easy to set up depending on your system.
A: Depending on how much of your PHP code is actually executed and how long that execution takes they can be a really big win. It certainly isn't going to hurt, but the gain you see will very much depend on where your time is currently spent.
BTW, MMCache has been rolled into a different project now; I forget the name, but Google will tell you.
A: I use APC on my production servers and it works pretty well out of the box. Compile it and add it to PHP and there isn't much tweaking left to do for it. I check it every once in a while just to review stats but since I use MVC a lot all of the main files (routers, controllers, etc) rarely change on a day-to-day basis so that code stays compiled and runs pretty efficiently.
A: Note that Zend Optimizer and MMCache (or similar applications) are totally different things. While Zend Optimizer tries to optimize the program opcode MMCache will cache the scripts in memory and reuse the precompiled code.
I did some benchmarks some time ago and you can find the results in my blog (in German though). The basic results:
Zend Optimizer alone didn't help at all. Actually my scripts were slower than without optimizer.
When it comes to caches:
* fastest: eAccelerator
* XCache
* APC
And: You DO want to install a opcode cache!
For example:
(Benchmark chart: http://blogs.interdose.com/dominik/wp-content/uploads/2008/04/opcode_wordpress.png)
This is the duration it took to call the WordPress homepage 10,000 times.
Edit: BTW, eAccelerator contains an optimizer itself.
A: Currently we use APC; it's free and was a simple plug-and-play install on our live servers. It provided a huge performance increase for our site, especially as the project size increased. I also have apc.stat disabled so it doesn't check whether the code has been updated, so whenever we need to update the code on the live site we restart Apache.
A: I use APC, and can attest that it can dramatically reduce the CPU and I/O load on an app server if you maintain a high cache-hit rate. It not only saves you from having to compile, it can save you from having to read the php files from disk at all. (i.e. the bytecodes are served directly from main memory, so it's super fast) It lowers the speed to render a single page, and increases the requests per second your server can handle.
If you use RedHat or CentOS, installing APC is super simple:
yum install php-devel httpd-devel php-pear
pecl install apc
echo "extension=apc.so" > /etc/php.d/apc.ini
# if you're using SELinux:
chcon "system_u:object_r:textrel_shlib_t" /usr/lib/php/modules/apc.so
/etc/init.d/httpd restart
You asked about downsides. The only downside is that it requires some memory. The default on APC is 30MB, but it can be adjusted, and the cost of a little bit of memory more than pays for itself with the increased speed and response rate.
A: BlaM's testing included all the DB calls made by WordPress. When you're making fewer DB calls, you'll see the performance gain of opcode caches be even more dramatic.
A: Have you checked out Phalanger? It compiles PHP to .NET code. Here are some benchmarks which show that it can dramatically improve performance.
A: I used Zend Accelerator a little back in the day (2004-ish). It certainly gave some significant performance wins on code it could work with, but unfortunately the system I was using was designed to quite often dynamically load code and then eval it, which Zend Accelerator couldn't do much with at the time (and I'd guess still can't).
On the down side, we certainly saw some caching issues (where the code would change, but the compiled version wouldn't sync with the change for one reason or another). I imagine those problems have likely been ironed out by now.
Anyway, I don't have any hard comparison numbers, and certainly didn't write the same system in different environments for comparison, but for the vast majority of systems, PHP isn't going to kill you performance wise.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/12936",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "17"
} |
Q: Database replication. 2 servers, Master database and the 2nd is read-only Say you have 2 database servers, one database is the 'master' database where all write operations are performed, it is treated as the 'real/original' database. The other server's database is to be a mirror copy of the master database (slave?), which will be used for read only operations for a certain part of the application.
How do you go about setting up a slave database that mirrors the data on the master database? From what I understand, the slave/readonly database is to use the master db's transaction log file to mirror the data correct?
What options do I have in terms of how often the slave db mirrors the data? (real time/every x minutes?).
A: What you want is called Transactional Replication in SQL Server 2005. It will replicate changes in near real time as the publisher (i.e. "master") database is updated.
Here is a pretty good walk through of how to set it up.
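The T-SQL side boils down to a handful of system procedures. A rough, incomplete sketch (names are hypothetical, and distributor/snapshot-agent setup is omitted; see the walkthrough above for the full steps):
-- On the publisher ("master") database:
EXEC sp_replicationdboption @dbname = 'SalesDb', @optname = 'publish', @value = 'true';
EXEC sp_addpublication @publication = 'SalesPub', @repl_freq = 'continuous';
EXEC sp_addarticle @publication = 'SalesPub', @article = 'Orders', @source_object = 'Orders';
-- Then point the read-only server at it:
EXEC sp_addsubscription @publication = 'SalesPub', @subscriber = 'ReadOnlyServer', @destination_db = 'SalesDb';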
A: SQL Server 2008 has three different modes of replication.
*
*Transactional for one way read only replication
*Merge for two way replication
*Snapshot
A:
From what I understand, the slave/readonly database is to use the master db's transaction log file to mirror the data correct?
What options do I have in terms of how often the slave db mirrors the data? (real time/every x minutes?).
This sounds like you're talking about log shipping instead of replication. For what you're planning on doing, though, I'd agree with Jeremy McCollum and say do transactional replication. If you're going to do log shipping, the database won't be available while it is being restored every x minutes.
Here's a good walkthrough of the difference between the two. Sad to say you have to sign up for an account to read it though. =/ http://www.sqlservercentral.com/articles/Replication/logshippingvsreplication/1399/
A: The answer to this will vary depending on the database server you are using to do this.
Edit: Sorry, maybe I need to learn to look at the tags and not just the question - I can see you tagged this as sqlserver.
A: Transactional replication is real time.
If you do not have any updates to be made to your database and just need to retrieve data, say, once a day, then use snapshot replication instead of transactional replication. In snapshot replication, changes replicate when and as defined by the user, say once every 24 hours.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/12946",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
} |
Q: What is your preferred method of sending complex data over a web service? It's 2008, and I'm still torn on this one. So I'm developing a web method that needs a complex type passed into it and returned from it. The two options I'm toying with are:
*
*Pass and return actual business objects with both data and behavior. When wsdl.exe is run, it will automatically create proxy classes that contain just the data portion, and these will be automatically converted to and from my real business objects on the server side. On the client side, they will only get to use the dumb proxy type, and they will have to map them into some real business objects as they see fit. A big drawback here is that if I "own" both the server and client side, and I want to use the same set of real business objects, I can run into certain headaches with name conflicts, etc. (Since the real objects and the proxies are named the same.)
*Forget trying to pass "real" business objects. Instead, just create simple DataTransfer objects which I will map back and forth to my real business objects manually. They still get copied to new proxy objects by wsdl.exe anyway, but at least I'm not tricking myself into thinking that web services can natively handle objects with business logic in them.
By the way - Does anyone know how to tell wsdl.exe to not make a copy of the object? Shouldn't we be able to just tell it, "Hey, use this existing type right over here. Don't copy it!"
Anyway, I've kinda settled on #2 for now, but I'm curious what you all think. I have a feeling there are way better ways to do this in general, and I may not even be totally accurate on all my points, so please let me know what your experiences have been.
Update: I just found out that VS 2008 has an option to reuse existing types when adding a "Service Reference", rather than creating brand new identical type in the proxy file. Sweet.
A: I'd do a hybrid. I would use an object like this
public class TransferObject
{
public string Type { get; set; }
public byte[] Data { get; set; }
}
Then I have a nice little utility that serializes an object and then compresses it.
public static class CompressedSerializer
{
/// <summary>
/// Decompresses the specified compressed data.
/// </summary>
/// <typeparam name="T"></typeparam>
/// <param name="compressedData">The compressed data.</param>
/// <returns></returns>
public static T Decompress<T>(byte[] compressedData) where T : class
{
T result = null;
using (MemoryStream memory = new MemoryStream())
{
memory.Write(compressedData, 0, compressedData.Length);
memory.Position = 0L;
using (GZipStream zip= new GZipStream(memory, CompressionMode.Decompress, true))
{
zip.Flush();
var formatter = new System.Runtime.Serialization.Formatters.Binary.BinaryFormatter();
result = formatter.Deserialize(zip) as T;
}
}
return result;
}
/// <summary>
/// Compresses the specified data.
/// </summary>
/// <typeparam name="T"></typeparam>
/// <param name="data">The data.</param>
/// <returns></returns>
public static byte[] Compress<T>(T data)
{
byte[] result = null;
using (MemoryStream memory = new MemoryStream())
{
using (GZipStream zip= new GZipStream(memory, CompressionMode.Compress, true))
{
var formatter = new System.Runtime.Serialization.Formatters.Binary.BinaryFormatter();
formatter.Serialize(zip, data);
}
result = memory.ToArray();
}
return result;
}
}
Then you'd just pass the transfer object that would have the type name. So you could do something like this
[WebMethod]
public void ReceiveData(TransferObject data)
{
Type originType = Type.GetType(data.Type);
object item = CompressedSerializer.Decompress<object>(data.Data);
}
Right now the compressed serializer uses generics to make it strongly typed, but you could easily add a method that takes in a Type object and deserializes using originType above - a sketch follows; it all depends on your implementation.
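For instance, a non-generic overload might look like this - my own sketch, not part of the original utility; it just reuses the BinaryFormatter approach above:
public static object Decompress(Type type, byte[] compressedData)
{
    using (MemoryStream memory = new MemoryStream(compressedData))
    using (GZipStream zip = new GZipStream(memory, CompressionMode.Decompress, true))
    {
        var formatter = new System.Runtime.Serialization.Formatters.Binary.BinaryFormatter();
        object item = formatter.Deserialize(zip);
        // Guard against a payload that doesn't match the advertised type.
        return type.IsInstanceOfType(item) ? item : null;
    }
}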
hope this gives you some ideas. Oh, and to answer your other question, wsdl.exe doesn't support reusing types, WCF does though.
A:
Darren wrote: I'd do a hybrid. I would use an object like this...
Interesting idea... passing a serialized version of the object instead of the (wsdl-ed) object itself. In a way, I like its elegance, but in another way, it seems to defeat the purpose of exposing your web service to potential third parties or partners or whatever. How would they know what to pass? Would they have to rely purely on documentation? It also loses some of the "heterogeneous client" aspect, since the serialization is very .Net specific. I don't mean to be critical, I'm just wondering if what you're proposing is also meant for these types of use cases. I don't see anything wrong with using it in a closed environment though.
I should look into WCF... I've been avoiding it, but maybe it's time.
A: Oh, for sure, I only do this when I'm the consumer of the web service, or if you have some sort of controller that clients request an object from, where you handle the serialization and sending rather than having them directly consume the web service. But really, if they are directly consuming the web service, then they wouldn't need or necessarily have the assembly that contains the type in the first place, and should be using the objects that wsdl generates.
And yes, what I put forth is very .NET specific because I don't like to use anything else. The only other time I consumed web services outside of .NET was in JavaScript, but now I only use JSON responses instead of XML web service responses :)
A: There is also an argument for separating the tiers - have a set of serializable objects that get passed to and from the web service and a translator to map and convert between that set and the business objects (which might have properties not suitable for passing over the wire).
It's the approach favoured by the Web Service Software Factory, and it means that you can change your business objects without breaking the web service interface/contract.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/12982",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9"
} |
Q: How to setup site-wide variables in php? I want to define something like this in php:
$EL = "\n<br />\n";
and then use that variable as an "endline" marker all over my site, like this:
echo "Blah blah blah{$EL}";
How do I define $EL once (in only 1 file), include it on every page on my site, and not have to reference it using the (strangely backwards) global $EL; statement in every page function?
A: Most PHP sites should have a file (I call it a header) that you include on every single page of the site. If you put that first line of code in the header file, then include it like this on every page:
include 'header.php';
you won't have to use the global keyword or anything, the second line of code you wrote should work.
Edit: Oh sorry, that won't work inside functions... now I see your problem.
Edit #2: Ok, take my original advice with the header, but use a define() rather than a variable. Those work inside functions after being included.
A: Sounds like the job of a constant. See the function define().
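A minimal sketch of that approach (the constant name mirrors the question's $EL):
define('EL', "\n<br />\n");   // in a file included once by every page

function show() {
    echo 'Blah blah blah' . EL;   // constants are visible inside functions without 'global'
}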
A: Do this:
define('EL', "\n<br />\n");
Save it as el.php, then include that file in any page where you want to use it, i.e.
echo 'something' . EL; // note I just append EL at the end of the line (or in front)
Hope this helps.
A: Are you using PHP5? If you define the __autoload() function and use a class with some constants, you can call them where you need them. The only aggravating thing about this is that you have to type something a little longer, like
MyClass::MY_CONST
The benefit is that if you ever decide to change the way that you handle new lines, you only have to change it in one place.
Of course, a possible negative is that you're calling including an extra function (__autoload()), running that function (when you reference the class), which then loads another file (your class file). That might be more overhead than it's worth.
If I may offer a suggestion, it would be avoiding this sort of echoing that requires echoing tags (like <br />). If you could set up something a little more template-esque, you could handle the nl's without having to explicitly type them. So instead of
echo "Blah Blah Blah\n<br />\n";
try:
<?php
if($condition) {
?>
<p>Blah blah blah
<br />
</p>
<?php
}
?>
It just seems to me like calling up classes or including variables within functions as well as out is a lot of work that doesn't need to be done, and, if at all possible, those sorts of situations are best avoided.
A: @svec Yes this will work, you just have to include the file inside the function also. This is how most of my software works.
function myFunc()
{
require 'config.php';
//Variables from config are available now.
}
A: Another option is to use an object with public static properties. I used to use $GLOBALS but most editors don't auto complete $GLOBALS. Also, un-instantiated classes are available everywhere (because you can instantiate everywhere without telling PHP you are going to use the class). Example:
<?php
class SITE {
public static $el;
}
SITE::$el = "\n<br />\n";
function Test() {
echo SITE::$el;
}
Test();
?>
This will output <br />
This is also easier to deal with than constants, as you can put any type of value within the property (array, string, int, etc.) whereas constants cannot contain arrays.
This was suggested to me by a user on the PhpEd forums.
A: svec, use a PHP framework. Just any - there's plenty of them out there.
This is the right way to do it. With a framework you have a single entry point for your application, so defining site-wide variables is easy and natural. Also, you don't need to care about including header files or checking if the user is logged in on every page - a decent framework will do it for you.
See:
*
*Zend framework
*CakePHP
*Symfony
*Kohana
Invest some time in learning one of them and it will pay back very soon.
A: You can use the auto_prepend_file directive to pre parse a file. Add the directive to your configuration, and point it to a file in your include path. In that file add your constants, global variables, functions or whatever you like.
So if your prepend file contains:
<?php
define('FOO', 'badger');
In another PHP file you could access the constant:
echo 'this is my '. FOO;
A: You might consider using a framework to achieve this. Better still, you can use
include 'functions.php';
or
require 'functions.php';
Doing OOP is another alternative
A: IIRC a common solution is a plain file that contains your declarations, that you include in every source file, something like 'constants.inc.php'. There you can define a bunch of application-wide variables that are then imported in every file.
Still, you have to provide the include directive in every single source file you use. I even saw some projects using this technique to provide localizations for several languages. I'd prefer the gettext way, but maybe this variant is easier to work with for the average user.
edit For your problem I recommend the use of $GLOBALS[], see Example #2 for details.
If that's still not applicable, I'd try to digg down PHP5 objects and create a static Singleton that provides needed static constants (http://www.developer.com/lang/php/article.php/3345121)
A: Sessions are going to be your best bet if the data is user specific; otherwise just use a config file.
config.php:
<?php
$EL = "\n<br />\n";
?>
Then on each page add
require 'config.php';
then you will be able to access $EL on that page.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/13000",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
} |
Q: Is there a "concise" way to do namespacing in JavaScript? I've frequently encountered sites that put all of their JavaScript inside a namespace structure along the lines of:
namespaces = { com : { example: { example.com's data} }
However, setting this up safely with respect to other namespaced frameworks seems to require a relatively hefty amount of code (defined as > 2 lines). I was wondering whether anyone knows of a concise way to do this? Furthermore, whether there's a relatively standard/consistent way to structure it? For example, is the com namespace directly attached to the global object, or is it attached through a namespace object?
[Edit: whoops, obviously {com = { ... } } wouldn't accomplish anything close to what I intended, thanks to Shog9 for pointing that out.]
A: I try to follow the Yahoo convention of making a single parent object out in the global scope to contain everything;
var FP = {};
FP.module = {};
FP.module.property = 'foo';
A: To make sure you don't overwrite an existing object, you should do something like:
if(!window.NameSpace) {
NameSpace = {};
}
or
var NameSpace = window.NameSpace || {};
This way you can put this at the top of every file in your application/website without worrying about the overwriting the namespace object. Also, this would enable you to write unit tests for each file individually.
A: The YUI library has code which handles namespacing using a function which you may find preferable. Other libraries may do this also.
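The general shape of such a helper is short. A sketch of the common pattern (not YUI's exact API):
function namespace(path) {
    var parts = path.split('.');
    var current = window;
    for (var i = 0; i < parts.length; i++) {
        // Reuse an existing object so other frameworks' namespaces survive.
        current[parts[i]] = current[parts[i]] || {};
        current = current[parts[i]];
    }
    return current;
}
namespace('com.example').data = 42;   // safely creates window.com.example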
A: Javascript doesn't have stand-alone namespaces. It has functions, which can provide scope for resolving names, and objects, which can contribute to the named data accessible in a given scope.
Here's your example, corrected:
var namespaces = { com: { example: { /* example.com's data */ } } }
This is a variable namespaces being assigned an object literal. The object contains one property: com, an object with one property: example, an object which presumably would contain something interesting.
So, you can type something like namespaces.com.example.somePropertyOrFunctionOnExample and it'll all work. Of course, it's also ridiculous. You don't have a hierarchical namespace, you have an object containing an object containing an object with the stuff you actually care about.
var com_example_data = { /* example.com's data */ };
That works just as well, without the pointless hierarchy.
Now, if you actually want to build a hierarchy, you can try something like this:
com_example = com_example || {};
com_example.flags = com_example.flags || { active: false, restricted: true};
com_example.ops = com_example.ops || (function()
{
var launchCodes = "38925491753824"; // hidden / private
return {
activate: function() { /* ... */ },
destroyTheWorld: function() { /* ... */ }
};
})();
...which is, IMHO, reasonably concise.
A: Here was an interesting article by Peter Michaux on Javascript Namespacing. He discusses 3 different types of Javascript namespacing:
*
*Prefix Namespacing
*Single Object Namespacing
*Nested Object Namespacing
I won't plagiarize what he said here but I think his article is very informative.
Peter even went so far as to point out that there are performance considerations with some of them. I think this topic would be interesting to talk about considering that the new ECMAScript Harmony plans have dropped the 4.0 plans for namespacing and packaging.
A: As an alternative to a dot or an underscore, you could use the dollar sign character:
var namespaces$com$example = "data";
A: I also like this (source):
(function() {
var a = 'Invisible outside of anonymous function';
function invisibleOutside() {
}
function visibleOutside() {
}
window.visibleOutside = visibleOutside;
var html = '--INSIDE Anonymous--';
html += '<br/> typeof invisibleOutside: ' + typeof invisibleOutside;
html += '<br/> typeof visibleOutside: ' + typeof visibleOutside;
contentDiv.innerHTML = html + '<br/><br/>';
})();
var html = '--OUTSIDE Anonymous--';
html += '<br/> typeof invisibleOutside: ' + typeof invisibleOutside;
html += '<br/> typeof visibleOutside: ' + typeof visibleOutside;
contentDiv.innerHTML += html + '<br/>';
A: Use an object literal and either the this object or the explicit name to do namespacing based on the sibling properties of the local variable which contains the function. For example:
var foo = { bar: function(){return this.name; }, name: "rodimus" }
var baz = { bar: function(){return this.name; }, name: "optimus" }
console.log(foo.bar());
console.log(baz.bar());
Or without the explicit name property:
var foo = { bar: function rodimus(){return this; } }
var baz = { bar: function optimus(){return this; } }
console.log(foo.bar.name);
console.log(baz.bar.name);
Or without using this:
var foo = { bar: function rodimus(){return rodimus; } }
var baz = { bar: function optimus(){return optimus; } }
console.log(foo.bar.name);
console.log(baz.bar.name);
Use the RegExp or Object constructor functions to add name properties to counter variables and other common names, then use a hasOwnProperty test to do checking:
var foo = RegExp(/bar/);
/* Add property */
foo.name = "alpha";
document.body.innerHTML = String("<pre>" + ["name", "value", "namespace"] + "</pre>").replace(/,/g, "\t");
/* Check type */
if (foo.hasOwnProperty("name"))
{
document.body.innerHTML += String("<pre>" + ["foo", String(foo.exec(foo)), foo.name] + "</pre>").replace(/,/g, "\t");
}
/* Fallback to atomic value */
else
{
foo = "baz";
}
var counter = Object(1);
/* Add property */
counter.name = "beta";
if (counter.hasOwnProperty("name"))
{
document.body.innerHTML += String("<pre>" + ["counter", Number(counter), counter.name] + "</pre>").replace(/,/g, "\t");
}
else
{
/* Fallback to atomic value */
counter = 0;
}
The DOM uses the following convention to namespace HTML and SVG Element interface definitions:
*
*HTMLTitleElement
*SVGTitleElement
*SVGScriptElement
*HTMLScriptElement
JavaScript core uses prototypes to namespace the toString method as a simple form of polymorphism.
References
*
*Does javascript autobox?
*Getting sibling value of key in a JavaScript object literal
*MDN: JavaScript Operators - this as an object method
*Lua: technical note 7 - Modules and Packages
*HTMLTitleElement.webidl
*SVGTitleElement.webidl
*SVGScriptElement.webidl
*HTMLScriptElement.webidl
*The names of functions in ES6
*ECMAScript 2015 Language Specification - 19.2.4.2 Function Instances: name | ECMA-262 6th Edition
| {
"language": "en",
"url": "https://stackoverflow.com/questions/13021",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "15"
} |
Q: Is there a tool that can display a SVN repository visually ( i.e. pretty charts )? Real strange.
I cannot find a tool that enables one to display a SVN repository in graphical form.
I would like the ability to see changes in revision / time , branch / time graphs.
Does anyone know of one. Ideally it would be platform neutral or even better web based.
Solutions offered so far in brief:
*
*svn-graph
*Fisheye ( you want how
much !£?* )
A: You might also give StatSVN a try.
It's written in Java (meets your platform-neutral requirement) and generates a static html tree with your revision history and commit graphs. You can use Ant or a batch file to automate the process of calling it.
I've also heard good things about Trac.
A: For simplicity, TortoiseSVN gives a basic revision graph.
A: I am writing a subversion statistics graph generation utility named SVNPlot. It is inspired by the graphs generated by StatSVN. However, SVNPlot generates graphs in two steps: (a) first it creates a sqlite3 database from the subversion log information, then (b) the actual graphs are generated by querying that sqlite database (using simple SQL queries).
I think using SQL to extract the graph data from the log information results in greater flexibility and good performance. Right now SVNPlot only generates graphs, but it is very easy to extract other stats from the generated sqlite database.
SVNPlot is written in Python and uses the excellent Matplotlib package to generate the graphs. The code is available on the SVNPlot page on Google Code (the license is the New BSD license). Sample graphs generated for the Rietveld repository are available at http://thinkingcraftsman.in/projects/svnplot/index.htm
A: The only tool that I've ever encountered is the svn-graph.pl perl script from the svn tools. It spits out a graphviz dot file which can be rendered in a variety of image formats. This could be wrapped up in a cgi script to form a basic web graph tool.
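The round trip is roughly this (a sketch - I'm assuming svn-graph.pl takes the repository as its argument, so check its usage notes; only the Graphviz step is certain):
svn-graph.pl file:///path/to/repo > repo.dot   # exact invocation may differ
dot -Tpng repo.dot -o repo.png                 # render the dot file with Graphviz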
A: Trac is a wiki and issue tracking tool, which happens to include an SVN browser.
The RevtreePlugin, for Trac will allow you to display your repo in a graphical form.
Trac is still a very young application (the latest version is 0.11.1), but we use it at work for our software development and it's proved very useful so far.
A: Maybe you could elaborate a little on what "visual display" and "pretty charts" you are after?
A roundabout way would be to clone the svn repository with git-svn, then you can use the graphical gitk or giggle tools on it to visualize branches and merging as well as browsing the specifics.
(You would then get the distributed thing, that git does so well, as a nice side effect.)
A: Fisheye, from Atlassian, looks at an SVN repository and can show you a few graphs. Also provides a handy web interface for blame, diff, etc.
For example, some sample images are available at one of the demo servers.
And if you like some pretty code metrics, here are some samples.
A: Trac includes a source code browser and limited statistics analysis. It's web-based, of course.
A: There is also a nice application, SmartSVN, which includes a revision graph.
However, the version with the graph feature is not free.
A: There is also https://github.com/justinmassiot/svn-graph-branches. Although no activity since 2010 and it didn't work for me (not compatible with my dot version).
A: You could also try MPY SVN STATS. Here is an example graph for Zope.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/13039",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "33"
} |
Q: What's the difference between struct and class in .NET? What's the difference between struct and class in .NET?
A: Just to make it complete, there is another difference when using the Equals method, which is inherited by all classes and structures.
Lets's say we have a class and a structure:
class A{
public int a, b;
}
struct B{
public int a, b;
}
and in the Main method, we have 4 objects.
static void Main(){
A c1 = new A(), c2 = new A();
c1.a = c1.b = c2.a = c2.b = 1;
B s1 = new B(), s2 = new B();
s1.a = s1.b = s2.a = s2.b = 1;
}
Then:
s1.Equals(s2) // true
s1.Equals(c1) // false
c1.Equals(c2) // false
c1 == c2 // false
So, structures are suited for numeric-like objects, like points (storing x and y coordinates). And classes are suited for others. Even if 2 people have the same name, height, weight..., they are still 2 people.
A: Difference between Structs and Classes:
*
*Structs are value types whereas Classes are reference types.
*Structs are stored on the stack whereas Classes are stored on the
heap.
*Value types hold their value in memory where they are declared, but a
reference type holds a reference to an object in memory.
*Value types are destroyed immediately after the scope is lost whereas
reference type only destroys the variable after the scope is lost. The object is later destroyed by the garbage collector.
*When you copy a struct into another struct, a new copy of that struct
gets created. The modified struct won't affect the value of the
other struct.
*When you copy a class into another class, it only copies the
reference variable.
*Both reference variables point to the same object on the heap.
Changes done to one variable will affect the other reference variable.
*Structs can not have destructors, but classes can have destructors.
*Structs can not have explicit parameter-less constructors whereas classes can. Structs don't support inheritance, but classes do. Both
support inheritance from an interface.
*Structs are sealed types.
A: *
*Events declared in a class have their += and -= access automatically locked via a lock(this) to make them thread safe (static events are locked on the typeof the class). Events declared in a struct do not have their += and -= access automatically locked. A lock(this) for a struct would not work since you can only lock on a reference type expression.
*Creating a struct instance cannot cause a garbage collection (unless the constructor directly or indirectly creates a reference type instance) whereas creating a reference type instance can cause garbage collection.
*A struct always has a built-in public default constructor.
class DefaultConstructor
{
static void Eg()
{
Direct yes = new Direct(); // Always compiles OK
InDirect maybe = new InDirect(); // Compiles if constructor exists and is accessible
//...
}
}
This means that a struct is always instantiable whereas a class might not be since all its constructors could be private.
class NonInstantiable
{
private NonInstantiable() // OK
{
}
}
struct Direct
{
private Direct() // Compile-time error
{
}
}
*A struct cannot have a destructor. A destructor is just an override of object.Finalize in disguise, and structs, being value types, are not subject to garbage collection.
struct Direct
{
~Direct() {} // Compile-time error
}
class InDirect
{
~InDirect() {} // Compiles OK
}
And the CIL for ~InDirect() looks like this:
.method family hidebysig virtual instance void
Finalize() cil managed
{
// ...
} // end of method InDirect::Finalize
*A struct is implicitly sealed, a class isn't.
A struct can't be abstract, a class can.
A struct can't call : base() in its constructor whereas a class with no explicit base class can.
A struct can't extend another class, a class can.
A struct can't declare protected members (for example, fields, nested types) a class can.
A struct can't declare abstract function members, an abstract class can.
A struct can't declare virtual function members, a class can.
A struct can't declare sealed function members, a class can.
A struct can't declare override function members, a class can.
The one exception to this rule is that a struct can override the virtual methods of System.Object, viz, Equals(), and GetHashCode(), and ToString().
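To illustrate that exception, a small sketch (the type and members are my own invention; the is-pattern needs C# 7+):
struct Meters
{
    public double Value;

    // The only virtual members a struct may override come from System.Object.
    public override string ToString() => Value + " m";
    public override bool Equals(object obj) => obj is Meters m && m.Value == Value;
    public override int GetHashCode() => Value.GetHashCode();
}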
A: In .NET the struct and class declarations differentiate between reference types and value types.
When you pass round a reference type there is only one actually stored. All the code that accesses the instance is accessing the same one.
When you pass round a value type each one is a copy. All the code is working on its own copy.
This can be shown with an example:
struct MyStruct
{
public string MyProperty { get; set; } // must be public for the object initializer below
}
void ChangeMyStruct(MyStruct input)
{
input.MyProperty = "new value";
}
...
// Create value type
MyStruct testStruct = new MyStruct { MyProperty = "initial value" };
ChangeMyStruct(testStruct);
// Value of testStruct.MyProperty is still "initial value"
// - the method changed a new copy of the structure.
For a class this would be different
class MyClass
{
public string MyProperty { get; set; } // must be public for the object initializer below
}
void ChangeMyClass(MyClass input)
{
input.MyProperty = "new value";
}
...
// Create reference type
MyClass testClass = new MyClass { MyProperty = "initial value" };
ChangeMyClass(testClass);
// Value of testClass.MyProperty is now "new value"
// - the method changed the instance passed.
Classes can be nothing - the reference can be null.
Structs are the actual value - they can be empty but never null. For this reason structs always have a default constructor with no parameters - they need a 'starting value'.
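Reusing the types above, a quick sketch of that difference:
MyStruct s = default(MyStruct); // fine: every field gets its zero value
MyClass c = null;               // fine: a reference may be null
// s.MyProperty is null here only because string is itself a reference type;
// the struct instance itself always exists.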
A: As previously mentioned: classes are reference types while structs are value types, with all the consequences that follow.
As a rule of thumb, the Framework Design Guidelines recommend using structs instead of classes if all of the following hold (a small example follows the list):
*
*It has an instance size under 16 bytes
*It logically represents a single value, similar to primitive types (int, double, etc.)
*It is immutable
*It will not have to be boxed frequently
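A sketch of a type that satisfies all four points (the readonly struct modifier needs C# 7.2+; the type itself is my own example):
public readonly struct Point2D   // 8 bytes, a single logical value, immutable
{
    public int X { get; }
    public int Y { get; }
    public Point2D(int x, int y) { X = x; Y = y; }
}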
A: Besides the basic difference of access specifiers, and the few mentioned above, I would like to add some of the major differences, with a code sample and its output, which will give a clearer idea of reference vs. value behavior.
Structs:
*
*Are value types and do not require heap allocation.
*Memory allocation is different and is stored in stack
*Useful for small data structures
*Affect performance: when we pass a struct to a method, the entire data structure is copied and passed on the stack.
*Constructor simply returns the struct value itself (typically in a temporary location on the stack), and this value is then copied as necessary
*The variables each have their own copy of the data, and it is not possible for operations on one to affect the other.
*Do not support user-specified inheritance, and they implicitly inherit from type object
Class:
*
*Reference Type value
*Stored in Heap
*Store a reference to a dynamically allocated object
*Constructors are invoked with the new operator, which allocates the object's memory on the heap
*Multiple variables may have a reference to the same object
*It is possible for operations on one variable to affect the object referenced by the other variable
Code Sample
static void Main(string[] args)
{
//Struct
myStruct objStruct = new myStruct();
objStruct.x = 10;
Console.WriteLine("Initial value of Struct Object is: " + objStruct.x);
Console.WriteLine();
methodStruct(objStruct);
Console.WriteLine();
Console.WriteLine("After Method call value of Struct Object is: " + objStruct.x);
Console.WriteLine();
//Class
myClass objClass = new myClass(10);
Console.WriteLine("Initial value of Class Object is: " + objClass.x);
Console.WriteLine();
methodClass(objClass);
Console.WriteLine();
Console.WriteLine("After Method call value of Class Object is: " + objClass.x);
Console.Read();
}
static void methodStruct(myStruct newStruct)
{
newStruct.x = 20;
Console.WriteLine("Inside Struct Method");
Console.WriteLine("Inside Method value of Struct Object is: " + newStruct.x);
}
static void methodClass(myClass newClass)
{
newClass.x = 20;
Console.WriteLine("Inside Class Method");
Console.WriteLine("Inside Method value of Class Object is: " + newClass.x);
}
public struct myStruct
{
public int x;
public myStruct(int xCons)
{
this.x = xCons;
}
}
public class myClass
{
public int x;
public myClass(int xCons)
{
this.x = xCons;
}
}
Output
Initial value of Struct Object is: 10
Inside Struct Method
Inside Method value of Struct Object is: 20
After Method call value of Struct Object is: 10
Initial value of Class Object is: 10
Inside Class Method
Inside Method value of Class Object is: 20
After Method call value of Class Object is: 20
Here you can clearly see the difference between call by value and call by reference.
A: From Microsoft's Choosing Between Class and Struct ...
As a rule of thumb, the majority of types in a framework should be
classes. There are, however, some situations in which the
characteristics of a value type make it more appropriate to use
structs.
✓ CONSIDER a struct instead of a class:
*
*If instances of the type are small and commonly short-lived or are commonly embedded in other objects.
X AVOID a struct unless the type has all of the following
characteristics:
*
*It logically represents a single value, similar to primitive types (int, double, etc.).
*It has an instance size under 16 bytes.
*It is immutable. (cannot be changed)
*It will not have to be boxed frequently.
A: I ♥ visualizations, and here I've created one to show the basic differences between structs and classes.
And the textual representation, just in case ;)
+--------------------------------------------------+------+----------------------------------------------+
| Struct | | Class |
+--------------------------------------------------+------+----------------------------------------------+
| - 1 per Thread. | | - 1 per application. |
| | | |
| - Holds value types. | | - Holds reference types. |
| | | |
| - Types in the stack are positioned | | - No type ordering (data is fragmented). |
| using the LIFO principle. | | |
| | | |
| - Can't have a default constructor and/or | | - Can have a default constructor |
| finalizer(destructor). | | and/or finalizer. |
| | | |
| - Can be created with or without a new operator. | | - Can be created only with a new operator. |
| | | |
| - Can't derive from the class or struct | VS | - Can have only one base class and/or |
| but can derive from the multiple interfaces. | | derive from multiple interfaces. |
| | | |
| - The data members can't be protected. | | - Data members can be protected. |
| | | |
| - Function members can't be | | - Function members can be |
| virtual or abstract. | | virtual or abstract. |
| | | |
| - Can't have a null value. | | - Can have a null value. |
| | | |
| - During an assignment, the contents are | | - Assignment is happening |
| copied from one variable to another. | | by reference. |
+--------------------------------------------------+------+----------------------------------------------+
For more information look below:
*
*Classes and structs (official documentation).
*Choosing Between Class and Struct (official documentation).
A: There is one interesting case of the "class vs struct" puzzle - the situation when you need to return several results from a method and must choose which to use. If you know the ValueTuple story, you know that ValueTuple (a struct) was added because it should be more efficient than Tuple (a class). But what does that mean in numbers? Two tests: one with a struct/class that has 2 fields, the other with a struct/class that has 8 fields (with more than 4 fields the class should become more efficient than the struct in terms of processor ticks, but of course GC load should also be considered).
P.S. Another benchmark for the specific case 'struct or class with collections' is here: https://stackoverflow.com/a/45276657/506147
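For orientation, the two return shapes being compared look roughly like this (a sketch; the tuple syntax needs C# 7+, and the method names are mine):
using System;
using System.Linq;

static class Shapes
{
    // ValueTuple: the pair is a struct, so returning it allocates nothing extra.
    static (int Min, int Max) MinMax(int[] xs) => (xs.Min(), xs.Max());

    // Tuple: the pair is a class, so every call allocates an object on the heap.
    static Tuple<int, int> MinMaxTuple(int[] xs) => Tuple.Create(xs.Min(), xs.Max());
}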
BenchmarkDotNet=v0.10.10, OS=Windows 10 Redstone 2 [1703, Creators Update] (10.0.15063.726)
Processor=Intel Core i5-2500K CPU 3.30GHz (Sandy Bridge), ProcessorCount=4
Frequency=3233540 Hz, Resolution=309.2586 ns, Timer=TSC
.NET Core SDK=2.0.3
[Host] : .NET Core 2.0.3 (Framework 4.6.25815.02), 64bit RyuJIT
Clr : .NET Framework 4.7 (CLR 4.0.30319.42000), 64bit RyuJIT-v4.7.2115.0
Core : .NET Core 2.0.3 (Framework 4.6.25815.02), 64bit RyuJIT
Method | Job | Runtime | Mean | Error | StdDev | Min | Max | Median | Rank | Gen 0 | Allocated |
------------------ |----- |-------- |---------:|----------:|----------:|---------:|---------:|---------:|-----:|-------:|----------:|
TestStructReturn | Clr | Clr | 17.57 ns | 0.1960 ns | 0.1834 ns | 17.25 ns | 17.89 ns | 17.55 ns | 4 | 0.0127 | 40 B |
TestClassReturn | Clr | Clr | 21.93 ns | 0.4554 ns | 0.5244 ns | 21.17 ns | 23.26 ns | 21.86 ns | 5 | 0.0229 | 72 B |
TestStructReturn8 | Clr | Clr | 38.99 ns | 0.8302 ns | 1.4097 ns | 37.36 ns | 42.35 ns | 38.50 ns | 8 | 0.0127 | 40 B |
TestClassReturn8 | Clr | Clr | 23.69 ns | 0.5373 ns | 0.6987 ns | 22.70 ns | 25.24 ns | 23.37 ns | 6 | 0.0305 | 96 B |
TestStructReturn | Core | Core | 12.28 ns | 0.1882 ns | 0.1760 ns | 11.92 ns | 12.57 ns | 12.30 ns | 1 | 0.0127 | 40 B |
TestClassReturn | Core | Core | 15.33 ns | 0.4343 ns | 0.4063 ns | 14.83 ns | 16.44 ns | 15.31 ns | 2 | 0.0229 | 72 B |
TestStructReturn8 | Core | Core | 34.11 ns | 0.7089 ns | 1.4954 ns | 31.52 ns | 36.81 ns | 34.03 ns | 7 | 0.0127 | 40 B |
TestClassReturn8 | Core | Core | 17.04 ns | 0.2299 ns | 0.2150 ns | 16.68 ns | 17.41 ns | 16.98 ns | 3 | 0.0305 | 96 B |
Code test:
using System;
using System.Text;
using System.Collections.Generic;
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Attributes.Columns;
using BenchmarkDotNet.Attributes.Exporters;
using BenchmarkDotNet.Attributes.Jobs;
using DashboardCode.Routines.Json;
namespace Benchmark
{
//[Config(typeof(MyManualConfig))]
[RankColumn, MinColumn, MaxColumn, StdDevColumn, MedianColumn]
[ClrJob, CoreJob]
[HtmlExporter, MarkdownExporter]
[MemoryDiagnoser]
public class BenchmarkStructOrClass
{
static TestStruct testStruct = new TestStruct();
static TestClass testClass = new TestClass();
static TestStruct8 testStruct8 = new TestStruct8();
static TestClass8 testClass8 = new TestClass8();
[Benchmark]
public void TestStructReturn()
{
testStruct.TestMethod();
}
[Benchmark]
public void TestClassReturn()
{
testClass.TestMethod();
}
[Benchmark]
public void TestStructReturn8()
{
testStruct8.TestMethod();
}
[Benchmark]
public void TestClassReturn8()
{
testClass8.TestMethod();
}
public class TestStruct
{
public int Number = 5;
public struct StructType<T>
{
public T Instance;
public List<string> List;
}
public int TestMethod()
{
var s = Method1(1);
return s.Instance;
}
private StructType<int> Method1(int i)
{
return Method2(++i);
}
private StructType<int> Method2(int i)
{
return Method3(++i);
}
private StructType<int> Method3(int i)
{
return Method4(++i);
}
private StructType<int> Method4(int i)
{
var x = new StructType<int>();
x.List = new List<string>();
x.Instance = ++i;
return x;
}
}
public class TestClass
{
public int Number = 5;
public class ClassType<T>
{
public T Instance;
public List<string> List;
}
public int TestMethod()
{
var s = Method1(1);
return s.Instance;
}
private ClassType<int> Method1(int i)
{
return Method2(++i);
}
private ClassType<int> Method2(int i)
{
return Method3(++i);
}
private ClassType<int> Method3(int i)
{
return Method4(++i);
}
private ClassType<int> Method4(int i)
{
var x = new ClassType<int>();
x.List = new List<string>();
x.Instance = ++i;
return x;
}
}
public class TestStruct8
{
public int Number = 5;
public struct StructType<T>
{
public T Instance1;
public T Instance2;
public T Instance3;
public T Instance4;
public T Instance5;
public T Instance6;
public T Instance7;
public List<string> List;
}
public int TestMethod()
{
var s = Method1(1);
return s.Instance1;
}
private StructType<int> Method1(int i)
{
return Method2(++i);
}
private StructType<int> Method2(int i)
{
return Method3(++i);
}
private StructType<int> Method3(int i)
{
return Method4(++i);
}
private StructType<int> Method4(int i)
{
var x = new StructType<int>();
x.List = new List<string>();
x.Instance1 = ++i;
return x;
}
}
public class TestClass8
{
public int Number = 5;
public class ClassType<T>
{
public T Instance1;
public T Instance2;
public T Instance3;
public T Instance4;
public T Instance5;
public T Instance6;
public T Instance7;
public List<string> List;
}
public int TestMethod()
{
var s = Method1(1);
return s.Instance1;
}
private ClassType<int> Method1(int i)
{
return Method2(++i);
}
private ClassType<int> Method2(int i)
{
return Method3(++i);
}
private ClassType<int> Method3(int i)
{
return Method4(++i);
}
private ClassType<int> Method4(int i)
{
var x = new ClassType<int>();
x.List = new List<string>();
x.Instance1 = ++i;
return x;
}
}
}
}
A: A short summary of each:
Classes Only:
*
*Can support inheritance
*Are reference (pointer) types
*The reference can be null
*Have memory overhead per new instance
Structs Only:
*
*Cannot support inheritance
*Are value types
*Are passed by value (like integers)
*Cannot have a null reference (unless Nullable is used)
*Do not have a memory overhead per new instance - unless 'boxed'
Both Classes and Structs:
*
*Are compound data types typically used to contain a few variables that have some logical relationship
*Can contain methods and events
*Can support interfaces
A: In addition to all differences described in the other answers:
*
*Structs cannot have an explicit parameterless constructor whereas a class can
*Structs cannot have destructors, whereas a class can
*Structs can't inherit from another struct or class whereas a class can inherit from another class. (Both structs and classes can implement from an interface.)
If you are after a video explaining all the differences, you can check out Part 29 - C# Tutorial - Difference between classes and structs in C#.
A:
| | Struct | Class |
| --- | --- | --- |
| Type | Value-type | Reference-type |
| Where | On stack / inline in containing type | On heap |
| Deallocation | Stack unwinds / containing type gets deallocated | Garbage collected |
| Arrays | Inline, elements are the actual instances of the value type | Out of line, elements are just references to instances of the reference type residing on the heap |
| Alloc/dealloc cost | Cheap allocation-deallocation | Expensive allocation-deallocation |
| Memory usage | Boxed when cast to a reference type or one of the interfaces they implement, unboxed when cast back to value type (negative impact because boxes are objects that are allocated on the heap and are garbage-collected) | No boxing-unboxing |
| Assignments | Copy entire data | Copy the reference |
| Change to an instance | Does not affect any of its copies | Affects all references pointing to the instance |
| Mutability | Should be immutable | Mutable |
| Population | In some situations | Majority of types in a framework should be classes |
| Lifetime | Short-lived | Long-lived |
| Destructor | Cannot have | Can have |
| Inheritance | Only from an interface | Full support |
| Polymorphism | No | Yes |
| Sealed | Yes | When it has the sealed keyword (C#) or Sealed attribute (F#) |
| Constructor | Cannot have explicit parameterless constructors | Any constructor |
| Null-assignments | Only when marked with the nullable question mark | Yes (when marked with the nullable question mark in C# 8+ and F# 5+ [1]) |
| Abstract | No | When it has the abstract keyword (C#) or AbstractClass attribute (F#) |
| Member access modifiers | public, private, internal | public, protected, internal, protected internal, private protected |

[1] Using null is not encouraged in F#; use the Option type instead.
A:
Structs are the actual value - they can be empty but never null
This is true, however also note that as of .NET 2 structs support a Nullable version and C# supplies some syntactic sugar to make it easier to use.
int? value = null;
value = 1;
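The same sugar works for any value type, not just int. A quick sketch with a made-up struct:
struct Point { public int X, Y; }

Point? p = null;                // shorthand for Nullable<Point>
p = new Point { X = 1, Y = 2 };
int x = p.Value.X;              // .Value throws InvalidOperationException while p is null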
A: Instances of classes are stored on the managed heap. All variables 'containing' an instance are simply a reference to the instance on the heap. Passing an object to a method results in a copy of the reference being passed, not the object itself.
Structures (technically, value types) are stored wherever they are used, much like a primitive type. The contents may be copied by the runtime at any time and without invoking a customised copy-constructor. Passing a value type to a method involves copying the entire value, again without invoking any customisable code.
The distinction is made better by the C++/CLI names: "ref class" is a class as described first, "value class" is a class as described second. The keywords "class" and "struct" as used by C# are simply something that must be learned.
A: Structure vs Class
A structure is a value type so it is stored on the stack, but a class is a reference type and is stored on the heap.
A structure doesn't support inheritance, and polymorphism, but a class supports both.
In C#, members of both structs and classes are private by default (the C++ rule that struct members default to public does not apply in .NET).
As a structure is a value type, we can't assign null to a struct object, but it is not the case for a class.
A: To add to the other answers, there is one fundamental difference that is worth noting, and that is how the data is stored within arrays as this can have a major effect on performance.
*
*With a struct, the array contains the instance of the struct
*With a class, the array contains a pointer to an instance of the class elsewhere in memory
So an array of structs looks like this in memory
[struct][struct][struct][struct][struct][struct][struct][struct]
Whereas an array of classes looks like this
[pointer][pointer][pointer][pointer][pointer][pointer][pointer][pointer]
With an array of classes, the values you're interested in are not stored within the array, but elsewhere in memory.
For a vast majority of applications this difference doesn't really matter, however, in high performance code this will affect locality of data within memory and have a large impact on the performance of the CPU cache. Using classes when you could/should have used structs will massively increase the number of cache misses on the CPU.
The slowest thing a modern CPU does is not crunching numbers, it's fetching data from memory, and an L1 cache hit is many times faster than reading data from RAM.
Here's some code you can test. On my machine, iterating through the class array takes ~3x longer than the struct array.
private struct PerformanceStruct
{
public int i1;
public int i2;
}
private class PerformanceClass
{
public int i1;
public int i2;
}
private static void DoTest()
{
var structArray = new PerformanceStruct[100000000];
var classArray = new PerformanceClass[structArray.Length];
for (var i = 0; i < structArray.Length; i++)
{
structArray[i] = new PerformanceStruct();
classArray[i] = new PerformanceClass();
}
long total = 0;
var sw = new Stopwatch();
sw.Start();
for (var loops = 0; loops < 100; loops++)
for (var i = 0; i < structArray.Length; i++)
{
total += structArray[i].i1 + structArray[i].i2;
}
sw.Stop();
Console.WriteLine($"Struct Time: {sw.ElapsedMilliseconds}");
sw = new Stopwatch();
sw.Start();
for (var loops = 0; loops < 100; loops++)
for (var i = 0; i < classArray.Length; i++)
{
total += classArray[i].i1 + classArray[i].i2;
}
Console.WriteLine($"Class Time: {sw.ElapsedMilliseconds}");
}
A: In .NET, there are two categories of types, reference types and value types.
Structs are value types and classes are reference types.
The general difference is that a reference type lives on the heap, and a value type lives inline, that is, wherever your variable or field is defined.
A variable containing a value type contains the entire value type value. For a struct, that means that the variable contains the entire struct, with all its fields.
A variable containing a reference type contains a pointer, or a reference to somewhere else in memory where the actual value resides.
This has one benefit, to begin with:
*
*value types always contains a value
*reference types can contain a null-reference, meaning that they don't refer to anything at all at the moment
Internally, reference types are implemented as pointers, and knowing that, and knowing how variable assignment works, there are other behavioral patterns:
*
*copying the contents of a value type variable into another variable, copies the entire contents into the new variable, making the two distinct. In other words, after the copy, changes to one won't affect the other
*copying the contents of a reference type variable into another variable, copies the reference, which means you now have two references to the same somewhere else storage of the actual data. In other words, after the copy, changing the data in one reference will appear to affect the other as well, but only because you're really just looking at the same data both places
When you declare variables or fields, here's how the two types differ:
*
*variable: value type lives on the stack, reference type lives on the stack as a pointer to somewhere in heap memory where the actual memory lives (though note Eric Lippert's article series: The Stack Is An Implementation Detail.)
*class/struct-field: value type lives completely inside the type, reference type lives inside the type as a pointer to somewhere in heap memory where the actual memory lives.
A: Well, for starters, a struct is passed by value rather than by reference. Structs are good for relatively simple data structures, while classes have a lot more flexibility from an architectural point of view via polymorphism and inheritance.
Others can probably give you more detail than I, but I use structs when the structure that I am going for is simple.
A: Every variable or field of a primitive value type or structure type holds a unique instance of that type, including all its fields (public and private). By contrast, variables or fields of reference types may hold null, or may refer to an object, stored elsewhere, to which any number of other references may also exist. The fields of a struct will be stored in the same place as the variable or field of that structure type, which may be either on the stack or may be part of another heap object.
Creating a variable or field of a primitive value type will create it with a default value; creating a variable or field of a structure type will create a new instance, creating all fields therein in the default manner. Creating a new instance of a reference type will start by creating all fields therein in the default manner, and then running optional additional code depending upon the type.
Copying one variable or field of a primitive type to another will copy the value. Copying one variable or field of structure type to another will copy all the fields (public and private) of the former instance to the latter instance. Copying one variable or field of reference type to another will cause the latter to refer to the same instance as the former (if any).
It's important to note that in some languages like C++, the semantic behavior of a type is independent of how it is stored, but that isn't true of .NET. If a type implements mutable value semantics, copying one variable of that type to another copies the properties of the first to another instance, referred to by the second, and using a member of the second to mutate it will cause that second instance to be changed, but not the first. If a type implements mutable reference semantics, copying one variable to another and using a member of the second to mutate the object will affect the object referred to by the first variable; types with immutable semantics do not allow mutation, so it doesn't matter semantically whether copying creates a new instance or creates another reference to the first.
In .NET, it is possible for value types to implement any of the above semantics, provided that all of their fields can do likewise. A reference type, however, can only implement mutable reference semantics or immutable semantics; value types with fields of mutable reference types are limited to either implementing mutable reference semantics or weird hybrid semantics.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/13049",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "876"
} |
Q: What is boxing and unboxing and what are the trade offs? I'm looking for a clear, concise and accurate answer.
Ideally as the actual answer, although links to good explanations welcome.
A: Boxing & unboxing is the process of converting a primitive value into an object oriented wrapper class (boxing), or converting a value from an object oriented wrapper class back to the primitive value (unboxing).
For example, in java, you may need to convert an int value into an Integer (boxing) if you want to store it in a Collection because primitives can't be stored in a Collection, only objects. But when you want to get it back out of the Collection you may want to get the value as an int and not an Integer so you would unbox it.
Boxing and unboxing is not inherently bad, but it is a tradeoff. Depending on the language implementation, it can be slower and more memory intensive than just using primitives. However, it may also allow you to use higher level data structures and achieve greater flexibility in your code.
These days, it is most commonly discussed in the context of Java's (and other language's) "autoboxing/autounboxing" feature. Here is a java centric explanation of autoboxing.
A: Boxing is the process of conversion of a value type into a reference type. Whereas Unboxing is the conversion of a reference type into a value type.
Example:
int i = 123;
object o = i;    // boxing
int j = (int)o;  // unboxing
Value types are: int, char, structures, and enumerations.
Reference types are: classes, interfaces, arrays, strings, and objects.
A: The language-agnostic meaning of a box is just "an object contains some other value".
Literally, boxing is an operation to put some value into the box. More specifically, it is an operation to create a new box containing the value. After boxing, the boxed value can be accessed from the box object, by unboxing.
Note that objects (not OOP-specific) in many programming languages are about identities, but values are not. Two objects are the same iff their identities are not distinguishable in the program semantics. Values can also be the same (usually under some equality operators), but we do not distinguish them as "one" or "two" unique values.
Providing boxes is mainly an effort to distinguish side effects (typically, mutation) on the states of objects that would otherwise be invisible to users.
A language may limit the allowed ways to access an object and hide the identity of the object by default. For example, typical Lisp dialects have no explicit distinction between objects and values. As a result, the implementation has the freedom to share the underlying storage of the objects until some mutation operation occurs on the object (so the object must be "detached" from the shared instance after the operation to make the effect visible, i.e. the mutated value stored in the object could be different from that of the other objects holding the old value). This technique is sometimes called object interning.
Interning makes the program more memory efficient at runtime if the objects are shared without frequent needs of mutation, at the cost that:
*
*The users cannot distinguish the identity of the objects.
*
*There is no way to identify an object and ensure its state is explicitly independent of other objects in the program before some side effect has actually occurred (and the implementation does not aggressively do the interning concurrently; this should be the rare case, though).
*There may be more problems with interoperation that requires identifying different objects for different operations.
*There are risks that such assumptions turn out false, so performance is actually made worse by applying the interning.
*
*This depends on the programming paradigm. Imperative programming which mutates objects frequently certainly would not work well with interning.
*Implementations depending on COW (copy-on-write) to ensure interning can incur serious performance degradation in concurrent environments.
*
*Even local sharing specifically for a few internal data structures can be bad. For example, ISO C++ 11 did not allow sharing of the internal elements of std::basic_string for this reason exactly, even at the cost of breaking the ABI on at least one mainstream implementation (libstdc++).
*Boxing and unboxing incur performance penalties. This is especially obvious when these operations can be naively avoided by hand but are not easy for the optimizer to remove. The concrete measurement of the cost depends (on a per-implementation or even per-program basis), though.
Mutable cells, i.e. boxes, are well-established facilities exactly to resolve the problems of the 1st and 2nd bullets listed above. Additionally, there can be immutable boxes for implementation of assignment in a functional language. See SRFI-111 for a practical instance.
Using mutable cells as function arguments with a call-by-value strategy makes the visible effects of mutation shared between the caller and the callee. The object contained by a box is effectively "call-by-sharing" in this sense.
Sometimes, the boxes are referred to as references (which is technically false), so the shared semantics are named "reference semantics". This is not correct, because not all references can propagate visible side effects (e.g. immutable references). References are more about exposing access by indirection, while boxes are efforts to expose minimal details of the accesses, such as whether there is indirection or not (which is uninteresting and better left to the implementation).
Moreover, "value semantics" is irrelevant here. Values are not opposed to references, nor to boxes. All the discussion above is based on the call-by-value strategy. For others (like call-by-name or call-by-need), no boxes are needed to share object contents in this way.
Java is probably the first programming language to make these features popular in the industry. Unfortunately, there seem to be many unfortunate consequences in this area:
*
*The overall programming paradigm does not fit the design.
*Practically, the interning is limited to specific objects like immutable strings, and the cost of (auto-)boxing and unboxing is often blamed.
*Fundamental PL knowledge, like the definition of the term "object" (as "instance of a class") in the language specification, as well as the descriptions of parameter passing, is biased compared to the original, well-known meanings, as Java was adopted by programmers.
*
*At least the CLR languages follow similar parlance.
Some more tips on implementations (and comments to this answer):
*
*Whether to put objects on the call stack or the heap is an implementation detail, and irrelevant to the implementation of boxes.
*
*Some language implementations do not maintain a contiguous storage as the call stack.
*Some language implementations do not even make the (per thread) activation records a linear stack.
*Some language implementations do allocate stacks on the free store ("the heap") and transfer slices of frames between the stacks and the heap back and forth.
*These strategies have nothing to do with boxes. For instance, many Scheme implementations have boxes, with different activation-record layouts, including all the ways listed above.
*Besides the technical inaccuracy, the statement "everything is an object" is irrelevant to boxing.
*
*Python, Ruby, and JavaScript all use latent typing (by default), so all identifiers referring to some objects will evaluate to values having the same static type. So does Scheme.
*Some JavaScript and Ruby implementations use so-called NaN-boxing to allow inline allocation of some objects. Some others (including CPython) do not. With NaN-boxing, a normal double object needs no unboxing to access its value, while a value of some other type can be boxed in a host double object, and there is no separate reference for the double or the boxed value. With the naive pointer approach, a value of a host object pointer type like PyObject* is an object reference holding a box whose boxed value is stored in dynamically allocated space. (A minimal sketch of the NaN-boxing idea follows this list.)
*At least in Python, objects are not "everything". They are also not known as "boxed values" unless you are talking about interoperability with specific implementations.
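As promised above, here is a rough sketch of NaN-boxing in C#. The constants and helper names are illustrative assumptions; real engines use more elaborate tagging schemes.
using System;

static class NanBoxSketch
{
    // Quiet-NaN bit pattern: exponent all ones plus the quiet bit.
    // In this simplified scheme the low 32 bits carry the payload.
    const ulong QuietNan = 0x7FF8000000000000UL;

    static double BoxInt(int payload)
    {
        ulong bits = QuietNan | (uint)payload;
        return BitConverter.Int64BitsToDouble(unchecked((long)bits));
    }

    static bool IsBoxedInt(double d)
    {
        ulong bits = unchecked((ulong)BitConverter.DoubleToInt64Bits(d));
        return double.IsNaN(d) && (bits & QuietNan) == QuietNan;
    }

    static int UnboxInt(double d)
    {
        // Truncating the 64-bit pattern keeps the low 32 payload bits.
        return unchecked((int)BitConverter.DoubleToInt64Bits(d));
    }

    static void Main()
    {
        double plain = 3.14;          // an ordinary double, no unboxing needed
        double boxed = BoxInt(12345); // an int smuggled into a NaN payload
        Console.WriteLine(IsBoxedInt(plain)); // False
        Console.WriteLine(UnboxInt(boxed));   // 12345
    }
}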
A: In .Net:
Often you can't rely on the type of variable a function will consume, so you need to use an object variable which extends from the lowest common denominator - in .NET this is object.
However object is a class and stores its contents as a reference.
List<int> notBoxed = new List<int> { 1, 2, 3 };
int i = notBoxed[1]; // this is the actual value
List<object> boxed = new List<object> { 1, 2, 3 };
int j = (int) boxed[1]; // this is an object that can be 'unboxed' to an int
While both of these hold the same information, the second list is larger and slower. Each value in the second list is actually a reference to an object that holds the int.
This is called boxing because the int is wrapped by the object. When it's cast back, the int is unboxed - converted back to its value.
For value types (i.e. all structs) this is slow, and potentially uses a lot more space.
For reference types (i.e. all classes) this is far less of a problem, as they are stored as a reference anyway.
A further problem with a boxed value type is that it's not obvious that you're dealing with the box, rather than the value. When you compare two structs then you're comparing values, but when you compare two classes then (by default) you're comparing the reference - i.e. are these the same instance?
This can be confusing when dealing with boxed value types:
int a = 7;
int b = 7;
if(a == b) // Evaluates to true, because a and b have the same value
object c = (object) 7;
object d = (object) 7;
if(c == d) // Evaluates to false, because c and d are different instances
It's easy to work around:
if(c.Equals(d)) // Evaluates to true because it calls the underlying int's equals
if(((int) c) == ((int) d)) // Evaluates to true once the values are cast
However it is another thing to be careful of when dealing with boxed values.
A: Boxed values are data structures that are minimal wrappers around primitive types*. Boxed values are typically stored as pointers to objects on the heap.
Thus, boxed values use more memory and take at minimum two memory lookups to access: once to get the pointer, and another to follow that pointer to the primitive. Obviously this isn't the kind of thing you want in your inner loops. On the other hand, boxed values typically play better with other types in the system. Since they are first-class data structures in the language, they have the expected metadata and structure that other data structures have.
In Java and Haskell generic collections can't contain unboxed values. Generic collections in .NET can hold unboxed values with no penalties. Where Java's generics are only used for compile-time type checking, .NET will generate specific classes for each generic type instantiated at run time.
Java and Haskell have unboxed arrays, but they're distinctly less convenient than the other collections. However, when peak performance is needed it's worth a little inconvenience to avoid the overhead of boxing and unboxing.
* For this discussion, a primitive value is any that can be stored on the call stack, rather than stored as a pointer to a value on the heap. Frequently that's just the machine types (ints, floats, etc), structs, and sometimes static sized arrays. .NET-land calls them value types (as opposed to reference types). Java folks call them primitive types. Haskellions just call them unboxed.
** I'm also focusing on Java, Haskell, and C# in this answer, because that's what I know. For what it's worth, Python, Ruby, and Javascript all have exclusively boxed values. This is also known as the "Everything is an object" approach***.
*** Caveat: A sufficiently advanced compiler / JIT can in some cases actually detect that a value which is semantically boxed when looking at the source, can safely be an unboxed value at runtime. In essence, thanks to brilliant language implementors your boxes are sometimes free.
A: The .NET FCL generic collections:
List<T>
Dictionary<TKey, TValue>
SortedDictionary<TKey, TValue>
Stack<T>
Queue<T>
LinkedList<T>
were all designed to overcome the performance issues of boxing and unboxing in previous collection implementations.
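As a rough illustration of the difference (a sketch, not a benchmark; both collection types shown are real .NET classes):
using System;
using System.Collections;
using System.Collections.Generic;

static class BoxingCollectionsDemo
{
    static void Main()
    {
        // Pre-generic collection: each Add boxes the int into an object.
        ArrayList oldStyle = new ArrayList();
        oldStyle.Add(42);              // boxing allocation on the heap
        int a = (int)oldStyle[0];      // unboxing cast required

        // Generic collection: the ints are stored unboxed.
        List<int> newStyle = new List<int>();
        newStyle.Add(42);
        int b = newStyle[0];           // plain value access

        Console.WriteLine(a + b);      // 84
    }
}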
For more, see chapter 16, CLR via C# (2nd Edition).
A: from C# 3.0 In a Nutshell:
Boxing is the act of casting a value
type into a reference type:
int x = 9;
object o = x; // boxing the int
unboxing is... the reverse:
// unboxing o
object o = 9;
int x = (int)o;
A: Boxing and unboxing facilitate treating value types as objects. Boxing means converting a value to an instance of the object reference type. For example, in Java, Integer is a class and int is a primitive data type; converting an int to an Integer is an example of boxing, whereas converting an Integer back to an int is unboxing. The concept helps with garbage collection; unboxing, on the other hand, converts the object type back to the value type.
int i=123;
object o=(object)i; //Boxing
o=123;
i=(int)o; //Unboxing.
A: Like anything else, autoboxing can be problematic if not used carefully. The classic is to end up with a NullPointerException and not be able to track it down. Even with a debugger. Try this:
public class TestAutoboxNPE
{
    public static void main(String[] args)
    {
        Integer i = null;

        // .. do some other stuff and forget to initialise i

        i = addOne(i); // Whoa! NPE!
    }

    public static int addOne(int i)
    {
        return i + 1;
    }
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/13055",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "149"
} |
Q: What do ref, val and out mean on method parameters? I'm looking for a clear, concise and accurate answer.
Ideally as the actual answer, although links to good explanations welcome.
This also applies to VB.Net, but the keywords are different - ByRef and ByVal.
A: One additional note about ref vs. out: the distinction between the two is enforced by the C# compiler. The CLR does not distinguish between out and ref. This means that you cannot have two methods whose signatures differ only by an out or ref modifier:
void foo(int value) {}
// Only one of the following would be allowed
// valid to overload with ref
void foo(ref int value) {}
// OR with out
void foo(out int value) {}
A: out means that the parameter will be initialised by the method:
int result; //not initialised
if( int.TryParse( "123", out result ) )
//result is now 123
else
//if TryParse failed, result has still been
// initialised to its default value (0)
ref will force the underlying reference to be passed:
void ChangeMyClass1( MyClass input ) {
input.MyProperty = "changed by 1";
input = null;
//can't see input anymore ...
// I've only nulled my local scope's reference
}
void ChangeMyClass2( ref MyClass input ) {
input.MyProperty = "changed by 2";
input = null;
//the passed reference is now null too.
}
MyClass tester = new MyClass { MyProperty = "initial value" };
ChangeMyClass1( tester );
// now tester.MyProperty is "changed by 1"
ChangeMyClass2( ref tester );
// now tester is null
A: By default (in C#), passing an object to a function actually passes a copy of the reference to that object. Changing the parameter itself only changes the value in the parameter, and not the variable that was specified.
void Test1(string param)
{
param = "new value";
}
string s1 = "initial value";
Test1(s1);
// s1 == "initial value"
Using out or ref passes a reference to the variable specified in the call to the function. Any changes to the value of an out or ref parameter will be passed back to the caller.
Both out and ref behave identically except for one slight difference: ref parameters are required to be initialised before calling, while out parameters can be uninitialised. By extension, ref parameters are guaranteed to be initialised at the start of the method, while out parameters are treated as uninitialised.
void Test2(ref string param)
{
param = "new value";
}
void Test3(out string param)
{
// Use of param here will not compile
param = "another value";
}
string s2 = "initial value";
string s3;
Test2(ref s2);
// s2 == "new value"
// Test2(ref s3); // Passing ref s3 will not compile
Test3(out s2);
// s2 == "another value"
Test3(out s3);
// s3 == "another value"
Edit: As dp points out, the difference between out and ref is only enforced by the C# compiler, not by the CLR. As far as I know, VB has no equivalent for out and implements ref (as ByRef) only, matching the support of the CLR.
A: One of my own questions on Stack Overflow handles this topic too.
It is about "pass by reference" and "pass by value" in different types of languages; C# is included, so maybe you can find some extra information there as well.
Basically it comes down to:
*
*ref: the parameter with the ref keyword will be passed by reference
*out: the parameter with the out keyword will be treated as an output parameter
but that's really the most basic answer you can give, as it is a little more complex than stated here
| {
"language": "en",
"url": "https://stackoverflow.com/questions/13060",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "22"
} |
Q: Delphi resources for existing .NET developer Can anyone recommend some decent resources for a .NET developer who wishes to get a high level overview of the Delphi language?
We are about to acquire a small business whose main product is developed in Delphi, and I want to build up enough knowledge to be able to talk the talk with them.
Books, websites etc all appreciated.
Thanks.
A: *
*DelphiBasics gives a good overview of basic syntax, library functions etc.
*Essential Delphi is a free e-book by Marco Cantu that should give a good overview, also of the VCL
Feel free to ask around here as well, or in the Delphi newsgroups, if you encounter specific issues :)
[edit] @Martin:
*
*There's a free "Turbo" edition available at the Codegear/Embarcadero website. I guess it has some limitations, so you could also try downloading the trial version.
A: There's also a Delphi wiki
This even has a "Beginning Delphi" page with lots of external links on it. (some of them already mentioned)
A: http://www.delphifeeds.com/ is a good place to start, it has most news about what is going on in the delphi community.
A: There are a number of videos by Alister Christie at codegearguru - check them out :)
edit... @Martin, check out the Turbo products at CodeGear
A: @Martin there is a free version.
Turbo Delphi
If you are comfortable with c# you will see many similarities with Delphi.
I also found the community surrounding the newsgroups to be active and helpful. They have a similar concept to MVPs; they were called Team B (but as Borland doesn't own them, the name may have changed now).
| {
"language": "en",
"url": "https://stackoverflow.com/questions/13086",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: How do I convert a date to a HTTP-formatted date in .Net / C# How does one convert a .Net DateTime into a valid HTTP-formatted date string?
A: Dates can be converted to HTTP valid dates (RFC 1123) by using the "r" format string in .Net. HTTP dates need to be GMT / not offset - this can be done using the ToUniversalTime() method.
So, in C# for example:
string HttpDate = SomeDate.ToUniversalTime().ToString("r");
Right now, that produces HttpDate = "Sat, 16 Aug 2008 10:38:39 GMT"
See Standard Date and Time Format Strings for a list of .Net standard date & time format strings.
See Protocol Parameters for the HTTP date specification, and background to other valid (but dated) RFC types for HTTP dates.
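If you also need to parse such a date back, a minimal sketch (the parse styles chosen here are one reasonable option, not the only one):
using System;
using System.Globalization;

class HttpDateDemo
{
    static void Main()
    {
        // Format: convert to UTC first, as described above.
        string httpDate = DateTime.UtcNow.ToString("r");

        // Parse: "r" round-trips, but the parsed Kind would otherwise be
        // Unspecified, so ask for an explicit adjustment to universal time.
        DateTime parsed = DateTime.ParseExact(
            httpDate, "r", CultureInfo.InvariantCulture,
            DateTimeStyles.AssumeUniversal | DateTimeStyles.AdjustToUniversal);

        Console.WriteLine(parsed.Kind); // Utc
    }
}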
| {
"language": "en",
"url": "https://stackoverflow.com/questions/13087",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "56"
} |
Q: Should I support ASP.NET 1.1? I've just started working on an ASP.NET project which I hope to open source once it gets to a suitable stage. It's basically going to be a library that can be used by existing websites. My preference is to support ASP.NET 2.0 through 3.5, but I wondered how many people I would be leaving out by not supporting ASP.NET 1.1? More specifically, how many people are there still using ASP.NET 1.1 for whom ASP.NET 2.0/3.5 is not an option? If upgrading your server is not an option for you, why not?
A: Increasingly I think not.
The kind of large rigid organisation currently still clinging to 1.1 (probably because they've only just upgraded to it) is also the kind that's highly unlikely to look at open source solutions.
If I were starting a new ASP.Net project right now I'd stick with .Net 3.5 and probably the new MVC previews.
A: Remember that .NET 1.1 is going out of general support in October of this year (and that includes ASP.NET 1.1).
A: I think you would be perfectly fine with targeting just 2.0 and above, someone who would use your library would most likely be doing new development and using at least ASP.NET 2.0. I think it would be a very small group of people doing new development in 1.1.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/13106",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: PHP: Access Array Value on the Fly In php, I often need to map a variable using an array ... but I can not seem to be able to do this in a one liner. c.f. example:
// the following results in an error:
echo array('a','b','c')[$key];
// this works, using an unnecessary variable:
$variable = array('a','b','c');
echo $variable[$key];
This is a minor problem, but it keeps bugging me every once in a while ... I don't like the fact that I use a variable for nothing ;)
A: This might not be directly related, but I came to this post while looking for a solution to this specific problem.
I got a result from a function in the following form.
Array
(
[School] => Array
(
[parent_id] => 9ce8e78a-f4cc-ff64-8de0-4d9c1819a56a
)
)
What I wanted was the parent_id value "9ce8e78a-f4cc-ff64-8de0-4d9c1819a56a".
I used the function like this and got it.
array_pop( array_pop( the_function_which_returned_the_above_array() ) )
So, it was done in one line :)
Hope it will be helpful to somebody.
A: The technical answer is that the Grammar of the PHP language only allows subscript notation on the end of variable expressions and not expressions in general, which is how it works in most other languages. I've always viewed it as a deficiency in the language, because it is possible to have a grammar that resolves subscripts against any expression unambiguously. It could be the case, however, that they're using an inflexible parser generator or they simply don't want to break some sort of backwards compatibility.
Here are a couple more examples of invalid subscripts on valid expressions:
$x = array(1,2,3);
print ($x)[1]; //illegal, on a parenthetical expression, not a variable exp.
function ret($foo) { return $foo; }
echo ret($x)[1]; // illegal, on a call expression, not a variable exp.
A: This is called array dereferencing. It has been added in php 5.4.
http://www.php.net/releases/NEWS_5_4_0_alpha1.txt
Update [2012-11-25]: as of PHP 5.5, dereferencing has been added to constants/strings as well as arrays.
A: function doSomething()
{
return $somearray;
}
echo doSomething()->get(1)->getOtherPropertyIfThisIsAnObject();
A: I wouldn't bother about that extra variable, really. If you want, though, you could also remove it from memory after you've used it:
$variable = array('a','b','c');
echo $variable[$key];
unset($variable);
Or, you could write a small function:
function indexonce(&$ar, $index) {
return $ar[$index];
}
and call this with:
$something = indexonce(array('a', 'b', 'c'), 2);
The array should be destroyed automatically now.
A: Actually, there is an elegant solution :) The following will assign the 3rd element of the array returned by myfunc to $myvar:
$myvar = array_shift(array_splice(myfunc(),2));
A: Or something like this, if you need the array value in a variable
$variable = array('a','b','c');
$variable = $variable[$key];
A: There are several one-liners you could come up with using PHP's array_* functions, but I assure you that doing so is totally redundant compared to what you want to achieve.
For example, you can use something like the following, but it is not an elegant solution and I'm not sure about its performance:
array_pop ( array_filter( array_returning_func(), function($key){ return $key=="array_index_you_want"? TRUE:FALSE; },ARRAY_FILTER_USE_KEY ) );
If you are using a PHP framework and you are stuck with an older version of PHP, most frameworks have helper libraries.
example: Codeigniter array helpers
A: Even though dereferencing was only added in PHP >= 5.4, you could have done it in one line using the ternary operator:
echo $var=($var=array(0,1,2,3))?$var[3]:false;
This way you don't keep the array, only the variable, and you don't need extra functions to do it... If this line is used in a function, the variable will automatically be destroyed at the end, but you can also destroy it yourself earlier with unset, as mentioned, if it is not used in a function.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/13109",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "50"
} |
Q: How can I combine several C/C++ libraries into one? I'm tired of adding ten link libraries into my project, or requiring eight of them to use my own. I'd like to take existing libraries like libpng.a, libz.a, libjpeg.a, and combine them into one single .a library. Is that possible? How about combining .lib libraries?
A: On Unix-like systems, the ld and ar utilities can do this. Check out http://en.wikipedia.org/wiki/Ar_(Unix) or look up the man pages on any Linux box or through Google, e.g., 'Unix man ar'.
Please note that you might be better off linking to a shared (dynamic) library. This would add a dependency to your executable, but it will dramatically reduce its size, especially if you're writing a graphic application.
A: You could extract the object files from each library with
ar x <library name>
and then merge them all into a new library with
ar cs <new library name> <list each extracted object file>
A: On Linux, MinGW, or Cygwin, with the GNU toolchain:
ar -M <<EOM
CREATE libab.a
ADDLIB liba.a
ADDLIB libb.a
SAVE
END
EOM
ranlib libab.a
Or if you can keep the existence of liba.a and libb.a:
ar crsT libab.a liba.a libb.a
On Windows, with the Microsoft Visual C++ toolchain:
lib.exe /OUT:libab.lib liba.lib libb.lib
A: I'm not sure how to physically combine them into a single file, however you could employ an abstraction of a sort and just include a single "AllMyLibs.a/h" which in turn includes everything you want. You could also put this into the location where your compiler searches for libraries, so it would work for any project.
A: Maybe I'm misunderstanding, but don't you only have to ship the libraries if the end-user code calls them directly? If all the access to JPEG methods, etc. is from your code in your static library, then just link the libraries into your library.
I.e.,
---------------------
| End-user EXE file |
---------------------
|
| makes calls to
|
v
--------------------
| Your static lib.a |
--------------------
| makes calls to and links
v
------------------------------------ .....
| | |
------------- -------- ----------
| libjpeg.a | |libz.a| |libpng.a|
------------- -------- ----------
I.e., it's only an issue if the end code needs to make direct calls into libz.a, libpng.a, etc.
If the application code has a legitimate need to call libz.a, for example, then that would as mentioned above be a case for using a dynamic module.
A: Combining several third-party libraries into one could create more problems for you—for instance, if two of those libraries define a common symbol which your program doesn't use. Now you've got to extract all (or all-but-one) of the instances of the common symbol before you combine the libraries.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/13128",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "45"
} |
Q: Is it safe to install SVN on a production win2008 web server? I have a web server and I am currently paying for SVN hosting at a company that specializes in hosting svn.
I am thinking of just installing SVN on my dedicated server, are there any security implications of this?
A: Small correction to Jedi: it's VisualSVN Server from http://www.visualsvn.com/server/
A: I use VisualSVN Server from http://www.visualsvn.com/server/; it's very easy to install and can be integrated with Active Directory. It installs on Win2008 with no problem.
A: It depends on how far you want to consider this issue.
If you want to install a Subversion server on your own, it looks like you have two options:
*
*Apache
*Subversion's own protocol
In either case, the problem isn't what the two do, but what the two unintentionally do.
If there are bugs in Apache that allows an outside attacker to gain access to your data, then that is bad. If there are bugs in Subversions own server that allows the same, that is bad.
What you need to do is consider risk and consequences for the scenarios, and come up with a server setup that meets your requirements, if possible.
The cases you would at least have to consider would be:
*
*Bug in either system that allows an attacker to sink your server (example: something which makes your server use an inordinate amount of CPU time)
*Bug in either system that allows an attacker access to the data on that server
*Bug in either system that allows an attacker access to your domain (ie. all your servers and machines available from that public server)
Personally I have considered how many are hosting subversion servers through Apache now, and installed VisualSVN Server to host my own source code without a doubt.
A: For simple security requirements, setting up Subversion with svnserve is almost trivial. Even getting it running under Apache for more extensive security needs is not overly difficult.
This is a good walk-through:
http://donie.homeip.net:8080/pebble/Steve/2006/02/27/1141079943879.html
A: Apache and SVN are fairly easy to get running together but there are a number of steps. It is definitely easier today than 2 years ago when I first tried. Make sure you have matching versions of the modules and spend some time playing with Apache locally before deploying to your server. There are versions of Apache with and without SSL. Check you have the one with OpenSSL included to protect credentials on the wire.
Install Apache so that it can be manually started eg. not as a service. You'll want to do this to avoid a collision with any IIS apps on your server. You can install Apache to run as a service later, once your config is right.
Normally Apache will use Basic Authentication. You need to secure this using SSL; otherwise the credentials are not encrypted in transit. You put user details in a text file on disk. If you want to authenticate users against Windows or Active Directory, you will have a larger task on your hands (perhaps see VisualSVN for this).
I had a quick look at VisualSVN and it seems to be a good option. However, a little Apache config experience can go a long way. Coming from an IIS background it wasn't too difficult; it just took some time to review all of the options/settings.
A: VisualSVN is such a rare beast - a setup tool that makes installing SVN easier on Windows than it is on Linux. And considering that on Linux you just have to type "yum install subversion", that's some praise.
However, if you're really worried, I would install VMware alongside and run your SVN server in a guest OS on your web server.
Security: if you run svnserve, then simply block access to its port (3690) for all but your computers. If not, you'll need to secure the svn auth config files.
If you're running it using apache, then its just another website to secure, in the same ways as usual.
A: The production SVN server is important in terms of availability, but that's never going to be enough, and it doesn't matter whether it's Windows Server 2008, RH Linux, or whatever. You'll need a well-thought-out backup policy and careful access management.
A: SVN is very difficult to get set up in the Windows environment, at least if you want hosted SVN; a local repository is different. My suggestion is to stick with the company, or search out a cheaper SVN host that will not cost as much money. They are not difficult to set up, but you would hate to lose all your source code because of an improper backup.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/13154",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Best practice for webservices I've created a webservice, and when I want to use its methods I instantiate it in a procedure, call the method, and finally dispose of it. However, I think it could also be okay to instantiate the webservice in the "private void Main_Load(object sender, EventArgs e)" event.
The thing is, the first way I have to instantiate the webservice every time I need one of its methods, but the other way I have to keep the webservice connected the whole time I am using it in a form, for example.
I would like to know which of these practices is better, or if there's a much better way to do it.
Strategy 1
private void btnRead_Click(object sender, EventArgs e)
{
try
{
//Show clock
this.picResult.Image = new Bitmap(pathWait);
Application.DoEvents();
//Connect to webservice
svc = new ForPocketPC.ServiceForPocketPC();
svc.Credentials = new System.Net.NetworkCredential(Settings.UserName, Settings.Password);
svc.AllowAutoRedirect = false;
svc.UserAgent = Settings.UserAgent;
svc.PreAuthenticate = true;
svc.Url = Settings.Url;
svc.Timeout = System.Threading.Timeout.Infinite;
svc.CallMethod();
...
}
catch (Exception ex)
{
ShowError(ex);
}
finally
{
if (svc != null)
svc.Dispose();
}
}
Strategy 2
private myWebservice svc;
private void Main_Load(object sender, EventArgs e)
{
//Connect to webservice
svc = new ForPocketPC.ServiceForPocketPC();
svc.Credentials = new System.Net.NetworkCredential(Settings.UserName, Settings.Password);
svc.AllowAutoRedirect = false;
svc.UserAgent = Settings.UserAgent;
svc.PreAuthenticate = true;
svc.Url = Settings.Url;
svc.Timeout = System.Threading.Timeout.Infinite;
}
private void btnRead_Click(object sender, EventArgs e)
{
try
{
//Show clock
this.picResult.Image = new Bitmap(pathWait);
Application.DoEvents();
svc.CallMethod();
...
}
catch (Exception ex)
{
ShowError(ex);
}
}
private void Main_Closing(object sender, CancelEventArgs e)
{
svc.Dispose();
}
A: It depends on how often you are going to be calling the web service. If you're going to be calling it almost constantly, it would probably be better to use method #2. However, if it's not going to be getting called quite so often, you are better off using method #1, and only instantiating it when you need it.
A: Right now I've made a solution for a mobile device, and it turns out to be used at irregular times: it could be used in 10 minutes, 1 hour, or 4 hours; it's very variable. So it seems that the better approach is the first strategy.
Last year we worked on a project where we used webservices. We instantiated our webservices in the Sub New() procedure and it ran very well; however, sometimes users complained that they got up from their chairs, and when they returned and tried to continue with the application they received a timeout error message and had to log in again.
We thought that maybe that was OK because the users had been away from their seats for a very long time, but once, during a presentation of the application to the CEOs, exactly the same scenario happened. Personally I didn't like that behaviour, and that's why the question.
Thanks for the answer.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/13160",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: A ThreadStateException occurs when trying to restart a thread From time to time I get a System.Threading.ThreadStateException when attempting to restart a thread. The code in question is as follows:
// Make sure the thread is done stopping
while (this.mThread.ThreadState == ThreadState.Running)
{
Thread.Sleep(0);
}
// Respawn a thread if the current one is stopped or doesn't exist
if (this.mThread == null || this.mThread.ThreadState == ThreadState.Stopped)
{
this.mThread = new Thread(new ParameterizedThreadStart(Monitor));
}
// Start the thread
if (check)
{
this.mThread.Start(60000);
}
else
{
this.mThread.Start(0);
}
So two questions - is this the correct way of doing things, and if it is, is there a way to prevent the error from occurring?
A: It's possible for a thread to be in more than one state at once, therefore the ThreadState property is actually a bitmap of possible states. So testing for equality with just one state will not give you the right result. You would need to do something like:
if((mThread.ThreadState & ThreadState.Running) != 0)
However, checking the thread state is the wrong way to do this. I'm not entirely clear what you're trying to achieve, but I will guess that you're waiting for a thread to terminate before restarting it. In that case you should do:
mThread.Join();
mThread = new Thread(new ParameterizedThreadStart(Monitor));
if(check)
mThread.Start(60000);
else
mThread.Start(0);
Although if you describe the problem you're trying to solve in more detail I'm almost certain there will be a better solution. Waiting around for a thread to end just to restart it again doesn't seem that efficient to me. Perhaps you just need some kind of inter-thread communication?
John.
A: The problem is that you have code that first checks if it should create a new thread object, and another piece of code that determines wether to start the thread object. Due to race conditions and similar things, your code might end up trying to call .Start on an existing thread object. Considering you don't post the details behind the check variable, it's impossible to know what might trigger this behavior.
You should reorganize your code so that .Start is guaranteed to only be called on new objects. In short, you should put the Start method into the same if-statement as the one that creates a new thread object.
Personally, I would try to reorganize the entire code so that I didn't need to create another thread, but wrap the code inside the thread object inside a loop so that the thread just keeps on going.
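As a rough sketch of that last suggestion (all names here are illustrative), one long-lived thread can wait on a signal and loop, so it never needs restarting:
using System;
using System.Threading;

class Worker
{
    private readonly AutoResetEvent signal = new AutoResetEvent(false);
    private volatile bool running = true;
    private readonly Thread thread;

    public Worker()
    {
        thread = new Thread(Loop);
        thread.Start();
    }

    // Instead of restarting a stopped thread, wake the same thread up.
    public void Poke()
    {
        signal.Set();
    }

    public void Stop()
    {
        running = false;
        signal.Set();
        thread.Join();
    }

    private void Loop()
    {
        while (running)
        {
            signal.WaitOne(); // sleep until poked
            if (!running) break;
            // ... do the monitoring work here ...
        }
    }
}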
A: A ThreadStateException is thrown because you're trying to start a thread that's not in a startable state. The most likely situations would be that it's already running, or that it has fully exited.
There are potentially a couple of things that might be happening. First, the thread might have transitioned from Running to StopRequested, which isn't fully stopped yet, so your logic doesn't create a new thread, and you're trying to start a thread which has just finished running or is about to finish running (neither of which is a valid state for restarting).
The other possibility is that the thread was aborted. Threads which are aborted go to the Aborted state, not the Stopped state, and of course are also not valid for restarting.
Really, the only kind of thread that is still alive that can be "restarted" is one that's suspended. You might want to use this conditional instead:
if (this.mThread == null || this.mThread.ThreadState != ThreadState.Suspended)
| {
"language": "en",
"url": "https://stackoverflow.com/questions/13170",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
} |
Q: How do I integrate my continuous integration system with my bug tracking system? I use cruisecontrol.rb for CI and FogBugz for bug tracking, but the more general the answers, the better.
First is the technical problem: is there an API for FogBugz? Are there good tutorials, or better yet, pre-written code?
Second is the procedural problem: what, exactly, should the CI put in the bug tracker when the build breaks? Perhaps:
Title: "#{last committer} broke the build!"
Body: "#{ error traces }"
I suppose this presupposes the answer to this question: should I even put CI breaks into my bug tracking?
A: At my company we've recently adopted the (commercial) Atlassian stack - including JIRA for issue tracking and Bamboo for builds. Much like the Microsoft world (I'm guessing - we're a Java shop), if you get all your products from a single vendor you get the bonus of tight integration.
For an example of how they've done interoperability, view their interoperability page.
Enough shilling. Generally speaking, I can summarize their general approach as:
*
*Create issues in your bug tracker (ex: issue key of PROJ-123).
*When you commit code, add "PROJ-123" to your commit comment to indicate what bug this code change fixes.
*When your CI server checks out the code, scan the commit comments of the diffs. Record any strings matching the regex of your issue keys (a minimal sketch of this scan follows the list).
*When the build completes, generate a report of what issue keys were found.
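Here is that sketch in C#; the key format and all names are assumptions for illustration, not part of any product's API:
using System;
using System.Collections.Generic;
using System.Text.RegularExpressions;

static class IssueKeyScanner
{
    // Matches keys like PROJ-123 (assumed format: capital letters/digits,
    // a dash, then a number).
    static readonly Regex KeyPattern =
        new Regex(@"\b[A-Z][A-Z0-9]+-\d+\b", RegexOptions.Compiled);

    public static IEnumerable<string> Scan(string commitComment)
    {
        foreach (Match m in KeyPattern.Matches(commitComment))
            yield return m.Value;
    }

    static void Main()
    {
        foreach (string key in Scan("PROJ-123: fix null check; see also PROJ-7"))
            Console.WriteLine(key); // PROJ-123, then PROJ-7
    }
}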
Specifically to your second problem:
Your CI doesn't have to put anything into your bug tracker. Bamboo doesn't put anything into JIRA. Instead, the Atlassian folks have provided a plugin to JIRA that will make a remote API call into Bamboo, asking the question "Bamboo, to what builds am I (a JIRA issue) related?". This is probably best explained with a screenshot.
A: All the CI setups I've worked with send an email (to a list), but if you wanted to (especially if your team uses FogBugz as a to-do system) you could just open a case in FogBugz 6. It has an API that lets you open cases. For that matter, you could just configure it to send the email to your FogBugz email submission address, but the API might let you do more, like assigning the case to the last committer.
Brian's answer suggests to me, if your CI finds a failure in a commit that had a case number, you might even just reopen the existing case. Like codifying a case field for every little thing, though, there's a point where the CI automation could be "too smart," get it wrong, and just be annoying. Opening a new case could be plenty.
And thanks: this makes me wonder if I should try integrating our Chimps setup with our FogBugz!
A: CC comes with a utility that warns you when builds fail; it probably isn't worth logging the failing build in FogBugz - you don't need to track issues that are immediately resolved (as most broken builds will be).
To go the other way round (FogBugz showing checkins that fixed the issue) you need a web based repository browser - FogBugz is easy to configure so that it shows the right changes.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/13200",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
} |
Q: Why Doesn't My Cron Job Work Properly? I have a cron job on an Ubuntu Hardy VPS that only half works and I can't work out why. The job is a Ruby script that uses mysqldump to back up a MySQL database used by a Rails application, which is then gzipped and uploaded to a remote server using SFTP.
The gzip file is created and copied successfully but it's always zero bytes. Yet if I run the cron command directly from the command line it works perfectly.
This is the cron job:
PATH=/usr/bin
10 3 * * * ruby /home/deploy/bin/datadump.rb
This is datadump.rb:
#!/usr/bin/ruby
require 'yaml'
require 'logger'
require 'rubygems'
require 'net/ssh'
require 'net/sftp'
APP = '/home/deploy/apps/myapp/current'
LOGFILE = '/home/deploy/log/data.log'
TIMESTAMP = '%Y%m%d-%H%M'
TABLES = 'table1 table2'
log = Logger.new(LOGFILE, 5, 10 * 1024)
dump = "myapp-#{Time.now.strftime(TIMESTAMP)}.sql.gz"
ftpconfig = YAML::load(open('/home/deploy/apps/myapp/shared/config/sftp.yml'))
config = YAML::load(open(APP + '/config/database.yml'))['production']
cmd = "mysqldump -u #{config['username']} -p#{config['password']} -h #{config['host']} --add-drop-table --add-locks --extended-insert --lock-tables #{config['database']} #{TABLES} | gzip -cf9 > #{dump}"
log.info 'Getting ready to create a backup'
`#{cmd}`
# Strongspace
log.info 'Backup created, starting the transfer to Strongspace'
Net::SSH.start(ftpconfig['strongspace']['host'], ftpconfig['strongspace']['username'], ftpconfig['strongspace']['password']) do |ssh|
ssh.sftp.connect do |sftp|
sftp.open_handle("#{ftpconfig['strongspace']['dir']}/#{dump}", 'w') do |handle|
sftp.write(handle, open("#{dump}").read)
end
end
end
log.info 'Finished transferring backup to Strongspace'
log.info 'Removing local file'
cmd = "rm -f #{dump}"
log.debug "Executing: #{cmd}"
`#{cmd}`
log.info 'Local file removed'
I've checked and double-checked all the paths and they're correct. Both sftp.yml (SFTP credentials) and database.yml (MySQL credentials) are owned by the executing user (deploy) with read-only permissions for that user (chmod 400). I'm using the 1.1.x versions of net-ssh and net-sftp. I know they're not the latest, but they're what I'm familiar with at the moment.
What could be causing the cron job to fail?
A: When scripts run correctly interactively but not under cron, the problem is usually caused by the environment settings in place ... for example the PATH, as already mentioned by @Ted Percival, but there may be other environment variables involved too.
This is because cron will not invoke .bash_profile, .bashrc or /etc/profile before executing.
The best way to avoid this is to ensure any scripts invoked by cron do not make any assumptions about the environment when executing. Overcoming this can be as simple as including a few lines in your script to make sure the environment is set up properly. For example, in my case I have all the significant settings in /etc/profile (for RHEL), so I include the following line in any scripts to be run under cron:
source /etc/profile
A: Looks like your PATH is missing a few directories, most importantly /bin (for /bin/rm). Here's what my system's /etc/crontab uses:
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
A: Are you sure the temporary file is being created correctly when running as a cron job? The working directory for your script will either be specified in the HOME environment variable, or the /etc/passwd entry for the user that installed the cron job. If deploy does not have write permissions for the directory in which it is executing, then you could specify an absolute path for the dump file to fix the problem.
A: Is cron sending emails with logs?
If not, pipe the output of cron to a log file.
Make sure to redirect STDERR to the log.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/13204",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
} |
Q: How to avoid conflict when not using ID in URLs I often see (rewritten) URLs without an ID in them, like on some WordPress installations. What is the best way to achieve this?
Example: site.com/product/some-product-name/
Maybe to keep an array of page names and IDs in cache, to avoid DB query on every page request?
How do you avoid conflicts, and what are the other issues with using URLs without IDs?
A: Using an ID presents the same conundrum, really--you're just checking for a different value in your database. The "some-product-name" part of your URL above is also something unique. Some people call them slugs (Wordpress, also permalinks). So instead of querying the database for a row that has the particular ID, you're querying the database for a row that has a particular slug. You don't need to know the ID to retrieve the record.
A: As long as product names are unique it shouldn't be an issue. It won't take any longer (at least not significant) to look up a product by unique name than numeric ID as long as the column is indexed.
A: Wordpress has a field in the wp_posts table for the slug. When you create the post, it creates a slug from the post title (if that's how you have it configured), replacing spaces with dashes (or I think you can set it to underscores). It also takes out the apostrophes, commas, or whatnot. I believe it also limits the overall length of the slug, too.
So, in short, it isn't dynamically decoding the URL into the post's title--there's a field in the table that matches the URL version of the post name directly.
A: As you may or may not know, the URLs are being re-written with Apache's mod_rewrite module. As mentioned here, Wordpress is, in the background, assigning a slug after sanitizing the title or post name.
But, to answer your question, what you're describing is Wordpress' "Pretty Permalinks" feature and you can learn more about it in the Wordpress codex. Newer versions of Wordpress do the re-writing internally (no .htaccess editing, wp_rewrite instead), which is why you'll see the same ruleset for any permalink structure.
Though, if you do some digging you can find the old rewrite rules. For example:
RewriteRule ^([0-9]{4})/([0-9]{1,2})/([0-9]{1,2})/?$ /index.php?year=$1&monthnum=$2&day=$3 [QSA,L]
Will take a URL like /2008/01/01/ and direct it to /index.php?year=2008&monthnum=01&day=01 (and load a date category).
But, as mentioned, a page like product-name exists only because Wordpress already sanitized the post title and stored it as a field in the database.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/13213",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: How do I update my UI from within HttpWebRequest.BeginGetRequestStream in Silverlight I am uploading multiple files using the BeginGetRequestStream of HttpWebRequest but I want to update the progress control I have written whilst I post up the data stream.
How should this be done? I have tried calling Dispatcher.BeginInvoke (as below) from within the loop that pushes the data into the stream, but it locks the browser until it's finished, so it seems to be in some sort of worker/UI thread deadlock.
This is a code snippet of pretty much what I am doing:
class RequestState
{
public HttpWebRequest request; // holds the request
public FileDialogFileInfo file; // store our file stream data
public RequestState( HttpWebRequest request, FileDialogFileInfo file )
{
this.request = request;
this.file = file;
}
}
private void UploadFile( FileDialogFileInfo file )
{
UriBuilder ub = new UriBuilder( app.receiverURL );
ub.Query = string.Format( "filename={0}", file.Name );
// Open the selected file to read.
HttpWebRequest request = (HttpWebRequest)WebRequest.Create( ub.Uri );
request.Method = "POST";
RequestState state = new RequestState( request, file );
request.BeginGetRequestStream( new AsyncCallback( OnUploadReadCallback ), state );
}
private void OnUploadReadCallback( IAsyncResult asynchronousResult )
{
RequestState state = (RequestState)asynchronousResult.AsyncState;
HttpWebRequest request = (HttpWebRequest)state.request;
Stream postStream = request.EndGetRequestStream( asynchronousResult );
PushData( state.file, postStream );
postStream.Close();
state.request.BeginGetResponse( new AsyncCallback( OnUploadResponseCallback ), state.request );
}
private void PushData( FileDialogFileInfo file, Stream output )
{
byte[] buffer = new byte[ 4096 ];
int bytesRead = 0;
Stream input = file.OpenRead();
while( ( bytesRead = input.Read( buffer, 0, buffer.Length ) ) != 0 )
{
output.Write( buffer, 0, bytesRead );
bytesReadTotal += bytesRead;
App app = App.Current as App;
int totalPercentage = Convert.ToInt32( ( bytesReadTotal / app.totalBytesToUpload ) * 100 );
// enabling the following locks up my UI and browser
Dispatcher.BeginInvoke( () =>
{
this.ProgressBarWithPercentage.Percentage = totalPercentage;
} );
}
}
A: I was going to say that I didn't think Silverlight 2's HttpWebRequest supported streaming, because the request data gets buffered into memory entirely. It had been a while since I last looked at it though, so I went back to see if Beta 2 supports it. Well, it turns out it does; I am glad I went back and read before stating that. You can enable it by setting AllowReadStreamBuffering to false. Did you set this property on your HttpWebRequest? That could be causing your block.
*
*MSDN Reference
*File upload component for Silverlight and ASP.NET
Edit, found another reference for you. You may want to follow this approach by breaking the file into chunks. This was written last March, therefore I am not sure if it will work in Beta 2 or not.
A: Thanks for that, I will take a look at those links. I was considering chunking my data anyway; it seems to be the only way I can get any reasonable progress reports out of it.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/13217",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: Mobile device is detected as non-mobile device I've included a mobile web form in my ASP.NET project. I thought that it could/should be seen only by my mobile users, but I realize that it can also be seen from any browser. I don't see a problem there, since I can differentiate the access using HttpBrowserCapabilities.IsMobileDevice = true and transfer to the appropriate aspx page. But it turns out that when I access the web form from my mobile device, it is identified as IsMobileDevice = false and sends me to the other page.
How is that possible?
The mobile device runs Pocket PC 2003.
A: IMHO: The value of HttpContext.Current.Request.Headers("User-Agent") is a much safer bet, as it actually indicates the browser that is making the request, and not the type of device per se.
I've learnt from experience that if a smart phone makes a request through a third-party browser to your service, more-often-than-not any sort of "what type of device are you" test (HttpCapabilitiesBase.IsMobileDevice and/or HttpContext.Current.Request.Browser.IsMobileDevice) will fail.
Unfortunately though, short of a big list of allowed user-agents (or disallowed user-agents for that matter), you'll just have to make sure it doesn't start with Mozilla, iPhone or Opera before you render the page...
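A hedged sketch of that kind of user-agent test (the prefix list is illustrative, not exhaustive, and the class name is made up):
using System.Web;

public static class MobileCheck
{
    // Crude heuristic along the lines described above: treat these
    // full-browser signatures as candidates for the regular page
    // before rendering the mobile one.
    public static bool LooksLikeFullBrowser(HttpRequest request)
    {
        string ua = request.UserAgent ?? string.Empty;
        return ua.StartsWith("Mozilla")
            || ua.StartsWith("Opera")
            || ua.Contains("iPhone");
    }
}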
It's a hard arena to play in.
Good luck.
A: Some are not recognized because the user agent has been messed with, or a new browser is being used, such as Opera Mobile 9.5. To fix this you need to create a browser definition (*.browser) file specifically for it. I had to do that for the new Mozilla-based user agent that is being sent from Google.
A: I think you should use a DDR (device description repository) better than the Microsoft browser capabilities. I'm using WURFL (http://wurfl.sourceforge.net); it is open source and is arguably more widespread and better maintained. There are other, commercial DDRs like DeviceAtlas.
There are many .NET libraries, but I'm using Marg.Wurfl. It allows rendering mobile web pages using WURFL capabilities.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/13224",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
} |
Q: How can I refactor HTML markup out of my property files? I've recently inherited a internationalized and text-heavy Struts 1.1 web application. Many of the JSP files look like:
<p>
<bean:message key="alert" />
</p>
and the properties files look like:
messages.properties
alert=Please update your <a href="/address.do">address</a> and <a href="/contact.do">contact information</a>.
with the appropriate translations in N other languages (messages_fr.properties, etc).
Problems:
*
*DRY violation - I have N references to my Struts action URLs instead of 1, which makes refactoring action URLs error-prone.
*Mixed concerns - My application's markup is now in more than just my JSP files, making it difficult for a web specialist to tweak the markup (using CSS, etc).
*Post-translation markup - Anytime I receive newly-translated text, I must decide what to surround with the <a>...</a> markup. Easy for English but less so for unfamiliar languages.
I've considered adding placeholders in the messages file, like:
alert=Please update your {0} and {1}.
but then the words "address" and "contact information" would somehow need to be localized, wrapped with markup, and passed to my message tag - and I can't see an easy way to do it.
What can I do to improve this?
A:
Avoid creating links within long blocks of text. Prefer shorter text that can act as a logically complete and independent link.
Generally, it will lead to fewer problems. Sometimes you have to compromise your UI design to accommodate localization; sometimes you need to compromise your localization process to accommodate the UI.
Any time a developer manually manipulates post-translation strings, it is a source of potentially expensive bugs. Cutting/pasting or string editing can result in character corruption, misplaced strings, etc. A translation defect needs the participation of outside parties to fix, which involves cost and takes time.
Thinking on it, something like this might be less ugly:
<p>Please update your address and contact information.
<br />
<a href="/address.do">update address</a>
<br />
<a href="/contact.do">update contact information</a></p>
...but I'm no UI designer.
A: One approach that comes to mind is that you could store the translated replacement parameters i.e. "address" and "contact information" in a separate properties file, one per locale. Then have your Action class (or probably some helper class) look up the values from the correct ResourceBundle for the current locale and pass them to the message tag.
A: Perhaps:
#
alert=Please update your {0}address{1} and {2}contact information{3}.
A:
The message tag API allows only 5 parametric arguments
Ah! I blame my complete ignorance of the Struts API.
To quote the manual:
Some of the features in this taglib are also available in the JavaServer Pages Standard Tag Library (JSTL). The Struts team encourages the use of the standard tags over the Struts specific tags when possible.
You could probably do this with the http://java.sun.com/jsp/jstl/fmt taglib.
<fmt:bundle basename="messages">
<fmt:message key="alert">
<fmt:param value='<a href="/">' />
<fmt:param value="</a>" />
<fmt:param value='<a href="/">' />
<fmt:param value="</a>" />
</fmt:message>
</fmt:bundle>
The downside is that this isn't valid XML and yanking the values to variables involves more indirection, lookups and verbosity. This is not a good solution.
I don't know Struts, but if it is anything like JavaServer Faces (same architect), then there is probably support for configuring a replacement control. I would either replace the existing control with a more flexible one or add a new one.
Anytime I receive newly-translated text, I must decide what to surround with the <a>...</a> markup.
There is no way you should be doing this, and I see it as a fault in your translation process (I am an ex-localization engineer and ex-developer of localization tools). The {0} placeholders should be included in the files that are sent to the translators. The localization guidelines should explain the string's context and the meaning of any variables.
You can programmatically validate the property bundles on return. String-specific regexes might do the trick. It isn't outside the realms of possibility that "address" and "contact information" would swap order during translation.
The simplest solution is to redesign the messages to render:
<a href="/address.do">Please update your address.</a>
<a href="/contact.do">Please update your contact information.</a>
I accept that this might not be a solution in all cases and may have your UI designer spitting teeth.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/13225",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9"
} |
Q: OpenID authentication in Ruby on Rails I am a neophyte with Ruby on Rails but I've created a couple of small apps. Anyway, I'm really interested in OpenID and I would like to implement OpenID authentication and maybe some Sreg stuff in a Rails app. All of the research that I have done has come up with articles that are out of date or just don't work for me. Since I'm so new to Rails I'm having difficulty debugging the issues so...
What is the best way to implement OpenId in Rails?
| {
"language": "en",
"url": "https://stackoverflow.com/questions/13254",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "23"
} |
Q: Any good resources or advice for working with languages with different orientations? (such as Japanese or Chinese) We have an enterprise web application where every bit of text in the system is localised to the user's browser's culture setting.
So far we have only supported English, American (similar but mis-spelt ;-) and French (for the Canadian Gov't - app in English or French depending on user preference). During development we also had some European languages in mind like Dutch and German that tend to concatenate words into very long ones.
We're currently investigating support for eastern languages: Chinese, Japanese, and so on. I understand that these use phonetic input converted to written characters. How does that work on the web? Do the same events fire while inputs and textareas are being edited (we're quite Ajax heavy).
What conventions do users of these top-down languages expect online?
What effect does their dual-input (phonetic typing + conversion) have on web controls?
With RTL languages like Arabic do users expect the entire interface to be mirrored? For instance should things like OK/Cancel buttons be swapped and on the left?
A: As an Arabic speaker, when I do look at Arabic websites, I do expect things like OK/Cancel to be swapped.
When reading Arabic, my eyes read from right to left. So, in situations where you'd want to reader to view an affirmative/action button (e.g. OK, Submit, Yes, etc.) before a negative/inaction button (Cancel, Clear, No, etc.), you'd probably want to put the former on the right.
Caveat: As weird as it sounds, the above only applies (to me personally) when the button text is in Arabic. If the button text is in English (in a mixed-language web page), I'd prefer to see the OK button on the left.
Hope that helps.
A: Read Globalization Step-by-Step by Microsoft.
I can answer the specifics on CJKV, but you probably want a book on this topic. I haven't read it but CJKV Information Processing is from O'Reilly (2nd ed due Dec, 2008).
I understand that these use phonetic input converted to written characters.
How does that work on the web?
The input is done by a class of software called an IME (Input Method Editor) on Windows, Mac, and Linux (e.g. SCIM). When an IME is turned on, the input from the keyboard first goes to the IME, and the user gets to pick the correct kanji/hiragana combination. When the user commits by hitting the return key, the IME types the kanji/hiragana into the web browser using the current encoding. The encoding situation was a big mess, but if you are writing a web app, go with a Unicode encoding. I suggest UTF-8.
Do the same events fire while inputs and textareas are being edited?
A Unicode-savvy web browser and OS combo handles multiple languages. For example, one can use the normal English version of Firefox to browse and post to a Japanese website. From the browser's point of view, it's just an array of "bla bla bla" in Unicode. In other words, if the event fires in English, the same event should fire in CJKV if you use a Unicode variant.
What conventions do users of these top-down languages expect online?
CJKV readers expect left-to-right online. Math and science textbooks are written from left-to-right. Most word processors, including localized version of Word, write left-to-right.
What effect does their dual-input (phonetic typing + conversion) have on web controls?
For the most part you should not have to worry about it, unless you are trapping keyboard events. For example, I hate using a Japanese keyboard with its bunch of extra keys, so I assign the IME on/off command to some key on a US keyboard; I personally use right-Alt. Also, the spacebar and enter key are used during conversion, but I am not sure whether those events are passed to the browser.
A: The directionality question is easy to answer for East Asian languages: websites are left-to-right, top-to-bottom as per usual.
In fact, the general web design layout principles are much the same. Have a look at the websites of a newspaper (name top left, navigation bar under it with "Home" on the left, headline links below with the most important at the top) or a search engine (I don't think I need to say which US site you should compare that layout to).
However, just as Arabic/Hebrew/etc right-to-left language users will expect left-to-right progression in some contexts (embedded English fragments and so on), there are situations, even on the web, where top-to-bottom layout is preferred. This is generally done by including an image with the text layout and font desired, or using flash.
Internet Explorer has actually offered tb-rl layout with the CSS writing-mode property since version 5.5; however, none of the other browsers have bothered implementing it (or ruby text, which is useful for sites aimed at a young audience). IE 5.5 was released in 2000, so that's eight years of support, and there was a W3C candidate recommendation in 2003, but vertical text layout in CSS is still being poked at.
As for your worries with text input and IMEs, as long as you're not doing something bogus like trying to manually translate the virtual keys given by keydown events into text strings, you're unlikely to run into problems.
There are some additional issues you've not mentioned, however. The minimum comfortably readable font size is larger than for languages written with the Latin script. Bold and italic for emphasis in flow are generally not appropriate. Han unification means you need to be picky about specifying the right fonts for the different CJK languages when working with Unicode. You may want to provide both traditional and simplified interfaces for Chinese, depending on what audience you are expecting.
I've been meaning to write up a more comprehensive guide along these lines for a while, if you need more information feel free to kick me.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/13279",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9"
} |
Q: How can I determine CodeIgniter speed? I am thinking of using a PHP framework called CodeIgniter.
One of the things I am interested in is its speed. I have, however, no way to find out how fast it is, and would rather not simply take the word of their website for it. Does anybody know how I can determine its speed myself, or can someone tell me of a site that can?
A: Yes, the problem is you have to build your application to profile it.
At work we had a couple of projects written outside which we load-tested before putting them on our main boxes. We were quite surprised to find critical performance problems with both; one was written in CakePHP and the other was written using Drupal. I don't think this highlights a problem with any framework or CMS other than the need to do profiling and load-testing on any site which is going to get significant traffic. In both cases it was what the developer had done, rather than the characteristics of the software platform, that caused the problem. For example, there was a recursive function call the developer had created in the Cake project which instantiated the entire Cake object every recursion and this would have taken out the server had it gone live under load.
In my opinion performance should not be a deciding factor in choosing a framework; the objective differences are likely to be marginal and the way you use it is likely to cause far more performance problems than the inherent performance of the framework.
I believe that to scale any PHP application to run under load, you will need an opcode cache and you'll need to write in intelligent content caching using something like memcached or whatever built-in caching your framework supports.
A: If your site is database-driven I would be very surprised if your bottleneck would be the application framework. "Fast" as in faster development is what I would worry about rather than "fast" as in speedy handling of requests. Significant optimization is better done by caching strategies and optimizing your database access.
Besides database access, your own code will be where most of the time for each request is spent (and even that is usually not significant compared to database access); the framework will likely not affect the time spent on a request unless it is really badly written.
It may be better to look for a framework with good caching support (which Code Igniter may have, I don't know); that will almost always save you more time than the few milliseconds you could shave off request handling by using a slightly faster framework.
Have a look at the Zend Framework too, it has the benefit of being PHP 5, whereas Code Igniter is still PHP 4, as I understand it. That may be an issue when it comes to speed, but in favor of which framework I don't know. Zend has good caching support and a database profiler that can help you find where your bottlenecks are.
A: I'd recommend testing it for yourself. Use xdebug's profiler to create a cachegrind-compatible file and webgrind to visualize the file.
That way you end up with very reliable information.
A: Code Igniter also has some built-in benchmarking tools:
http://codeigniter.com/user_guide/general/profiling.html
A: CodeIgniter is plenty fast for most projects. Some have posted here and if you Google, you will find that it compares favorably to other frameworks with respect to speed.
I would agree with another poster that performance is usually not a big concern when it comes to framework choice. The major frameworks all have sufficient performance for most projects.
A: I maintain a site that gets slammed a few times a year. Last year the development team rewrote the entire site using Codeigniter and we have had much luck in terms of performance. Additionally, the time it took to perform the rewrite was minimal as this framework is quite easy to work with. CakePHP in my opinion is also a good choice if you find that you don't like Codeigniter.
A: For CodeIgniter and other PHP frameworks, PHP Quick Profiler is very handy for benchmarking and measuring speed especially for database queries. You must check this out:
php-quick-profiler
It's very easy to install and provides an awesome GUI for examining different benchmarking tests.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/13293",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "19"
} |
Q: Image manipulation in asp.net/c# without System.Drawing/GDI+ Is there any alternative image manipulation library for .net? I would prefer something that is managed and open source.
I ask this because of two reasons:
*
*I have encountered hard to debug GDI+ errors with System.Drawing in the past
*I have read that using System.Drawing in asp.net web applications is not 100% supported.
Thanks!
edit: clarification, I know that System.Drawing can work asp.net web apps - I have used it in the past. I really just wonder if there are any managed image manipulation libraries for .net :)
A: You should look into the WPF Imaging libraries shipped with .NET 3.0. They're optimized and robust (used to run Aero, so you know they're efficient). They don't depend on the WPF dispatcher, are easily extensible, and officially supported. What more could you want?
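As a rough sketch of what that can look like (assuming a project that references the WPF assemblies WindowsBase and PresentationCore; the file paths are illustrative), scaling and re-encoding an image with the WPF imaging stack avoids GDI+ entirely:
using System;
using System.IO;
using System.Windows.Media;
using System.Windows.Media.Imaging;
class Thumbnailer
{
    static void Main()
    {
        // Decode the source image (BitmapImage handles PNG, JPEG, etc.)
        var source = new BitmapImage(new Uri(@"C:\images\input.png"));
        // Scale to 50% with a TransformedBitmap - no GDI+ involved
        var scaled = new TransformedBitmap(source, new ScaleTransform(0.5, 0.5));
        // Re-encode the result as PNG
        var encoder = new PngBitmapEncoder();
        encoder.Frames.Add(BitmapFrame.Create(scaled));
        using (var stream = File.Create(@"C:\images\output.png"))
            encoder.Save(stream);
    }
}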
A: I don't know of any fully-managed 2D drawing libraries that are either free or open-source (there appears to be a few commercially available, but OSS is the way to go). However, you might look into the Mono bindings to Cairo.
Cairo is a platform independent 2D drawing API. You can find more information about it at the Cairo homepage. The Cairo Wikipedia page also has some good info.
Cairo is also used fairly widely in the Open Source world, which to me says something about its robustness. Mozilla, Webkit, and Mono all use it, among others. Ironically, Mono actually uses it to back their System.Drawing implementation... go figure.
There might also be a way to use Mono's System.Drawing implementation as a drop-in replacement for the Microsoft implementation, though I'm not sure how or if that would even work. I would probably start by replacing the System.Drawing.dll reference with Mono's version, and then try to deal with any errors.
A: Anecdotal evidence #1: I have used GDI+ for on-the-fly image creation within ASP.NET with no problems. I'm not sure what the problems would even be.
A: With respect to (1), most of the hard to debug errors are due to not closing open handles (Dispose() in managed-land). I'm curious where you heard (2).
| {
"language": "en",
"url": "https://stackoverflow.com/questions/13299",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12"
} |
Q: How much of your work day is spent coding? I've been thinking about software estimation lately, and I have a bunch of questions around time spent coding. I'm curious to hear from people who have had at least a couple years of experience developing software.
When you have to estimate the amount of time you'll spend working on something, how many hours of the day do you spend coding? What occupies the other non-coding hours?
Do you find you spend more or less hours than your teammates coding? Do you feel like you're getting more or less work done than they are?
What are your work conditions like? Private office, shared office, team room? Coding alone or as a pair? How has your working condition changed the amount of time you spend coding each day? If you can work from home, does that help or hurt your productivity?
What development methodology do you use? Waterfall? Agile? Has changing from one methodology to another had an impact on your coding hours per day?
Most importantly: Are you happy with your productivity? If not, what single change would you make that would have the most impact on it?
A: I work a 37.5 hour week.
30 of those hours (80%) I am supposed to be billing our clients.
In reality I find that I use about 60% coding on actual client systems, 20% experimenting with new techniques and reading blogs, and 20% is wasted on office politics and "socializing".
Am I happy about it?
Do I wish that I could stare at the screen 30 hours a week coding on my given assignments?
Well. Since 20% of the time is used bettering myself at my craft, in the 60% that is effective coding I probably produce more than I would in 90% of my time if I didn't.
Then again, try to explain that fact to the higher ups ;)
A:
Well, I generally come in at least fifteen minutes late, ah, I use the side door - that way Lumbergh can't see me, heh heh - and, uh, after that I just sorta space out for about an hour.
...Yeah, I just stare at my desk; but it looks like I'm working. I do that for probably another hour after lunch, too. I'd say in a given week I probably only do about fifteen minutes of real, actual, work.
For me, switching between projects is a big cause of procrastination. When I've just finished a project I tend to procrastinate on kicking off the next requirement assigned to me. My mind still feels like it's in coding mode, but I then have to estimate the expenses for creating a spec first. So I have to switch from coding to calling customers and the like, which feels uncomfortable.
What helps me most in being productive is to cut away any distraction in the first hours of the day and starting immediately with the day's most important task. I need to get into the flow as early as possible.
I recommend having a look at The Programmers’ Stone:
We know that stress impairs some cognitive functions. The loss of those functions can precisely explain why programming is hard, and show us many other opportunities to improve the ways we organize things. The consequences roll out to touch language, logic and cultural norms. Click here for the Introduction...
A: I spend about 40% of my day coding. 40% goes to non-coding activities (such as fighting with our sketchy build server, or figuring out why NUnit failed with no error message again, or trying to figure out why our code has stopped talking to the Oracle server downstairs... junk like that). The other 20% is usually spent procrastinating, or in meetings.
Am I happy with my productivity? Absolutely not. I work 7ish hours/day, and I spend about 2.5 of that coding. I would much rather be spending 5-6 hours of my day coding, with only an hour dedicated to all the other stuff (sadly, the one thing that would make that happen -- that the PM stop diddling with the build scripts every day -- isn't going to happen). Unfortunately, since I am a corporate developer, management doesn't see the time being frittered away. Because I get so much more done in that 40% of my day than most of the drones in the building get done in a week (including the PM), they think I'm productive.
A: @Bernard Dy: I have spent probably 30% of my career in corporate settings (am not at the moment). Usually it's after some failed (or not failed, but fizzled) start-up idea, or some kind of burnout/change. It's ok for a little bit, and it is nice to meet people from totally different backgrounds (who would have thought that lawyers and actuaries could be so much fun to hang out with), but in the end I just find it too hard to get up in the morning with motivation (or after a holiday, dread going back) - probably for reasons like you define (just a lack of care). But it's good experience and a source of ideas at the least. And you can meet brilliant people everywhere (it's not just programmers who are smart - I always tried to seek out who the real brains were behind a business).
Interestingly, the only time I have practiced strict agile/XP was in a corporate setting - in that case probably 7 hours a day was actual hands-on code (in a pair) - I have never been so exhausted after a day of that. Not sure if that is a good thing; perhaps I am just lazy.
A: I'm a corporate developer, the kind Joel Spolsky called "depressed" in a couple of the StackOverflow podcasts. Because my company is not a software company it has little business reason to implement many of the measures software experts recommend companies engage for developer productivity.
We don't get private offices and dual 30 inch monitors. Our source control system is Microsoft Visual Source Safe. Enough said. On the other hand, I get to do a lot of things that fill out my day and add some variety to my job. I get involved in business analysis, project management, development, production support, international implementations, training support, team planning, and process improvement.
I'd say I get 85% of my day to code, when I can focus and I have a major programming task. But more often I get about 50% of my day for coding. If production support (non coding-related) is heavy I may only get 15% of my day to code.
Most of the companies I've worked for were not actively engaged in evaluating agile processes or test-driven development, but they didn't do a good job of waterfall either; most of their developers worked like cut-and-paste cowboys with impunity.
On occasion I do work from home and with kids, it's horrible. I'm more productive at work.
My productivity is good, but could be better if the interruption factor and cost of mental context switching was removed. Production support and project management overhead both create those types of interruptions. But both are necessary parts of the job, so I don't think I can get rid of them. What I would like to consider is a restructuring of the team so that people on projects could focus on projects while the others could block the interruptions by being dedicated to support. And then swapping when the project is over.
Unfortunately, no one wants to do support, so the other productivity improvement measure I'd wish for would be one of the following:
*
*Better testing tools/methodologies to speed up unit testing
*Better business analysis tools/skills to improve the quality of new development and limit its contributions to the production support load
A: To answer some of my own questions:
The current team I'm on does only does gross task estimation, so it's hard to track hours per days. I would say that, for my career, the time spent coding has been anywhere between 25% (mostly management) to 85%+ (working from home 4 days a week, get together for a meeting for half a day once a week). If I had to guess, though, the average is probably somewhere in the vicinity of 60%.
The biggest influence for me in time spent coding is the presence or absence of meetings. When I worked on agile projects with everybody in the same room, meetings tended to be ad-hoc and very short, so the time spent coding was very high. I also felt I spent less time -- sometimes a lot less time -- doing non-coding things when I was in a team room, because it's much easier to waste time, accidentally or otherwise, when nobody has a clear view of your monitor. :)
A: I do outsourcing and basically code all day. I have two projects and don't have much time for anything else, which means I can't take on more work because I couldn't finish it. That is a good policy: you should take on only as much as you can handle.
Remember also that you should have spare time, and it is very important to rest enough, because if you don't you won't be very productive. The key here is planning and discipline.
I spend my non-coding time with my wife; I also like to get out of town and try not to think about my projects. The more I keep this balance, the more productive I am.
When I don't have much work I like to read programming blogs and study programming.
And finally I would like to say that IMHO our career should not be seen as just work; instead, you should see it as something fun.
A: Realistically, it probably averages out to 4 or 5 hours a day, although it's "lumpy" - there may be days with 8 or 9 hours of it.
Of all the software developers I know who write production code (as opposed to research), 4 to 5 seems to be the max of actual coding. There is a lot of other stuff that goes on.
And to be honest there is a lot of procrastination. I find it's a bit like writer's block: sometimes it's just hard to get started, but then a good 2-hour session is a LOT of work done. It's just all the preparation you have to go through, the experimentation to make sure you are taking the right approach, the endless amount of staring out the window and checking email, etc...
A: I'm a software developer in an R&D department working 40 hour a week.
I spend like... 10% of my time actually coding.
In my non-coding hours I mostly test, evaluate, compare, and write down results. I also spend a lot of time writing specifications for the code I will write and researching it, and I participate in brainstorming meetings for the current projects, etc.
I could say that among my teammates (also software developers) I am the one who codes most at the moment, but it depends on which tasks we are working on at any given time.
I would not equate actual coding with working hard. If there is a good specification, proper research, and a good understanding of the project, coding is just a formality and goes almost smoothly and quickly.
Here we have a shared office, with two teams. We mostly code alone, rarely in a pair. How I work has changed the amount of time I spend coding a lot; in the past I spent most of my time coding without a very good understanding of what I was coding. If I had a task I would immediately start coding, and re-code each time I realised I did something wrong, and so on. It was very ineffective.
The development methodology is somewhere between prototyping and spiral now. It has clearly changed the number of hours I code.
I am happy with my productivity, related to my deadlines and goals.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/13343",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "28"
} |
Q: Firefox plugin - sockets I've always wanted a way to make a socket connection to a server and allow the server to manipulate the page DOM. For example, this could be used in a stock quotes page, so the server can push new quotes as they become available.
I know this is a classic limitation (feature?) of HTTP's request/response protocol, but I think this could be implemented as a Firefox plugin (cross-browser compatibility is not important for my application). Java/Flash solutions are not acceptable, because (as far as I know) they live in a box and can't interact with the DOM.
Can anyone confirm whether this is within the ability of a Firefox plugin? Has someone already created this or something similar?
A: You may want to look at Comet which is a fancy name for a long running HTTP connection where the server can push updates to the page.
A: It should be possible. I have developed a xulrunner application that connects to a TCP server using sockets. Extension development would likely have the same capabilities. I used a library from mozdev - JSLib. Specifically, check out the networking code. The fact that there is a JSLib add-on for Firefox makes me more confident.
Essentially, as I understand it, sockets are not part of JavaScript, but through XPCOM, you can get raw socket access like you would in any c/c++ application.
Warning: JSLib doesn't seem to receive a lot of attention and the mailing list is pretty sparse.
A:
Java/Flash solutions are not acceptable, because (as far as i know)
they live in a box and can't interact with the DOM.
That's not actually true of Java. You can interact with Java via JavaScript and make DOM changes.
http://stephengware.com/proj/javasocketbridge/
In this example there are two JavaScript methods for interaction
Send:
socket_send("This was sent via the socket\n\n");
Receive:
function on_socket_get(message){ more_code(message); }
A:
You may want to look at Comet
a.k.a. server push. This does not let the server "update" the client page directly, but all the new data is sent to the page through a single connection.
Of course, a Firefox extension (as well as plugins, which are binary libraries that can do whatever any other application can do) can work with sockets too. See 1, 2.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/13345",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: Developing for multiple monitors We are currently working on a new version of our main application. one thing that I really wish to work on is providing support for multiple monitors. Increasingly, our target users are adding second screens to their desktops and I think our product could leverage this extra space to improve user performance.
Our application is a financial package that supports leasing and fleet companies - a very specialised market. That being said, I am sure that many people with multiple monitors have a favourite bit of software that they think would be improved if it supported those extra screens better.
I'm looking for some opinions on those niggles that you have with current software, and how you think they could be improved to support multi-monitor setups. My aim is to then review these and decide how I can implement them and, hopefully, provide an even better environment for my users.
Your help is appreciated.
Thank you.
A: Please, please, please: if you remember window positions for multiple monitors, detect whether the second monitor is connected. I have a laptop that is sometimes docked, and it is very annoying when I try to open a window and it opens off-screen.
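A minimal sketch of that check in WinForms (EnsureVisible is my own helper name, not a framework API): before restoring a saved position, verify that it still lands on a connected screen.
using System.Drawing;
using System.Windows.Forms;
static class WindowPlacement
{
    // Returns a safe location for a window last seen at savedLocation,
    // falling back to the primary screen if the monitor it was on
    // is no longer connected.
    public static Point EnsureVisible(Point savedLocation, Size windowSize)
    {
        var window = new Rectangle(savedLocation, windowSize);
        foreach (Screen screen in Screen.AllScreens)
        {
            if (screen.WorkingArea.IntersectsWith(window))
                return savedLocation; // still at least partially on-screen
        }
        return Screen.PrimaryScreen.WorkingArea.Location;
    }
}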
A: It's annoying when I drag a window to another monitor and then, if the application generates a popup dialog or spawns another window, that popup/dialog gets displayed back on the primary monitor.
I haven't developed for multi-monitors, but I think this can be better handled if you position child windows/dialogs centered on their parent window, rather than on the desktop center (which I'm guessing is what happens in the case I describe above).
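In WinForms, one way to get that behavior is to let the framework center the dialog on its owner; a small sketch (OptionsDialog stands in for whatever form you're showing, and this code lives inside the parent form):
using (var dialog = new OptionsDialog())
{
    // CenterParent keeps the dialog on whichever monitor the owner is on
    dialog.StartPosition = FormStartPosition.CenterParent;
    dialog.ShowDialog(this); // pass the parent form as the owner
}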
A: I'm going to have to give a nod in dbkk's direction as they captured a couple of the major points that you need to remember.
Also, I would suggest paying attention to how you use dual monitors yourself and trying to keep that in mind as you develop. Generally, you should avoid doing the things that annoy you in the applications you use. Also, don't assume that just because the user has dual monitors they will want to work with your application on dual monitors.
The biggest thing that I would stress is keeping track of where the focus is in the application and making sure that any pop-ups occur within that region; one of the things that people seem to dislike the most is having a window pop up on a different monitor than the one they are working on.
A: Definitely keep dialogs near where you clicked to bring them up. Remember what monitor the window is on between sessions. Be aware that if they have less monitors than the last time your app was run that you need to bring the windows back to a visible area. Provide an icon or button to switch monitors. Depending on the type of app it may be useful to be able to easily tile your app's windows on a monitor or on all.
A: Few random tips:
*
*If multiple windows can be open at one time, allow users to have them on separate screens. Seems obvious, but some very popular apps (e.g. Visual Studio) fail miserably at this.
*Remember the position of the last opened window, and open new windows on the same screen as before. However, sometimes users switch between multiple and single-display setups (e.g. docking a laptop with an external CRT), so cover this case as well.
*Consider how your particular users work, and how having two maximized windows simultaneously might help. Often, there is a (fairly passive) window for reference (e.g. a web browser/help) and an active window for data entry (e.g. editor/database) that users switch between.
*Do not put toolboxes/toolbars on a different window than objects they operate on (it's inconvenient to move the mouse so far).
A: Apple's Human Interface Guidelines for the Mac have covered window management on multiple displays since 1987, when the Mac II was introduced with six slots that could all contain a graphics card. The guidelines address a few situations you might not think of at first when implementing multiple-window support. For example, if a window spans multiple displays, which display should new windows be opened on? There's an answer around Figure 14-33 in the chapter dealing with window behavior.
Microsoft may have something similar now for Windows developers to follow; if that's the case, check it out and follow their guidelines since you don't want to behave differently than the other apps on the system (or that your users are used to) for no good reason. However, if there are no guidelines, follow Apple's as they're fairly well thought-through and were originally developed through experimentation and research.
A: One thing to keep in mind is that the user may have more than two monitors. My main system has six monitors, and I've run 4+ monitors on Linux, Windows, and Mac OS. Many applications--even multi-monitor utilities--will support 2 monitors but freak out over more than 2.
Applications work best when they know about where their windows are and relate to the locations of those windows. And as someone else mentioned, if you're going to remember where a window was, make sure that geometry still makes sense when the user comes back.
If the OS/window system dispatches an event related to the change of screen geometry, handle it if you're doing anything funky.
I think most applications that are well coded generally work these days.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/13347",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9"
} |
Q: What are the advantages of using a single database for EACH client? In a database-centric application that is designed for multiple clients, I've always thought it was "better" to use a single database for ALL clients - associating records with proper indexes and keys. In listening to the Stack Overflow podcast, I heard Joel mention that FogBugz uses one database per client (so if there were 1000 clients, there would be 1000 databases). What are the advantages of using this architecture?
I understand that for some projects, clients need direct access to all of their data - in such an application, it's obvious that each client needs their own database. However, for projects where a client does not need to access the database directly, are there any advantages to using one database per client? It seems that in terms of flexibility, it's much simpler to use a single database with a single copy of the tables. It's easier to add new features, it's easier to create reports, and it's just easier to manage.
I was pretty confident in the "one database for all clients" method until I heard Joel (an experienced developer) mention that his software uses a different approach -- and I'm a little confused with his decision...
I've heard people cite that databases slow down with a large number of records, but any relational database with some merit isn't going to have that problem - especially if proper indexes and keys are used.
Any input is greatly appreciated!
A: Scalability. Security. Our company uses 1 DB per customer approach as well. It also makes code a bit easier to maintain as well.
A: Assume there's no scaling penalty for storing all the clients in one database; for most people, and well configured databases/queries, this will be fairly true these days. If you're not one of these people, well, then the benefit of a single database is obvious.
In this situation, benefits come from the encapsulation of each client. From the code perspective, each client exists in isolation - there is no possible situation in which a database update might overwrite, corrupt, retrieve or alter data belonging to another client. This also simplifies the model, as you don't need to ever consider the fact that records might belong to another client.
You also get the benefit of separability - it's trivial to pull out the data associated with a given client and move it to a different server, or to restore a backup of that client when they call up to say "We've deleted some key data!", using the built-in database mechanisms.
You get easy and free server mobility - if you outgrow one database server, you can just host new clients on another server. If they were all in one database, you'd need to either get beefier hardware or run the database over multiple machines.
You get easy versioning - if one client wants to stay on software version 1.0, and another wants 2.0, where 1.0 and 2.0 use different database schemas, there's no problem - you can migrate one without having to pull them out of one database.
I can think of a few dozen more, I guess. But all in all, the key concept is "simplicity". The product manages one client, and thus one database. There is never any complexity from the "But the database also contains other clients" issue. It fits the mental model of the user, where they exist alone. Advantages like being able to do easy reporting on all clients at once are minimal - how often do you want a report on the whole world, rather than just one client?
A: In regulated industries such as health care it may be a requirement of one database per customer, possibly even a separate database server.
The simple answer to updating multiple databases when you upgrade is to do the upgrade as a transaction, and take a snapshot before upgrading if necessary. If you are running your operations well then you should be able to apply the upgrade to any number of databases.
Clustering is not really a solution to the problem of indices and full table scans. If you move to a cluster, very little changes. If you have many smaller databases to distribute over multiple machines, you can do this more cheaply without a cluster. Reliability and availability are considerations but can be dealt with in other ways (some people will still need a cluster but the majority probably don't).
I'd be interested in hearing a little more context from you on this, because clustering is not a simple topic and is expensive to implement in the RDBMS world. There is a lot of talk/bravado about clustering in the non-relational world (Google Bigtable, etc.), but they are solving a different set of problems and lose some of the useful features of an RDBMS.
A: Here's one approach that I've seen before:
*
*Each customer has a unique connection string stored in a master customer database.
*The database is designed so that everything is segmented by CustomerID, even if there is a single customer on a database.
*Scripts are created to migrate all customer data to a new database if needed, and then only that customer's connection string needs to be updated to point to the new location.
This allows for using a single database at first, and then easily segmenting later on once you've got a large number of clients, or more commonly when you have a couple of customers that overuse the system.
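A sketch of that lookup (the table and column names here are hypothetical): the application first asks the master customer database for the tenant's connection string, then opens the tenant's own database with it.
using System.Data.SqlClient;
static string GetCustomerConnectionString(int customerId, string masterDb)
{
    using (var conn = new SqlConnection(masterDb))
    using (var cmd = new SqlCommand(
        "SELECT ConnectionString FROM Customers WHERE CustomerID = @id", conn))
    {
        cmd.Parameters.AddWithValue("@id", customerId);
        conn.Open();
        // Each customer row points at the database holding that customer's data
        return (string)cmd.ExecuteScalar();
    }
}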
I've found that restoring specific customer data is really tough when all the data is in the same database, but managing upgrades is much simpler.
When using a single database per customer, you run into a huge problem of keeping all customers running at the same schema version, and that doesn't even consider backup jobs on a whole bunch of customer-specific databases. Naturally restoring data is easier, but if you make sure not to permanently delete records (just mark with a deleted flag or move to an archive table), then you have less need for database restore in the first place.
A: To keep it simple. You can be sure that your client is only seeing their data. The client with fewer records doesn't have to pay the penalty of competing with hundreds of thousands of records that may be in the database but not theirs. I don't care how well everything is indexed and optimized; there will be queries that still have to scan every record.
A: Well, what if one of your clients tells you to restore to an earlier version of their data due to some botched import job or similar? Imagine how your clients would feel if you told them "you can't do that, since your data is shared between all our clients" or "Sorry, but your changes were lost because client X demanded a restore of the database".
A: As for the pain of upgrading 1000 database servers at once, some fairly simple automation should take care of that. As long as each database maintains an identical schema, then it won't really be an issue. We also use the database per client approach, and it works well for us.
Here is an article on this exact topic (yes, it is MSDN, but it is a technology independent article): http://msdn.microsoft.com/en-us/library/aa479086.aspx.
Another discussion of multi-tenancy as it relates to your data model here: http://www.ayende.com/Blog/archive/2008/08/07/Multi-Tenancy--The-Physical-Data-Model.aspx
A: There are a couple of meanings of "database"
*
*the hardware box
*the running software (e.g. "the oracle")
*the particular set of data files
*the particular login or schema
It's likely Joel means one of the lower layers. In this case, it's just a matter of software configuration management... you don't have to patch 1000 software servers to fix a security bug, for example.
I think it's a good idea, so that a software bug doesn't leak information across clients. Imagine the case with an errant where clause that showed me your customer data as well as my own.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/13348",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "67"
} |
Q: Override tab behavior in WinForms I have a UserControl that consists of three TextBoxes. On a form I can have one or more or my UserControl. I want to implement my own tab behavior so if the user presses Tab in the second TextBox I should only move to the third TextBox if the the second TextBox has anything entered. If nothing is entered in the second TextBox the next control of the form should get focus as per the normal tab behavior. If the user hasn't entered anything in the first or second TextBox and the presses tab there is this special case where a control on the form should be skipped.
By using ProcessDialogKey I have managed to get it working kind of OK, but I still have one problem. My question is whether there is a way to detect how a WinForms control got focus, since I would also like to know whether my UserControl got focus from a Tab or Shift-Tab and then do my weird stuff, but if the user clicks the control I don't want to do anything special.
A: As a general rule, I would say overriding the standard behavior of the TAB key would be a bad idea. Maybe you can do something like disabling the 3rd text box until a valid entry is made in the 2nd text box.
Now, having said this, I've also broken this rule at the request of the customer. We made the enter key function like the tab key, where the enter key would save the value in a text field, and advance the cursor to the next field.
A: I don't think there's a built-in way that you could do it. All of the WinForms focus events (GotFocus,LostFocus,Enter,Leave) are called with empty EventArgs parameters, which will not give you any additional information.
Personally, I would disable the third textbox, as Rob Thomas said. If you're determined to do this, though, it wouldn't be difficult to set up a manual (read: hackish) solution. Once the tab key is pressed (if the focus is on the second textbox), set a variable inside your form. If the next object focused is then the third textbox, then you know exactly how it happened.
A: The reason for this odd tab behavior is all about speed in the input process. It was really good to get some input: I hadn't thought about disabling a textbox, but that could actually work. Using the Enter key to accept the input hadn't even crossed my mind, though, and that will work so much better. The user can enter the numbers and then press Enter to accept the input, and the next possible textbox will be the active one. It's like having your cake and eating it too. The speed factor is there since no unnecessary tabbing must be done to get to the correct field, and using the Enter key next to the numeric keypad makes it really smooth.
Thanks for the input!
A: I agree with DannySmurf. Messing with the tab order might give you hell later on if the requirements for the application change.
Another thing that you could do is to implement some kind of wizard for the user to go through.
A: Better than disabling controls, try monkeying around with TabStop - if this is false, the control will be simply skipped when tabbing.
I'd also suggest that the Changed event of the TextBox is the place to be updating TabStop on the other controls.
I've done something similar to this with a login control, where users could enter either a username or an email address (in separate fields), plus their password, and tabStop is what I used to get the job done.
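A small sketch of that idea (the control names are illustrative): wiring the second TextBox's TextChanged event so the third TextBox only participates in the tab order when something has been entered.
// In the UserControl's constructor or Load handler:
textBox2.TextChanged += (sender, e) =>
{
    // Tab reaches textBox3 only when textBox2 has content;
    // otherwise focus moves on to the next control on the form.
    textBox3.TabStop = textBox2.Text.Length > 0;
};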
| {
"language": "en",
"url": "https://stackoverflow.com/questions/13353",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: Scrolling Overflowed DIVs with JavaScript I've got a div that uses overflow:auto to keep the contents inside the div as it is resized and dragged around the page. I'm using some ajax to retrieve lines of text from the server, then append them to the end of the div, so the content is growing downwards. Every time this happens, I'd like to use JS to scroll the div to the bottom so the most recently added content is visible, similar to the way a chat room or command line console would work.
So far I've been using this snippet to do it (I'm also using jQuery, hence the $() function):
$("#thediv").scrollTop = $("#thediv").scrollHeight;
However it's been giving me inconsistent results. Sometimes it works, sometimes not, and it completely ceases to work if the user ever resizes the div or moves the scroll bar manually.
The target browser is Firefox 3, and it's being deployed in a controlled environment so it doesn't need to work in IE at all.
Any ideas guys? This one's got me stumped. Thanks!
A: Using a loop to iterate over a jQuery of one element is quite inefficient. When selecting an ID, you can just retrieve the first and unique element of the jQuery using get() or the [] notation.
var div = $("#thediv")[0];
// certain browsers have a bug such that scrollHeight is too small
// when content does not fill the client area of the element
var scrollHeight = Math.max(div.scrollHeight, div.clientHeight);
div.scrollTop = scrollHeight - div.clientHeight;
A: $("#thediv").scrollTop($("#thediv")[0].scrollHeight);
A: scrollHeight should be the total height of content. scrollTop specifies the pixel offset into that content to be displayed at the top of the element's client area.
So you really want (still using jQuery):
$("#thediv").each( function()
{
// certain browsers have a bug such that scrollHeight is too small
// when content does not fill the client area of the element
var scrollHeight = Math.max(this.scrollHeight, this.clientHeight);
this.scrollTop = scrollHeight - this.clientHeight;
});
...which will set the scroll offset to the last clientHeight worth of content.
A: scrollIntoView
The scrollIntoView method scrolls the element into view.
A: It can be done in plain JS. The trick is to set scrollTop to a value equal or greater than the total height of the element (scrollHeight):
const theDiv = document.querySelector('#thediv');
theDiv.scrollTop = Math.pow(10, 10);
From MDN:
If set to a value greater than the maximum available for the element,
scrollTop settles itself to the maximum value.
While the value of Math.pow(10, 10) did the trick, using a too-high value like Infinity or Number.MAX_VALUE will reset scrollTop to 0 (Firefox 66).
A: I had a div wrapping 3 divs that were floating left, and whose contents were being resized. It helps to turn funky-colored borders/background on for the div-wrapper when you try to resolve this. The problem was that the resized div-content was overflowing outside the div-wrapper (and bled to underneath the area of content below the wrapper).
Resolved by using @Shog9's answer above. As applied to my situation, this was the HTML layout:
<div id="div-wrapper">
<div class="left-div"></div>
<div id="div-content" class="middle-div">
Some short/sweet content that will be elongated by Jquery.
</div>
<div class="right-div"></div>
</div>
This was the my jQuery to resize the div-wrapper:
<script>
$("#div-content").text("a very long string of text that will overflow beyond the width/height of the div-content");
//now I need to resize the div...
var contentHeight = $('#div-content').prop('scrollHeight')
$("#div-wrapper").height(contentHeight);
</script>
To note, $('#div-content').prop('scrollHeight') produces the height that the wrapper needs to resize to. I am unaware of any other way to obtain the scrollHeight via an actual jQuery function; neither $('#div-content').scrollTop() nor $('#div-content').height() would produce the real content height. Hope this helps someone out there!
| {
"language": "en",
"url": "https://stackoverflow.com/questions/13362",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "50"
} |
Q: Replacement for for... if array iteration I love list comprehensions in Python, because they concisely represent a transformation of a list.
However, in other languages, I frequently find myself writing something along the lines of:
foreach (int x in intArray)
if (x > 3) //generic condition on x
x++
//do other processing
This example is in C#, where I'm under the impression LINQ can help with this, but is there some common programming construct which can replace this slightly less-than-elegant solution? Perhaps a data structure I'm not considering?
A: The increment in the original foreach loop will not affect the contents of the array; the only way to do this remains a for loop:
for(int i = 0; i < intArray.Length; ++i)
{
if(intArray[i] > 3) ++intArray[i];
}
Linq is not intended to modify existing collections or sequences. It creates new sequences based on existing ones. It is possible to achieve the above code using Linq, though it is slightly against its purposes:
var newArray1 = from i in intArray select ((i > 3) ? (i + 1) : (i));
var newArray2 = intArray.Select(i => (i > 3) ? (i + 1) : (i));
Using where (or equivalent), as shown in some of the other answers, will exclude any values less than or equal to 3 from the resulting sequence.
var intArray = new int[] { 10, 1, 20, 2 };
var newArray = from i in intArray where i > 3 select i + 1;
// newArray == { 11, 21 }
There is a static Array.ForEach method that will let you use a lambda function instead of a foreach block, though for anything more than a method call I would stick with foreach.
Array.ForEach(intArray, i => DoSomething(i));
A: In C# you can apply selective processing on anything that lives inside an IEnumerable like this:
intArray.Where(i => i > 3).Select(i => i + 1);
DoStuff(intArray.Where(i => i > 3));
Etc..
A: In Python, you have filter and map, which can do what you want:
map(lambda x: foo(x + 1), filter(lambda x: x > 3, intArray))
There's also list comprehensions which can do both in one easy statement:
[f(x + 1) for x in intArray if x > 3]
A: in Ruby:
intArray.select { |x| x > 3 }.each do |x|
# do other processing
end
or if "other processing" is a short one-liner:
intArray.select { |x| x > 3 }.each { |x| something_that_uses x }
lastly, if you want to return a new array containing the results of the processing of those elements greater than 3:
intArray.select { |x| x > 3 }.map { |x| do_something_to x }
A: map(lambda x: test(x + 1), filter(lambda x: x > 3, arr))
A: Depends on the language and what you need to do, a "map" as it's called in many languages could be what you're looking for. I don't know C#, but according to this page, .NET 2.0 calls map "ConvertAll".
The meaning of "map" is pretty simple - take a list, and apply a function to each element of it, returning a new list. You may also be looking for "filter", which would give you a list of items that satisfy a predicate in another list.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/13396",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12"
} |
Q: Do you have "Slack" time? The CodePlex team has a Slack time policy, and it's worked out very well for them.
*
*Jim Newkirk and myself used it to work on the xUnit.net project.
*Jonathan Wanagel used it to work on SvnBridge.
*Scott Densmore and myself used it to work on an ObjectBuilder 2.0 prototype.
For others, it was a great time to explore things that were technically not on the schedule, but could eventually end up being of great use to the rest of the team. I'm so convinced of the value of this that if I'm ever running a team again, I'm going to make it part of the team culture.
Have you had a formalized Slack policy on your team? How did it work out?
Edited: I just realized I didn't define Slack. For those who haven't read the book, Slack is what Google's "20% time" is: you're given some slice of your day/week/month/year on which to work on things that are not necessarily directly related to your day-to-day job, but might have an indirect benefit (obviously if you work on stuff that's totally not useful for your job or your company, your manager probably won't think very well of the way you spent the time :-p).
A: I am currently a full time freelancer working for a single client. If I want to get a full 40 hours of pay, then every minute I spend coding needs to be accounted for on the approved project plan. Or at least it has to go towards some sort of realistic maintenance task. I guess you could say this is one of the disadvantages of contracting... there's really no room for slack or being idle. You just have to keep going and going on the task at hand. It can be quite draining, but then again I kinda like how it keeps me accountable. And of course the pay is a bit better than usual.
That said, I would love to have slack time available for working on pet projects, but no client would ever agree to pay for that.
Anyway, I just thought I'd point out how this exemplifies some of the big differences between freelancing and full time employment.
A: I've also never worked anywhere where there was a formal policy, but I have always found ways to squeeze in a little R&D/tool-building time on the side. Often I will get productivity gains out of that which allow me even more 'slack' time.
A: I just want to mention Google's policy on the subject.
20% of the day should be used for private projects and research.
I think it is time for managers to face the fact that most good developers are a bit lazy. If they weren't, we wouldn't have concepts like code reuse.
If this laziness can be focused into a creative force, and the developers can read up on technical issues and experiment with architecture and language features, I am certain that the end result will be better code and a more satisfied developer.
So, if you are a manager: Let your developers slack of now and then. Encourage them to hold small seminars with the team to discuss new ways of doing stuff.
If you are a developer: Read, learn and love your craft. You have one of the best jobs in the world, as long as you are willing to put some time into learning the best ways to do your job.
A: I've never worked anywhere that had a formalized policy, but practically every manager I've ever had has allowed me to spend some time on things that weren't directly related to the current project or fighting a fire.
I think the key is to talk about the things you'd like to try. Most managers want their teams to do something cool, something extraordinary, so if you can convince them that you might deliver something, you might get the chance. Or they might let you do it just to keep you happy.
Now that I'm a contractor rather than an employee, I don't get paid to do fun stuff, but I generally only work 30-35 hours per week, so I still have time to learn and to play.
A: We have slack time and we try to schedule it between releases. Once a release is out, we ask our developers to spend 60% of the day fixing bugs and the other 40% on slack time. We have policies on what you can use the slack time for, though. Then when a release creeps up again, we ask all the developers to spend the whole day implementing features or fixing bugs for that release.
The policy lets the developer use the slack time for training, creating something new that the company could use, or just creating tools within the company to make things easier for ourselves. It has worked well for us. We think it is an awesome benefit.
A: We don't have a formal policy in my team - mostly because there is just so much work to do that justifying it would be hard. Which is pretty ironic.
I've started doing some formal things in the guise of "Development Meetings" in order to at least inject the essence of this into the team. An example of this is a development project that is intended to both teach new technologies and produce a cool app at the end of it.
It's early days, we'll see how it goes.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/13409",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "21"
} |
Q: Does CruiseControl.NET run on IIS 7.0? I'm new to development (an admin by trade) and I'm setting up my development environment and I would like to set up a CruiseControl.Net server on Server 2008. A quick Google did not turn up any instructions for getting it running on IIS 7.0, so I was wondering if anyone had experience getting this set up.
A: Here is a helpful article that worked for me:
Getting CruiseControl.NET working under IIS7
A: What Dale Ragan said; it installed flawlessly on our Windows Server 2008 machine, including the Dashboard running on IIS 7. Just give it a shot; should work fine.
A: I have never tried on Server 2008, but I have installed CruiseControl.NET on Vista which includes IIS 7.0. I don't remember there being any problems. You do have an admin background which should help if something does pop up.
Just use the CruiseControl.NET wiki to get you thru the install and getting it setup. That is all I did.
A: I got it running by following the steps in this blog. Additionally, I had to enable ASP.NET, as shown in this blog. Lastly, to get the package install working, I gave full permission to the local users on the webdashboard directory, as in this bug report.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/13430",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
} |
Q: Programmatic access to call stack in .NET How can I get programmatic access to the call stack?
A: You can use the StackTrace and StackFrame classes in System.Diagnostics.
A: Try System.Diagnostics.StackTrace.
A: The right way is to use the StackTrace and StackFrame classes. Throwing an exception just to get the stack trace is completely misusing exceptions.
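To make that concrete, a minimal sketch (the method names are mine): capture the current stack with StackTrace and walk its frames.
using System;
using System.Diagnostics;
class Program
{
    static void Main() { Outer(); }
    static void Outer() { Inner(); }
    static void Inner()
    {
        // Pass true to capture file/line info (requires PDB symbols)
        var trace = new StackTrace(true);
        foreach (StackFrame frame in trace.GetFrames())
        {
            Console.WriteLine("{0} at {1}:{2}",
                frame.GetMethod().Name,
                frame.GetFileName(),
                frame.GetFileLineNumber());
        }
    }
}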
| {
"language": "en",
"url": "https://stackoverflow.com/questions/13434",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12"
} |
Q: Python version of PHP's stripslashes I wrote a piece of code to convert PHP's stripslashes into valid Python [backslash] escapes:
cleaned = stringwithslashes
cleaned = cleaned.replace('\\n', '\n')
cleaned = cleaned.replace('\\r', '\n')
cleaned = cleaned.replace('\\', '')
How can I condense it?
A: It sounds like what you want could be reasonably efficiently handled through regular expressions:
import re
def stripslashes(s):
    r = re.sub(r"\\(n|r)", "\n", s)
    r = re.sub(r"\\", "", r)
    return r
cleaned = stripslashes(stringwithslashes)
A: Not totally sure this is what you want, but..
cleaned = stringwithslashes.decode('string_escape')
A: use decode('string_escape')
cleaned = stringwithslashes.decode('string_escape')
Using
string_escape : Produce a string that is suitable as string literal in Python source code
or concatenate the replace() calls like Wilson's answer:
cleaned = stringwithslashes.replace("\\n","\n").replace("\\r","\n").replace("\\","")
A: You can obviously concatenate everything together:
cleaned = stringwithslashes.replace("\\n","\n").replace("\\r","\n").replace("\\","")
Is that what you were after? Or were you hoping for something more terse?
A: Python has a built-in re.escape() function roughly analogous to PHP's addslashes, but no unescape() counterpart (stripslashes), which in my mind is kind of ridiculous.
Regular expressions to the rescue (code not tested):
p = re.compile(r'\\(\S)')
p.sub(r'\1', escapedstring)
In theory that takes anything of the form \(not whitespace), strips the backslash, and returns the same char
edit: Upon further inspection, Python regular expressions are broken as all hell;
>>> escapedstring
'This is a \\n\\n\\n test'
>>> p = re.compile( r'\\(\S)' )
>>> p.sub(r"\1",escapedstring)
'This is a nnn test'
>>> p.sub(r"\\1",escapedstring)
'This is a \\1\\1\\1 test'
>>> p.sub(r"\\\1",escapedstring)
'This is a \\n\\n\\n test'
>>> p.sub(r"\(\1)",escapedstring)
'This is a \\(n)\\(n)\\(n) test'
In conclusion, what the hell, Python.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/13454",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13"
} |
Q: The theory (and terminology) behind Source Control I've tried using source control for a couple of projects but still don't really understand it. For these projects, we've used TortoiseSVN and have only had one line of revisions. (No trunk, branch, or any of that.) If there is a recommended way to set up source control systems, what is it? What are the reasons and benefits for setting it up that way? What are the underlying differences between the workings of centralized and distributed source control systems?
A: Think of source control as a giant "Undo" button for your source code. Every time you check in, you're adding a point to which you can roll back. Even if you don't use branching/merging, this feature alone can be very valuable.
Additionally, by having one 'authoritative' version of the source control, it becomes much easier to back up.
Centralized vs. distributed... the difference is really that in distributed, there isn't necessarily one 'authoritative' version of the source control, although in practice people usually still do have the master tree.
The big advantage to distributed source control is two-fold:
*
*When you use distributed source control, you have the whole source tree on your local machine. You can commit, create branches, and work pretty much as though you were all alone, and then when you're ready to push up your changes, you can promote them from your machine to the master copy. If you're working "offline" a lot, this can be a huge benefit.
*You don't have to ask anybody's permission to become a distributor of the source control. If person A is running the project, but person B and C want to make changes, and share those changes with each other, it becomes much easier with distributed source control.
A: I recommend checking out the following from Eric Sink:
http://www.ericsink.com/scm/source_control.html
Having some sort of revision control system in place is probably the most important tool a programmer has for reviewing code changes and understanding who did what to whom. Even for single person projects, it is invaluable to be able to diff current code against previous known working version to understand what might have gone wrong due to a change.
A: Here are two articles that are very helpful for understanding the basics. Beyond being informative, Sink's company sells a great source control product called Vault that is free for single users (I am not affiliated in any way with that company).
http://www.ericsink.com/scm/source_control.html
http://betterexplained.com/articles/a-visual-guide-to-version-control/
Vault info at www.vault.com.
A: Even if you don't branch, you may find it useful to use tags to mark releases.
Imagine that you rolled out a new version of your software yesterday and have started making major changes for the next version. A user calls you to report a serious bug in yesterday's release. You can't just fix it and copy over the changes from your development trunk, because the changes you've just made have left the whole thing unstable.
If you had tagged the release, you could check out a working copy of it and use it to fix the bug.
Then, you might choose to create a branch at the tag and check the bug fix into it. That way, you can fix more bugs on that release while you continue to upgrade the trunk. You can also merge those fixes into the trunk so that they'll be present in the next release.
A: The common standard for setting up Subversion is to have three folders under the root of your repository: trunk, branches and tags. The trunk folder holds your current "main" line of development. For many shops and situations, this is all they ever use... just a single working repository of code.
The tags folder takes it one step further and allows you to "checkpoint" your code at certain points in time. For example, when you release a new build or sometimes even when you simply make a new build, you "tag" a copy into this folder. This just allows you to know exactly what your code looked like at that point in time.
The branches folder holds different kinds of branches that you might need in special situations. Sometimes a branch is a place to work on an experimental feature or features that might take a long time to get stable (therefore you don't want to introduce them into your main line just yet). Other times, a branch might represent the "production" copy of your code, which can be edited and deployed independently from your main line of code which contains changes intended for a future release.
Anyway, this is just one aspect of how to set up your system, but I think giving some thought to this structure is important.
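As a rough sketch of how that layout is used in practice with the command-line client (the repository URL here is hypothetical), both tagging and branching are cheap server-side copies:
svn copy http://svn.example.com/repo/trunk \
         http://svn.example.com/repo/tags/release-1.0 \
         -m "Tag the 1.0 release"
svn copy http://svn.example.com/repo/tags/release-1.0 \
         http://svn.example.com/repo/branches/release-1.0-fixes \
         -m "Branch for post-release bug fixes"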
| {
"language": "en",
"url": "https://stackoverflow.com/questions/13460",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
} |
Q: Purpose of {1} in this regular expression to match url protocols I was reading this question about how to parse URLs out of web pages and had a question about the accepted answer which offered this solution:
((mailto\:|(news|(ht|f)tp(s?))\://){1}\S+)
The solution was offered by csmba and he credited it to regexlib.com. Whew. Credits done.
I think this is a fairly naive regular expression but it's a fine starting point for building something better. But, my question is this:
What is the point of {1}? It means "exactly one of the previous grouping", right? Isn't that the default behavior of a grouping in a regular expression? Would the expression be changed in any way if the {1} were removed?
If I saw this from a coworker I would point out his or her error but as I write this the response is rated at a 6 and the expression on regexlib.com is rated a 4 of 5. So maybe I'm missing something?
A: @Rob: I disagree. To enforce what you are asking for, I think you would need to use negative look-behind, which is possible but is certainly not related to the use of {1}. Neither version of the regexp addresses that particular issue.
To let the code speak:
tibook 0 /home/jj33/swap > cat text
Text this is http://example.com text this is
Text this is http://http://example.com text this is
tibook 0 /home/jj33/swap > cat p
#!/usr/bin/perl
my $re1 = '((mailto\:|(news|(ht|f)tp(s?))\://){1}\S+)';
my $re2 = '((mailto\:|(news|(ht|f)tp(s?))\://)\S+)';
while (<>) {
print "Evaluating: $_";
print "re1 saw \$1 = $1\n" if (/$re1/);
print "re2 saw \$1 = $1\n" if (/$re2/);
}
tibook 0 /home/jj33/swap > cat text | perl p
Evaluating: Text this is http://example.com text this is
re1 saw $1 = http://example.com
re2 saw $1 = http://example.com
Evaluating: Text this is http://http://example.com text this is
re1 saw $1 = http://http://example.com
re2 saw $1 = http://http://example.com
tibook 0 /home/jj33/swap >
So, if there is a difference between the two versions, it doesn't seem to be the one you suggest.
A: I don't think the {1} has any valid function in that regex.
(mailto\:|(news|(ht|f)tp(s?))\://){1}
You should read this as: "capture the stuff in the parens exactly one time". But we don't really care about capturing this for use later, e.g. $1 in the replacement. So it's pointless.
A: I don't think it has any purpose. But because RegEx is almost impossible to understand/decompose, people rarely point out errors. That is probably why no one else pointed it out.
A: @Jeff Atwood, your interpretation is a little off - the {1} means match exactly once, but has no effect on the "capturing" - the capturing occurs because of the parens - the braces only specify the number of times the pattern must match the source - once, as you say.
I agree with @Marius, even if his answer is a little terse and may come off as being flippant. Regular expressions are tough, if one's not used to using them, and the {1} in the question isn't quite error - in systems that support it, it does mean "exactly one match". In this sense, it doesn't really do anything.
Unfortunately, contrary to a now-deleted post, it doesn't keep the regexp from matching http://http://example.org, since the \S+ at the end will match one or more non-whitespace characters, including the http://example.org in http://http://example.org (verified using Python 2.5, just in case my regexp reading was off). So, the regexp given isn't really the best. I'm not a URL expert, but probably something limiting the appearance of ":"s and "//"s after the first one would be necessary (but hardly sufficient) to ensure good URLs.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/13470",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: Are there any "mind mapping" components for Delphi? (native VCL preferably) I'm looking for a pre-written component (w/source) for a Delphi project that I'm working on, to generate mind-maps / concept-maps similar to these:
http://en.wikipedia.org/wiki/Image:MindMeister_screenshot_OS_X.jpg
http://en.wikipedia.org/wiki/Image:XMIND_2008_in_Windows_Vista.png
Any ideas?
A: As a former Delphi developer, I sympathize. It used to be that you could find a free component with source for just about anything. You probably know about the Delphi Super Page (my old go-to source for everything Delphi). I looked; no mind-mapping components there. (Of course, the site has not been updated in about 2 years).
I do have a suggestion, though, but it's not optimal: StarUML was written in Delphi, and it contains custom components for creating UML diagrams. The source is available for download, and it seems to me that the UML primitives (boxes, lines, clouds and such) could be adapted to your purpose. The web site is http://staruml.sourceforge.net/en/.
I know it's not ideal, but at least you would not have to start from scratch.
Good luck!
A: You might want to have a look at TMS Software's Diagram Studio, not specifically a mind-mapping component but it does offer diagramming functionality in your delphi app. and a developer license does come with source code.
A: Another great source for Delphi components is torry.net. Searching there I found an interesting-looking component: Drawing objects.
A: Steema Software has got a great component called TeeTree which seems to do everything. I'm not sure how much it costs (costs is the operative word)
We use it for making pie charts and reports, but it seems to do everything; you can get compiled demos off their site.
It's a VCL component.
A: I know Graphics32 has been used to implement Visio-like diagramming. On the 'applications' page there is a link to MindVisualizer, proving that it can be used for mindmaps too. Not an out-of-the-box solution though....
TMS Diagram Studio, as already mentioned, and DevExpress OrgChart or FlowChart may do the trick.
A: There is only one mind mapping component that I know of : MetaTree. It supports Delphi 5, 6, 7 and 2005 and comes with full source code.
A: Not exactly what you are looking for but it could be an option if you can't find a native VCL component.
Tree GX http://www.componentsource.com/products/treegx/index-gbp.html is a .NET mind map component with source code.
You could then use Hydra http://www.remobjects.com/product/?id={B6BD1030-F630-4DA8-9018-73C03265A0EF} from RemObjects to host a WinForms control inside a Delphi application if going pure .NET is not an option.
Edit: the preview seems to show the links OK, but when I submit the post they are all wrong, so I've had to put the URLs inline
A: Please try
http://www.delphiarea.com/products/simplegraph/
A: JVCL : Demo called
JvDesignerDemo
| {
"language": "en",
"url": "https://stackoverflow.com/questions/13471",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: How does one rank an array (sort) by value? *With a twist* I would like to sort an array in ascending order using C/C++. The outcome is an array containing element indexes. Each index corresponds to the element's location in the sorted array.
Example
Input: 1, 3, 4, 9, 6
Output: 1, 2, 3, 5, 4
Edit: I am using shell sort procedure. The duplicate value indexes are arbitrarily chosen based on which duplicate values are first in the original array.
Update:
Despite my best efforts, I haven't been able to implement a sorting algorithm for an array of pointers. The current example won't compile.
Could someone please tell me what's wrong?
I'd very much appreciate some help!
void SortArray(int ** pArray, int ArrayLength)
{
int i, j, flag = 1; // set flag to 1 to begin initial pass
int * temp; // holding variable orig with no *
for (i = 1; (i <= ArrayLength) && flag; i++)
{
flag = 0;
for (j = 0; j < (ArrayLength - 1); j++)
{
if (*pArray[j + 1] > *pArray[j]) // ascending order simply changes to <
{
&temp = &pArray[j]; // swap elements
&pArray[j] = &pArray[j + 1]; //the problem lies somewhere in here
&pArray[j + 1] = &temp;
flag = 1; // indicates that a swap occurred.
}
}
}
};
A: Since you're using C++, I would do it something like this. The SortIntPointers function can be any sort algorithm, the important part is that it sorts the array of pointers based on the int that they are pointing to. Once that is done, you can go through the array of pointers and assign their sorted index which will end up in the original position in the original array.
int* intArray; // set somewhere else
int arrayLen; // set somewhere else
int** pintArray = new int*[arrayLen];
for(int i = 0; i < arrayLen; ++i)
{
pintArray[i] = &intArray[i];
}
// This function sorts the pointers according to the values they
// point to. In effect, it sorts intArray without losing the positional
// information.
SortIntPointers(pintArray, arrayLen);
// Dereference the pointers and assign their sorted position.
for(int i = 0; i < arrayLen; ++i)
{
*pintArray[i] = i;
}
Hopefully that's clear enough.
A: OK, here is my attempt in C++
#include <iostream>
#include <algorithm>
struct mycomparison
{
bool operator() (int* lhs, int* rhs) {return (*lhs) < (*rhs);}
};
int main(int argc, char* argv[])
{
int myarray[] = {1, 3, 6, 2, 4, 9, 5, 12, 10};
const size_t size = sizeof(myarray) / sizeof(myarray[0]);
int *arrayofpointers[size];
for(int i = 0; i < size; ++i)
{
arrayofpointers[i] = myarray + i;
}
std::sort(arrayofpointers, arrayofpointers + size, mycomparison());
for(int i = 0; i < size; ++i)
{
*arrayofpointers[i] = i + 1;
}
for(int i = 0; i < size; ++i)
{
std::cout << myarray[i] << " ";
}
std::cout << std::endl;
return 0;
}
A: create a new array with increasing values from 0 to n-1 (where n is the length of the array you want to sort). Then sort the new array based on the values in the old array indexed by the values in the new array.
For example, if you use bubble sort (easy to explain), then instead of comparing the values in the new array, you compare the values in the old array at the position indexed by a value in the new array:
function bubbleRank(A){
var B = new Array();
for(var i=0; i<A.length; i++){
B[i] = i;
}
do{
swapped = false;
for(var i=0; i<A.length-1; i++){
if(A[B[i]] > A[B[i+1]]){
var temp = B[i];
B[i] = B[i+1];
B[i+1] = temp;
swapped = true;
}
}
}while(swapped);
return B;
}
A: Create a new rank array and, for each element, count how many elements are smaller than it:
int arr[n];
int rank[n];
for(int i=0;i<n;i++)
    rank[i] = 0; /* ranks must start at zero */
for(int i=0;i<n;i++)
    for(int j=0;j<n;j++)
        if(arr[i]>arr[j])
            rank[i]++;
The rank of each element will be rank[i]+1, giving ranks in the order 1, 2, ..., n
A: Well, there's a trival n^2 solution.
In python:
newArray = sorted(oldArray)
blankArray = [0] * len(oldArray)
for i in xrange(len(newArray)):
dex = oldArray.index(newArray[i])
blankArray[dex] = i
Depending on how large your list is, this may work. If your list is very long, you'll need to do some strange parallel array sorting, which doesn't look like much fun and is a quick way to introduce extra bugs in your code.
Also note that the above code assumes unique values in oldArray. If that's not the case, you'll need to do some post processing to solve tied values.
A: Parallel sorting of vector using boost::lambda...
#include <vector>
#include <algorithm>
#include <boost/lambda/lambda.hpp>

std::vector<int> intVector;
std::vector<int> rank;
// set up values according to your example...
intVector.push_back( 1 );
intVector.push_back( 3 );
intVector.push_back( 4 );
intVector.push_back( 9 );
intVector.push_back( 6 );
for( int i = 0; i < intVector.size(); ++i )
{
rank.push_back( i );
}
using namespace boost::lambda;
std::sort(
rank.begin(), rank.end(),
var( intVector )[ _1 ] < var( intVector )[ _2 ]
);
//... and because you wanted to replace the values of the original with
// their rank
intVector = rank;
Note: I used vectors instead of arrays because it is clearer/easier; also, I used C-style indexing which starts counting from 0, not 1.
A: This is a solution in c language
#include <stdio.h>
void swap(int *xp, int *yp) {
int temp = *xp;
*xp = *yp;
*yp = temp;
}
// A function to implement bubble sort
void bubbleSort(int arr[], int n) {
int i, j;
for (i = 0; i < n - 1; i++)
// Last i elements are already in place
for (j = 0; j < n - i - 1; j++)
if (arr[j] > arr[j + 1])
swap(&arr[j], &arr[j + 1]);
}
/* Function to print an array */
void printArray(int arr[], int size) {
for (int i = 0; i < size; i++)
printf("%d ", arr[i]);
printf("\n");
}
int main() {
int arr[] = {64, 34, 25, 12, 22, 11, 98};
int arr_original[] = {64, 34, 25, 12, 22, 11, 98};
int rank[7];
int n = sizeof(arr) / sizeof(arr[0]);
bubbleSort(arr, n);
printf("Sorted array: \n");
printArray(arr, n);
//PLACE RANK
//look for location of number in original array
//place the location in rank array
int counter = 1;
for (int k = 0; k < n; k++){
for (int i = 0; i < n; i++){
printf("Checking..%d\n", i);
if (arr_original[i] == arr[k]){
rank[i] = counter;
counter++;
printf("Found..%d\n", i);
}
}
}
printf("Original array: \n");
printArray(arr_original, n);
printf("Rank array: \n");
printArray(rank, n);
return 0;
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/13473",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
} |
Q: How do you keep two related, but separate, systems in sync with each other? My current development project has two aspects to it. First, there is a public website where external users can submit and update information for various purposes. This information is then saved to a local SQL Server at the colo facility.
The second aspect is an internal application which employees use to manage those same records (conceptually) and provide status updates, approvals, etc. This application is hosted within the corporate firewall with its own local SQL Server database.
The two networks are connected by a hardware VPN solution, which is decent, but obviously not the speediest thing in the world.
The two databases are similar, and share many of the same tables, but they are not 100% the same. Many of the tables on both sides are very specific to either the internal or external application.
So the question is: when a user updates their information or submits a record on the public website, how do you transfer that data to the internal application's database so it can be managed by the internal staff? And vice versa... how do you push updates made by the staff back out to the website?
It is worth mentioning that the more "real time" these updates occur, the better. Not that it has to be instant, just reasonably quick.
So far, I have thought about using the following types of approaches:
*
*Bi-directional replication
*Web service interfaces on both sides with code to sync the changes as they are made (in real time).
*Web service interfaces on both sides with code to asynchronously sync the changes (using a queueing mechanism).
Any advice? Has anyone run into this problem before? Did you come up with a solution that worked well for you?
A: I'm mid-way through a similar project except I have multiple sites that need to keep in sync over slow connections (dial-up in some cases).
Firstly you need to track changes. If you can use SQL 2008 (even the Express version is enough if the 2Gb limit isn't a problem) this will ease the pain greatly: just turn on Change Tracking on the database and each table. We're using SQL Server 2008 at the head office with the extended schema and SQL Express 2008 at each site with a sub-set of data and a limited schema.
Secondly you need to sync your changes, and Sync Services does the trick nicely; it supports using a WCF gateway into the main database. In this example you will need to use the Sync using SQL Express Client sample as a starting point; note that it's based on SQL 2005, so you'll need to update it to take advantage of the Change Tracking features in 2008. By default Sync Services uses SQL CE on the clients, which I'm sure isn't enough in your case. You'll need a service that runs on your web server and periodically (could be as often as every 10 seconds if you want) runs the Synchronize() method. This will tell your main database about changes made locally and then ask the server for all changes made there. You can set up the get and apply SQL code to call stored procedures, and you can add event handlers to handle conflicts (e.g. Client Update vs Server Update) and resolve them accordingly at each end.
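A minimal sketch of that service loop in C# follows; MySyncAgent is a hypothetical name for the designer-generated Sync Services agent, so adjust it to whatever your project generates:
using System;
using System.Timers;

class SyncService
{
    static void Main()
    {
        MySyncAgent agent = new MySyncAgent(); // hypothetical generated agent
        Timer timer = new Timer(10000);        // fire every 10 seconds
        timer.Elapsed += delegate { agent.Synchronize(); };
        timer.Start();
        Console.ReadLine(); // keep the process alive
    }
}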
A: This is a pretty common integration scenario, I believe. Personally, I think an asynchronous messaging solution using a queue is ideal.
You should be able to achieve near real time synchronization without the overhead or complexity of something like replication.
Synchronous web services are not ideal because your code will have to be very sophisticated to handle failure scenarios. What happens when one system is restarted while the other continues to publish changes? Does the sending system get timeouts? What does it do with those? Unless you are prepared to lose data, you'll want some sort of transactional queue (like MSMQ) to receive the change notices and take care of making sure they get to the other system. If either system is down, the changes (passed as messages) will just accumulate and as soon as a connection can be established the re-starting server will process all the queued messages and catch up, making system integrity much, much easier to achieve.
There are some open source tools that can really make this easy for you if you are using .NET (especially if you want to use MSMQ).
*
*nServiceBus by Udi Dahan
*Mass Transit by Dru Sellers and Chris Patterson
There are commercial products also, and if you are considering a commercial option see here for a list of options on .NET. Of course, WCF can do async messaging using MSMQ bindings, but a tool like nServiceBus or MassTransit will give you a very simple Send/Receive or Pub/Sub API that will make your requirement a very straightforward job.
If you're using Java, there are any number of open source service bus implementations that will make this kind of bi-directional, asynchronous messaging a snap, like Mule or maybe just ActiveMQ.
You may also want to consider reading Udi Dahan's blog, listening to some of his podcasts. Here are some more good resources to get you started.
A: We have a shop as a client, with three stores connected to the same VPN
Two of the shops have a computer running as a "server" for that shop and the third one has the "master database"
To synchronize everything to the master we don't have the best solution, but it works: there is a dedicated PC running an application that checks the timestamp of every record in every table of the two stores and, if it is different than the last time it synchronized, copies the changes
Note that this works both ways. I.e. if you update a product in the master database, this change will propagate to the other two shops. If you have a new order in one of the shops, it will be transmitted to the "master".
With some optimizations you can have all the shops synchronize in around 20 minutes
A: Recently I have had a lot of success with SQL Server Service Broker which offers reliable, persisted asynchronous messaging out of the box with very little implementation pain.
*
*It is quick to set up and as you learn more you can use some of the more advanced features.
*Unknown to most, it is also part of the desktop editions so it can be used as a workstation messaging system
*If you have existing T-SQL skills they can be leveraged as all the code to read and write messages is done in SQL
*It is blindingly fast
It is a vastly under-hyped part of SQL Server and well worth a look.
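To give a flavor of it, here is a bare-bones sketch of the moving parts with made-up object names (one-time setup, a send, and a receive); routing between the two servers is a separate setup step covered in the Service Broker docs:
-- One-time setup (names are illustrative)
CREATE MESSAGE TYPE RecordChanged VALIDATION = WELL_FORMED_XML;
CREATE CONTRACT SyncContract (RecordChanged SENT BY INITIATOR);
CREATE QUEUE SyncQueue;
CREATE SERVICE SyncService ON QUEUE SyncQueue (SyncContract);

-- Sending a change notice
DECLARE @handle UNIQUEIDENTIFIER;
BEGIN DIALOG CONVERSATION @handle
    FROM SERVICE SyncService
    TO SERVICE 'SyncService'
    ON CONTRACT SyncContract
    WITH ENCRYPTION = OFF;
SEND ON CONVERSATION @handle
    MESSAGE TYPE RecordChanged (N'<change table="orders" id="42"/>');

-- Reading on the other side
RECEIVE TOP (1) message_body FROM SyncQueue;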
A: I'd say just have a job that copies the data in the pub database input table into a private database pending table. Then, once you update the data on the private side, have it replicated to the public side. If none of the replicated data on the public side gets updated there, it should be a fairly easy transactional replication solution.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/13485",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "23"
} |
Q: simultaneous Outlook reminders on multiple devices Disclaimer: This is not actually a programming question, but I feel the audience on stackoverflow is more likely to have an answer than most question/answer sites out there.
Please forgive me, Joel, for stealing your question. Joel asked this question on a podcast a while back but I don't think it ever got resolved. I'm in the same situation, so I'm also looking for the answer.
I have multiple devices that all sync with MS-Outlook. PCs, Laptops, Smartphones, PDAs, etc. all have the capability to synchronize their data (calendars, emails, contacts, etc.) with the Exchange server. I like to use the Outlook meeting notice or appointment reminders to remind me of an upcoming meeting or doctors appointment or whatever. The problem lies in the fact that all the devices pop up the same reminder and I have to go to every single device individually in order to snooze or dismiss all of the identical the reminder popups.
Since this is a syncing technology, why doesn't the fact that I snooze or dismiss on one device sync up the other devices automatically? I've even tried to force a sync after dismissing a reminder and it still shows up on my other devices after a forced sync. This is utterly annoying to me.
Is there a setting that I'm overlooking or is there a 3rd party reminder utility that I should be using instead of the built-in stuff?
Thanks,
Kurt
A: At least for PCs, the fact that you dismiss an item does get sync'd, and fairly quickly for me. I'm not sure why phones don't seem to do it, though. Maybe the ActiveSync protocol doesn't offer that option.
A: I don't have an answer, but I feel your (and Joel's) pain every time my cell phone and my computer both buzz me within minutes of each other.
A: Thanks from me, too :)
Maybe it's because all your devices clocks are synchronized to a time server, so they all have the exactly correct atomic-clock time, and all the devices notify you within a couple of seconds of each other, so the "dismiss" synchronization just doesn't happen fast enough.
A: I'm assuming that the "client side" implementation is not standardized. For example, if your phone is already showing you a reminder-pop-up, the Outlook synch mechanism can't enforce removing that pop-up since it's device specific. The fact that it does so on "computers" leads me to assume that the protocol DOES support this, but some phone vendors just don't implement it.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/13498",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
} |
Q: Browser Sync across many machines Everyone remembers Google Browser Sync, right? I thought it was great. Unfortunately Google decided not to upgrade the service to Firefox 3.0. Mozilla is developing a replacement for Google Browser Sync which will be a part of the Weave project. I have tried using Weave and found it to be very, very slow or totally inoperable. Granted, they are in an early development phase right now, so I can't really complain.
This specific problem of browser sync got me thinking, though. What do all of you think of Mozilla or someone making a server/client package that we, the users, could run on our 'main' machine? Then you would just have to know your own IP or have some way to announce it to your client browsers at work or wherever.
There are several problems I can think of with this: non-static IPs, opening up ports on your local computer, etc. It just seems that Mozilla does not want to handle the traffic created by many people syncing their browsers. There is no way for them to monetize this traffic since all the data uploaded must be encrypted.
A: Mozilla Weave is capable of running on personal servers. It uses WebDAV to communicate with HTTP servers and can be configured to connect to private servers. I've tried setting it up on my own servers but with no success (Mainly because I'm not very good at working with Apache to configure WebDAV)
I'm hoping Mozilla Weave eventually allows FTP access so I can easily use my server to host my firefox profile.
If you're interested in trying Mozilla Weave on a personal server, there's a tutorial here:
http://marios.tziortzis.com/page/blog/article/setting-up-mozilla-weave-on-your-server/
A: Browser Sync is up on Google Code now. Doesn't look like anything has been done with it yet though, as far as making it hosted on personal servers/computers.
A: I've been using the Firefox Scrapbook extension, sync'd via FolderShare. It takes a little setup, but the nice thing is that Scrapbook grabs a local copy of each page so it works offline or if the site goes away.
A: Not a complete solution to this problem, but I've found FoxMarks to be a really nice bookmark syncing extension.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/13518",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: GridView delete not working I'm using a GridView in C#.NET 3.5 and have just converted the underlying DataSource from Adapter model to an object which gets its data from LINQ to SQL - i.e. a Business object that returns a List<> for the GetData() function etc.
All was well in Denmark: the Update and conditional Select statements work as expected, but I can't get the Delete function to work. Whether I try to pass in just the ID or the entire object, what gets passed in is a "new" object with none of the properties set. I'm just wondering if it's the old OldValuesParameterFormatString="original_{0}" monster in the ObjectDataSource causing confusion again.
Anybody have any ideas?
A: I found the solution. I had to set the GridView's DataKeyNames property to the unique key that my data was returning (in this case a classically named ID field). I'm guessing that this property "unset" itself when the Grid refreshed.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/13524",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Bootstrapping still requires outside support I've heard of the idea of bootstrapping a language, that is, writing a compiler/interpreter for the language in itself. I was wondering how this could be accomplished and looked around a bit, and saw someone say that it could only be done by either
*
*writing an initial compiler in a different language.
*hand-coding an initial compiler in Assembly, which seems like a special case of the first
To me, neither of these seems to actually be bootstrapping a language in the sense that they both require outside support. Is there a way to actually write a compiler in its own language?
A: The way I've heard of is to write an extremely limited compiler in another language, then use that to compile a more complicated version, written in the new language. This second version can then be used to compile itself, and the next version. Each time it is compiled the last version is used.
This is the definition of bootstrapping:
the process of a simple system activating a more complicated system that serves the same purpose.
EDIT: The Wikipedia article on compiler bootstrapping covers the concept better than me.
A: A super interesting discussion of this is in Unix co-creator Ken Thompson's Turing Award lecture.
He starts off with:
What I am about to describe is one of many "chicken and egg" problems that arise when compilers are written in their own language. In this case, I will use a specific example from the C compiler.
and proceeds to show how he wrote a version of the Unix C compiler that would always allow him to log in without a password, because the C compiler would recognize the login program and add in special code.
The second pattern is aimed at the C compiler. The replacement code is a Stage I self-reproducing program that inserts both Trojan horses into the compiler. This requires a learning phase as in the Stage II example. First we compile the modified source with the normal C compiler to produce a bugged binary. We install this binary as the official C. We can now remove the bugs from the source of the compiler and the new binary will reinsert the bugs whenever it is compiled. Of course, the login command will remain bugged with no trace in source anywhere.
A: Check out podcast Software Engineering Radio episode 61 (2007-07-06) which discusses GCC compiler internals, as well as the GCC bootstrapping process.
A: Donald E. Knuth actually built WEB by writing the compiler in it, and then hand-compiled it to assembly or machine code.
A: Every example of bootstrapping a language I can think of (C, PyPy) was done after there was a working compiler. You have to start somewhere, and reimplementing a language in itself requires writing a compiler in another language first.
How else would it work? I don't think it's even conceptually possible to do otherwise.
A: As I understand it, the first Lisp interpreter was bootstrapped by hand-compiling the constructor functions and the token reader. The rest of the interpreter was then read in from source.
You can check for yourself by reading the original McCarthy paper, Recursive Functions of Symbolic Expressions and Their Computation by Machine, Part I.
A: It's the computer science version of the chicken-and-egg paradox. I can't think of a way not to write the initial compiler in assembler or some other language. If it could have been done, I should think Lisp could have done it.
Actually, I think Lisp almost qualifies. Check out its Wikipedia entry. According to the article, the Lisp eval function could be implemented on an IBM 704 in machine code, with a complete compiler (written in Lisp itself) coming into being in 1962 at MIT.
A: Another alternative is to create a bytecode machine for your language (or use an existing one if its features aren't very unusual) and write a compiler to bytecode, either in the bytecode itself, or in your desired language using another intermediate - such as a parser toolkit which outputs the AST as XML, then compiling the XML to bytecode using XSLT (or another pattern-matching language and tree-based representation). It doesn't remove the dependency on another language, but could mean that more of the bootstrapping work ends up in the final system.
A: The explanation you've read is correct. There's a discussion of this in Compilers: Principles, Techniques, and Tools (the Dragon Book):
*
*Write a compiler C1 for language X in language Y
*Use the compiler C1 to write compiler C2 for language X in language X
*Now C2 is a fully self hosting environment.
A:
Is there a way to actually write a compiler in its own language?
You have to have some existing language to write your new compiler in. If you were writing a new, say, C++ compiler, you would just write it in C++ and compile it with an existing compiler first. On the other hand, if you were creating a compiler for a new language, let's call it Yazzleof, you would need to write the new compiler in another language first. Generally, this would be another programming language, but it doesn't have to be. It can be assembly, or if necessary, machine code.
If you were going to bootstrap a compiler for Yazzleof, you generally wouldn't write a compiler for the full language initially. Instead you would write a compiler for Yazzle-lite, the smallest possible subset of the Yazzleof (well, a pretty small subset at least). Then in Yazzle-lite, you would write a compiler for the full language. (Obviously this can occur iteratively instead of in one jump.) Because Yazzle-lite is a proper subset of Yazzleof, you now have a compiler which can compile itself.
There is a really good writeup about bootstrapping a compiler from the lowest possible level (which on a modern machine is basically a hex editor), titled Bootstrapping a simple compiler from nothing. It can be found at https://web.archive.org/web/20061108010907/http://www.rano.org/bcompiler.html.
A: Some bootstrapped compilers or systems keep both the source form and the object form in their repository:
*
*ocaml is a language which has both a bytecode interpreter (i.e. a compiler to Ocaml bytecode) and a native compiler (to x86-64 or ARM, etc... assembler). Its svn repository contains both the source code (files */*.{ml,mli}) and the bytecode (file boot/ocamlc) form of the compiler. So when you build it, it first uses the bytecode (of a previous version of the compiler) to compile itself. Later the freshly compiled bytecode is able to compile the native compiler. So the Ocaml svn repository contains both *.ml[i] source files and the boot/ocamlc bytecode file.
*The rust compiler downloads (using wget, so you need a working Internet connection) a previous version of its binary to compile itself.
*MELT is a Lisp-like language to customize and extend GCC. It is translated to C++ code by a bootstrapped translator. The generated C++ code of the translator is distributed, so the svn repository contains both *.melt source files and melt/generated/*.cc "object" files of the translator.
*J.Pitrat's CAIA artificial intelligence system is entirely self-generating. It is available as a collection of thousands of [A-Z]*.c generated files (also with a generated dx.h header file) with a collection of thousands of _[0-9]* data files.
*Several Scheme compilers are also bootstrapped. Scheme48, Chicken Scheme, ...
| {
"language": "en",
"url": "https://stackoverflow.com/questions/13537",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "104"
} |
Q: Insert Update stored proc on SQL Server I've written a stored proc that will do an update if a record exists, otherwise it will do an insert. It looks something like this:
update myTable set Col1=@col1, Col2=@col2 where ID=@ID
if @@rowcount = 0
insert into myTable (Col1, Col2) values (@col1, @col2)
My logic behind writing it in this way is that the update will perform an implicit select using the where clause and if that returns 0 then the insert will take place.
The alternative to doing it this way would be to do a select and then based on the number of rows returned either do an update or insert. This I considered inefficient because if you are to do an update it will cause 2 selects (the first explicit select call and the second implicit in the where of the update). If the proc were to do an insert then there'd be no difference in efficiency.
Is my logic sound here?
Is this how you would combine an insert and update into a stored proc?
A: MERGE is one of the new features in SQL Server 2008, by the way.
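For illustration, here is a sketch of what the question's upsert might look like as a single MERGE statement (same table and parameter names as the question; treat it as untested):
MERGE myTable AS target
USING (SELECT @ID AS ID, @col1 AS Col1, @col2 AS Col2) AS source
ON (target.ID = source.ID)
WHEN MATCHED THEN
    UPDATE SET Col1 = source.Col1, Col2 = source.Col2
WHEN NOT MATCHED THEN
    INSERT (ID, Col1, Col2) VALUES (source.ID, source.Col1, source.Col2);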
A: Your assumption is right, this is the optimal way to do it and it's called upsert/merge.
Importance of UPSERT - from sqlservercentral.com:
For every update in the case mentioned above we are removing one additional read from the table if we use the UPSERT instead of EXISTS. Unfortunately for an Insert, both the UPSERT and IF EXISTS methods use the same number of reads on the table. Therefore the check for existence should only be done when there is a very valid reason to justify the additional I/O. The optimized way to do things is to make sure that you have as few reads as possible on the DB.
The best strategy is to attempt the update. If no rows are affected by the update then insert. In most circumstances, the row will already exist and only one I/O will be required.
Edit:
Please check out this answer and the linked blog post to learn about the problems with this pattern and how to make it work safe.
A: You not only need to run it in a transaction, it also needs a high isolation level. In fact the default isolation level is Read Committed and this code needs Serializable.
SET transaction isolation level SERIALIZABLE
BEGIN TRANSACTION Upsert
UPDATE myTable set Col1=@col1, Col2=@col2 where ID=@ID
if @@rowcount = 0
begin
INSERT into myTable (ID, Col1, Col2) values (@ID, @col1, @col2)
end
COMMIT TRANSACTION Upsert
Adding an @@error check and a rollback might also be a good idea.
A: Please read the post on my blog for a good, safe pattern you can use. There are a lot of considerations, and the accepted answer on this question is far from safe.
For a quick answer try the following pattern. It will work fine on SQL 2000 and above. SQL 2005 gives you error handling which opens up other options and SQL 2008 gives you a MERGE command.
begin tran
update t with (serializable)
set hitCount = hitCount + 1
where pk = @id
if @@rowcount = 0
begin
insert t (pk, hitCount)
values (@id,1)
end
commit tran
A: If you are not doing a merge in SQL 2008 you must change it to:
if @@rowcount = 0 and @@error=0
otherwise if the update fails for some reason then it will try and to an insert afterwards because the rowcount on a failed statement is 0
A: Big fan of the UPSERT; it really cuts down on the code to manage. Here is another way I do it: one of the input parameters is ID; if the ID is NULL or 0, you know it's an INSERT, otherwise it's an UPDATE. This assumes the application knows whether there is an ID, so it won't work in all situations, but it will cut the executes in half if you do.
A: Modified Dima Malenko post:
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
BEGIN TRANSACTION UPSERT
UPDATE MYTABLE
SET COL1 = @col1,
COL2 = @col2
WHERE ID = @ID
IF @@rowcount = 0
BEGIN
INSERT INTO MYTABLE
(ID,
COL1,
COL2)
VALUES (@ID,
@col1,
@col2)
END
IF @@Error > 0
BEGIN
INSERT INTO MYERRORTABLE
(ID,
COL1,
COL2)
VALUES (@ID,
@col1,
@col2)
END
COMMIT TRANSACTION UPSERT
You can trap the error and send the record to a failed insert table.
I needed to do this because we are taking whatever data is send via WSDL and if possible fixing it internally.
A: If it is to be used with SQL Server 2000/2005, the original code needs to be enclosed in a transaction to make sure that data remains consistent in a concurrent scenario.
BEGIN TRANSACTION Upsert
update myTable set Col1=@col1, Col2=@col2 where ID=@ID
if @@rowcount = 0
insert into myTable (Col1, Col2) values (@col1, @col2)
COMMIT TRANSACTION Upsert
This will incur additional performance cost, but will ensure data integrity.
And, as already suggested, MERGE should be used where available.
A: Your logic seems sound, but you might want to consider adding some code to prevent the insert if you had passed in a specific primary key.
Otherwise, if you're always doing an insert when the update didn't affect any records, what happens when someone deletes the record before your "UPSERT" runs? Now the record you were trying to update doesn't exist, so it'll create a record instead. That probably isn't the behavior you were looking for.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/13540",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "111"
} |
Q: .NET 3.5 SP1 and aspnet_client Crystal Reports I recently (a few days ago) installed .NET 3.5 SP1 and subsequently an aspnet_client folder with a bunch of Crystal Reports support code has been injected into my .net web apps.
Anybody else experienced this?
Am I correct in saying that this is a side effect of SP1?
What is this?
A: No, it is a side effect of Crystal Reports. If you don't need it, remove it from your computer; it is nothing but a headache. It is safe to delete the aspnet_client folder.
A: What do you need to remove? It keeps on adding that folder back to the project that I'm working on...
| {
"language": "en",
"url": "https://stackoverflow.com/questions/13545",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Productivity gains of using CASE tools for development I was using a CASE tool called MAGIC for a system I'm developing. I've never used this kind of tool before, and at first sight I liked it; a month later I had a lot of the application generated, and I felt very productive and ... I would say ... satisfied.
In some ways I felt uncomfortable, because there was no code and none of the things I was used to, but on the other hand I could speed up my development. The fact is that eventually I returned to using C# because I find it more flexible for development: I can do unit testing, use CVS, I have access to more resources, and basically I have "all the control". I felt that this tool didn't give me confidence, and I thought that later in the project I could not manage it due to its forced, established rules of development. Also, a lot of things like sending emails, using my own controls, and other things had their complications; it seemed that at some point it was not going to be as easy as I initially thought and as the product initially claims. This reminds me of a very nice article called "No Silver Bullet".
This CASE tool had its advantages, but on the other hand it doesn't have resources you can consult, and the license and certification are actually very expensive. For me another disappointing thing is that, because of its simplistic approach to development, I felt scared: first because of my inexperience with this kind of tool, and second because I thought that if I continued using it, it might turn into a complex monster that I could not manage later in the project.
I think it's good to use these kinds of solutions to speed things up, but I wonder: why aren't these programs as popular as VS.Net, J2EE, Ruby, Python, etc. if they claim to enhance productivity better than the tools I've mentioned?
A: We use a CASE tool at my current company for code generation and we are trying to move away from it.
The benefits that it brings - a graphical representation of the code making components 'easier' to pick up for new developers - are outweighed by the disadvantages in my opinion.
Those main disadvantages are:
*
*We cannot do automatic merges, making it close to impossible for parallel development on one component.
*Developers get dependant on the tool and 'forget' how to handcode.
A: Just a couple questions for you:
How much productivity do you gain compared to the control that you use?
How testable and reliant is the code you create?
How well can you implement a new pattern into your design?
I can't imagine that there is a CASE tool out there where I could write a test first and then use the tool to generate the code I need. I'd rather stick with ReSharper, which can easily do my mundane tasks while I retain full control of my code.
A: The project I'm on originally went w/ the Oracle Development Suite to put together a web application.
Over time (5+ years), customer requirements became more complex than originally anticipated, and the screens were not easily maintainable. So, the team informally decided to start doing custom (hand coded) screens in web PL/SQL, instead of generating them using the Oracle Development Suite CASE tools (Oracle Designer).
The Oracle Report Builder component of the Development Suite is still being used by the team, as it seems to "get the job done" in a timely fashion. In general, the developers using the Report Builder tool are not very comfortable coding.
In this case, it seems that the productivity aspect of such CASE tools is heavily dependent on customer requirements and developer skill sets/training/background.
A: Unfortunately the Magic tool doesn't generate code, and it also can't implement a design pattern. I don't have control over the code because, as I stated before, there is no code to modify. The bottom line is that it can speed up productivity in some ways, but it makes it impossible to use CVS or patterns, and I can't control all the details.
I agree with gary when he says "it seems that the productivity aspect of such CASE tools is heavily dependent on customer requirements and developer skill sets/training/background", but I also can't agree more with Klelky:
Those main disadvantages are:
1. We cannot do automatic merges, making it close to impossible for parallel development on one component.
2. Developers get dependent on the tool and 'forget' how to handcode.
Thanks
| {
"language": "en",
"url": "https://stackoverflow.com/questions/13550",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: user controls and asp.net mvc Here is one trivial question that I am not sure how to handle.
I need to display a list of categories on every page, and to be able to choose items from a specific category to be displayed. I use ASP.NET MVC, and have chosen to create a user control that will display the categories. My question is: what is the best approach to pass data to a user control? I already found some information in these blog posts:
http://weblogs.asp.net/stephenwalther/archive/2008/08/12/asp-net-mvc-tip-31-passing-data-to-master-pages-and-user-controls.aspx
http://blog.matthidinger.com/2008/02/21/ASPNETMVCUserControlsStartToFinish.aspx
I would like also to hear your opinion.
PS. I'd like to hear Jeff's opinion, especially because of his experience with UC's on Stackoverflow
A: I'm using MVC components, which replaced .ascx user controls in Preview 4.
Example:
http://blog.wekeroad.com/blog/asp-net-mvc-preview-4-componentcontroller-is-now-renderaction/
So, you call the component's action from the View, which then chooses a View to render. You can pass data in this call as well.
A: It is the MVC Futures project. I will probably try this:
http://forums.asp.net/t/1303328.aspx. I need to render a menu with categories.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/13561",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: mysqli or PDO - what are the pros and cons? In our place we're split between using mysqli and PDO for stuff like prepared statements and transaction support. Some projects use one, some the other. There is little realistic likelihood of us ever moving to another RDBMS.
I prefer PDO for the single reason that it allows named parameters for prepared statements, and as far as I am aware mysqli does not.
Are there any other pros and cons to choosing one over the other as a standard as we consolidate our projects to use just one approach?
A: PDO will make it a lot easier to scale if your site/web app gets really big, as you can easily set up master and slave connections to distribute the load across databases; plus, PHP is heading towards PDO as a standard.
PDO Info
Scaling a Web Application
A: In terms of execution speed MySQLi wins, but unless you have a good wrapper for MySQLi, its functions for dealing with prepared statements are awful.
There are still bugs in mine, but if anyone wants it, here it is.
So in short, if you are looking for a speed gain, then MySQLi; if you want ease of use, then PDO.
A: Moving an application from one database to another isn't very common, but sooner or later you may find yourself working on another project using a different RDBMS. If you're at home with PDO then there will at least be one thing less to learn at that point.
Apart from that I find the PDO API a little more intuitive, and it feels more truly object oriented. mysqli feels like it is just a procedural API that has been objectified, if you know what I mean. In short, I find PDO easier to work with, but that is of course subjective.
A: Personally I use PDO, but I think that is mainly a question of preference.
PDO has some features that help agains SQL injection (prepared statements), but if you are careful with your SQL you can achieve that with mysqli, too.
Moving to another database is not so much a reason to use PDO. As long as you don't use "special SQL features", you can switch from one DB to another. However as soon as you use for example "SELECT ... LIMIT 1" you can't go to MS-SQL where it is "SELECT TOP 1 ...". So this is problematic anyway.
A: Edited answer.
After having some experience with both of these APIs, I would say that there are 2 blocking-level features which render mysqli unusable with native prepared statements.
They were already mentioned in 2 excellent (yet way underrated) answers:
*
*Binding values to arbitrary number of placeholders
*Returning data as a mere array
(both also mentioned in this answer)
For some reason mysqli failed with both.
Nowadays it has got some improvement for the second one (get_result), but it works only on mysqlnd installations, which means you can't rely on this function in your scripts.
Yet it doesn't have bind-by-value even to this day.
So, there is only one choice: PDO
All the other reasons, such as
*
*named placeholders (this syntax sugar is way overrated)
*different databases support (nobody actually ever used it)
*fetch into object (just useless syntax sugar)
*speed difference (there is none)
aren't of any significant importance.
At the same time both these APIs lacks some real important features, like
*
*identifier placeholder
*placeholder for the complex data types to make dynamical binding less toilsome
*shorter application code.
So, to cover real-life needs, one has to create their own abstraction library based on one of these APIs, implementing manually parsed placeholders. In this case I'd prefer mysqli, for it has a lesser level of abstraction.
A: In my benchmark script, each method is tested 10000 times and the difference in the total time for each method is printed. You should run this on your own configuration; I'm sure results will vary!
These are my results:
*
*"SELECT NULL" -> PGO() faster by ~ 0.35 seconds
*"SHOW TABLE STATUS" -> mysqli() faster by ~ 2.3 seconds
*"SELECT * FROM users" -> mysqli() faster by ~ 33 seconds
Note: by using ->fetch_row() for mysqli, the column names are not added to the array, I didn't find a way to do that in PGO. But even if I use ->fetch_array() , mysqli is slightly slower but still faster than PGO (except for SELECT NULL).
A: One thing PDO has that MySQLi doesn't that I really like is PDO's ability to return a result as an object of a specified class type (e.g. $pdo->fetchObject('MyClass')). MySQLi's fetch_object() will only return an stdClass object.
A: I've started using PDO because the statement support is better, in my opinion. I'm using an ActiveRecord-esque data-access layer, and it's much easier to implement dynamically generated statements. MySQLi's parameter binding must be done in a single function/method call, so if you don't know until runtime how many parameters you'd like to bind, you're forced to use call_user_func_array() (I believe that's the right function name) for selects. And forget about simple dynamic result binding.
Most of all, I like PDO because it's a very reasonable level of abstraction. It's easy to use it in completely abstracted systems where you don't want to write SQL, but it also makes it easy to use a more optimized, pure query type of system, or to mix-and-match the two.
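To make that concrete, here is a small sketch of the two binding styles (hypothetical users table; $pdo and $mysqli are assumed to be open connections):
// PDO: any number of values can be bound at execute time
$stmt = $pdo->prepare('SELECT * FROM users WHERE id = :id AND status = :status');
$stmt->execute(array(':id' => $id, ':status' => $status));

// mysqli: everything must go through a single bind_param() call,
// so a variable number of parameters forces call_user_func_array()
$stmt = $mysqli->prepare('SELECT * FROM users WHERE id = ? AND status = ?');
$stmt->bind_param('is', $id, $status);
$stmt->execute();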
A: Well, you could argue with the object oriented aspect, the prepared statements, the fact that it becomes a standard, etc. But I know that most of the time, convincing somebody works better with a killer feature. So there it is:
A really nice thing with PDO is you can fetch the data, injecting it automatically in an object. If you don't want to use an ORM (cause it's a just a quick script) but you do like object mapping, it's REALLY cool :
class Student {
public $id;
public $first_name;
public $last_name;
public function getFullName() {
return $this->first_name.' '.$this->last_name
}
}
try
{
$dbh = new PDO("mysql:host=$hostname;dbname=school", $username, $password);
$stmt = $dbh->query("SELECT * FROM students");
/* MAGIC HAPPENS HERE */
$stmt->setFetchMode(PDO::FETCH_INTO, new Student);
foreach($stmt as $student)
{
echo $student->getFullName().'<br />';
}
$dbh = null;
}
catch(PDOException $e)
{
echo $e->getMessage();
}
A: PDO is the standard, it's what most developers will expect to use. mysqli was essentially a bespoke solution to a particular problem, but it has all the problems of the other DBMS-specific libraries. PDO is where all the hard work and clever thinking will go.
A: Here's something else to keep in mind: for now (PHP 5.2) the PDO library is buggy. It's full of strange bugs. For example: before storing a PDOStatement in a variable, the variable should be unset() to avoid a ton of bugs. Most of these have been fixed and will be released in early 2009 in PHP 5.3, which will probably have many other bugs. You should focus on using PDO for PHP 6.1 if you want a stable release and using PDO for PHP 5.3 if you want to help the community.
A: Another notable (good) difference about PDO is that its PDO::quote() method automatically adds the enclosing quotes, whereas mysqli::real_escape_string() (and similar functions) don't:
PDO::quote() places quotes around the input string (if required) and
escapes special characters within the input string, using a quoting
style appropriate to the underlying driver.
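A small sketch of the difference (assuming $pdo and $mysqli are open connections to MySQL):
// PDO returns the value already wrapped in quotes
echo $pdo->quote("O'Reilly");                              // 'O\'Reilly'
// mysqli only escapes; you add the quotes yourself
echo "'" . $mysqli->real_escape_string("O'Reilly") . "'"; // 'O\'Reilly'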
A: There's one thing to keep in mind.
Mysqli does not support the fetch_assoc() function, which would return columns with keys representing column names. Of course it's possible to write your own function to do that; it's not even very long, but I had a really hard time writing it (for non-believers: if it seems easy to you, try it on your own some time and don't cheat :) )
| {
"language": "en",
"url": "https://stackoverflow.com/questions/13569",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "342"
} |
Q: Determining how long the user is logged on to Windows The need arose, in our product, to determine how long the current user has been logged on to Windows (specifically, Vista). It seems there is no straightforward API function for this, and I couldn't find anything relevant in WMI (although I'm no expert with WMI, so I might have missed something).
Any ideas?
A: For people not familiar with WMI (like me), here are some links:
*
*MSDN page on using WMI from various languages: http://msdn.microsoft.com/en-us/library/aa393964(VS.85).aspx
*reference about Win32_Session: http://msdn.microsoft.com/en-us/library/aa394422(VS.85).aspx, but the objects in Win32_Session are of type Win32_LogonSession (http://msdn.microsoft.com/en-us/library/aa394189(VS.85).aspx), which has more interesting properties.
*WMI Explorer - a tool you can use to easily run queries like the one Michal posted.
And here's an example querying Win32_Session from VBS:
strComputer = "."
Set objWMIService = GetObject("winmgmts:" _
& "{impersonationLevel=impersonate}!\\" _
& strComputer & "\root\cimv2")
Set sessions = objWMIService.ExecQuery _
("select * from Win32_Session")
For Each objSession in sessions
Wscript.Echo objSession.StartTime
Next
It reports 6 sessions on my personal computer; perhaps you can filter by LogonType to only list the real ("interactive") users (a sketch of that filter follows). I couldn't see how you can select the session of the "current user".
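A minimal sketch of that filter, assuming (per the Win32_LogonSession documentation) that interactive logons use LogonType = 2, is to change the query in the VBS above to:
select * from Win32_LogonSession where LogonType = 2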
[edit] and here's a result from Google to your problem: http://forum.sysinternals.com/forum_posts.asp?TID=3755
A: In PowerShell with WMI, the following one-line command will return a list of objects showing each user and the time they logged on.
Get-WmiObject win32_networkloginprofile | ? {$_.lastlogon -ne $null} | % {[PSCustomObject]@{User=$_.caption; LastLogon=[Management.ManagementDateTimeConverter]::ToDateTime($_.lastlogon)}}
Explanation:
*
*Retrieve the list of logged in users from WMI
*Filter out entries with no recorded logon time (this effectively removes NT AUTHORITY\SYSTEM)
*Reformats the user and logon time for readability
References:
*
*The WMI object to use: https://forum.sysinternals.com/topic3755.html
*Formatting the date/time: https://blogs.msdn.microsoft.com/powershell/2009/08/12/get-systemuptime-and-working-with-the-wmi-date-format/
A: You can simply use CMD or PowerShell to query the users using the command:
C:\> query user
 USERNAME      SESSIONNAME    ID  STATE   IDLE TIME  LOGON TIME
 john          rdp-tcp#56      9  Active          .  5/3/2020 10:19 AM
 max           rdp-tcp#5      30  Active    5+23:42  9/4/2020 7:31 PM
 yee                          35  Disc         6:41  10/14/2020 6:37 PM
 mohammd       rdp-tcp#3      37  Active          .  10/15/2020 7:51 AM
A: In WMI do: "select * from Win32_Session"
There you'll have the "StartTime" value.
Hope that helps.
A: Using WMI, Win32_Session is a great start. It should also be pointed out that if you're on a network, you can use Win32_NetworkLoginProfile to get all sorts of info.
Set logins = objWMIService.ExecQuery _
("select * from Win32_NetworkLoginProfile")
For Each objSession in logins
Wscript.Echo objSession.LastLogon
Next
Other bits of info you can collect include the user name, last logoff, as well as various profile related stuff.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/13578",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: Interpreted languages - leveraging the compiled language behind the interpreter If there are any language designers out there (or people simply in the know), I'm curious about the methodology behind creating standard libraries for interpreted languages. Specifically, what seems to be the best approach? Defining standard functions/methods in the interpreted language, or performing the processing of those calls in the compiled language in which the interpreter is written?
What got me to thinking about this was the SO question about a stripslashes()-like function in Python. My first thought was "why not define your own and just call it when you need it", but it raised the question: is it preferable, for such a function, to let the interpreted language handle that overhead, or would it be better to write an extension and leverage the compiled language behind the interpreter?
A: The line between "interpreted" and "compiled" languages is really fuzzy these days. For example, the first thing Python does when it sees source code is compile it into a bytecode representation, essentially the same as what Java does when compiling class files. This is what *.pyc files contain. Then, the python runtime executes the bytecode without referring to the original source. Traditionally, a purely interpreted language would refer to the source code continuously when executing the program.
When building a language, it is a good approach to build a solid foundation on which you can implement the higher level functions. If you've got a solid, fast string handling system, then the language designer can (and should) implement something like stripslashes() outside the base runtime. This is done for at least a few reasons:
*
*The language designer can show that the language is flexible enough to handle that kind of task.
*The language designer actually writes real code in the language, which has tests and therefore shows that the foundation is solid.
*Other people can more easily read, borrow, and even change the higher level function without having to be able to build or even understand the language core.
Just because a language like Python compiles to bytecode and executes that doesn't mean it is slow. There's no reason why somebody couldn't write a Just-In-Time (JIT) compiler for Python, along the lines of what Java and .NET already do, to further increase the performance. In fact, IronPython compiles Python directly to .NET bytecode, which is then run using the .NET system including the JIT.
To answer your question directly, the only time a language designer would implement a function in the language behind the runtime (eg. C in the case of Python) would be to maximise the performance of that function. This is why modules such as the regular expression parser are written in C rather than native Python. On the other hand, a module like getopt.py is implemented in pure Python because it can all be done there and there's no benefit to using the corresponding C library.
A: There's also an increasing trend of reimplementing languages that are traditionally considered "interpreted" onto a platform like the JVM or CLR -- and then allowing easy access to "native" code for interoperability. So from Jython and JRuby, you can easily access Java code, and from IronPython and IronRuby, you can easily access .NET code.
In cases like these, the ability to "leverage the compiled language behind the interpreter" could be described as the primary motivator for the new implementation.
A: See the 'Papers' section at www.lua.org.
Especially The Implementation of Lua 5.0
Lua defines all standard functions in the underlying (ANSI C) code. I believe this is mostly for performance reasons. Recently, for example, the 'string.*' functions got an alternative implementation in pure Lua, which may prove vital for subprojects where Lua is run on top of a .NET or Java runtime (where C code cannot be used).
A: As long as you are using a portable API for the compiled code base like the ANSI C standard library or STL in C++, then taking advantage of those functions would keep you from reinventing the wheel and likely provide a smaller, faster interpreter. Lua takes this approach and it is definitely small and fast as compared to many others.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/13586",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: ASP.net AJAX Drag/Drop? I wonder if someone knows if there is a pre-made solution for this: I have a list on an ASP.net website, and I want the user to be able to re-sort the list through drag and drop. Additionally, I would love to have a second list onto which the user can drag items from the first list.
So far, I found two solutions:
*
*The ReorderList from the Ajax Control Toolkit, which requires a bit of manual work to make sure changes are persisted to the database, and which does not support drag/drop between lists.
*The RadGrid from Telerik which does all I want, but is priced far far beyond my Budget.
Does anyone else have some ideas, or at least some keywords/pointers for further investigation? Especially the drag/drop between two lists is something I am rather clueless about how to do in ASP.net.
Target Framework is 3.0 by the way.
A: This is just personal opinion, but the problem I find with ready-made controls in cases like this is that they are extremely bloated, because they're trying to fit everybody's purpose. If all you need is a sortable list then a simple Scriptaculous list or jQuery list with a quick WebMethod callback should fit the bill quite nicely, and you can obviously stick this into your own user control.
As I say, just my opinion, but I wouldn't go spending money on something that's going to add tons of overhead to my page, when I could spend (literally) 10 minutes writing one for free.
A: The Mootools sortables plugin does just that, and best of all, it's free ;)
http://demos.mootools.net/Sortables
A: I've evaluated the Telerik grid as well as Infragistics version. In the end we took an approach similar to what tags2k suggested. We just wrote our own javascript and called .Net PageMethods to do the server side work.
We found both of the "out of the box" solutions to be bloated. Unless you put paging in at around 20 records per page, they really stunk performance-wise.
A: Checkout Raj Kaimal's ajax control extender:
http://weblogs.asp.net/rajbk/Contents/Item/Display/517
It works like a charm.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/13589",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |