Question (string, length 25 to 7.47k) | Q_Score (int64, 0 to 1.24k) | Users Score (int64, -10 to 494) | Score (float64, -1 to 1.2) | Data Science and Machine Learning (int64, 0 to 1) | is_accepted (bool, 2 classes) | A_Id (int64, 39.3k to 72.5M) | Web Development (int64, 0 to 1) | ViewCount (int64, 15 to 1.37M) | Available Count (int64, 1 to 9) | System Administration and DevOps (int64, 0 to 1) | Networking and APIs (int64, 0 to 1) | Q_Id (int64, 39.1k to 48M) | Answer (string, length 16 to 5.07k) | Database and SQL (int64, 1 to 1) | GUI and Desktop Applications (int64, 0 to 1) | Python Basics and Environment (int64, 0 to 1) | Title (string, length 15 to 148) | AnswerCount (int64, 1 to 32) | Tags (string, length 6 to 90) | Other (int64, 0 to 1) | CreationDate (string, length 23) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Using pysqlite, how can a user-defined type be used as a value in a comparison, e.g.: “... WHERE columnName > userType”?
For example, I've defined a bool type with the requisite registration, converter, etc. Pysqlite/Sqlite responds as expected for INSERT and SELECT operations (bool 'True' stored as an integer 1 and returned as True).
But it fails when the bool is used in either “SELECT * from tasks WHERE display = True” or “... WHERE display = 'True'”. In the first case SQLite reports an error that there is no column named True, and in the second case no records are returned. The select works if a 1 is used in place of True. I seem to have the same problem when using pysqlite's own date and timestamp adapters.
I can work around this behavior for this and other user-types but that's not as fun. I'd like to know if using a user-defined type in a query is or is not possible so that I don't keep banging my head on this particular wall.
Thank you. | 1 | 0 | 0 | 0 | false | 610,761 | 0 | 671 | 1 | 0 | 0 | 609,516 | You probably have to cast it to the correct type. Try "SELECT * FROM tasks WHERE (display = CAST ('True' AS bool))". | 1 | 0 | 0 | pysqlite user types in select statement | 2 | python,sqlite,pysqlite | 0 | 2009-03-04T07:08:00.000 |
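Below is a minimal sketch of the adapter/converter setup plus a parameterized query, using the standard-library sqlite3 module (which shares pysqlite's API); the table and column names are just examples. Binding the bool as a query parameter lets the registered adapter turn it into 0/1, which sidesteps the literal-comparison problem without the CAST shown in the answer.

```python
import sqlite3

# Adapter: Python bool -> integer stored in SQLite
sqlite3.register_adapter(bool, int)
# Converter: raw bytes read from a column declared as "bool" -> Python bool
sqlite3.register_converter("bool", lambda raw: raw == b"1")

conn = sqlite3.connect(":memory:", detect_types=sqlite3.PARSE_DECLTYPES)
cur = conn.cursor()
cur.execute("CREATE TABLE tasks (name TEXT, display bool)")
cur.execute("INSERT INTO tasks VALUES (?, ?)", ("write report", True))
cur.execute("INSERT INTO tasks VALUES (?, ?)", ("old task", False))

# Passing the bool as a bound parameter lets the adapter convert it to 1/0,
# so the comparison works without embedding a literal in the SQL text.
cur.execute("SELECT name, display FROM tasks WHERE display = ?", (True,))
print(cur.fetchall())   # [('write report', True)]
```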
I often need to execute custom sql queries in django, and manually converting query results into objects every time is kinda painful. I wonder how fellow Slackers deal with this. Maybe someone had written some kind of a library to help dealing with custom SQL in Django? | 2 | 3 | 1.2 | 0 | true | 620,117 | 1 | 3,263 | 1 | 0 | 0 | 619,384 | Since the issue is "manually converting query results into objects," the simplest solution is often to see if your custom SQL can fit into an ORM .extra() call rather than being a pure-SQL query. Often it can, and then you let the ORM do all the work of building up objects as usual. | 1 | 0 | 0 | Tools to ease executing raw SQL with Django ORM | 3 | python,django,orm | 0 | 2009-03-06T16:11:00.000 |
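As a rough illustration of the .extra() suggestion above: the app, model, and column names below are invented for the example, and the SQL fragments would need to match your actual schema.

```python
# Hypothetical model and table names, purely for illustration.
from myapp.models import Article

# extra() lets the ORM build normal model instances while you inject custom
# SQL fragments; params are passed separately to avoid SQL injection.
articles = Article.objects.extra(
    select={"comment_count": "SELECT COUNT(*) FROM myapp_comment "
                             "WHERE myapp_comment.article_id = myapp_article.id"},
    where=["myapp_article.published >= %s"],
    params=["2009-01-01"],
)
for a in articles:
    print(a.title, a.comment_count)   # ORM objects plus the extra column
```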
When I have created a table with an auto-incrementing primary key, is there a way to obtain what the primary key would be (that is, do something like reserve the primary key) without actually committing?
I would like to place two operations inside a transaction however one of the operations will depend on what primary key was assigned in the previous operation. | 53 | -3 | -0.291313 | 0 | false | 620,784 | 0 | 19,996 | 1 | 0 | 0 | 620,610 | You can use multiple transactions and manage it within scope. | 1 | 0 | 0 | SQLAlchemy Obtain Primary Key With Autoincrement Before Commit | 2 | python,sql,sqlalchemy | 0 | 2009-03-06T22:07:00.000 |
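One commonly used alternative (not the approach described in the answer above) is to flush the session instead of committing it: the INSERT is issued inside the still-open transaction, so the generated key becomes available before commit. The sketch assumes a modern (1.4+) SQLAlchemy declarative setup with an in-memory SQLite database.

```python
from sqlalchemy import create_engine, Column, Integer, String
from sqlalchemy.orm import declarative_base, Session

Base = declarative_base()

class Item(Base):
    __tablename__ = "items"
    id = Column(Integer, primary_key=True)   # autoincremented by the DB
    name = Column(String(50))

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

with Session(engine) as session:
    item = Item(name="first")
    session.add(item)
    session.flush()           # INSERT happens here, transaction stays open
    print(item.id)            # primary key is now populated
    dependent_value = item.id # use it in the second operation...
    session.commit()          # ...and commit both together
```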
Our Python CMS stores some date values in a generic "attribute" table's varchar column. Some of these dates are later moved into a table with an actual date column. If the CMS user entered an invalid date, it doesn't get caught until the migration, when the query fails with an "Invalid string date" error.
How can I use Python to make sure that all dates put into our CMS are valid Oracle string date representations? | 1 | 1 | 0.066568 | 0 | false | 640,115 | 0 | 902 | 2 | 0 | 0 | 639,949 | The format of a date string that Oracle recognizes as a date is a configurable property of the database and as such it's considered bad form to rely on implicit conversions of strings to dates.
Typically Oracle dates format to 'DD-MON-YYYY' but you can't always rely on it being set that way.
Personally I would have the CMS write to this "attribute" table in a standard format like 'YYYY-MM-DD', and then whichever job moves that to a DATE column can explicitly cast the value with to_date( value, 'YYYY-MM-DD' ) and you won't have any problems. | 1 | 0 | 0 | Validating Oracle dates in Python | 3 | python,oracle,validation | 0 | 2009-03-12T18:41:00.000 |
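A small sketch of validating in the CMS before anything reaches the varchar column, assuming the 'YYYY-MM-DD' convention suggested above; the migration job can then rely on to_date(value, 'YYYY-MM-DD').

```python
from datetime import datetime

CANONICAL_FORMAT = "%Y-%m-%d"   # matches the 'YYYY-MM-DD' suggested above

def validate_date_string(value):
    """Return the string in canonical form, or raise ValueError."""
    parsed = datetime.strptime(value.strip(), CANONICAL_FORMAT)
    return parsed.strftime(CANONICAL_FORMAT)

validate_date_string("2009-03-12")   # returns '2009-03-12'
validate_date_string("2009-02-30")   # raises ValueError: day is out of range
```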
Our Python CMS stores some date values in a generic "attribute" table's varchar column. Some of these dates are later moved into a table with an actual date column. If the CMS user entered an invalid date, it doesn't get caught until the migration, when the query fails with an "Invalid string date" error.
How can I use Python to make sure that all dates put into our CMS are valid Oracle string date representations? | 1 | -1 | -0.066568 | 0 | false | 640,153 | 0 | 902 | 2 | 0 | 0 | 639,949 | Validate as early as possible. Why don't you store dates as dates in your Python CMS?
It is difficult to know what date a string like '03-04-2008' is. Is it 3 april 2008 or 4 march 2008? An American will say 4 march 2008 but a Dutch person will say 3 april 2008. | 1 | 0 | 0 | Validating Oracle dates in Python | 3 | python,oracle,validation | 0 | 2009-03-12T18:41:00.000 |
I am finding it difficult to use MySQL with Python in my windows system.
I am currently using Python 2.6. I have tried to compile MySQL-python-1.2.3b1 (which is supposed to work for Python 2.6 ?) source code using the provided setup scripts. The setup script runs and it doesn't report any error but it doesn't generate _mysql module.
I have also tried setting up MySQL for Python 2.5 without success. The problem with using 2.5 is that Python 2.5 is compiled with Visual Studio 2003 (I installed it using the provided binaries). I have Visual Studio 2005 on my Windows system, so setuptools fails to generate the _mysql module.
Any help ? | 102 | 1 | 0.012499 | 0 | false | 5,294,670 | 0 | 110,355 | 1 | 0 | 0 | 645,943 | Because I am running python in a (pylons/pyramid) virtualenv, I could not run the binary installers (helpfully) linked to previously.
I had problems following the steps with Willie's answer, but I determined that the problem is (probably) that I am running windows 7 x64 install, which puts the registry key for mysql in a slightly different location, specifically in my case (note: I am running version 5.5) in: "HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\MySQL AB\MySQL Server 5.5".
HOWEVER, "HKEY_LOCAL_MACHINE\" cannot be included in the path or it will fail.
Also, I had to do a restart between steps 3 and 4.
After working through all of this, IMO it would have been smarter to run the entire python dev environment from cygwin. | 1 | 0 | 1 | Integrating MySQL with Python in Windows | 16 | python,mysql,windows | 0 | 2009-03-14T13:53:00.000 |
OMG!
What an apparent problem... my django based scripts have locked my sqlite db...
Does anyone know how to fix? | 1 | 6 | 1.2 | 0 | true | 652,758 | 1 | 4,579 | 1 | 0 | 0 | 652,750 | Your database is locked because you have a transaction running somewhere.
Stop all your Django apps. If necessary, reboot.
It's also remotely possible that you crashed a SQLite client in the middle of a transaction and the file lock was left in place. | 1 | 0 | 0 | How to unlock an sqlite3 db? | 2 | python,django | 0 | 2009-03-17T01:33:00.000 |
I've added new models and pushed to our staging server, run syncdb to create their tables, and it locks up. It gets as far as 'Create table photos_photousertag' and postgres output shows the notice for creation of 'photos_photousertag_id_seq', but otherwise I get nothing on either side. I can't ctrl+c the syncdb process and I have no indication of what route to take from here. Has anyone else run into this? | 3 | 0 | 0 | 0 | false | 21,254,637 | 1 | 985 | 3 | 0 | 0 | 674,030 | Strange here too, but simply restarting the PostgreSQL service (or server) solved it. I'd tried manually pasting the table creation code in psql too, but that wasn't solving it either (well, no way it could if it was a lock thing) - so I just used the restart:
systemctl restart postgresql.service
that's on my Suse box.
Am not sure whether reloading the service/server might lift existing table locks too? | 1 | 0 | 0 | Django syncdb locking up on table creation | 3 | python,django,django-syncdb | 0 | 2009-03-23T16:16:00.000 |
I've added new models and pushed to our staging server, run syncdb to create their tables, and it locks up. It gets as far as 'Create table photos_photousertag' and postgres output shows the notice for creation of 'photos_photousertag_id_seq', but otherwise I get nothing on either side. I can't ctrl+c the syncdb process and I have no indication of what route to take from here. Has anyone else run into this? | 3 | 1 | 0.066568 | 0 | false | 10,438,955 | 1 | 985 | 3 | 0 | 0 | 674,030 | I just experienced this as well, and it turned out to just be a plain old lock on that particular table, unrelated to Django. Once that cleared the sync went through just fine.
Try querying the table that the sync is getting stuck on and make sure that's working correctly first. | 1 | 0 | 0 | Django syncdb locking up on table creation | 3 | python,django,django-syncdb | 0 | 2009-03-23T16:16:00.000 |
I've added new models and pushed to our staging server, run syncdb to create their tables, and it locks up. It gets as far as 'Create table photos_photousertag' and postgres output shows the notice for creation of 'photos_photousertag_id_seq', but otherwise I get nothing on either side. I can't ctrl+c the syncdb process and I have no indication of what route to take from here. Has anyone else run into this? | 3 | 1 | 0.066568 | 0 | false | 674,105 | 1 | 985 | 3 | 0 | 0 | 674,030 | We use postgres, and while we've not run into this particular issue, there are some steps you may find helpful in debugging:
a. What version of postgres and psycopg2 are you using? For that matter, what version of django?
b. Try running the syncdb command with the "--verbosity=2" option to show all output.
c. Find the SQL that django is generating by running the "manage.py sql " command. Run the CREATE TABLE statements for your new models in the postgres shell and see what develops.
d. Turn the error logging, statement logging, and server status logging on postgres way up to see if you can catch any particular messages.
In the past, we've usually found that either option b or option c points out the problem. | 1 | 0 | 0 | Django syncdb locking up on table creation | 3 | python,django,django-syncdb | 0 | 2009-03-23T16:16:00.000 |
Let's say I have two or more processes dealing with an SQLite database - a "player" process and many "editor" processes.
The "player" process reads the database and updates a view - in my case it would be a waveform being mixed to the soundcard depending on events stored in the database.
An "editor" process is any editor for that database: it changes the database constantly.
Now I want the player to reflect the editing changes quickly.
I know that SQLite supplies hooks to trace database changes within the same process, but there seems to be little info on how to do this with multiple processes.
I could poll the database constantly, compare records and trigger events, but that seems to be quite inefficient, especially when the database grows to a large size.
I am thinking about using a log table and triggers, but I wonder if there is a simpler method. | 15 | 2 | 0.049958 | 0 | false | 677,042 | 0 | 15,631 | 5 | 0 | 0 | 677,028 | Just open a socket between the two processes and have the editor tell all the players about the update. | 1 | 0 | 0 | How do I notify a process of an SQLite database change done in a different process? | 8 | python,sqlite,notifications | 0 | 2009-03-24T11:34:00.000 |
Let's say I have two or more processes dealing with an SQLite database - a "player" process and many "editor" processes.
The "player" process reads the database and updates a view - in my case it would be a waveform being mixed to the soundcard depending on events stored in the database.
An "editor" process is any editor for that database: it changes the database constantly.
Now I want the player to reflect the editing changes quickly.
I know that SQLite supplies hooks to trace database changes within the same process, but there seems to be little info on how to do this with multiple processes.
I could poll the database constantly, compare records and trigger events, but that seems to be quite inefficient, especially when the database grows to a large size.
I am thinking about using a log table and triggers, but I wonder if there is a simpler method. | 15 | 2 | 0.049958 | 0 | false | 677,215 | 0 | 15,631 | 5 | 0 | 0 | 677,028 | I think in that case, I would make a process to manage the database read/writes.
Each editor that wants to make some modifications to the database makes a call to this process, be it through IPC, the network, or whatever method.
This process can then notify the player of a change in the database. The player, when he wants to retrieve some data should make a request of the data it wants to the process managing the database. (Or the db process tells it what it needs, when it notifies of a change, so no request from the player needed)
Doing this will have the advantage of having only one process accessing the SQLite DB, so no locking or concurrency issues on the database. | 1 | 0 | 0 | How do I notify a process of an SQLite database change done in a different process? | 8 | python,sqlite,notifications | 0 | 2009-03-24T11:34:00.000 |
Let's say I have two or more processes dealing with an SQLite database - a "player" process and many "editor" processes.
The "player" process reads the database and updates a view - in my case it would be a waveform being mixed to the soundcard depending on events stored in the database.
An "editor" process is any editor for that database: it changes the database constantly.
Now I want the player to reflect the editing changes quickly.
I know that SQLite supplies hooks to trace database changes within the same process, but there seems to be little info on how to do this with multiple processes.
I could poll the database constantly, compare records and trigger events, but that seems to be quite inefficient, especially when the database grows to a large size.
I am thinking about using a log table and triggers, but I wonder if there is a simpler method. | 15 | 2 | 0.049958 | 0 | false | 677,087 | 0 | 15,631 | 5 | 0 | 0 | 677,028 | If it's on the same machine, the simplest way would be to have named pipe, "player" with blocking read() and "editors" putting a token in pipe whenever they modify DB. | 1 | 0 | 0 | How do I notify a process of an SQLite database change done in a different process? | 8 | python,sqlite,notifications | 0 | 2009-03-24T11:34:00.000 |
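A rough sketch of the named-pipe idea from the answer above (POSIX only, since os.mkfifo is not available on Windows; the pipe path and function names are made up).

```python
import os

FIFO_PATH = "/tmp/db_change_fifo"   # arbitrary example path

def editor_notify():
    """Called by an editor after it commits a change to the SQLite file."""
    # Opening for write blocks briefly until the player has the pipe open.
    with open(FIFO_PATH, "wb") as fifo:
        fifo.write(b"x")            # content doesn't matter, only the wake-up

def player_loop(reload_view):
    if not os.path.exists(FIFO_PATH):
        os.mkfifo(FIFO_PATH)
    while True:
        with open(FIFO_PATH, "rb") as fifo:
            fifo.read()             # blocks until some editor writes a token
        reload_view()               # re-query SQLite and refresh the waveform
```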
Let's say I have two or more processes dealing with an SQLite database - a "player" process and many "editor" processes.
The "player" process reads the database and updates a view - in my case it would be a waveform being mixed to the soundcard depending on events stored in the database.
An "editor" process is any editor for that database: it changes the database constantly.
Now I want the player to reflect the editing changes quickly.
I know that SQLite supplies hooks to trace database changes within the same process, but there seems to be little info on how to do this with multiple processes.
I could poll the database constantly, compare records and trigger events, but that seems to be quite inefficient, especially when the database grows to a large size.
I am thinking about using a log table and triggers, but I wonder if there is a simpler method. | 15 | 4 | 1.2 | 0 | true | 677,085 | 0 | 15,631 | 5 | 0 | 0 | 677,028 | A relational database is not your best first choice for this.
Why?
You want all of your editors to pass changes to your player.
Your player is -- effectively -- a server for all those editors. Your player needs multiple open connections. It must listen to all those connections for changes. It must display those changes.
If the changes are really large, you can move to a hybrid solution where the editors persist the changes and notify the player.
Either way, the editors must notify the player that they have a change. It's much, much simpler than the player trying to discover changes in a database.
A better design is a server which accepts messages from the editors, persists them, and notifies the player. This server is neither editor nor player, but merely a broker that assures that all the messages are handled. It accepts connections from editors and players. It manages the database.
There are two implementations. Server IS the player. Server is separate from the player. The design of server doesn't change -- only the protocol. When server is the player, then server calls the player objects directly. When server is separate from the player, then the server writes to the player's socket.
When the player is part of the server, player objects are invoked directly when a message is received from an editor. When the player is separate, a small reader collects the messages from a socket and calls the player objects.
The player connects to the server and then waits for a stream of information. This can either be input from the editors or references to data that the server persisted in the database.
If your message traffic is small enough so that network latency is not a problem, editor sends all the data to the server/player. If message traffic is too large, then the editor writes to a database and sends a message with just a database FK to the server/player.
Please clarify "If the editor crashes while notifying, the player is permanently messed up" in your question.
This sounds like a poor design for the player service. It can't be "permanently messed up" unless it's not getting state from the various editors. If it's getting state from the editors (but attempting to mirror that state, for example) then you should consider a design where the player simply gets state from the editor and cannot get "permanently messed up". | 1 | 0 | 0 | How do I notify a process of an SQLite database change done in a different process? | 8 | python,sqlite,notifications | 0 | 2009-03-24T11:34:00.000 |
Let's say I have two or more processes dealing with an SQLite database - a "player" process and many "editor" processes.
The "player" process reads the database and updates a view - in my case it would be a waveform being mixed to the soundcard depending on events stored in the database.
An "editor" process is any editor for that database: it changes the database constantly.
Now I want the player to reflect the editing changes quickly.
I know that SQLite supplies hooks to trace database changes within the same process, but there seems to be little info on how to do this with multiple processes.
I could poll the database constantly, compare records and trigger events, but that seems to be quite inefficient, especially when the database grows to a large size.
I am thinking about using a log table and triggers, but I wonder if there is a simpler method. | 15 | 1 | 0.024995 | 0 | false | 677,169 | 0 | 15,631 | 5 | 0 | 0 | 677,028 | How many editor processes (why processes?), and how often do you expect updates? This doesn't sound like a good design, especially not considering sqlite really isn't too happy about multiple concurrent accesses to the database.
If multiple processes make sense and you want persistence, it would probably be smarter to have the editors notify your player via sockets, pipes, shared memory or the like and then have the player (aka server process) do the persisting. | 1 | 0 | 0 | How do I notify a process of an SQLite database change done in a different process? | 8 | python,sqlite,notifications | 0 | 2009-03-24T11:34:00.000 |
does anybody know what is the equivalent to SQL "INSERT OR REPLACE" clause in SQLAlchemy and its SQL expression language?
Many thanks -- honzas | 13 | 5 | 1.2 | 0 | true | 709,452 | 0 | 20,634 | 1 | 0 | 0 | 708,762 | I don't think (correct me if I'm wrong) INSERT OR REPLACE is in any of the SQL standards; it's an SQLite-specific thing. There is MERGE, but that isn't supported by all dialects either. So it's not available in SQLAlchemy's general dialect.
The cleanest solution is to use Session, as suggested by M. Utku. You could also use SAVEPOINTs to save, try: an insert, except IntegrityError: then rollback and do an update instead. A third solution is to write your INSERT with an OUTER JOIN and a WHERE clause that filters on the rows with nulls. | 1 | 0 | 0 | SQLAlchemy - INSERT OR REPLACE equivalent | 4 | python,sqlalchemy | 0 | 2009-04-02T08:05:00.000 |
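A rough sketch of the "try the INSERT, fall back to an UPDATE" idea using a savepoint, assuming SQLAlchemy 1.4+ and an invented Setting model with a unique key column.

```python
from sqlalchemy.exc import IntegrityError

def upsert_setting(session, key, value):
    try:
        with session.begin_nested():              # SAVEPOINT
            session.add(Setting(key=key, value=value))
            session.flush()                       # attempt the INSERT now
    except IntegrityError:
        # Row already exists: the savepoint was rolled back, update instead.
        existing = session.query(Setting).filter_by(key=key).one()
        existing.value = value
    session.commit()
```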
Yes, this is as stupid a situation as it sounds like. Due to some extremely annoying hosting restrictions and unresponsive tech support, I have to use a CSV file as a database.
While I can use MySQL with PHP, I can't use it with the Python backend of my program because of install issues with the host. I can't use SQLite with PHP because of more install issues, but can use it as it's a Python builtin.
Anyways, now, the question: is it possible to update values SQL-style in a CSV database? Or should I keep on calling the help desk? | 4 | 0 | 0 | 0 | false | 1,396,578 | 0 | 2,433 | 9 | 0 | 0 | 712,510 | Disagreeing with the noble colleagues, I often use DBD::CSV from Perl. There are good reasons to do it. Foremost is data update made simple using a spreadsheet. As a bonus, since I am using SQL queries, the application can be easily upgraded to a real database engine. Bear in mind these were extremely small database in a single user application.
So rephrasing the question: is there a Python module equivalent to Perl's DBD::CSV? | 1 | 0 | 0 | Using CSV as a mutable database? | 12 | python,csv | 0 | 2009-04-03T04:02:00.000 |
Yes, this is as stupid a situation as it sounds like. Due to some extremely annoying hosting restrictions and unresponsive tech support, I have to use a CSV file as a database.
While I can use MySQL with PHP, I can't use it with the Python backend of my program because of install issues with the host. I can't use SQLite with PHP because of more install issues, but can use it as it's a Python builtin.
Anyways, now, the question: is it possible to update values SQL-style in a CSV database? Or should I keep on calling the help desk? | 4 | 0 | 0 | 0 | false | 713,531 | 0 | 2,433 | 9 | 0 | 0 | 712,510 | What about postgresql? I've found that quite nice to work with, and python supports it well.
But I really would look for another provider unless it's really not an option. | 1 | 0 | 0 | Using CSV as a mutable database? | 12 | python,csv | 0 | 2009-04-03T04:02:00.000 |
Yes, this is as stupid a situation as it sounds like. Due to some extremely annoying hosting restrictions and unresponsive tech support, I have to use a CSV file as a database.
While I can use MySQL with PHP, I can't use it with the Python backend of my program because of install issues with the host. I can't use SQLite with PHP because of more install issues, but can use it as it's a Python builtin.
Anyways, now, the question: is it possible to update values SQL-style in a CSV database? Or should I keep on calling the help desk? | 4 | 1 | 0.016665 | 0 | false | 713,396 | 0 | 2,433 | 9 | 0 | 0 | 712,510 | "Anyways, now, the question: is it possible to update values SQL-style in a CSV database?"
Technically, it's possible. However, it can be hard.
If both PHP and Python are writing the file, you'll need to use OS-level locking to assure that they don't overwrite each other. Each part of your system will have to lock the file, rewrite it from scratch with all the updates, and unlock the file.
This means that PHP and Python must load the entire file into memory before rewriting it.
There are a couple of ways to handle the OS locking.
Use the same file and actually use some OS lock module. Both processes have the file open at all times.
Write to a temp file and do a rename. This means each program must open and read the file for each transaction. Very safe and reliable. A little slow.
Or.
You can rearchitect it so that only Python writes the file. The front-end reads the file when it changes, and drops off little transaction files to create a work queue for Python. In this case, you don't have multiple writers -- you have one reader and one writer -- and life is much, much simpler. | 1 | 0 | 0 | Using CSV as a mutable database? | 12 | python,csv | 0 | 2009-04-03T04:02:00.000 |
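A small sketch of the "write to a temp file and do a rename" option, assuming Python 3 (os.replace) and invented field names; the rename is atomic on the same filesystem, so a reader never sees a half-written file.

```python
import csv
import os
import tempfile

def rewrite_csv(path, rows, fieldnames):
    """Rewrite the whole CSV atomically: write a temp file, then swap it in."""
    dir_name = os.path.dirname(os.path.abspath(path))
    fd, tmp_path = tempfile.mkstemp(dir=dir_name, suffix=".tmp")
    try:
        with os.fdopen(fd, "w", newline="") as tmp:
            writer = csv.DictWriter(tmp, fieldnames=fieldnames)
            writer.writeheader()
            writer.writerows(rows)
        os.replace(tmp_path, path)   # atomic swap into place
    except Exception:
        os.unlink(tmp_path)          # clean up the partial temp file
        raise
```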
Yes, this is as stupid a situation as it sounds like. Due to some extremely annoying hosting restrictions and unresponsive tech support, I have to use a CSV file as a database.
While I can use MySQL with PHP, I can't use it with the Python backend of my program because of install issues with the host. I can't use SQLite with PHP because of more install issues, but can use it as it's a Python builtin.
Anyways, now, the question: is it possible to update values SQL-style in a CSV database? Or should I keep on calling the help desk? | 4 | 0 | 0 | 0 | false | 712,567 | 0 | 2,433 | 9 | 0 | 0 | 712,510 | I agree. Tell them that 5 random strangers agree that you being forced into a corner to use CSV is absurd and unacceptable. | 1 | 0 | 0 | Using CSV as a mutable database? | 12 | python,csv | 0 | 2009-04-03T04:02:00.000 |
Yes, this is as stupid a situation as it sounds like. Due to some extremely annoying hosting restrictions and unresponsive tech support, I have to use a CSV file as a database.
While I can use MySQL with PHP, I can't use it with the Python backend of my program because of install issues with the host. I can't use SQLite with PHP because of more install issues, but can use it as it's a Python builtin.
Anyways, now, the question: is it possible to update values SQL-style in a CSV database? Or should I keep on calling the help desk? | 4 | 1 | 0.016665 | 0 | false | 712,568 | 0 | 2,433 | 9 | 0 | 0 | 712,510 | You can probably use sqlite3 for a more real database. It's hard to imagine hosting that won't allow you to install it as a Python module.
Don't even think of using CSV, your data will be corrupted and lost faster than you say "s#&t" | 1 | 0 | 0 | Using CSV as a mutable database? | 12 | python,csv | 0 | 2009-04-03T04:02:00.000 |
Yes, this is as stupid a situation as it sounds like. Due to some extremely annoying hosting restrictions and unresponsive tech support, I have to use a CSV file as a database.
While I can use MySQL with PHP, I can't use it with the Python backend of my program because of install issues with the host. I can't use SQLite with PHP because of more install issues, but can use it as it's a Python builtin.
Anyways, now, the question: is it possible to update values SQL-style in a CSV database? Or should I keep on calling the help desk? | 4 | 1 | 0.016665 | 0 | false | 712,522 | 0 | 2,433 | 9 | 0 | 0 | 712,510 | I couldn't imagine this ever being a good idea. The current mess I've inherited writes vital billing information to CSV and updates it after projects are complete. It runs horribly and thousands of dollars are missed a month. For the current restrictions that you have, I'd consider finding better hosting. | 1 | 0 | 0 | Using CSV as a mutable database? | 12 | python,csv | 0 | 2009-04-03T04:02:00.000 |
Yes, this is as stupid a situation as it sounds like. Due to some extremely annoying hosting restrictions and unresponsive tech support, I have to use a CSV file as a database.
While I can use MySQL with PHP, I can't use it with the Python backend of my program because of install issues with the host. I can't use SQLite with PHP because of more install issues, but can use it as it's a Python builtin.
Anyways, now, the question: is it possible to update values SQL-style in a CSV database? Or should I keep on calling the help desk? | 4 | 1 | 0.016665 | 0 | false | 712,515 | 0 | 2,433 | 9 | 0 | 0 | 712,510 | Keep calling on the help desk.
While you can use a CSV as a database, it's generally a bad idea. You would have to implement you own locking, searching, updating, and be very careful with how you write it out to make sure that it isn't erased in case of a power outage or other abnormal shutdown. There will be no transactions, no query language unless you write your own, etc. | 1 | 0 | 0 | Using CSV as a mutable database? | 12 | python,csv | 0 | 2009-04-03T04:02:00.000 |
Yes, this is as stupid a situation as it sounds like. Due to some extremely annoying hosting restrictions and unresponsive tech support, I have to use a CSV file as a database.
While I can use MySQL with PHP, I can't use it with the Python backend of my program because of install issues with the host. I can't use SQLite with PHP because of more install issues, but can use it as it's a Python builtin.
Anyways, now, the question: is it possible to update values SQL-style in a CSV database? Or should I keep on calling the help desk? | 4 | 0 | 0 | 0 | false | 712,512 | 0 | 2,433 | 9 | 0 | 0 | 712,510 | I'd keep calling help desk. You don't want to use CSV for data if it's relational at all. It's going to be nightmare. | 1 | 0 | 0 | Using CSV as a mutable database? | 12 | python,csv | 0 | 2009-04-03T04:02:00.000 |
Yes, this is as stupid a situation as it sounds like. Due to some extremely annoying hosting restrictions and unresponsive tech support, I have to use a CSV file as a database.
While I can use MySQL with PHP, I can't use it with the Python backend of my program because of install issues with the host. I can't use SQLite with PHP because of more install issues, but can use it as it's a Python builtin.
Anyways, now, the question: is it possible to update values SQL-style in a CSV database? Or should I keep on calling the help desk? | 4 | 0 | 0 | 0 | false | 712,974 | 0 | 2,433 | 9 | 0 | 0 | 712,510 | If I understand you correctly: you need to access the same database from both python and php, and you're screwed because you can only use mysql from php, and only sqlite from python?
Could you further explain this? Maybe you could use xml-rpc or plain http requests with xml/json/... to get the php program to communicate with the python program (or the other way around?), so that only one of them directly accesses the db.
If this is not the case, I'm not really sure what the problem is. | 1 | 0 | 0 | Using CSV as a mutable database? | 12 | python,csv | 0 | 2009-04-03T04:02:00.000 |
I am trying to do the schedule for the upcoming season for my simulation baseball team. I have an existing Postgresql database that contains the old schedule.
There are 648 rows in the database: 27 weeks of series for 24 teams. The problem is that the schedule has gotten predictable and allows teams to know in advance about weak parts of their schedule. What I want to do is take the existing schedule and randomize it. That way teams are still playing each other the proper number of times but not in the same order as before.
There is one rule that has been tripping me up: each team can only play one home and one road series PER week. I had been fooling around with SELECT statements based on ORDER BY RANDOM() but I haven't figured out how to make sure a team only has one home and one road series per week.
Now, I could do this in PHP (which is the language I am most comfortable with) but I am trying to make the shift to Python so I'm not sure how to get this done in Python. I know that Python doesn't seem to handle two dimensional arrays very well.
Any help would be greatly appreciated. | 0 | 2 | 0.132549 | 0 | false | 719,913 | 0 | 2,359 | 2 | 0 | 0 | 719,886 | Have you considered keeping your same "schedule", and just shuffling the teams? Generating a schedule where everyone plays each other the proper number of times is possible, but if you already have such a schedule then it's much easier to just shuffle the teams.
You could keep your current table, but replace each team in it with an id (0-23, or A-X, or whatever), then randomly generate into another table where you assign each team to each id (0 = TeamJoe, 1 = TeamBob, etc). Then when it's time to shuffle again next year, just regenerate that mapping table.
Not sure if this answers the question the way you want, but is probably what I would go with (and is actually how I do it on my fantasy football website). | 1 | 0 | 0 | Help Me Figure Out A Random Scheduling Algorithm using Python and PostgreSQL | 3 | python,postgresql | 0 | 2009-04-05T23:34:00.000 |
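A tiny sketch of that mapping approach, with placeholder team names and an assumed game structure: build a random permutation of the team ids once, then apply it to every game in the old schedule.

```python
import random

teams = ["Team%02d" % i for i in range(24)]     # placeholder team names
shuffled = teams[:]
random.shuffle(shuffled)
mapping = dict(zip(teams, shuffled))            # old team -> new team

def remap_game(game):
    # game is assumed to be a dict with 'home' and 'away' team names
    return {"home": mapping[game["home"]], "away": mapping[game["away"]]}
```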
I am trying to do the schedule for the upcoming season for my simulation baseball team. I have an existing Postgresql database that contains the old schedule.
There are 648 rows in the database: 27 weeks of series for 24 teams. The problem is that the schedule has gotten predictable and allows teams to know in advance about weak parts of their schedule. What I want to do is take the existing schedule and randomize it. That way teams are still playing each other the proper number of times but not in the same order as before.
There is one rule that has been tripping me up: each team can only play one home and one road series PER week. I had been fooling around with SELECT statements based on ORDER BY RANDOM() but I haven't figured out how to make sure a team only has one home and one road series per week.
Now, I could do this in PHP (which is the language I am most comfortable with) but I am trying to make the shift to Python so I'm not sure how to get this done in Python. I know that Python doesn't seem to handle two dimensional arrays very well.
Any help would be greatly appreciated. | 0 | 1 | 0.066568 | 0 | false | 719,909 | 0 | 2,359 | 2 | 0 | 0 | 719,886 | I'm not sure I fully understand the problem, but here is how I would do it:
1. create a complete list of matches that need to happen
2. iterate over the weeks, selecting which match needs to happen in this week.
You can use Python lists to represent the matches that still need to happen, and, for each week, the matches that are happening in this week.
In step 2, selecting a match to happen would work this way:
a. use random.choice to select a random match to happen.
b. determine which team has a home round for this match, using random.choice([1,2]) (if it could have been a home round for either team)
c. temporarily remove all matches that get blocked by this selection. a match is blocked if one of its teams has already two matches in the week, or if both teams already have a home match in this week, or if both teams already have a road match in this week.
d. when there are no available matches anymore for a week, proceed to the next week, readding all the matches that got blocked for the previous week. | 1 | 0 | 0 | Help Me Figure Out A Random Scheduling Algorithm using Python and PostgreSQL | 3 | python,postgresql | 0 | 2009-04-05T23:34:00.000 |
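A condensed, greedy sketch of the week-filling step described above. It simplifies the constraint handling, and as the answer notes, a real version would need to re-add blocked matches or backtrack when a week cannot be completed.

```python
import random

def fill_week(matches, series_per_team=2):
    """matches: list of (team_a, team_b) series still to be scheduled."""
    week, load, home, road = [], {}, set(), set()
    candidates = list(matches)
    random.shuffle(candidates)
    for a, b in candidates:
        if load.get(a, 0) >= series_per_team or load.get(b, 0) >= series_per_team:
            continue                      # a team already plays twice this week
        host = random.choice((a, b))      # pick who gets the home series
        visitor = b if host == a else a
        if host in home or visitor in road:
            continue                      # would give a second home/road series
        week.append((host, visitor))
        home.add(host); road.add(visitor)
        load[a] = load.get(a, 0) + 1; load[b] = load.get(b, 0) + 1
        matches.remove((a, b))
    return week                           # list of (home_team, road_team)
```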
I'm working with a 20 gig XML file that I would like to import into a SQL database (preferably MySQL, since that is what I am familiar with). This seems like it would be a common task, but after Googling around a bit I haven't been able to figure out how to do it. What is the best way to do this?
I know this ability is built into MySQL 6.0, but that is not an option right now because it is an alpha development release.
Also, if I have to do any scripting I would prefer to use Python because that's what I am most familiar with.
Thanks. | 5 | 0 | 0 | 0 | false | 723,931 | 0 | 12,197 | 1 | 0 | 0 | 723,757 | It may be a common task, but maybe 20GB isn't as common with MySQL as it is with SQL Server.
I've done this using SQL Server Integration Services and a bit of custom code. Whether you need either of those depends on what you need to do with 20GB of XML in a database. Is it going to be a single column of a single row of a table? One row per child element?
SQL Server has an XML datatype if you simply want to store the XML as XML. This type allows you to do queries using XQuery, allows you to create XML indexes over the XML, and allows the XML column to be "strongly-typed" by referring it to a set of XML schemas, which you store in the database. | 1 | 0 | 0 | Import XML into SQL database | 5 | python,sql,xml | 0 | 2009-04-07T00:39:00.000 |
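If the import ends up being done from Python anyway, a streaming parse keeps memory flat even at 20 GB. The sketch below assumes the MySQLdb driver and invented element, table, and column names.

```python
import xml.etree.ElementTree as ET
import MySQLdb

conn = MySQLdb.connect(host="localhost", user="app", passwd="secret", db="stage")
cur = conn.cursor()
batch = []

for event, elem in ET.iterparse("huge.xml", events=("end",)):
    if elem.tag == "record":                       # hypothetical element name
        batch.append((elem.findtext("id"), elem.findtext("name")))
        elem.clear()                               # free the element just consumed
    if len(batch) >= 1000:
        cur.executemany("INSERT INTO records (id, name) VALUES (%s, %s)", batch)
        conn.commit()
        batch = []

if batch:                                          # flush the final partial batch
    cur.executemany("INSERT INTO records (id, name) VALUES (%s, %s)", batch)
    conn.commit()
```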
I've written a web-app in Python using SQLite and it runs fine on my server at home (with apache and python 2.5.2). I'm now trying to upload it to my web host and their servers use python 2.2.3 without SQLite.
Anyone know of a way to use SQLite in python 2.2.3 e.g. a module that I can upload and import? I've tried butchering the module from newer versions of python, but they don't seem to be compatible.
Thanks,
Mike | 1 | 2 | 0.099668 | 0 | false | 737,617 | 0 | 1,012 | 2 | 0 | 0 | 737,511 | There is no out-of-the-box solution; you either have to backport the SQLlite module from Python 2.5 to Python 2.2 or ask your web hoster to upgrade to the latest Python version.
Python 2.2 is really ancient! At least for security reasons, they should upgrade (no more security fixes for 2.2 since May 30, 2003!).
Note that you can install several versions of Python in parallel. Just make sure you use "/usr/bin/python25" instead of "/usr/bin/python" in your scripts. To make sure all the old stuff is still working, after installing Python 2.5, you just have to fix the two symbolic links "/usr/bin/python" and "/usr/lib/python" which should now point to 2.5. Bend them back to 2.2 and you're good. | 1 | 0 | 0 | SQLite in Python 2.2.3 | 4 | python,sql,linux,sqlite,hosting | 0 | 2009-04-10T12:40:00.000 |
I've written a web-app in Python using SQLite and it runs fine on my server at home (with apache and python 2.5.2). I'm now trying to upload it to my web host and their servers use python 2.2.3 without SQLite.
Anyone know of a way to use SQLite in python 2.2.3 e.g. a module that I can upload and import? I've tried butchering the module from newer versions of python, but they don't seem to be compatible.
Thanks,
Mike | 1 | 0 | 0 | 0 | false | 4,066,757 | 0 | 1,012 | 2 | 0 | 0 | 737,511 | In case anyone comes across this question, the reason why neither pysqlite nor APSW are available for Python 2.2 is because Python 2.3 added the simplified GIL API. Prior to Python 2.3 it required a lot of code to keep track of the GIL. (The GIL is the lock used by Python to ensure correct behaviour while multi-threading.)
Doing a backport to 2.2 would require ripping out all the threading code. Trying to make it also be thread safe under 2.2 would be a nightmare. There was a reason they introduced the simplified GIL API!
I am still astonished at just how popular older Python versions are. APSW for Python 2.3 is still regularly downloaded. | 1 | 0 | 0 | SQLite in Python 2.2.3 | 4 | python,sql,linux,sqlite,hosting | 0 | 2009-04-10T12:40:00.000 |
I'm trying to use SQLAlchemy to implement a basic users-groups model where users can have multiple groups and groups can have multiple users.
When a group becomes empty, I want the group to be deleted, (along with other things associated with the group. Fortunately, SQLAlchemy's cascade works fine with these more simple situations).
The problem is that cascade='all, delete-orphan' doesn't do exactly what I want; instead of deleting the group when the group becomes empty, it deletes the group when any member leaves the group.
Adding triggers to the database works fine for deleting a group when it becomes empty, except that triggers seem to bypass SQLAlchemy's cascade processing so things associated with the group don't get deleted.
What is the best way to delete a group when all of its members leave and have this deletion cascade to related entities.
I understand that I could do this manually by finding every place in my code where a user can leave a group and then doing the same thing as the trigger however, I'm afraid that I would miss places in the code (and I'm lazy). | 9 | 0 | 0 | 0 | false | 776,246 | 0 | 3,255 | 3 | 0 | 0 | 740,630 | Could you post a sample of your table and mapper set up? It might be easier to spot what is going on.
Without seeing the code it is hard to tell, but perhaps there is something wrong with the direction of the relationship? | 1 | 0 | 0 | SQLAlchemy many-to-many orphan deletion | 4 | python,sqlalchemy | 0 | 2009-04-11T19:07:00.000 |
I'm trying to use SQLAlchemy to implement a basic users-groups model where users can have multiple groups and groups can have multiple users.
When a group becomes empty, I want the group to be deleted, (along with other things associated with the group. Fortunately, SQLAlchemy's cascade works fine with these more simple situations).
The problem is that cascade='all, delete-orphan' doesn't do exactly what I want; instead of deleting the group when the group becomes empty, it deletes the group when any member leaves the group.
Adding triggers to the database works fine for deleting a group when it becomes empty, except that triggers seem to bypass SQLAlchemy's cascade processing so things associated with the group don't get deleted.
What is the best way to delete a group when all of its members leave and have this deletion cascade to related entities.
I understand that I could do this manually by finding every place in my code where a user can leave a group and then doing the same thing as the trigger however, I'm afraid that I would miss places in the code (and I'm lazy). | 9 | 3 | 1.2 | 0 | true | 763,256 | 0 | 3,255 | 3 | 0 | 0 | 740,630 | The way I've generally handled this is to have a function on your user or group called leave_group. When you want a user to leave a group, you call that function, and you can add any side effects you want into there. In the long term, this makes it easier to add more and more side effects. (For example when you want to check that someone is allowed to leave a group). | 1 | 0 | 0 | SQLAlchemy many-to-many orphan deletion | 4 | python,sqlalchemy | 0 | 2009-04-11T19:07:00.000 |
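A minimal sketch of that single entry point; the Group/User models and their users relationship are assumed rather than taken from the question.

```python
def leave_group(session, user, group):
    """Single place where 'user leaves group' side effects live."""
    group.users.remove(user)
    if not group.users:
        # Group is now empty: delete it and let the group's own cascades
        # clean up whatever else hangs off it.
        session.delete(group)
    session.commit()
```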
I'm trying to use SQLAlchemy to implement a basic users-groups model where users can have multiple groups and groups can have multiple users.
When a group becomes empty, I want the group to be deleted, (along with other things associated with the group. Fortunately, SQLAlchemy's cascade works fine with these more simple situations).
The problem is that cascade='all, delete-orphan' doesn't do exactly what I want; instead of deleting the group when the group becomes empty, it deletes the group when any member leaves the group.
Adding triggers to the database works fine for deleting a group when it becomes empty, except that triggers seem to bypass SQLAlchemy's cascade processing so things associated with the group don't get deleted.
What is the best way to delete a group when all of its members leave and have this deletion cascade to related entities.
I understand that I could do this manually by finding every place in my code where a user can leave a group and then doing the same thing as the trigger however, I'm afraid that I would miss places in the code (and I'm lazy). | 9 | 3 | 0.148885 | 0 | false | 770,287 | 0 | 3,255 | 3 | 0 | 0 | 740,630 | I think you want cascade='save, update, merge, expunge, refresh, delete-orphan'. This will prevent the "delete" cascade (which you get from "all") but maintain the "delete-orphan", which is what you're looking for, I think (delete when there are no more parents). | 1 | 0 | 0 | SQLAlchemy many-to-many orphan deletion | 4 | python,sqlalchemy | 0 | 2009-04-11T19:07:00.000 |
Trying to understand S3...How do you limit access to a file you upload to S3? For example, from a web application, each user has files they can upload, but how do you limit access so only that user has access to that file? It seems like the query string authentication requires an expiration date and that won't work for me, is there another way to do this? | 10 | 1 | 0.049958 | 0 | false | 766,030 | 1 | 4,305 | 3 | 0 | 1 | 765,964 | You will have to build the whole access logic to S3 in your applications | 1 | 0 | 0 | Amazon S3 permissions | 4 | python,django,amazon-web-services,amazon-s3 | 0 | 2009-04-19T19:51:00.000 |
Trying to understand S3...How do you limit access to a file you upload to S3? For example, from a web application, each user has files they can upload, but how do you limit access so only that user has access to that file? It seems like the query string authentication requires an expiration date and that won't work for me, is there another way to do this? | 10 | 8 | 1 | 0 | false | 768,090 | 1 | 4,305 | 3 | 0 | 1 | 765,964 | Have the user hit your server
Have the server set up a query-string authentication with a short expiration (minutes, hours?)
Have your server redirect to #2 | 1 | 0 | 0 | Amazon S3 permissions | 4 | python,django,amazon-web-services,amazon-s3 | 0 | 2009-04-19T19:51:00.000 |
Trying to understand S3...How do you limit access to a file you upload to S3? For example, from a web application, each user has files they can upload, but how do you limit access so only that user has access to that file? It seems like the query string authentication requires an expiration date and that won't work for me, is there another way to do this? | 10 | 14 | 1 | 0 | false | 768,050 | 1 | 4,305 | 3 | 0 | 1 | 765,964 | There are various ways to control access to the S3 objects:
Use the query string auth - but as you noted this does require an expiration date. You could make it far in the future, which has been good enough for most things I have done.
Use the S3 ACLS - but this requires the user to have an AWS account and authenticate with AWS to access the S3 object. This is probably not what you are looking for.
You proxy the access to the S3 object through your application, which implements your access control logic. This will bring all the bandwidth through your box.
You can set up an EC2 instance with your proxy logic - this keeps the bandwidth closer to S3 and can reduce latency in certain situations. The difference between this and #3 could be minimal, but depends your particular situation. | 1 | 0 | 0 | Amazon S3 permissions | 4 | python,django,amazon-web-services,amazon-s3 | 0 | 2009-04-19T19:51:00.000 |
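For option 1 (query-string auth), the sketch below uses today's boto3 API, which postdates the question, so treat it purely as an illustration of handing out a short-lived URL per request after your app has checked the user's permissions.

```python
import boto3

s3 = boto3.client("s3")

def private_download_url(bucket, key, seconds=300):
    """Return a URL the current user can use for the next few minutes."""
    return s3.generate_presigned_url(
        "get_object",
        Params={"Bucket": bucket, "Key": key},
        ExpiresIn=seconds,
    )
```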
Given a time (eg. currently 4:24pm on Tuesday), I'd like to be able to select all businesses that are currently open out of a set of businesses.
I have the open and close times for every business for every day of the week
Let's assume a business can open/close only on 00, 15, 30, 45 minute marks of each hour
I'm assuming the same schedule each week.
I am most interested in being able to quickly look up a set of businesses that is open at a certain time, not the space requirements of the data.
Mind you, some may open at 11pm one day and close at 1am the next day.
Holidays don't matter - I will handle these separately
What's the most efficient way to store these open/close times such that with a single time/day-of-week tuple I can speedily figure out which businesses are open?
I am using Python, SOLR and mysql. I'd like to be able to do the querying in SOLR. But frankly, I'm open to any suggestions and alternatives. | 6 | 4 | 0.113791 | 0 | false | 775,354 | 0 | 1,489 | 5 | 0 | 0 | 775,161 | You say you're using SOLR, don't care about storage, and want the lookups to be fast. Then instead of storing open/close tuples, index an entry for every open block of time at the level of granularity you need (15 mins). For the encoding itself, you could use just cumulative hours:minutes.
For example, a store open from 4-5 pm on Monday, would have indexed values added for [40:00, 40:15, 40:30, 40:45]. A query at 4:24 pm on Monday would be normalized to 40:15, and therefore match that store document.
This may seem inefficient at first glance, but it's a relatively small constant penalty for indexing speed and space. And makes the searches as fast as possible. | 1 | 0 | 0 | Efficiently determining if a business is open or not based on store hours | 7 | python,mysql,performance,solr | 0 | 2009-04-21T23:48:00.000 |
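A small sketch of generating those per-15-minute index entries, counting hours cumulatively from the start of the week as described above.

```python
def open_blocks(start_minutes, end_minutes, step=15):
    """start/end are minutes since the start of the week (end exclusive)."""
    return ["%d:%02d" % divmod(m, 60)
            for m in range(start_minutes, end_minutes, step)]

# Monday 4-5 pm, with Sunday as day 0 -> Monday starts at hour 24
print(open_blocks((24 + 16) * 60, (24 + 17) * 60))
# ['40:00', '40:15', '40:30', '40:45']
```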
Given a time (eg. currently 4:24pm on Tuesday), I'd like to be able to select all businesses that are currently open out of a set of businesses.
I have the open and close times for every business for every day of the week
Let's assume a business can open/close only on 00, 15, 30, 45 minute marks of each hour
I'm assuming the same schedule each week.
I am most interested in being able to quickly look up a set of businesses that is open at a certain time, not the space requirements of the data.
Mind you, some may open at 11pm one day and close at 1am the next day.
Holidays don't matter - I will handle these separately
What's the most efficient way to store these open/close times such that with a single time/day-of-week tuple I can speedily figure out which businesses are open?
I am using Python, SOLR and mysql. I'd like to be able to do the querying in SOLR. But frankly, I'm open to any suggestions and alternatives. | 6 | 8 | 1.2 | 0 | true | 775,247 | 0 | 1,489 | 5 | 0 | 0 | 775,161 | If you are willing to just look at single week at a time, you can canonicalize all opening/closing times to be set numbers of minutes since the start of the week, say Sunday 0 hrs. For each store, you create a number of tuples of the form [startTime, endTime, storeId]. (For hours that spanned Sunday midnight, you'd have to create two tuples, one going to the end of the week, one starting at the beginning of the week). This set of tuples would be indexed (say, with a tree you would pre-process) on both startTime and endTime. The tuples shouldn't be that large: there are only ~10k minutes in a week, which can fit in 2 bytes. This structure would be graceful inside a MySQL table with appropriate indexes, and would be very resilient to constant insertions & deletions of records as information changed. Your query would simply be "select storeId where startTime <= time and endtime >= time", where time was the canonicalized minutes since midnight on sunday.
If information doesn't change very often, and you want to have lookups be very fast, you could solve every possible query up front and cache the results. For instance, there are only 672 quarter-hour periods in a week. With a list of businesses, each of which had a list of opening & closing times like Brandon Rhodes's solution, you could simply, iterate through every 15-minute period in a week, figure out who's open, then store the answer in a lookup table or in-memory list. | 1 | 0 | 0 | Efficiently determining if a business is open or not based on store hours | 7 | python,mysql,performance,solr | 0 | 2009-04-21T23:48:00.000 |
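A sketch of the canonicalization step, taking Sunday 00:00 as the start of the week and using illustrative table and column names for the range query.

```python
import datetime

def minutes_since_week_start(dt):
    """Minutes elapsed since Sunday 00:00 of the current week."""
    days_since_sunday = (dt.weekday() + 1) % 7   # Monday=0 ... Sunday=6 -> 0
    return days_since_sunday * 24 * 60 + dt.hour * 60 + dt.minute

now = minutes_since_week_start(datetime.datetime.now())
query = ("SELECT store_id FROM open_intervals "
         "WHERE start_minute <= %s AND end_minute >= %s")
# cursor.execute(query, (now, now))   # run against the indexed tuple table
```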
Given a time (eg. currently 4:24pm on Tuesday), I'd like to be able to select all businesses that are currently open out of a set of businesses.
I have the open and close times for every business for every day of the week
Let's assume a business can open/close only on 00, 15, 30, 45 minute marks of each hour
I'm assuming the same schedule each week.
I am most interested in being able to quickly look up a set of businesses that is open at a certain time, not the space requirements of the data.
Mind you, some may open at 11pm one day and close at 1am the next day.
Holidays don't matter - I will handle these separately
What's the most efficient way to store these open/close times such that with a single time/day-of-week tuple I can speedily figure out which businesses are open?
I am using Python, SOLR and mysql. I'd like to be able to do the querying in SOLR. But frankly, I'm open to any suggestions and alternatives. | 6 | 3 | 0.085505 | 0 | false | 775,175 | 0 | 1,489 | 5 | 0 | 0 | 775,161 | Sorry I don't have an easy answer, but I can tell you that as the manager of a development team at a company in the late 90's we were tasked with solving this very problem and it was HARD.
It's not the weekly hours that's tough, that can be done with a relatively small bitmask (168 bits = 1 per hour of the week), the trick is the businesses which are closed every alternating Tuesday.
Starting with a bitmask then moving on to an exceptions field is the best solution I've ever seen. | 1 | 0 | 0 | Efficiently determining if a business is open or not based on store hours | 7 | python,mysql,performance,solr | 0 | 2009-04-21T23:48:00.000 |
Given a time (eg. currently 4:24pm on Tuesday), I'd like to be able to select all businesses that are currently open out of a set of businesses.
I have the open and close times for every business for every day of the week
Let's assume a business can open/close only on 00, 15, 30, 45 minute marks of each hour
I'm assuming the same schedule each week.
I am most interested in being able to quickly look up a set of businesses that is open at a certain time, not the space requirements of the data.
Mind you, some may open at 11pm one day and close at 1am the next day.
Holidays don't matter - I will handle these separately
What's the most efficient way to store these open/close times such that with a single time/day-of-week tuple I can speedily figure out which businesses are open?
I am using Python, SOLR and mysql. I'd like to be able to do the querying in SOLR. But frankly, I'm open to any suggestions and alternatives. | 6 | 0 | 0 | 0 | false | 775,459 | 0 | 1,489 | 5 | 0 | 0 | 775,161 | Have you looked at how many unique open/close time combinations there are? If there are not that many, make a reference table of the unique combinations and store the index of the appropriate entry against each business. Then you only have to search the reference table and then find the business with those indices. | 1 | 0 | 0 | Efficiently determining if a business is open or not based on store hours | 7 | python,mysql,performance,solr | 0 | 2009-04-21T23:48:00.000 |
Given a time (eg. currently 4:24pm on Tuesday), I'd like to be able to select all businesses that are currently open out of a set of businesses.
I have the open and close times for every business for every day of the week
Let's assume a business can open/close only on 00, 15, 30, 45 minute marks of each hour
I'm assuming the same schedule each week.
I am most interested in being able to quickly look up a set of businesses that is open at a certain time, not the space requirements of the data.
Mind you, some my open at 11pm one day and close 1am the next day.
Holidays don't matter - I will handle these separately
What's the most efficient way to store these open/close times such that with a single time/day-of-week tuple I can speedily figure out which businesses are open?
I am using Python, SOLR and mysql. I'd like to be able to do the querying in SOLR. But frankly, I'm open to any suggestions and alternatives. | 6 | 1 | 0.028564 | 0 | false | 777,443 | 0 | 1,489 | 5 | 0 | 0 | 775,161 | In your Solr index, instead of indexing each business as one document with hours, index every "retail session" for every business during the course of a week.
For example if Joe's coffee is open Mon-Sat 6am-9pm and closed on Sunday, you would index six distinct documents, each with two indexed fields, "open" and "close". If your units are 15 minute intervals, then the values can range from 0 to 7*24*4. Assuming you have a unique ID for each business, store this in each document so you can map the sessions to businesses.
Then you can simply do a range search in Solr:
open:[* TO N] AND close:[N+1 TO *]
where N is computed to the Nth 15 minute interval that the current time falls into. For examples if it's 10:10AM on Wednesday, your query would be:
open:[* TO 112] AND close:[113 TO *]
aka "find a session that starts at or before 10:00am Wed and ends at or after 10:15am Wed"
If you want to include other criteria in your search, such as location or products, you will need to index this with each session document as well. This is a bit redundant, but if your index is not huge, it shouldn't be a problem. | 1 | 0 | 0 | Efficiently determining if a business is open or not based on store hours | 7 | python,mysql,performance,solr | 0 | 2009-04-21T23:48:00.000 |
I have a legacy database with an integer set as a primary key. It was initially managed manually, but since we are wanting to move to django, the admin tool seemed to be the right place to start. I created the model and am trying to set the primary key to be an autofield. It doesn't seem to be remembering the old id in updates, and it doesn't create new id's on insert. What am I doing wrong? | 1 | 2 | 1.2 | 0 | true | 778,346 | 1 | 454 | 1 | 0 | 0 | 777,778 | The DB is responsible for managing the value of the ID. If you want to use AutoField, you have to change the column in the DB to use that. Django is not responsible for managing the generated ID | 1 | 0 | 0 | How do I set up a model to use an AutoField with a legacy database in Python? | 1 | python,django,oracle,autofield | 0 | 2009-04-22T15:18:00.000 |
Python communicating with Excel... I need to find a way so that I can find/search a row for given column data. Now, I'm scanning entire rows one by one... It would be useful if there were functions like FIND/SEARCH/REPLACE. I don't see these features in the pyExcelerator or xlrd modules. I don't want to use win32com modules, because that makes my tool Windows-based!
FIND/SEARCH Excel rows through Python.... Any idea, anybody? | 2 | 0 | 0 | 0 | false | 779,599 | 0 | 7,840 | 3 | 0 | 0 | 778,093 | With pyExcelerator you can do a simple optimization by finding the maximum row and column indices first (and storing them), so that you iterate over (row, i) for i in range(maxcol+1) instead of iterating over all the dictionary keys. That may be the best you get, unless you want to go through and build up a dictionary mapping value to set of keys.
Incidentally, if you're using pyExcelerator to write spreadsheets, be aware that it has some bugs. I've encountered one involving writing integers between 2**30 and 2**32 (or thereabouts). The original author is apparently hard to contact these days, so xlwt is a fork that fixes the (known) bugs. For writing spreadsheets, it's a drop-in replacement for pyExcelerator; you could do import xlwt as pyExcelerator and change nothing else. It doesn't read spreadsheets, though. | 1 | 0 | 0 | pyExcelerator or xlrd - How to FIND/SEARCH a row for the given few column data? | 4 | python,excel,search,pyexcelerator,xlrd | 0 | 2009-04-22T16:23:00.000
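For the read side, a rough xlrd sketch of the row scan the answer describes; the workbook path and the value being searched for are just placeholders:
import xlrd

def find_rows(path, wanted, sheet_index=0):
    # Scan every cell; return the indices of rows that contain `wanted`.
    book = xlrd.open_workbook(path)
    sheet = book.sheet_by_index(sheet_index)
    hits = []
    for r in range(sheet.nrows):
        for c in range(sheet.ncols):
            if sheet.cell_value(r, c) == wanted:
                hits.append(r)
                break
    return hits

print(find_rows('data.xls', 'some value'))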
Python communicating with Excel... I need to find a way so that I can find/search a row for given column data. Now, I'm scanning entire rows one by one... It would be useful if there were functions like FIND/SEARCH/REPLACE. I don't see these features in the pyExcelerator or xlrd modules. I don't want to use win32com modules, because that makes my tool Windows-based!
FIND/SEARCH Excel rows through Python.... Any idea, anybody? | 2 | 2 | 0.099668 | 0 | false | 779,030 | 0 | 7,840 | 3 | 0 | 0 | 778,093 | You can't. Those tools don't offer search capabilities. You must iterate over the data in a loop and search yourself. Sorry. | 1 | 0 | 0 | pyExcelerator or xlrd - How to FIND/SEARCH a row for the given few column data? | 4 | python,excel,search,pyexcelerator,xlrd | 0 | 2009-04-22T16:23:00.000 |
Python communicating with Excel... I need to find a way so that I can find/search a row for given column data. Now, I'm scanning entire rows one by one... It would be useful if there were functions like FIND/SEARCH/REPLACE. I don't see these features in the pyExcelerator or xlrd modules. I don't want to use win32com modules, because that makes my tool Windows-based!
FIND/SEARCH Excel rows through Python.... Any idea, anybody? | 2 | 2 | 0.099668 | 0 | false | 778,282 | 0 | 7,840 | 3 | 0 | 0 | 778,093 | "Now, I'm scanning entire rows one by one"
What's wrong with that? "search" -- in a spreadsheet context -- is really complicated. Search values? Search formulas? Search down rows then across columns? Search specific columns only? Search specific rows only?
A spreadsheet isn't simple text -- simple text processing design patterns don't apply.
Spreadsheet search is hard and you're doing it correctly. There's nothing better because it's hard. | 1 | 0 | 0 | pyExcelerator or xlrd - How to FIND/SEARCH a row for the given few column data? | 4 | python,excel,search,pyexcelerator,xlrd | 0 | 2009-04-22T16:23:00.000 |
Currently I have tables like: Pages, Groups, GroupPage, Users, UserGroup. With pickled sets I can implement the same thing with only 3 tables: Pages, Groups, Users.
set seems a natural choice for implementing an ACL, as group and permission related operations can be expressed very naturally with sets. If I store the allow/deny lists as pickled sets, it can eliminate a few intermediate tables for many-to-many relationships and allow permission editing without many database operations.
If human readability is important, I can always use json instead of cPickle for serialization and use set when manipulating the permission list in Python. It is highly unlikely that permissions will ever be edited directly using SQL. So is it a good design idea?
We're using SQLAlchemy as ORM, so it's likely to be implemented with PickleType column. I'm not planning to store the whole pickled "resource" recordset, only the set object made out of "resource" primary key values. | 1 | 2 | 0.099668 | 0 | false | 791,425 | 1 | 2,877 | 2 | 0 | 0 | 790,613 | Me, I'd stick with keeping persistent info in the relational DB in a form that's independent from a specific programming language used to access it -- much as I love Python (and that's a lot), some day I may want to access that info from some other language, and if I went for Python-specific formats... boy would I ever regret it... | 1 | 0 | 0 | Using Python set type to implement ACL | 4 | python,set,acl,pickle | 0 | 2009-04-26T10:37:00.000 |
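For reference, a minimal sketch of the PickleType idea from the question, written against SQLAlchemy Core with invented table and column names; it simply stores a Python set of page IDs in one column:
from sqlalchemy import (create_engine, MetaData, Table, Column,
                        Integer, String, PickleType)

engine = create_engine('sqlite://')            # throwaway in-memory DB for the sketch
metadata = MetaData()

groups = Table('groups', metadata,
    Column('id', Integer, primary_key=True),
    Column('name', String(50)),
    Column('allowed_pages', PickleType),       # stores a pickled Python set of page ids
)
metadata.create_all(engine)

conn = engine.connect()
conn.execute(groups.insert().values(id=1, name='editors',
                                    allowed_pages=set([10, 11, 12])))

allowed = conn.execute(groups.select()).fetchone().allowed_pages
print(10 in allowed)                           # plain Python set membership test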
Currently I have tables like: Pages, Groups, GroupPage, Users, UserGroup. With pickled sets I can implement the same thing with only 3 tables: Pages, Groups, Users.
set seems a natural choice for implementing an ACL, as group and permission related operations can be expressed very naturally with sets. If I store the allow/deny lists as pickled sets, it can eliminate a few intermediate tables for many-to-many relationships and allow permission editing without many database operations.
If human readability is important, I can always use json instead of cPickle for serialization and use set when manipulating the permission list in Python. It is highly unlikely that permissions will ever be edited directly using SQL. So is it a good design idea?
We're using SQLAlchemy as ORM, so it's likely to be implemented with PickleType column. I'm not planning to store the whole pickled "resource" recordset, only the set object made out of "resource" primary key values. | 1 | 2 | 0.099668 | 0 | false | 790,662 | 1 | 2,877 | 2 | 0 | 0 | 790,613 | You need to consider what it is that a DBMS provides you with, and which of those features you'll need to reimplement.
The issue of concurrency is a big one. There are a few race conditions to be considered (such as multiple writes taking place in different threads and processes and overwriting the new data), performance issues (write policy? What if your process crashes and you lose your data?), memory issues (how big are your permission sets? Will it all fit in RAM?).
If you have enough memory and you don't have to worry about concurrency, then your solution might be a good one. Otherwise I'd stick with a database -- it takes care of those problems for you, and lots of work has gone into databases to make sure that they always take your data from one consistent state to another. | 1 | 0 | 0 | Using Python set type to implement ACL | 4 | python,set,acl,pickle | 0 | 2009-04-26T10:37:00.000
I have downloaded & installed the latest Python InformixDB package, but when I try to import it from the shell, I am getting the following error in the form of a Windows dialog box!
"A procedure entry point sqli_describe_input_stmt could not be located in the dynamic link isqlit09a.dll"
Any ideas what's happening?
Platform: Windows Vista (Biz Edition), Python 2.5. | 1 | 0 | 0 | 0 | false | 823,474 | 0 | 435 | 2 | 0 | 0 | 801,515 | Does another way of connecting to the database work?
Can you use (configure in control panel) ODBC? If ODBC works then you can use Python win32 extensions (ActiveState distribution comes with it) and there is ODBC support. You can also use Jython which can work with ODBC via JDBC-ODBC bridge or with Informix JDBC driver. | 1 | 0 | 0 | Why Python informixdb package is throwing an error! | 2 | python,informix | 0 | 2009-04-29T09:01:00.000 |
I have downloaded & installed the latest Python InformixDB package, but when I try to import it from the shell, I am getting the following error in the form of a Windows dialog box!
"A procedure entry point sqli_describe_input_stmt could not be located in the dynamic link isqlit09a.dll"
Any ideas what's happening?
Platform: Windows Vista (Biz Edition), Python 2.5. | 1 | 1 | 0.099668 | 0 | false | 803,958 | 0 | 435 | 2 | 0 | 0 | 801,515 | Which version of IBM Informix Connect (I-Connect) or IBM Informix ClientSDK (CSDK) are you using? The 'describe input' function is a more recent addition, but it is likely that you have it.
Have you been able to connect to any Informix DBMS from the command shell? If not, then the suspicion must be that you don't have the correct environment. You would probably need to specify $INFORMIXDIR (or %INFORMIXDIR% - I'm going to omit '$' and '%' sigils from here on); you would need to set INFORMIXSERVER to connect successfully; you would need to have the correct directory (probably INFORMIXDIR/bin on Windows; on Unix, it would be INFORMIXDIR/lib and INFORMIXDIR/lib/esql or INFORMIXDIR/lib/odbc) on your PATH. | 1 | 0 | 0 | Why Python informixdb package is throwing an error! | 2 | python,informix | 0 | 2009-04-29T09:01:00.000 |
I'm developing software using the Google App Engine.
I have some considerations about the optimal design regarding the following issue: I need to create and save snapshots of some entities at regular intervals.
In the conventional relational db world, I would create db jobs which would insert new summary records.
For example, a job would insert a record for every active user that would contain his current score to the "userrank" table, say, every hour.
I'd like to know what's the best method to achieve this in Google App Engine. I know that there is the Cron service, but does it allow us to execute jobs which will insert/update thousands of records? | 1 | 3 | 0.197375 | 0 | false | 815,113 | 1 | 1,473 | 1 | 1 | 0 | 814,896 | I think you'll find that snapshotting every user's state every hour isn't something that will scale well no matter what your framework. A more ordinary environment will disguise this by letting you have longer running tasks, but you'll still reach the point where it's not practical to take a snapshot of every user's data, every hour.
My suggestion would be this: Add a 'last snapshot' field, and subclass the put() function of your model (assuming you're using Python; the same is possible in Java, but I don't know the syntax), such that whenever you update a record, it checks if it's been more than an hour since the last snapshot, and if so, creates and writes a snapshot record.
In order to prevent concurrent updates creating two identical snapshots, you'll want to give the snapshots a key name derived from the time at which the snapshot was taken. That way, if two concurrent updates try to write a snapshot, one will harmlessly overwrite the other.
To get the snapshot for a given hour, simply query for the oldest snapshot newer than the requested period. As an added bonus, since inactive records aren't snapshotted, you're saving a lot of space, too. | 1 | 0 | 0 | Google App Engine - design considerations about cron tasks | 3 | python,database,google-app-engine,cron | 0 | 2009-05-02T13:54:00.000 |
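A rough sketch of that put()-override idea using the google.appengine.ext.db API of the time; the model and property names are invented:
from datetime import datetime, timedelta
from google.appengine.ext import db

class ScoreSnapshot(db.Model):
    user_id = db.StringProperty()
    score = db.IntegerProperty()
    taken_at = db.DateTimeProperty(auto_now_add=True)

class UserScore(db.Model):
    user_id = db.StringProperty()            # illustrative identifier
    score = db.IntegerProperty(default=0)
    last_snapshot = db.DateTimeProperty()

    def put(self):
        now = datetime.utcnow()
        if self.last_snapshot is None or now - self.last_snapshot > timedelta(hours=1):
            # Key name derived from the hour: two concurrent updates writing the
            # same hour's snapshot just overwrite each other harmlessly.
            ScoreSnapshot(key_name='%s-%s' % (self.user_id, now.strftime('%Y%m%d%H')),
                          user_id=self.user_id, score=self.score).put()
            self.last_snapshot = now
        return db.Model.put(self)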
So, I've been tossing this idea around in my head for a while now. At its core, it's mostly a project for me to learn programming. The idea is that, I have a large set of data, my music collection. There are quite a few datasets that my music has. Format, artist, title, album, genre, length, year of release, filename, directory, just to name a few. Ideally, I'd like to create a database that has all of this data stored in it, and in the future, create a web interface on top of it that I can manage my music collection with. So, my questions are as follows:
Does this sound like a good project to begin building databases from scratch with?
What language would you recommend I start with? I know tidbits of PHP, but I would imagine it would be awful to index data in a filesystem with. Python was the other language I was thinking of, considering it's the language most people consider as a beginner language.
If you were going to implement this kind of system (the web interface) in your home (if you had PCs connected to a couple of stereos in your home and this was the software connected), what kind of features would you want to see?
My idea for building up the indexing script would be as follows (a rough sketch of this pass appears after the answer below):
Get it to populate the database with only the filenames
From the extension of the filename, determine format
Get file size
Using the filenames in the database as a reference, pull ID3 or other applicable metadata (artist, track name, album, etc)
Check if all files still exist on disk, and if not, flag the file as unavailable
Another script would go in later and check if the files are back; if they are not, it will remove the row from the database. | 2 | 1 | 0.049958 | 0 | false | 818,763 | 0 | 2,085 | 1 | 0 | 0 | 818,752 | Working on something you care about is the best way to learn programming, so I think this is a great idea.
I also recommend Python as a place to start. Have fun! | 1 | 0 | 0 | Web-Based Music Library (programming concept) | 4 | php,python,mysql | 0 | 2009-05-04T04:10:00.000 |
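A rough sketch of the indexing pass described in the question, using only the standard library (os.walk plus sqlite3); pulling real ID3 tags would need a third-party tagging library, so that step is omitted here, and the music path is a placeholder:
import os
import sqlite3

AUDIO_EXTS = ('.mp3', '.flac', '.ogg', '.m4a')

def index_library(root, db_path='library.db'):
    conn = sqlite3.connect(db_path)
    conn.execute("""CREATE TABLE IF NOT EXISTS tracks (
                        path TEXT PRIMARY KEY,
                        fmt TEXT,
                        size INTEGER,
                        available INTEGER DEFAULT 1)""")
    for dirpath, dirnames, filenames in os.walk(root):
        for name in filenames:
            ext = os.path.splitext(name)[1].lower()
            if ext not in AUDIO_EXTS:
                continue
            path = os.path.join(dirpath, name)
            conn.execute("INSERT OR REPLACE INTO tracks (path, fmt, size) VALUES (?, ?, ?)",
                         (path, ext.lstrip('.'), os.path.getsize(path)))
    # Flag rows whose file has disappeared from disk.
    for (path,) in conn.execute("SELECT path FROM tracks").fetchall():
        if not os.path.exists(path):
            conn.execute("UPDATE tracks SET available = 0 WHERE path = ?", (path,))
    conn.commit()
    conn.close()

index_library('/path/to/music')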
In my Google App Engine application, whenever a user purchases a number of contracts, these events are executed (simplified for clarity):
user.cash is decreased
user.contracts is increased by the number
contracts.current_price is updated.
market.no_of_transactions is increased by 1.
In an RDBMS, these would be placed within the same transaction. My understanding is that the Google datastore does not allow entities of more than one model to be in the same transaction.
what is the correct approach to this issue? how can I ensure that if a write fails, all preceding writes are rolled back?
edit: I have obviously missed entity groups. Now I'd appreciate some further information regarding how they are used. Another point to clarify: Google says "Only use entity groups when they are needed for transactions. For other relationships between entities, use ReferenceProperty properties and Key values, which can be used in queries". Does it mean I have to define both a reference property (since I need to query them) and a parent-child relationship (for transactions)?
edit 2: and finally, how do I define two parents for an entity if the entity is being created to establish an n-to-n relationship between 2 parents? | 3 | 0 | 1.2 | 0 | true | 838,960 | 1 | 384 | 1 | 1 | 0 | 836,992 | After thorough research, I have found that a distributed transaction layer that provides a solution to the single entity group restriction has been developed in userland with the help of some Google people. But so far, it is not released and is only available in Java. | 1 | 0 | 0 | datastore transaction restrictions | 3 | python,google-app-engine,transactions,google-cloud-datastore | 0 | 2009-05-07T20:55:00.000
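For the entity-group usage the first edit asks about, a hedged sketch with the old google.appengine.ext.db API: the purchase record is created as a child of the User so the cash/contract updates and the purchase share one transaction. All names are invented, and the market-wide counter would still have to be updated outside this entity group (for example with a sharded counter):
from google.appengine.ext import db

class User(db.Model):
    cash = db.FloatProperty(default=0.0)
    contracts = db.IntegerProperty(default=0)

class Purchase(db.Model):
    # Child of User (same entity group), so it can share the transaction.
    quantity = db.IntegerProperty()
    price = db.FloatProperty()

def buy(user_key, quantity, price):
    def txn():
        user = db.get(user_key)
        user.cash -= quantity * price
        user.contracts += quantity
        Purchase(parent=user, quantity=quantity, price=price).put()
        user.put()
    db.run_in_transaction(txn)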
Has anybody got recent experience with deploying a Django application with an SQL Server database back end? Our workplace is heavily invested in SQL Server and will not support Django if there isn't a sufficiently developed back end for it.
I'm aware of mssql.django-pyodbc and django-mssql as unofficially supported back ends. Both projects seem to have only one person contributing which is a bit of a worry though the contributions seem to be somewhat regular.
Are there any other back ends for SQL Server that are well supported? Are the two I mentioned here 'good enough' for production? What are your experiences? | 52 | 4 | 0.113791 | 0 | false | 843,500 | 1 | 48,333 | 2 | 0 | 0 | 842,831 | We are using django-mssql in production at our company. We too had an existing system using mssql. For me personally it was the best design decision I have ever made because my productivity increased dramatically now that I can use django .
I submitted a patch when I started using django-mssql and did a week or two of testing. Since then (October 2008) we have run our system on Django and it runs solid. I also tried pyodbc but I did not like it much.
We are running a repair system where all transactions run through this system, with 40 heavy users. If you have more questions let me know. | 1 | 0 | 0 | Using Sql Server with Django in production | 7 | python,sql-server,django,pyodbc | 0 | 2009-05-09T06:45:00.000
Has anybody got recent experience with deploying a Django application with an SQL Server database back end? Our workplace is heavily invested in SQL Server and will not support Django if there isn't a sufficiently developed back end for it.
I'm aware of mssql.django-pyodbc and django-mssql as unofficially supported back ends. Both projects seem to have only one person contributing which is a bit of a worry though the contributions seem to be somewhat regular.
Are there any other back ends for SQL Server that are well supported? Are the two I mentioned here 'good enough' for production? What are your experiences? | 52 | 1 | 0.028564 | 0 | false | 843,476 | 1 | 48,333 | 2 | 0 | 0 | 842,831 | Haven't used it in production yet, but my initial experiences with django-mssql have been pretty solid. All you need are the Python Win32 extensions and to get the sqlserver_ado module onto your Python path. From there, you just use sql_server.pyodbc as your DATABASE_ENGINE. So far I haven't noticed anything missing, but I haven't fully banged on it yet either. | 1 | 0 | 0 | Using Sql Server with Django in production | 7 | python,sql-server,django,pyodbc | 0 | 2009-05-09T06:45:00.000 |
I need to launch a server side process off a mysql row insert. I'd appreciate some feedback/suggestions. So far I can think of three options:
1st (least attractive): My preliminary understanding is that I can write a kind of "custom trigger" in C that could fire off a row insert. In addition to having to renew my C skills, this would require a (custom?) recompile of MySQL ... yuck!
2nd (slightly more attractive): I could schedule a server-side cron task that runs a program I write to query the table for new rows periodically. This has the benefit of being DB and language independent. The problem with this is that I suffer the delay of the cron's schedule.
3rd (the option I'm leading with): I could write a multi threaded program that would query the table for changes on a single thread, spawning new threads to process the newly inserted rows as needed. This has all the benefits of option 2 with less delay.
I'll also mention that I'm leaning towards Python for this task, as easy access to system (Linux) commands, as well as some in-house Perl scripts, is going to be very, very useful.
I'd appreciate any feedback/suggestion
Thanks in advance. | 3 | 4 | 0.379949 | 0 | false | 856,208 | 0 | 227 | 2 | 0 | 0 | 856,173 | Write an insert trigger which duplicates inserted rows to a secondary table. Periodically poll the secondary table for rows with an external application/cronjob; if any rows are in the table, delete them and do your processing (or set a 'processing started' flag and only delete from the secondary table upon successful processing).
This will work very nicely for low to medium insert volumes. If you have a ton of data coming at your table, some kind of custom trigger in C is probably your only choice. | 1 | 0 | 0 | launch a process off a mysql row insert | 2 | python,mysql,linux,perl | 0 | 2009-05-13T05:15:00.000 |
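A rough sketch of the polling side of that approach with MySQLdb; the table and column names are placeholders, and the trigger that copies new rows into pending_rows is assumed to exist already:
import time
import MySQLdb

def process(payload):
    print('processing', payload)               # your real work goes here

def poll_forever(interval=5):
    conn = MySQLdb.connect(host='localhost', user='app', passwd='secret', db='mydb')
    while True:
        cur = conn.cursor()
        cur.execute("SELECT id, payload FROM pending_rows")
        for row_id, payload in cur.fetchall():
            process(payload)
            cur.execute("DELETE FROM pending_rows WHERE id = %s", (row_id,))
        conn.commit()
        cur.close()
        time.sleep(interval)

poll_forever()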
I need to launch a server side process off a mysql row insert. I'd appreciate some feedback/suggestions. So far I can think of three options:
1st (least attractive): My preliminary understanding is that I can write a kind of "custom trigger" in C that could fire off a row insert. In addition to having to renew my C skills, this would require a (custom?) recompile of MySQL ... yuck!
2nd (slightly more attractive): I could schedule a server-side cron task that runs a program I write to query the table for new rows periodically. This has the benefit of being DB and language independent. The problem with this is that I suffer the delay of the cron's schedule.
3rd (the option I'm leading with): I could write a multi threaded program that would query the table for changes on a single thread, spawning new threads to process the newly inserted rows as needed. This has all the benefits of option 2 with less delay.
I'll also mention that I'm leaning towards Python for this task, as easy access to system (Linux) commands, as well as some in-house Perl scripts, is going to be very, very useful.
I'd appreciate any feedback/suggestion
Thanks in advance. | 3 | 0 | 0 | 0 | false | 856,210 | 0 | 227 | 2 | 0 | 0 | 856,173 | I had this issue about 2 years ago in .NET and I went with the 3rd approach. However, looking back at it, I'm wondering if looking into Triggers with PhpMyAdmin & MySQL isn't the approach to look into. | 1 | 0 | 0 | launch a process off a mysql row insert | 2 | python,mysql,linux,perl | 0 | 2009-05-13T05:15:00.000 |
I am writing code in Python in which I establish a connection with a database. I have queries in a loop. While queries are being executed in the loop, if I unplug the network cable it should stop with an exception. But this does not happen: when I plug the network cable back in after 2 minutes it starts again from where it left off. I am using Linux and psycopg2. It is not showing an exception | 0 | 1 | 0.066568 | 0 | false | 867,433 | 0 | 442 | 2 | 0 | 0 | 867,175 | If you want to implement timeouts that work no matter how the client library is connecting to the server, it's best to attempt the DB operations in a separate thread, or, better, a separate process, which a "monitor" thread/process can kill if needed; see the multiprocessing module in the Python 2.6 standard library (there's a backported version for 2.5 if you need that). | 1 | 0 | 0 | db connection in python | 3 | python,tcp,database-connection | 0 | 2009-05-15T06:03:00.000
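A minimal sketch of the monitor-process idea from that answer, using multiprocessing to kill a query that hangs when the network goes away; the connection string and the query are placeholders:
import multiprocessing

def run_query(dsn, sql):
    import psycopg2                       # imported in the child process
    conn = psycopg2.connect(dsn)
    cur = conn.cursor()
    cur.execute(sql)
    print(cur.fetchall())

def run_with_timeout(dsn, sql, seconds=10):
    p = multiprocessing.Process(target=run_query, args=(dsn, sql))
    p.start()
    p.join(seconds)
    if p.is_alive():                      # still stuck after the timeout
        p.terminate()
        p.join()
        raise RuntimeError('query timed out')

if __name__ == '__main__':
    run_with_timeout('dbname=test user=me', 'SELECT 1', seconds=10)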
I am writing code in Python in which I establish a connection with a database. I have queries in a loop. While queries are being executed in the loop, if I unplug the network cable it should stop with an exception. But this does not happen: when I plug the network cable back in after 2 minutes it starts again from where it left off. I am using Linux and psycopg2. It is not showing an exception | 0 | 2 | 0.132549 | 0 | false | 867,202 | 0 | 442 | 2 | 0 | 0 | 867,175 | Your database connection will almost certainly be based on a TCP socket. TCP sockets will hang around for a long time retrying before failing and (in Python) raising an exception. Not to mention any retries/automatic reconnection attempts in the database layer. | 1 | 0 | 0 | db connection in python | 3 | python,tcp,database-connection | 0 | 2009-05-15T06:03:00.000
I am in the design phase of a file upload service that allows users to upload very large zip files to our server as well as updates our database with the data. Since the files are large (About 300mb) we want to allow the user to limit the amount of bandwidth they want to use for uploading. They should also be able to pause and resume the transfer, and it should recover from a system reboot. The user also needs to be authenticated in our MSSQL database to ensure that they have permission to upload the file and make changes to our database.
My question is, what is the best technology to do this? We would like to minimize the amount of development required, but the only thing that I can think of now that would allow us to do this would be to create a client and server app from scratch in something like Python, Java or C#. Is there an existing technology available that will allow us to do this? | 6 | 4 | 0.132549 | 0 | false | 878,154 | 0 | 3,014 | 3 | 0 | 0 | 878,143 | What's wrong with FTP? The protocol supports resumability and there are lots and lots of clients. | 1 | 0 | 0 | Resumable File Upload | 6 | c#,java,python | 0 | 2009-05-18T14:56:00.000
I am in the design phase of a file upload service that allows users to upload very large zip files to our server as well as updates our database with the data. Since the files are large (About 300mb) we want to allow the user to limit the amount of bandwidth they want to use for uploading. They should also be able to pause and resume the transfer, and it should recover from a system reboot. The user also needs to be authenticated in our MSSQL database to ensure that they have permission to upload the file and make changes to our database.
My question is, what is the best technology to do this? We would like to minimize the amount of development required, but the only thing that I can think of now that would allow us to do this would be to create a client and server app from scratch in something like python, java or c#. Is there an existing technology available that will allow us to do this? | 6 | 0 | 0 | 0 | false | 878,160 | 0 | 3,014 | 3 | 0 | 0 | 878,143 | On client side, flash; On server side, whatever (it wouldn't make any difference).
No existing technologies (except for using FTP or something). | 1 | 0 | 0 | Resumable File Upload | 6 | c#,java,python | 0 | 2009-05-18T14:56:00.000 |
I am in the design phase of a file upload service that allows users to upload very large zip files to our server as well as updates our database with the data. Since the files are large (About 300mb) we want to allow the user to limit the amount of bandwidth they want to use for uploading. They should also be able to pause and resume the transfer, and it should recover from a system reboot. The user also needs to be authenticated in our MSSQL database to ensure that they have permission to upload the file and make changes to our database.
My question is, what is the best technology to do this? We would like to minimize the amount of development required, but the only thing that I can think of now that would allow us to do this would be to create a client and server app from scratch in something like python, java or c#. Is there an existing technology available that will allow us to do this? | 6 | 0 | 0 | 0 | false | 30,990,243 | 0 | 3,014 | 3 | 0 | 0 | 878,143 | I'm surprised no one has mentioned torrent files. They can also be packaged into a script that then triggers something to execute. | 1 | 0 | 0 | Resumable File Upload | 6 | c#,java,python | 0 | 2009-05-18T14:56:00.000 |
I'm currently working with a web application written in Python (and using SQLAlchemy). In order to handle authentication, the app first checks for a user ID in the session, and providing it exists, pulls that whole user record out of the database and stores it for the rest of that request. Another query is also run to check the permissions of the user it has stored.
I'm fairly new to the web application development world, but from my understanding, hitting the database for something like this on every request isn't efficient. Or is this considered a normal thing to do?
The only thing I've thought of so far is pulling up this data once, and storing what's relevant (most of the data isn't even required on every request). However, this brings up the problem of what's supposed to happen if this user record happens to be removed in the interim. Any ideas on how best to manage this? | 0 | 3 | 0.148885 | 0 | false | 890,202 | 1 | 967 | 4 | 0 | 0 | 881,517 | For a user login and basic permission tokens in a simple web application I will definitely store that in a cookie-based session. It's true that a few SELECTs per request is not a big deal at all, but then again if you can get some/all of your web requests to execute from cached data with no DB hits at all, that just adds that much more scalability to an app which is planning on receiving a lot of load.
The issue of the user token being changed on the database is handled in two ways. One is, ignore it - for a lot of use cases its not that big a deal for the user to log out and log back in again to get at new permissions that have been granted elsewhere (witness unix as an example). The other is that all mutations of the user row are filtered through a method that also resets the state within the cookie-based session, but this is only effective if the user him/herself is the one initiating the changes through the browser interface.
If OTOH neither of the above use cases apply to you, then you probably need to stick with a little bit of database access built into every request. | 1 | 0 | 0 | SQLAlchemy - Database hits on every request? | 4 | python,sqlalchemy | 0 | 2009-05-19T08:11:00.000 |
I'm currently working with a web application written in Python (and using SQLAlchemy). In order to handle authentication, the app first checks for a user ID in the session, and providing it exists, pulls that whole user record out of the database and stores it for the rest of that request. Another query is also run to check the permissions of the user it has stored.
I'm fairly new to the web application development world, but from my understanding, hitting the database for something like this on every request isn't efficient. Or is this considered a normal thing to do?
The only thing I've thought of so far is pulling up this data once, and storing what's relevant (most of the data isn't even required on every request). However, this brings up the problem of what's supposed to happen if this user record happens to be removed in the interim. Any ideas on how best to manage this? | 0 | 1 | 0.049958 | 0 | false | 881,535 | 1 | 967 | 4 | 0 | 0 | 881,517 | It's a Database, so often it's fairly common to "hit" the Database to pull the required data. You can reduce single queries if you build up Joins or Stored Procedures. | 1 | 0 | 0 | SQLAlchemy - Database hits on every request? | 4 | python,sqlalchemy | 0 | 2009-05-19T08:11:00.000 |
I'm currently working with a web application written in Python (and using SQLAlchemy). In order to handle authentication, the app first checks for a user ID in the session, and providing it exists, pulls that whole user record out of the database and stores it for the rest of that request. Another query is also run to check the permissions of the user it has stored.
I'm fairly new to the web application development world, but from my understanding, hitting the database for something like this on every request isn't efficient. Or is this considered a normal thing to do?
The only thing I've thought of so far is pulling up this data once, and storing what's relevant (most of the data isn't even required on every request). However, this brings up the problem of what's supposed to happen if this user record happens to be removed in the interim. Any ideas on how best to manage this? | 0 | 3 | 0.148885 | 0 | false | 882,021 | 1 | 967 | 4 | 0 | 0 | 881,517 | "hitting the database for something like this on every request isn't efficient."
False. And, you've assumed that there's no caching, which is also false.
Most ORM layers are perfectly capable of caching rows, saving some DB queries.
Most RDBMS's have extensive caching, resulting in remarkably fast responses to common queries.
All ORM layers will use consistent SQL, further aiding the database in optimizing the repetitive operations. (Specifically, the SQL statement is cached, saving parsing and planning time.)
" Or is this considered a normal thing to do?"
True.
Until you can prove that your queries are the slowest part of your application, don't worry. Build something that actually works. Then optimize the part that you can prove is the bottleneck. | 1 | 0 | 0 | SQLAlchemy - Database hits on every request? | 4 | python,sqlalchemy | 0 | 2009-05-19T08:11:00.000 |
I'm currently working with a web application written in Python (and using SQLAlchemy). In order to handle authentication, the app first checks for a user ID in the session, and providing it exists, pulls that whole user record out of the database and stores it for the rest of that request. Another query is also run to check the permissions of the user it has stored.
I'm fairly new to the web application development world, but from my understanding, hitting the database for something like this on every request isn't efficient. Or is this considered a normal thing to do?
The only thing I've thought of so far is pulling up this data once, and storing what's relevant (most of the data isn't even required on every request). However, this brings up the problem of what's supposed to happen if this user record happens to be removed in the interim. Any ideas on how best to manage this? | 0 | 2 | 0.099668 | 0 | false | 882,171 | 1 | 967 | 4 | 0 | 0 | 881,517 | You are basically talking about caching data as a performance optimization. As always, premature optimization is a bad idea. It's hard to know where the bottlenecks are beforehand, even more so if the application domain is new to you. Optimization adds complexity and if you optimize the wrong things, you not only have wasted the effort, but have made the necessary optimizations harder.
Requesting user data is usually a pretty trivial query. You can build yourself a simple benchmark to see what kind of overhead it will introduce. If it isn't a significant percentage of your time-budget, just leave it be.
If you still want to cache the data on the application server then you have to come up with a cache invalidation scheme.
Possible schemes are to check for changes from the database. If you don't have a lot of data to cache, this really isn't significantly more efficient than just reloading it.
Another option is to just time out cached data. This is a good option if instant visibility of changes isn't important.
Another option is to actively invalidate caches on changes. This depends on whether you only modify the database through your application and if you have a single application server or a clustered solution. | 1 | 0 | 0 | SQLAlchemy - Database hits on every request? | 4 | python,sqlalchemy | 0 | 2009-05-19T08:11:00.000 |
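A tiny sketch of the "time out cached data" option mentioned in that answer; load_user is a hypothetical function that does the actual SQLAlchemy query:
import time

_cache = {}            # user_id -> (expires_at, user)
TTL = 60               # seconds before a cached user row goes stale

def get_user_cached(user_id, load_user):
    entry = _cache.get(user_id)
    if entry and entry[0] > time.time():
        return entry[1]                      # still fresh, no DB hit
    user = load_user(user_id)                # falls back to the database
    _cache[user_id] = (time.time() + TTL, user)
    return user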
MySQL has a RENAME TABLE statement that will allow you to change the name of a table.
The manual mentions
The rename operation is done atomically, which means that no other session can
access any of the tables while the rename is running
The manual does not (to my knowledge) state how this renaming is accomplished. Is an entire copy of the table created, given a new name, and then the old table deleted? Or does MySQL do some magic behind the scenes to quickly rename the table?
In other words, does the size of the table have an effect on how long the RENAME TABLE statement will take to run? Are there other things that might cause the rename to block significantly? | 3 | 5 | 1.2 | 0 | true | 885,783 | 0 | 2,472 | 1 | 0 | 0 | 885,771 | I believe MySQL only needs to alter metadata and references to the table's old name in stored procedures -- the number of records in the table should be irrelevant. | 1 | 0 | 0 | How does MySQL's RENAME TABLE statement work/perform? | 2 | php,python,mysql,ruby,migration | 0 | 2009-05-20T01:22:00.000
At the start you have a string 'DDMMYYYY HHMMSS' and at the end I want to insert it into a date field in a sqlite3 database. The program is written in Python. How can I do that? | 0 | 1 | 0.066568 | 0 | false | 890,025 | 0 | 359 | 1 | 0 | 0 | 889,974 | Even though the ".schema" indicates that the field is a date or timestamp field... the field is actually a string. You can format the string any way you want. If memory serves... there is no validation at all. | 1 | 0 | 0 | Is it possible to format a date with sqlite3? | 3 | python,sqlite,date | 0 | 2009-05-20T20:08:00.000
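A small sketch of the usual approach: parse the 'DDMMYYYY HHMMSS' string with datetime.strptime and store an ISO-formatted value; the table and column names are invented:
import sqlite3
from datetime import datetime

conn = sqlite3.connect(':memory:')
conn.execute("CREATE TABLE events (happened_at TIMESTAMP)")

raw = '24052009 153000'                                  # DDMMYYYY HHMMSS
when = datetime.strptime(raw, '%d%m%Y %H%M%S')

# Stored as ISO text, which sorts and compares correctly in SQLite.
conn.execute("INSERT INTO events (happened_at) VALUES (?)", (when.isoformat(' '),))
print(conn.execute("SELECT happened_at FROM events").fetchone())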
I posted this in the mailing list, but the reply I got wasn't too clear, so maybe I'll have better luck here.
I currently have a grid with data in it.
I would like to know if there is a way to give each generated row an
ID, or at least, associate each row with an object.
It may make it more clear if I clarify what i'm doing. It is described
below.
I pull data from an SQL table and display them in the grid.
I am allowing for the user to add/delete rows and edit cells.
Say the user is viewing a grid that has 3 rows(which is, in turn, a
mysql table with 3 rows).
If he is on the last row and presses the down arrow key, a new row is
created and he can enter data into it and it will be inserted in the
database when he presses enter.
However, I need a way to find out which rows will use "insert" query
and which will use "update" query.
So ideally, when the user creates a new row by pressing the down
arrow, I would give that row an ID and store it in a list(or, if rows
already have IDs, just store it in a list) and when the user finishes
entering data in the cells and presses enter, I would check if that
row's ID is in the list. If it is, I would insert all of that
row's cell values into the table; if not, I would update MySQL with
the values.
Hope I made this clear. | 2 | 3 | 0.291313 | 0 | false | 901,806 | 0 | 1,026 | 1 | 0 | 0 | 901,704 | What I did when I encountered such a case was to create a column for IDs and set its width to 0. | 1 | 1 | 0 | Give Wxwidget Grid rows an ID | 2 | python,wxpython,wxwidgets | 0 | 2009-05-23T15:13:00.000 |
In diagnosing SQL query problems, it would sometimes be useful to be able to see the query string after parameters are interpolated into it, using MySQLdb's safe interpolation.
Is there a way to get that information from either a MySQL exception object or from the connection object itself? | 2 | 2 | 0.197375 | 0 | false | 904,077 | 0 | 817 | 1 | 0 | 0 | 904,042 | Use mysql's own ability to log the queries and watch for them. | 1 | 0 | 0 | python-mysql : How to get interpolated query string? | 2 | python,mysql | 0 | 2009-05-24T15:56:00.000 |
We have lots of data and some charts representing one logical item. Charts and data are stored in various files. As a result, most users can easily access and re-use the information in their applications.
However, this is not exactly a good way of storing data. Amongst other reasons, charts belong to some data, the charts and data have some meta-information that is not reflected in the file system, there are a lot of files, etc.
Ideally, we want
one big "file" that can store all
information (text, data and charts)
the "file" is human readable,
portable and accessible by
non-technical users
allows typical office applications
like MS Word or MS Excel to extract
text, data and charts easily.
light-weight, easy solution. Quick
and dirty is sufficient. Not many
users.
I am happy to use some scripting language like Python to generate the "file", third-party tools (ideally free as in beer), and everything that you find on a typical Windows-centric office computer.
Some ideas that we currently ponder:
using VB or pywin32 to script MS Word or Excel
creating HTML and publishing it on a RESTful web server
Could you expand on the ideas above? Do you have any other ideas? What should we consider? | 2 | 2 | 1.2 | 0 | true | 921,061 | 0 | 325 | 2 | 0 | 0 | 915,726 | I can only agree with Reef on the general concepts he presented:
You will almost certainly prefer the data in a database than in a single large file
You should not worry that the data is not directly manipulated by users because, as Reef mentioned, it can only go wrong. And you would be surprised at how ugly it can get
Concerning the usage of MS Office integration tools I disagree with Reef. You can quite easily create an ActiveX Server (in Python if you like) that is accessible from the MS Office suite. As long as you have a solid infrastructure that allows some sort of file share, you could use that shared area to keep your code. I guess the mess Reef was talking about mostly is about keeping users' versions of your extract/import code in sync. If you do not use some sort of shared repository (a simple shared folder) or if your infrastructure fails you often so that the shared folder becomes unavailable you will be in great pain. Note what is also somewhat painful if you do not have the appropriate tools but deal with many users: The ActiveX Server is best registered on each machine.
So.. I just said MS Office integration is very doable. But whether it is the best thing to do is a different matter. I strongly believe you will serve your users better if you build a web-site that handles their data for them. This sort of tool however almost certainly becomes an "ongoing project". Often, even as an "ongoing project", the time saved by your users could still make it worth it. But sometimes, strategically, you want to give your users a poorer experience to control project costs. In that case the ActiveX Server I mentioned could be what you want. | 1 | 0 | 0 | Reporting charts and data for MS-Office users | 2 | python,web-services,scripting,reporting,ms-office | 0 | 2009-05-27T13:34:00.000 |
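For the pywin32 idea from the question, a tiny hedged sketch of driving Excel through COM; the output path is a placeholder and Excel must be installed on the machine running this:
import win32com.client

excel = win32com.client.Dispatch('Excel.Application')
excel.Visible = False
wb = excel.Workbooks.Add()
ws = wb.Worksheets(1)

ws.Cells(1, 1).Value = 'measurement'
ws.Cells(1, 2).Value = 'value'
for i, v in enumerate([1.2, 3.4, 5.6], start=2):
    ws.Cells(i, 1).Value = 'sample %d' % (i - 1)
    ws.Cells(i, 2).Value = v

wb.SaveAs(r'C:\reports\report.xlsx', FileFormat=51)   # 51 = xlOpenXMLWorkbook (.xlsx)
excel.Quit()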
We have lots of data and some charts representing one logical item. Charts and data are stored in various files. As a result, most users can easily access and re-use the information in their applications.
However, this is not exactly a good way of storing data. Amongst other reasons, charts belong to some data, the charts and data have some meta-information that is not reflected in the file system, there are a lot of files, etc.
Ideally, we want
one big "file" that can store all
information (text, data and charts)
the "file" is human readable,
portable and accessible by
non-technical users
allows typical office applications
like MS Word or MS Excel to extract
text, data and charts easily.
light-weight, easy solution. Quick
and dirty is sufficient. Not many
users.
I am happy to use some scripting language like Python to generate the "file", third-party tools (ideally free as in beer), and everything that you find on a typical Windows-centric office computer.
Some ideas that we currently ponder:
using VB or pywin32 to script MS Word or Excel
creating HTML and publishing it on a RESTful web server
Could you expand on the ideas above? Do you have any other ideas? What should we consider? | 2 | 1 | 0.099668 | 0 | false | 920,669 | 0 | 325 | 2 | 0 | 0 | 915,726 | Instead of using one big file, You should use a database. Yes, You can store various types of files like gifs in the database if You like to.
The file would not be human readable or accessible by non-technical users, but this is good.
The database would have a website that Your non-technical users would use to insert, update and get data from. They would be able to display it on the page or export it to csv (or even xls - it's not that hard, I've seen some csv->xls converters). You could look into some open standard document formats, I think it should be quite easy to output data with in it. Do not try to output in "doc" format (but You could try "docx"). You should be able to easily teach the users how to export their data to a CSV and upload it to the site, or they could use the web interface to insert the data if they like to.
If You will allow Your users to mess with the raw data, they will break it (i have tried that, You have no idea how those guys could do that). The only way to prevent it is to make a web form that only allows them to perform certain actions that You exactly know how that they should suppose to perform.
The database + web page solution is the good one. Using VB or pywin32 to script MSOffice will get You in so much trouble I cannot even imagine.
You could use gnuplot or some other graphics library to draw (pretty straightforward to implement, it does all the hard work for You).
I am afraid that the "quick" and dirty solution is tempting, but I only can say one thing: it will not be quick. In a few weeks You will find that hacking around with MSOffice scripting is messy, buggy and unreliable and the non-technical guys will hate it and say that in other companies they used to have a simple web panel that did that. Then You will find that You will not be able to ask about the scripting because everyone uses the web interfaces nowadays, as they are quite easy to implement and maintain.
This is not a small project, it's a medium sized one, You need to remember this while writing it. It will take some time to do it and test it and You will have to add new features as the non-technical guys will start using it. I knew some passionate php teenagers who would be able to write this panel in a week, but as I understand You have some better resources so I hope You will come with a really reliable, modular, extensible solution with good usability and happy users.
Good luck! | 1 | 0 | 0 | Reporting charts and data for MS-Office users | 2 | python,web-services,scripting,reporting,ms-office | 0 | 2009-05-27T13:34:00.000 |
A table has been ETLed to another table. My task is to verify the data between two tables programmatically.
One of the difficulties I'm facing right now is:
how to use the expression that I can get from, let's say, a derived column task and verify it against the source and destination.
or in other words, how can I use the expression to work in the code.
Any ideas....highly appreciated
Sagar | 0 | 2 | 0.379949 | 0 | false | 1,233,648 | 0 | 1,102 | 1 | 0 | 0 | 921,268 | Set up a column which holds a CHECKSUM() of each row. Do a left outer join between the two tables . If you have any nulls for the right side, you have problems. | 1 | 0 | 0 | How to compare data of two tables transformed in SSIS package | 1 | .net,ssis,ironpython | 0 | 2009-05-28T14:55:00.000 |
I have a script which makes a DB connection and performs some select operations. According to the fetched data I am calling different functions, which also perform DB operations. How can I pass the DB connection to the functions being called, as I do not want to make a new connection? | 0 | 2 | 0.379949 | 0 | false | 934,709 | 0 | 515 | 1 | 0 | 0 | 934,221 | Why pass the connection itself? Maybe build a class that handles all the DB operations and just pass this class's instance around, calling its methods to perform selects, inserts and all that DB-specific code? | 1 | 0 | 0 | python db connection | 1 | python | 0 | 2009-06-01T10:08:00.000
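A small sketch of that answer's suggestion: one object owns the connection and gets passed to whatever functions need it. sqlite3 is used here only to keep the example self-contained; the table and data are invented:
import sqlite3

class Database(object):
    def __init__(self, path):
        self.conn = sqlite3.connect(path)

    def query(self, sql, params=()):
        return self.conn.execute(sql, params).fetchall()

    def close(self):
        self.conn.close()

def report_users(db):
    # Any number of functions can reuse the same connection via `db`.
    for row in db.query("SELECT name FROM users"):
        print(row[0])

db = Database(':memory:')
db.conn.execute("CREATE TABLE users (name TEXT)")
db.conn.execute("INSERT INTO users VALUES ('alice')")
report_users(db)
db.close()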
It seems cx_Oracle doesn't.
Any other suggestion for handling xml with Oracle and Python is appreciated.
Thanks. | 5 | 1 | 1.2 | 0 | true | 946,854 | 0 | 1,256 | 1 | 0 | 0 | 936,381 | I managed to do this with cx_Oracle.
I used the sys.xmltype.createxml() function in the statement that inserts the rows in a table with XMLTYPE fields; then I used prepare() and setinputsizes() to specify that the bind variables I used for XMLTYPE fields were of cx_Oracle.CLOB type. | 1 | 0 | 0 | Is there an Oracle wrapper for Python that supports xmltype columns? | 3 | python,xml,oracle,xmltype | 0 | 2009-06-01T19:36:00.000 |
I'm trying to find a way to cause SQLAlchemy to generate a query of the following form:
select * from t where (a,b) in ((a1,b1),(a2,b2));
Is this possible?
If not, any suggestions on a way to emulate it? | 10 | 3 | 1.2 | 0 | true | 951,640 | 0 | 6,490 | 1 | 0 | 0 | 948,212 | Well, thanks to Hao Lian above, I came up with a functional if painful solution.
Assume that we have a declarative-style mapped class, Clazz, and a list of tuples of compound primary key values, values
(Edited to use a better (IMO) sql generation style):
from sqlalchemy.sql.expression import text,bindparam
...
def __gParams(self, f, vs, ts, bs):
    # One bind parameter per value in a key tuple; yields ':name' placeholders.
    for j, v in enumerate(vs):
        key = f % (j + 97)
        bs.append(bindparam(key, value=v, type_=ts[j]))
        yield ':%s' % key

def __gRows(self, ts, values, bs):
    # One '(:a0, :b0)' group per key tuple.
    for i, vs in enumerate(values):
        f = '%%c%d' % i
        yield '(%s)' % ', '.join(self.__gParams(f, vs, ts, bs))

def __gKeys(self, k, ts):
    # Collect the primary key column names and remember their types.
    for c in k:
        ts.append(c.type)
        yield str(c)

def __makeSql(self, Clazz, values):
    t = []
    b = []
    return text(
        '(%s) in (%s)' % (
            ', '.join(self.__gKeys(Clazz.__table__.primary_key, t)),
            ', '.join(self.__gRows(t, values, b))),
        bindparams=b)
This solution works for compound or simple primary keys. It's probably marginally slower than the col.in_(keys) for simple primary keys though.
I'm still interested in suggestions of better ways to do this, but this way is working for now and performs noticeably better than the or_(and_(conditions)) way, or the for key in keys: do_stuff(q.get(key)) way. | 1 | 0 | 0 | Sqlalchemy complex in_ clause with tuple in list of tuples | 4 | python,sql,sqlalchemy | 0 | 2009-06-04T01:40:00.000 |
I have a webapp (call it myapp.com) that allows users to upload files. The webapp will be deployed on Amazon EC2 instance. I would like to serve these files back out to the webapp consumers via an s3 bucket based domain (i.e. uploads.myapp.com).
When the user uploads the files, I can easily drop them into a folder called "site_uploads" on the local ec2 instance. However, since my ec2 instance has finite storage, with a lot of uploads, the ec2 file system will fill up quickly.
It would be great if the ec2 instance could mount an s3 bucket as the "site_upload" directory, so that uploads to the EC2 "site_upload" directory automatically end up on uploads.myapp.com (and my webapp can use template tags to make sure the links for this uploaded content are based on that s3-backed domain). This also gives me scalable file serving, as requests for files hit s3 and not my ec2 instance. Also, it makes it easy for my webapp to perform scaling/resizing of the images that appear locally in "site_upload" but are actually on s3.
I'm looking at s3fs, but judging from the comments, it doesn't look like a fully baked solution. I'm looking for a non-commercial solution.
FYI, The webapp is written in django, not that that changes the particulars too much. | 4 | 0 | 0 | 0 | false | 6,308,720 | 1 | 8,589 | 1 | 1 | 0 | 956,904 | I'd suggest using a separately-mounted EBS volume. I tried doing the same thing for some movie files. Access to S3 was slow, and S3 has some limitations like not being able to rename files, no real directory structure, etc.
You can set up EBS volumes in a RAID5 configuration and add space as you need it. | 1 | 0 | 0 | mounting an s3 bucket in ec2 and using transparently as a mnt point | 5 | python,django,amazon-s3,amazon-ec2 | 0 | 2009-06-05T16:39:00.000 |
I'm looking for resources to help migrate my design skills from traditional RDBMS data store over to AppEngine DataStore (ie: 'Soft Schema' style). I've seen several presentations and all touch on the the overarching themes and some specific techniques.
I'm wondering if there's a place we could pool knowledge from experience ("from the trenches") on real-world approaches to rethinking how data is structured, especially porting existing applications. We're heavily Hibernate based and have probably travelled a bit down the wrong path with our data model already, generating some gnarly queries which our DB is struggling with.
Please respond if:
You have ported a non-trivial application over to AppEngine
You've created a common type of application from scratch in AppEngine
You've done neither 1 nor 2, but are considering it and want to share your own findings so far. | 6 | 1 | 0.049958 | 0 | false | 979,391 | 1 | 1,022 | 2 | 1 | 0 | 976,639 | Non-relational database design essentially involves denormalization wherever possible.
Example: Since BigTable doesn't provide enough aggregation features, the sum(cash) option that would exist in the RDBMS world is not available. Instead it would have to be stored on the model, and the model's save method must be overridden to compute the denormalized field sum.
Essential basic design that comes to mind is that each template has its own model where all the required fields to be populated are present denormalized in the corresponding model; and you have an entire signals-update-bots complexity going on in the models. | 1 | 0 | 0 | Thinking in AppEngine | 4 | java,python,google-app-engine,data-modeling | 0 | 2009-06-10T16:13:00.000 |
I'm looking for resources to help migrate my design skills from traditional RDBMS data store over to AppEngine DataStore (ie: 'Soft Schema' style). I've seen several presentations and all touch on the the overarching themes and some specific techniques.
I'm wondering if there's a place we could pool knowledge from experience ("from the trenches") on real-world approaches to rethinking how data is structured, especially porting existing applications. We're heavily Hibernate based and have probably travelled a bit down the wrong path with our data model already, generating some gnarly queries which our DB is struggling with.
Please respond if:
You have ported a non-trivial application over to AppEngine
You've created a common type of application from scratch in AppEngine
You've done neither 1 or 2, but are considering it and want to share your own findings so far. | 6 | 1 | 0.049958 | 0 | false | 978,757 | 1 | 1,022 | 2 | 1 | 0 | 976,639 | The timeouts are tight and performance was ok but not great, so I found myself using extra space to save time; for example I had a many-to-many relationship between trading cards and players, so I duplicated the information of who owns what: Card objects have a list of Players and Player objects have a list of Cards.
Normally storing all your information twice would have been silly (and prone to get out of sync) but it worked really well.
In Python they recently released a remote API so you can get an interactive shell to the datastore so you can play with your datastore without any timeouts or limits (for example, you can delete large swaths of data, or refactor your models); this is fantastically useful since otherwise as Julien mentioned it was very difficult to do any bulk operations. | 1 | 0 | 0 | Thinking in AppEngine | 4 | java,python,google-app-engine,data-modeling | 0 | 2009-06-10T16:13:00.000 |
I have a sqlite3 DB which I insert into/select from in Python. The app works great, but I want to tweak it so no one can read from the DB without a password. How can I do this in Python? Note: I have no idea where to start. | 10 | 3 | 0.119427 | 0 | false | 987,942 | 0 | 29,194 | 1 | 0 | 0 | 986,403 | SQLite databases are pretty human-readable, and there isn't any built-in encryption.
Are you concerned about someone accessing and reading the database files directly, or accessing them through your program?
I'm assuming the former, because the latter isn't really database related--it's your application's security you're asking about.
A few options come to mind:
Protect the db with filesystem permissions rather than encryption. You haven't mentioned what your environment is, so I can't say if this is workable for you or not, but it's probably the simplest and most reliable way, as you can't attempt to decrypt what you can't read.
Encrypt in Python before writing, and decrypt in Python after reading. Fairly simple, but you lose most of the power of SQL's set-based matching operations.
Switch to another database; user authentication and permissions are standard features of most multi-user databases. When you find yourself up against the limitations of a tool, it may be easier to look around at other tools rather than hacking new features into the current tool. | 1 | 0 | 0 | Encrypted file or db in python | 5 | python,sqlite,encryption | 0 | 2009-06-12T12:35:00.000 |
A hashtable in memcached will be discarded either when it's Expired or when there's not enough memory and it's choosen to die based on the Least Recently Used algorithm.
Can we put a Priority to hint or influence the LRU algorithm? I want to use memcached to store Web Sessions so i can use the cheap round-robin.
I need to give Sessions Top Priority and nothing can kill them (not even if it's the Least Recently Used) except their own Max_Expiry. | 0 | 1 | 0.197375 | 0 | false | 1,130,289 | 0 | 799 | 1 | 0 | 0 | 1,000,540 | Not that I know of.
memcached is designed to be very fast and very straightforward, no fancy weights and priorities keep it simple.
You should not rely on memcache for persistent session storage. You should keep your sessions in the DB, but you can cache them in memcache. This way you can enjoy both worlds. | 1 | 0 | 0 | Is there an option to configure a priority in memcached? (Similiar to Expiry) | 1 | python,database,session,caching,memcached | 0 | 2009-06-16T09:56:00.000 |
I need your advice on choosing a Python web framework for developing a large project:
The database (PostgreSQL) will have at least 500 tables, most of them with a composite primary
key, lots of constraints, indexes & queries. About 1,500 views for starting. The project belongs to the financial area. New requirements are always coming.
Will an ORM be helpful? | 4 | 1 | 0.028564 | 0 | false | 2,246,687 | 1 | 2,101 | 4 | 0 | 0 | 1,003,131 | I would absolutely recommend Repoze.bfg with SQLAlchemy for what you describe. I've done projects now in Django, TurboGears 1, TurboGears 2, Pylons, and dabbled in pure Zope3. BFG is far and away the framework most designed to accommodate a project growing in ways you don't anticipate at the beginning, but is far more lightweight and pared down than Grok or Zope 3. Also, the docs are the best technical docs of all of them, not the easiest, but the ones that answer the hard questions you're going to encounter the best. I'm currently doing a similar thing where we are overhauling a bunch of legacy databases into a new web deliverable app and we're using BFG, some Pylons, Zope 3 adapters, Genshi for templating, SQLAlchemy, and Dojo for the front end. We couldn't be happier with BFG, and it's working out great. BFG's classes-as-views, which are actually Zope multi-adapters, are absolutely perfect for being able to override only very specific bits for certain domain resources. And the complete lack of magic globals anywhere makes testing and packaging the easiest we've had with any framework.
ymmv! | 1 | 0 | 0 | python web framework large project | 7 | python,frameworks,web-frameworks | 0 | 2009-06-16T18:21:00.000 |
I need your advice on choosing a Python web framework for developing a large project:
The database (PostgreSQL) will have at least 500 tables, most of them with a composite primary
key, lots of constraints, indexes & queries. About 1,500 views to start with. The project belongs to the financial area. New requirements are always coming.
Will an ORM be helpful? | 4 | 2 | 0.057081 | 0 | false | 1,003,329 | 1 | 2,101 | 4 | 0 | 0 | 1,003,131 | Depending on what you want to do, you actually have a few possible frameworks:
[Django] Big, strong (to the limit of what a Python framework can be), and the oldest in the race. Used by a few 'big' sites around the world ([Django sites]). It's still a bit of an overkill for almost everything, and comes with a deprecated coding approach.
[Turbogears] is a recent framework based on Pylons. I don't know much about it, but I got a lot of good feedback from friends who tried it.
[Pylons] (which TurboGears 2 is based on). Often seen as the "PHP of Python", it allows very quick development from scratch. Even if it can seem inappropriate for big projects, it's often the faster and easier way to go.
The last option is [Zope] (with or without Plone), but Plone is way too slow, and Zope's learning curve is way too long (not even speaking of replacing the ZODB with an SQL connector), so if you don't know the framework yet, just forget about it.
And yes, an ORM seems mandatory for a project of this size. For Django, you'll have to handle migration to their database models (I don't know how hard it is to plug SQLAlchemy into Django). For TurboGears and Pylons, the most suitable solution is [SQLAlchemy], which is actually the most complete (and rising) ORM for Python. For Zope ... well, never mind.
Last but not least, I'm not sure you're starting on a good basis for your project. 500 tables on any python framework would scare me to death. A boring but rigid language such as java (hibernate+spring+tapestry or so) seem really more appropriate. | 1 | 0 | 0 | python web framework large project | 7 | python,frameworks,web-frameworks | 0 | 2009-06-16T18:21:00.000 |
I need your advice on choosing a Python web framework for developing a large project:
The database (PostgreSQL) will have at least 500 tables, most of them with a composite primary
key, lots of constraints, indexes & queries. About 1,500 views to start with. The project belongs to the financial area. New requirements are always coming.
Will an ORM be helpful? | 4 | 8 | 1 | 0 | false | 1,003,173 | 1 | 2,101 | 4 | 0 | 0 | 1,003,131 | Yes. An ORM is essential for mapping SQL stuff to objects.
You have three choices.
Use someone else's ORM
Roll your own.
Try to execute low-level SQL queries and pick out the fields you want from the result set. This is -- actually -- a kind of ORM with the mappings scattered throughout the application. It may be fast to execute and appear easy to develop, but it is a maintenance nightmare.
If you're designing the tables first, any ORM will be painful. For example, "composite primary key" is generally a bad idea, and with an ORM it's almost always a bad idea. You'll need to have a surrogate primary key. Then you can have all the composite keys with indexes you want. They just won't be "primary" (a short sketch of this follows below).
If you design the objects first, then work out tables that will implement the objects, the ORM will be pleasant, simple and will run quickly, also. | 1 | 0 | 0 | python web framework large project | 7 | python,frameworks,web-frameworks | 0 | 2009-06-16T18:21:00.000 |
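A small SQLAlchemy sketch of the surrogate-key advice; the table and column names are invented for illustration. The business-level composite key survives as a unique, indexable constraint rather than the primary key:

```python
from sqlalchemy import (MetaData, Table, Column, Integer, String, Date,
                        UniqueConstraint)

metadata = MetaData()

trade = Table(
    "trade", metadata,
    Column("id", Integer, primary_key=True),            # surrogate key the ORM likes
    Column("account_code", String(12), nullable=False),
    Column("instrument_code", String(12), nullable=False),
    Column("trade_date", Date, nullable=False),
    Column("amount", Integer, nullable=False),
    # the old composite "primary" key, kept unique and enforced by the database
    UniqueConstraint("account_code", "instrument_code", "trade_date",
                     name="uq_trade_business_key"),
)
```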
I need your advice on choosing a Python web framework for developing a large project:
The database (PostgreSQL) will have at least 500 tables, most of them with a composite primary
key, lots of constraints, indexes & queries. About 1,500 views to start with. The project belongs to the financial area. New requirements are always coming.
Will a ORM be helpful? | 4 | 12 | 1 | 0 | false | 1,003,161 | 1 | 2,101 | 4 | 0 | 0 | 1,003,131 | Django has been used by many large organizations (Washington Post, etc.) and can connect with Postgresql easily enough. I use it fairly often and have had no trouble. | 1 | 0 | 0 | python web framework large project | 7 | python,frameworks,web-frameworks | 0 | 2009-06-16T18:21:00.000 |
How can I configure Django with SQLAlchemy? | 29 | 0 | 0 | 0 | false | 45,878,579 | 1 | 30,261 | 1 | 0 | 0 | 1,011,476 | There are many benefits of using SQLAlchemy instead of Django ORM, but consider developing a built-in-Django choice of SQLAlchemy
(to have something called a production ready)
By the way, Django ORM is going better - in Django 1.11 they added UNION support (a SQL basic operator), so maybe some day there will be no need to change ORM. | 1 | 0 | 0 | Configuring Django to use SQLAlchemy | 5 | python,django,sqlalchemy,configure | 0 | 2009-06-18T08:31:00.000 |
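A small illustration of the union() support mentioned above (Django 1.11+); the Product model here is made up for the example:

```python
from django.db import models

class Product(models.Model):
    name = models.CharField(max_length=100)
    price = models.DecimalField(max_digits=8, decimal_places=2)
    featured = models.BooleanField(default=False)

# elsewhere (a view, or the Django shell):
cheap = Product.objects.filter(price__lt=10).values_list("name", flat=True)
premium = Product.objects.filter(featured=True).values_list("name", flat=True)
names = cheap.union(premium)                        # issues a single SQL UNION
names_with_dupes = cheap.union(premium, all=True)   # SQL UNION ALL
```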
This is mainly just a "check my understanding" type of question. Here's my understanding of CLOBs and BLOBs as they work in Oracle:
CLOBs are for text like XML, JSON, etc. You should not assume what encoding the database will store it as (at least in an application) as it will be converted to whatever encoding the database was configured to use.
BLOBs are for binary data. You can be reasonably assured that they will be stored exactly as you send them and that you will get them back with exactly the same data you sent.
So in other words, say I have some binary data (in this case a pickled python object). I need to be assured that when I send it, it will be stored exactly how I sent it and that when I get it back it will be exactly the same. A BLOB is what I want, correct?
Is it really feasible to use a CLOB for this? Or will character encoding cause enough problems that it's not worth it? | 39 | 56 | 1.2 | 0 | true | 1,018,096 | 0 | 43,567 | 2 | 0 | 0 | 1,018,073 | CLOB is encoding and collation sensitive, BLOB is not.
When you write into a CLOB using, say, CL8WIN1251, you write a 0xC0 (which is Cyrillic letter А).
When you read the data back using AL16UTF16, you get back 0x0410, which is the UTF-16 representation of this letter.
If you were reading from a BLOB, you would get same 0xC0 back. | 1 | 0 | 0 | Help me understand the difference between CLOBs and BLOBs in Oracle | 2 | python,oracle | 0 | 2009-06-19T13:51:00.000 |
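For the pickled-object case from the question, a rough cx_Oracle sketch; the connection string and table are placeholders (assume CREATE TABLE pickled_objects (id NUMBER PRIMARY KEY, data BLOB) already exists):

```python
import pickle
import cx_Oracle

conn = cx_Oracle.connect("user/password@localhost/XE")   # placeholder DSN
cur = conn.cursor()

payload = pickle.dumps({"answer": 42, "items": [1, 2, 3]})

cur.setinputsizes(None, cx_Oracle.BLOB)       # bind the 2nd parameter as a BLOB
cur.execute("INSERT INTO pickled_objects (id, data) VALUES (:1, :2)", (1, payload))
conn.commit()

cur.execute("SELECT data FROM pickled_objects WHERE id = :1", (1,))
lob = cur.fetchone()[0]
restored = pickle.loads(lob.read())           # bytes come back exactly as stored
assert restored["answer"] == 42
```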
This is mainly just a "check my understanding" type of question. Here's my understanding of CLOBs and BLOBs as they work in Oracle:
CLOBs are for text like XML, JSON, etc. You should not assume what encoding the database will store it as (at least in an application) as it will be converted to whatever encoding the database was configured to use.
BLOBs are for binary data. You can be reasonably assured that they will be stored exactly as you send them and that you will get them back with exactly the same data you sent.
So in other words, say I have some binary data (in this case a pickled python object). I need to be assured that when I send it, it will be stored exactly how I sent it and that when I get it back it will be exactly the same. A BLOB is what I want, correct?
Is it really feasible to use a CLOB for this? Or will character encoding cause enough problems that it's not worth it? | 39 | 10 | 1 | 0 | false | 1,018,102 | 0 | 43,567 | 2 | 0 | 0 | 1,018,073 | Your understanding is correct. Since you mention Python, think of the Python 3 distinction between strings and bytes: CLOBs and BLOBs are quite analogous, with the extra issue that the encoding of CLOBs is not under your app's control. | 1 | 0 | 0 | Help me understand the difference between CLOBs and BLOBs in Oracle | 2 | python,oracle | 0 | 2009-06-19T13:51:00.000 |
I have been building business database applications for areas such as finance and inventory, among other business requirements. I am planning to shift to Python. What would be the best tools to start with? I would need to build master and transaction forms, back-end processing, reports and that sort of thing. The database would be PostgreSQL or MySQL. As I am new to Python, I understand that besides Python I need an ORM and also a framework. My application is not website-related, but it might also need to work over the web if required.
How to choose the initial setup of tool combinations? | 9 | 0 | 0 | 0 | false | 1,021,195 | 1 | 9,831 | 1 | 0 | 0 | 1,020,775 | just FYI, for PyQT, the book has a chapter 15 with Databases, It looks good. and the book has something with data and view etc. I have read it and I think it's well worth your time:) | 1 | 0 | 0 | Python database application framework and tools | 6 | python,frame | 0 | 2009-06-20T02:22:00.000 |
Which version of SQLite is best suited for Python 2.6.2? | 2 | 0 | 0 | 0 | false | 1,028,006 | 0 | 444 | 1 | 0 | 0 | 1,025,493 | I'm using 3.4.0 out of inertia (it's what came with the Python 2.* versions I'm using) but there's no real reason (save powerful inertia;-) to avoid upgrading to 3.4.2, which fixes a couple of bugs that could lead to DB corruption and introduces no incompatibilities that I know of. (If you stick with 3.4.0 I'm told the key thing is to avoid VACUUM as it might mangle your data).
Python 3.1 comes with SQLite 3.6.11 (which is supposed to work with Python 2.* just as well) and I might one day update to that (or probably to the latest, currently 3.6.15, to pick up a slew of minor bug fixes and enhancements) just to make sure I'm using identical releases on either Python 2 or Python 3 -- I've never observed a compatibility problem, but I doubt there has been thorough testing to support reading and writing the same DB from 3.4.0 and 3.6.11 (or any two releases so far apart from each other!-). | 1 | 0 | 0 | sqlite version for python26 | 2 | python,sqlite,python-2.6 | 0 | 2009-06-22T04:49:00.000 |
I'm using the above-mentioned Python lib to connect to a MySQL server. So far I've worked locally and everything worked fine, until I realized I'll have to use my program on a network where all access goes through a proxy.
Does anyone know how I can set the connections managed by that lib to use a proxy?
Alternatively: do you know of another Python lib for MySQL that can handle this?
I also have no idea if the proxy server will allow access to the standard MySQL port or how I can trick it into allowing it. Help on this is also welcome. | 2 | 1 | 0.066568 | 0 | false | 1,027,817 | 0 | 2,939 | 1 | 0 | 0 | 1,027,751 | There are a lot of different possibilities here. The only way you're going to get a definitive answer is to talk to the person that runs the proxy.
if this is a web app and the web server and the database serve are both on the other side of a proxy, then you won't need to connect to the mysql server at all since the web app will do it for you. | 1 | 0 | 0 | MySQLdb through proxy | 3 | python,mysql,proxy | 0 | 2009-06-22T15:11:00.000 |
I am a newbie with Google App Engine. While going through the tutorial, I found that several things we do in PHP/MySQL are not available in GAE. For example, the auto-increment feature is not available in the datastore. I am also confused about session management in GAE. Overall I am confused and cannot visualize the whole thing.
Please suggest a simple user management system with user registration, user login, user logout, and sessions (create, manage, destroy) backed by the datastore. Also please advise me where I can find simple but effective examples.
Thanks in advance. | 17 | 1 | 0.066568 | 0 | false | 1,030,362 | 1 | 7,295 | 1 | 1 | 0 | 1,030,293 | You don't write user management and registration and all that, because you use Google's own authentication services. This is all included in the App Engine documentation. | 1 | 0 | 0 | Simple User management example for Google App Engine? | 3 | php,python,google-app-engine | 0 | 2009-06-23T01:58:00.000 |
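For reference, the classic Python 2 webapp pattern that leans on those built-in Google accounts, so there is no hand-rolled registration, login form or password storage at all:

```python
from google.appengine.api import users
from google.appengine.ext import webapp
from google.appengine.ext.webapp.util import run_wsgi_app

class MainPage(webapp.RequestHandler):
    def get(self):
        user = users.get_current_user()
        if user:
            self.response.out.write("Hello, %s! " % user.nickname())
            self.response.out.write('<a href="%s">Sign out</a>'
                                    % users.create_logout_url("/"))
        else:
            # Google handles the login form, cookies and session for you
            self.redirect(users.create_login_url(self.request.uri))

application = webapp.WSGIApplication([("/", MainPage)], debug=True)

def main():
    run_wsgi_app(application)

if __name__ == "__main__":
    main()
```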
I'm making a Django web-app which allows a user to build up a set of changes over a series of GETs/POSTs before committing them to the database (or reverting) with a final POST. I have to keep the updates isolated from any concurrent database users until they are confirmed (this is a configuration front-end), ruling out committing after each POST.
My preferred solution is to use a per-session transaction. This keeps all the problems of remembering what's changed (and how it affects subsequent queries), together with implementing commit/rollback, in the database where it belongs. Deadlock and long-held locks are not an issue, as due to external constraints there can only be one user configuring the system at any one time, and they are well-behaved.
However, I cannot find documentation on setting up Django's ORM to use this sort of transaction model. I have thrown together a minimal monkey-patch (ew!) to solve the problem, but dislike such a fragile solution. Has anyone else done this before? Have I missed some documentation somewhere?
(My version of Django is 1.0.2 Final, and I am using an Oracle database.) | 8 | 2 | 0.132549 | 0 | false | 1,121,915 | 1 | 2,065 | 1 | 0 | 0 | 1,033,934 | I came up with something similar to the Memento pattern, but different enough that I think it bears posting. When a user starts an editing session, I duplicate the target object to a temporary object in the database. All subsequent editing operations affect the duplicate. Instead of saving the object state in a memento at each change, I store operation objects. When I apply an operation to an object, it returns the inverse operation, which I store.
Saving operations is much cheaper for me than mementos, since the operations can be described with a few small data items, while the object being edited is much bigger. Also I apply the operations as I go and save the undos, so that the temporary in the db always corresponds to the version in the user's browser. I never have to replay a collection of changes; the temporary is always only one operation away from the next version.
To implement "undo," I pop the last undo object off the stack (as it were--by retrieving the latest operation for the temporary object from the db), apply it to the temporary, and return the transformed temporary. I could also push the resultant operation onto a redo stack if I cared to implement redo.
To implement "save changes," i.e. commit, I de-activate and time-stamp the original object and activate the temporary in its place.
To implement "cancel," i.e. rollback, I do nothing! I could delete the temporary, of course, because there's no way for the user to retrieve it once the editing session is over, but I like to keep the canceled edit sessions so I can run stats on them before clearing them out with a cron job. | 1 | 0 | 0 | Per-session transactions in Django | 3 | python,django,transactions | 0 | 2009-06-23T17:16:00.000 |
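A very rough sketch of the idea (the model and field names are mine, not the answerer's); each change applied to the temporary copy records its inverse, so undo is always one row away:

```python
from django.db import models

class EditOperation(models.Model):
    draft_id = models.IntegerField()                  # pk of the temporary copy
    field_name = models.CharField(max_length=64)
    old_value = models.TextField()                    # value to restore on undo
    created = models.DateTimeField(auto_now_add=True)

def apply_change(draft, field_name, new_value):
    """Apply one edit to the draft and store its inverse."""
    EditOperation.objects.create(draft_id=draft.pk, field_name=field_name,
                                 old_value=getattr(draft, field_name))
    setattr(draft, field_name, new_value)
    draft.save()

def undo_last(draft):
    """Pop the newest stored operation and re-apply it (one-step undo)."""
    try:
        op = EditOperation.objects.filter(draft_id=draft.pk).latest("created")
    except EditOperation.DoesNotExist:
        return draft
    setattr(draft, op.field_name, op.old_value)
    draft.save()
    op.delete()
    return draft
```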
I have a system sitting on a "Master Server" that periodically transfers quite a few chunks of information from a MySQL DB to another server on the web.
Both servers have a MySQL Server and an Apache running. I would like an easy-to-use solution for this.
Currently I'm looking into:
XMLRPC
RestFul Services
a simple POST to a processing script
socket transfers
The app on my master is a TurboGears app, so I would prefer "pythonic" aka less ugly solutions. Copying a dumped table to another server via FTP / SCP or something like that might be quick, but in my eyes it is also very (quick and) dirty, and I'd love to have a nicer solution.
Can anyone describe shortly how you would do this the "best-practise" way?
This doesn't necessarily have to involve databases. Dumping the table on Server1 and transferring the raw data in a structured way so Server2 can process it without parsing too much is just as good. One requirement though: as soon as the data arrives on Server2, I want it to be processed, so there has to be a notification of some sort when the transfer is done. Of course I could just write my own server sitting on a socket on the second machine, accepting the file with my own code and processing it and so forth, but this is just a very, very small piece of a very big system, so I don't want to spend half a day implementing this.
Thanks,
Tom | 2 | 2 | 1.2 | 0 | true | 1,043,653 | 0 | 810 | 2 | 0 | 0 | 1,043,528 | Server 1: convert the rows to JSON and call the RESTful API of the second server with the JSON data.
Server 2: listens on a URI, e.g. POST /data; gets the JSON data, converts it back into dictionaries or ORM objects, and inserts them into the db.
sqlalchemy/sqlobject and simplejson is what you need. | 1 | 0 | 0 | Best Practise for transferring a MySQL table to another server? | 5 | python,web-services,database-design | 0 | 2009-06-25T11:59:00.000 |
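A rough sketch of the sender side (Python 2 era, stdlib json and urllib2; the DSN, table and URL are placeholders). Server 2 would expose a POST /data handler that decodes the JSON and inserts the rows through its own ORM:

```python
import json
import urllib2
from sqlalchemy import create_engine

engine = create_engine("mysql://user:password@localhost/sourcedb")  # placeholder DSN

def push_rows():
    conn = engine.connect()
    try:
        rows = [dict(row) for row in conn.execute("SELECT id, name, value FROM items")]
    finally:
        conn.close()
    req = urllib2.Request("http://server2.example.com/data",
                          data=json.dumps(rows),
                          headers={"Content-Type": "application/json"})
    urllib2.urlopen(req)   # server 2 processes the batch as soon as it arrives
```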
I have a system sitting on a "Master Server" that periodically transfers quite a few chunks of information from a MySQL DB to another server on the web.
Both servers have a MySQL Server and an Apache running. I would like an easy-to-use solution for this.
Currently I'm looking into:
XMLRPC
RestFul Services
a simple POST to a processing script
socket transfers
The app on my master is a TurboGears app, so I would prefer "pythonic" aka less ugly solutions. Copying a dumped table to another server via FTP / SCP or something like that might be quick, but in my eyes it is also very (quick and) dirty, and I'd love to have a nicer solution.
Can anyone describe shortly how you would do this the "best-practise" way?
This doesn't necessarily have to involve databases. Dumping the table on Server1 and transferring the raw data in a structured way so Server2 can process it without parsing too much is just as good. One requirement though: as soon as the data arrives on Server2, I want it to be processed, so there has to be a notification of some sort when the transfer is done. Of course I could just write my own server sitting on a socket on the second machine, accepting the file with my own code and processing it and so forth, but this is just a very, very small piece of a very big system, so I don't want to spend half a day implementing this.
Thanks,
Tom | 2 | 0 | 0 | 0 | false | 1,043,595 | 0 | 810 | 2 | 0 | 0 | 1,043,528 | Assuming your situation allows this security-wise, you forgot one transport mechanism: simply opening a mysql connection from one server to another.
Me, I would start with one script that runs regularly on the write server, opens a read-only db connection to the read server (a bit of added security) and a full connection to its own database server.
How you then proceed depends on the data (is it just inserts to deal with? do you have to mirror deletes? how many inserts vs updates? etc) but basically you could write a script that pulled data from the read server and processed it immediately into the write server.
Also, would mysql server replication work or would it be to over-blown as a solution? | 1 | 0 | 0 | Best Practise for transferring a MySQL table to another server? | 5 | python,web-services,database-design | 0 | 2009-06-25T11:59:00.000 |
I want to access a PostgreSQL database that's running on a remote machine, from Python on OS X. Do I have to install Postgres on the Mac as well, or will psycopg2 work on its own?
Any hints for a good installation guide for psycopg2 for os/x? | 2 | 3 | 1.2 | 0 | true | 1,052,990 | 0 | 4,829 | 1 | 1 | 0 | 1,052,957 | macports tells me that the psycopg2 package has a dependency on the postgres client and libraries (but not the db server). If you successfully installed psycopg, then you should be good to go.
If you haven't installed yet, consider using macports or fink to deal with dependency resolution for you. In most cases, this will make things easier (occasionally build problems erupt). | 1 | 0 | 0 | psycopg2 on OSX: do I have to install PostgreSQL too? | 3 | python,macos,postgresql | 0 | 2009-06-27T14:52:00.000 |
I have a username which I must change in numerous (up to ~25) tables. (Yeah, I know.) An atomic transaction seems to be the way to go for this sort of thing. However, I do not know how to do this with pyodbc. I've seen various tutorials on atomic transactions before, but have never used them.
The setup: Windows platform, Python 2.6, pyodbc, Microsoft SQL 2005. I've used pyodbc for single SQL statements, but no compound statements or transactions.
Best practices for SQL seem to suggest that creating a stored procedure is excellent for this. My fears about doing a stored procedure are as follows, in order of increasing importance:
1) I have never written a stored procedure.
2) I heard that pyodbc does not return results from stored procedures as of yet.
3) This is most definitely Not My Database. It's vendor-supplied, vendor-updated, and so forth.
So, what's the best way to go about this? | 15 | -10 | -1 | 0 | false | 1,063,879 | 0 | 27,883 | 1 | 0 | 0 | 1,063,770 | I don't think pyodbc has any specific support for transactions. You need to send the SQL command to start/commit/rollback transactions. | 1 | 0 | 0 | In Python, Using pyodbc, How Do You Perform Transactions? | 2 | python,transactions,pyodbc | 0 | 2009-06-30T13:45:00.000 |
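For what it's worth, here is a rough sketch of the pattern that is usually enough: pyodbc connections start with autocommit off, so every statement issued through the connection's cursors belongs to one transaction until you call commit() or rollback(). The DSN and table names below are placeholders:

```python
import pyodbc

conn = pyodbc.connect("DRIVER={SQL Server};SERVER=myserver;DATABASE=mydb;"
                      "UID=user;PWD=password")
cursor = conn.cursor()

old_name, new_name = "jsmith", "jdoe"
tables = ["orders", "invoices", "audit_log"]        # ... the ~25 tables to touch

try:
    for table in tables:
        # table names can't be bound as parameters, hence the string formatting;
        # the values themselves use proper ? placeholders
        cursor.execute("UPDATE %s SET username = ? WHERE username = ?" % table,
                       (new_name, old_name))
    conn.commit()       # all the updates become visible together
except Exception:
    conn.rollback()     # any failure undoes the whole batch
    raise
finally:
    conn.close()
```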
I am not very familiar with databases, and so I do not know how to partition a table using SQLAlchemy.
Your help would be greatly appreciated. | 2 | 3 | 0.197375 | 0 | false | 1,087,081 | 0 | 7,505 | 1 | 0 | 0 | 1,085,304 | Automatic partitioning is a very database engine specific concept and SQLAlchemy doesn't provide any generic tools to manage partitioning. Mostly because it wouldn't provide anything really useful while being another API to learn. If you want to do database level partitioning then do the CREATE TABLE statements using custom Oracle DDL statements (see Oracle documentation how to create partitioned tables and migrate data to them). You can use a partitioned table in SQLAlchemy just like you would use a normal table, you just need the table declaration so that SQLAlchemy knows what to query. You can reflect the definition from the database, or just duplicate the table declaration in SQLAlchemy code.
Very large datasets are usually time-based, with older data becoming read-only or read-mostly and queries usually only look at data from a time interval. If that describes your data, you should probably partition your data using the date field.
There's also application level partitioning, or sharding, where you use your application to split data across different database instances. This isn't all that popular in the Oracle world due to the exorbitant pricing models. If you do want to use sharding, then look at SQLAlchemy documentation and examples for that, for how SQLAlchemy can support you in that, but be aware that application level sharding will affect how you need to build your application code. | 1 | 0 | 0 | how to make table partitions? | 3 | python,sqlalchemy | 0 | 2009-07-06T03:33:00.000 |
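A rough sketch of that workflow, under my own assumptions about names and partition bounds: hand-written Oracle DDL creates the partitioned table, then ordinary SQLAlchemy reflection lets queries ignore the partitioning entirely:

```python
from sqlalchemy import create_engine, text, MetaData, Table

engine = create_engine("oracle://user:password@localhost:1521/XE")  # placeholder DSN

ddl = text("""
CREATE TABLE measurements (
    id          NUMBER PRIMARY KEY,
    measured_at DATE NOT NULL,
    value       NUMBER
)
PARTITION BY RANGE (measured_at) (
    PARTITION p2009 VALUES LESS THAN (TO_DATE('2010-01-01', 'YYYY-MM-DD')),
    PARTITION p2010 VALUES LESS THAN (TO_DATE('2011-01-01', 'YYYY-MM-DD')),
    PARTITION p_max VALUES LESS THAN (MAXVALUE)
)
""")

with engine.begin() as conn:
    conn.execute(ddl)

# From here on it is just a table as far as SQLAlchemy is concerned
# (modern reflection style, SQLAlchemy 1.4+):
metadata = MetaData()
measurements = Table("measurements", metadata, autoload_with=engine)
```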
I want to write some unittests for an application that uses MySQL. However, I do not want to connect to a real mysql database, but rather to a temporary one that doesn't require any SQL server at all.
Any library (I could not find anything on google)? Any design pattern? Note that DIP doesn't work since I will still have to test the injected class. | 8 | 12 | 1.2 | 0 | true | 1,088,090 | 0 | 3,416 | 1 | 0 | 0 | 1,088,077 | There isn't a good way to do that. You want to run your queries against a real MySQL server, otherwise you don't know if they will work or not.
However, that doesn't mean you have to run them against a production server. We have scripts that create a Unit Test database, and then tear it down once the unit tests have run. That way we don't have to maintain a static test database, but we still get to test against the real server. | 1 | 0 | 0 | testing python applications that use mysql | 2 | python,mysql,unit-testing | 0 | 2009-07-06T17:00:00.000 |
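A rough sketch of that create-then-tear-down approach with MySQLdb and unittest; credentials, the database name and the schema are placeholders:

```python
import unittest
import MySQLdb

TEST_DB = "myapp_unittest"

class UserTableTest(unittest.TestCase):
    def setUp(self):
        self.server = MySQLdb.connect(host="localhost", user="test", passwd="test")
        self.server.cursor().execute("CREATE DATABASE %s" % TEST_DB)
        self.conn = MySQLdb.connect(host="localhost", user="test",
                                    passwd="test", db=TEST_DB)
        self.conn.cursor().execute(
            "CREATE TABLE users (id INT PRIMARY KEY, name VARCHAR(50))")

    def tearDown(self):
        self.conn.close()
        self.server.cursor().execute("DROP DATABASE %s" % TEST_DB)
        self.server.close()

    def test_insert_and_read_back(self):
        cur = self.conn.cursor()
        cur.execute("INSERT INTO users (id, name) VALUES (%s, %s)", (1, "alice"))
        self.conn.commit()
        cur.execute("SELECT name FROM users WHERE id = %s", (1,))
        self.assertEqual(cur.fetchone()[0], "alice")

if __name__ == "__main__":
    unittest.main()
```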
I have id values for products that I need to store. Right now they are all integers, but I'm not sure if the data provider will introduce letters or symbols into the mix in the future, so I'm debating whether to store them now as integers or strings.
Are there performance or other disadvantages to saving the values as strings? | 22 | 1 | 0.019997 | 0 | false | 1,090,708 | 0 | 10,684 | 8 | 0 | 0 | 1,090,022 | You won't be able to do comparisons correctly. "... where x > 500" is not same as ".. where x > '500'" because "500" > "100000"
Performance-wise, strings would take a hit, especially if you use indexes, as integer indexes are much faster than string indexes.
On the other hand it really depends upon your situation. If you intend to store something like phone numbers or student enrollment numbers, then it makes perfect sense to use strings. | 1 | 0 | 1 | Drawbacks of storing an integer as a string in a database | 10 | python,mysql,database,database-design | 0 | 2009-07-07T01:58:00.000 |
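The comparison point in one minute at the Python prompt; the same lexicographic-vs-numeric difference shows up in SQL when the column type is a string:

```python
>>> "500" > "100000"   # strings compare character by character: '5' > '1'
True
>>> 500 > 100000       # integers compare numerically
False
```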
I have id values for products that I need to store. Right now they are all integers, but I'm not sure if the data provider will introduce letters or symbols into the mix in the future, so I'm debating whether to store them now as integers or strings.
Are there performance or other disadvantages to saving the values as strings? | 22 | 0 | 0 | 0 | false | 1,090,924 | 0 | 10,684 | 8 | 0 | 0 | 1,090,022 | Better to use an independent ID and add a string ID if necessary: if there's a business identifier you need to include, why make it the system ID?
Main drawbacks:
Integer operations and indexing always show better performance on large data sets (more than 1k rows in a table, not to speak of connected tables).
You'll have to add extra checks to restrict a column to numeric-only values: these can be regexes on either the client or the database side. Either way, you'll have to guarantee somehow that there's actually an integer in there.
And you will create additional context layer for developers to know, and anyway someone will always mess this up :) | 1 | 0 | 1 | Drawbacks of storing an integer as a string in a database | 10 | python,mysql,database,database-design | 0 | 2009-07-07T01:58:00.000 |