Dataset columns (one row per answer):
- Answer: string, 16 to 5.07k characters
- Available Count: int64, 1 to 9
- A_Id: int64, 39.3k to 72.5M
- Q_Score: int64, 0 to 1.24k
- is_accepted: bool (2 classes)
- Q_Id: int64, 39.1k to 48M
- System Administration and DevOps: int64, 0 or 1
- Title: string, 15 to 148 characters
- Python Basics and Environment: int64, 0 or 1
- Users Score: int64, -10 to 494
- Score: float64, -1 to 1.2
- GUI and Desktop Applications: int64, 0 or 1
- Other: int64, 0 or 1
- Networking and APIs: int64, 0 or 1
- AnswerCount: int64, 1 to 32
- ViewCount: int64, 15 to 1.37M
- CreationDate: string, 23 characters
- Web Development: int64, 0 or 1
- Tags: string, 6 to 90 characters
- Question: string, 25 to 7.47k characters
- Database and SQL: int64, always 1 in this slice
- Data Science and Machine Learning: int64, 0 or 1
Use pkgutil.get_data. It’s the cousin of pkg_resources.resource_stream, but in the standard library, and should work with flat filesystem installs as well as zipped packages and other importers.
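For illustration, a minimal sketch of that call for the layout described in the question below (the package and file names come from the question; whether the returned bytes are directly usable depends on how the bsddb file is opened, so treat this as a sketch rather than a drop-in fix):

    import pkgutil

    # Returns the file's raw bytes whether the package is installed flat on the
    # filesystem, inside a zip/egg, or via some other importer; None if missing.
    data = pkgutil.get_data("mypackage", "database.dat")
    if data is None:
        raise RuntimeError("database.dat not found inside mypackage")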
(Answer 9918496: user score 19, score 1, not accepted, available count 2. Question 39104, "Finding a file in a Python module distribution": score 32, 4 answers, 28,993 views, asked 2008-09-02, tags python,distutils. Topics: Python Basics and Environment, Database and SQL.) The question was:
I've written a Python package that includes a bsddb database of pre-computed values for one of the more time-consuming computations. For simplicity, my setup script installs the database file in the same directory as the code which accesses the database (on Unix, something like /usr/lib/python2.5/site-packages/mypackage/). How do I store the final location of the database file so my code can access it? Right now, I'm using a hack based on the __file__ variable in the module which accesses the database: dbname = os.path.join(os.path.dirname(__file__), "database.dat") It works, but it seems... hackish. Is there a better way to do this? I'd like to have the setup script just grab the final installation location from the distutils module and stuff it into a "dbconfig.py" file that gets installed alongside the code that accesses the database.
That's probably the way to do it, without resorting to something more advanced like using setuptools to install the files where they belong. Note that there is a problem with that approach: on OSes with a real security framework (UNIXes, etc.), the user running your script might not have the rights to access the DB in the system directory where it gets installed.
(Answer 39295: user score 3, score 0.148885, not accepted, available count 2; same question 39104, "Finding a file in a Python module distribution", as above.)
via the __table__ attribute on your declarative class
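A small sketch of what that looks like (the model, table, and column names are invented for illustration, and the import path of declarative_base has moved between SQLAlchemy versions):

    from sqlalchemy import Column, Integer, String, create_engine
    from sqlalchemy.orm import declarative_base

    Base = declarative_base()

    class User(Base):
        __tablename__ = "users"
        id = Column(Integer, primary_key=True)
        name = Column(String(50))

    engine = create_engine("sqlite:///:memory:")
    Base.metadata.create_all(engine)

    # The Table object hangs off the declarative class, so table-level
    # INSERT/UPDATE constructs need no lookup in Base.metadata.tables.
    with engine.begin() as conn:
        conn.execute(User.__table__.insert().values(name="alice"))
        conn.execute(User.__table__.update()
                     .where(User.__table__.c.name == "alice")
                     .values(name="bob"))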
(Answer 77962: user score 4, score 0.26052, not accepted, available count 2. Question 75829, "Best way to access table instances when using SQLAlchemy's declarative syntax": score 8, 3 answers, 2,919 views, asked 2008-09-16, tags python,sql,sqlalchemy. Topics: Database and SQL.) The question was:
All the docs for SQLAlchemy give INSERT and UPDATE examples using the local table instance (e.g. tablename.update()... ) Doing this seems difficult with the declarative syntax, I need to reference Base.metadata.tables["tablename"] to get the table reference. Am I supposed to do this another way? Is there a different syntax for INSERT and UPDATE recommended when using the declarative syntax? Should I just switch to the old way?
There may be some confusion between table (the object) and tablename (the name of the table, a string). Using the table class attribute works fine for me.
(Answer 315406: user score 0, score 0, not accepted, available count 2; same question 75829, "Best way to access table instances when using SQLAlchemy's declarative syntax", as above.)
"implement a Domain Specific Language" "nobody is going to want to install a server that downloads and executes arbitrary Python code at runtime" I want a DSL but I don't want Python to be that DSL. Okay. How will you execute this DSL? What runtime is acceptable if not Python? What if I have a C program that happens to embed the Python interpreter? Is that acceptable? And -- if Python is not an acceptable runtime -- why does this have a Python tag?
(Answer 141872: user score 1, score 0.022219, not accepted, available count 6. Question 140026, "Writing a Domain Specific Language for selecting rows from a table": score 5, 9 answers, 2,773 views, asked 2008-09-26, tags python,database,algorithm,dsl. Topics: Database and SQL.) The question was:
I'm writing a server that I expect to be run by many different people, not all of whom I will have direct contact with. The servers will communicate with each other in a cluster. Part of the server's functionality involves selecting a small subset of rows from a potentially very large table. The exact choice of what rows are selected will need some tuning, and it's important that it's possible for the person running the cluster (eg, myself) to update the selection criteria without getting each and every server administrator to deploy a new version of the server. Simply writing the function in Python isn't really an option, since nobody is going to want to install a server that downloads and executes arbitrary Python code at runtime. What I need are suggestions on the simplest way to implement a Domain Specific Language to achieve this goal. The language needs to be capable of simple expression evaluation, as well as querying table indexes and iterating through the returned rows. Ease of writing and reading the language is secondary to ease of implementing it. I'd also prefer not to have to write an entire query optimiser, so something that explicitly specifies what indexes to query would be ideal. The interface that this will have to compile against will be similar in capabilities to what the App Engine datastore exports: You can query for sequential ranges on any index on the table (eg, less-than, greater-than, range and equality queries), then filter the returned row by any boolean expression. You can also concatenate multiple independent result sets together. I realise this question sounds a lot like I'm asking for SQL. However, I don't want to require that the datastore backing this data be a relational database, and I don't want the overhead of trying to reimplement SQL myself. I'm also dealing with only a single table with a known schema. Finally, no joins will be required. Something much simpler would be far preferable. Edit: Expanded description to clear up some misconceptions.
Why not create a language that when it "compiles" it generates SQL or whatever query language your datastore requires ? You would be basically creating an abstraction over your persistence layer.
(Answer 140066: user score 0, score 0, not accepted, available count 6; same question 140026, "Writing a Domain Specific Language for selecting rows from a table", as above.)
It really sounds like SQL, but perhaps it's worth trying SQLite if you want to keep it simple?
(Answer 140304: user score 0, score 0, not accepted, available count 6; same question 140026 as above.)
You mentioned Python. Why not use Python? If someone can "type in" an expression in your DSL, they can type in Python. You'll need some rules on structure of the expression, but that's a lot easier than implementing something new.
(Answer 140091: user score 0, score 0, not accepted, available count 6; same question 140026 as above.)
You said nobody is going to want to install a server that downloads and executes arbitrary code at runtime. However, that is exactly what your DSL will do (eventually) so there probably isn't that much of a difference. Unless you're doing something very specific with the data then I don't think a DSL will buy you that much and it will frustrate the users who are already versed in SQL. Don't underestimate the size of the task you'll be taking on. To answer your question however, you will need to come up with a grammar for your language, something to parse the text and walk the tree, emitting code or calling an API that you've written (which is why my comment that you're still going to have to ship some code). There are plenty of educational texts on grammars for mathematical expressions you can refer to on the net, that's fairly straight forward. You may have a parser generator tool like ANTLR or Yacc you can use to help you generate the parser (or use a language like Lisp/Scheme and marry the two up). Coming up with a reasonable SQL grammar won't be easy. But google 'BNF SQL' and see what you come up with. Best of luck.
(Answer 140228: user score 0, score 0, not accepted, available count 6; same question 140026 as above.)
I think we're going to need a bit more information here. Let me know if any of the following is based on incorrect assumptions. First of all, as you pointed out yourself, there already exists a DSL for selecting rows from arbitrary tables-- it is called "SQL". Since you don't want to reinvent SQL, I'm assuming that you only need to query from a single table with a fixed format. If this is the case, you probably don't need to implement a DSL (although that's certainly one way to go); it may be easier, if you are used to Object Orientation, to create a Filter object. More specifically, a "Filter" collection that would hold one or more SelectionCriterion objects. You can implement these to inherit from one or more base classes representing types of selections (Range, LessThan, ExactMatch, Like, etc.) Once these base classes are in place, you can create column-specific inherited versions which are appropriate to that column. Finally, depending on the complexity of the queries you want to support, you'll want to implement some kind of connective glue to handle AND and OR and NOT linkages between the various criteria. If you feel like it, you can create a simple GUI to load up the collection; I'd look at the filtering in Excel as a model, if you don't have anything else in mind. Finally, it should be trivial to convert the contents of this Collection to the corresponding SQL, and pass that to the database. However: if what you are after is simplicity, and your users understand SQL, you could simply ask them to type in the contents of a WHERE clause, and programmatically build up the rest of the query. From a security perspective, if your code has control over the columns selected and the FROM clause, and your database permissions are set properly, and you do some sanity checking on the string coming in from the users, this would be a relatively safe option.
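A very small sketch of the Filter/criterion idea from this answer (all class, table, and column names are invented; the SQL fragment is meant to be handed to a DB-API cursor together with its parameters, never interpolated directly):

    class Criterion:
        """One column comparison, e.g. speed > 100."""
        def __init__(self, column, op, value):
            self.column, self.op, self.value = column, op, value

        def fragment(self):
            # Placeholder style here is %s (MySQL/psycopg2 style).
            return "%s %s %%s" % (self.column, self.op), self.value

    class Filter:
        """A collection of criteria joined by AND or OR."""
        def __init__(self, criteria, conjunction="AND"):
            self.criteria, self.conjunction = criteria, conjunction

        def to_sql(self, table):
            parts, params = [], []
            for crit in self.criteria:
                sql, value = crit.fragment()
                parts.append(sql)
                params.append(value)
            joiner = " %s " % self.conjunction
            return "SELECT * FROM %s WHERE %s" % (table, joiner.join(parts)), params

    query, params = Filter([Criterion("speed", ">", 100),
                            Criterion("colour", "=", "red")]).to_sql("rows")
    # cursor.execute(query, params)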
(Answer 140275: user score 1, score 0.022219, not accepted, available count 6; same question 140026 as above.)
I use only psycopg2 and have had no problems with it.
(Answer 1579851: user score 0, score 0, not accepted, available count 2. Question 144448, "Python PostgreSQL modules. Which is best?": score 28, 6 answers, 15,582 views, asked 2008-09-27, tags python,postgresql,module. Topics: Database and SQL.) The question was:
I've seen a number of postgresql modules for python like pygresql, pypgsql, psyco. Most of them are Python DB API 2.0 compliant, some are not being actively developed anymore. Which module do you recommend? Why?
Psycopg1 is known for better performance than Psycopg2 in heavily threaded environments (like web applications), although it is no longer maintained. Both are well written and rock solid; I'd choose one of the two depending on the use case.
(Answer 145801: user score 0, score 0, not accepted, available count 2; same question 144448, "Python PostgreSQL modules. Which is best?", as above.)
I thought I posted my solution already... Modifying both apps to run under WSGIApplicationGroup ${GLOBAL} in their httpd conf file, and patching sqlalchemy.databases.firebird.py to check whether self.dbapi.initialized is True before calling self.dbapi.init(...), was the only way I could manage to get this scenario up and running. The SQLAlchemy 0.4.7 patch:

    diff -Naur SQLAlchemy-0.4.7/lib/sqlalchemy/databases/firebird.py SQLAlchemy-0.4.7.new/lib/sqlalchemy/databases/firebird.py
    --- SQLAlchemy-0.4.7/lib/sqlalchemy/databases/firebird.py      2008-07-26 12:43:52.000000000 -0400
    +++ SQLAlchemy-0.4.7.new/lib/sqlalchemy/databases/firebird.py  2008-10-01 10:51:22.000000000 -0400
    @@ -291,7 +291,8 @@
             global _initialized_kb
             if not _initialized_kb and self.dbapi is not None:
                 _initialized_kb = True
    -            self.dbapi.init(type_conv=type_conv, concurrency_level=concurrency_level)
    +            if not self.dbapi.initialized:
    +                self.dbapi.init(type_conv=type_conv, concurrency_level=concurrency_level)
             return ([], opts)

         def create_execution_context(self, *args, **kwargs):
(Answer 175634: user score 2, score 0.379949, not accepted, available count 1. Question 155029, "SQLAlchemy and kinterbasdb in separate apps under mod_wsgi": score 1, 1 answer, 270 views, asked 2008-09-30, tags python,sqlalchemy,kinterbasdb. Topics: GUI and Desktop Applications, Database and SQL.) The question was:
I'm trying to develop an app using turbogears and sqlalchemy. There is already an existing app using kinterbasdb directly under mod_wsgi on the same server. When both apps are used, neither seems to recognize that kinterbasdb is already initialized Is there something non-obvious I am missing about using sqlalchemy and kinterbasdb in separate apps? In order to make sure only one instance of kinterbasdb gets initialized and both apps use that instance, does anyone have suggestions?
Can you use the built-in database aggregate functions like MAX(column)?
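Strictly, MAX() aggregates down a column; for the value farthest from zero across the columns of one row, a row-wise maximum over ABS() is closer to what the question asks. A self-contained sketch using SQLite (whose multi-argument max() is row-wise; on MySQL or PostgreSQL the equivalent is GREATEST()), with the table and sample values taken from the question:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE word_ratings (word TEXT, big INT, expensive INT, smart INT, fast INT)")
    conn.executemany("INSERT INTO word_ratings VALUES (?, ?, ?, ?, ?)",
                     [("dog", 9, -10, -20, 4), ("professor", 2, 4, 40, -7)])

    # One row per word: the rating with the largest magnitude across the columns.
    for word, magnitude in conn.execute(
            "SELECT word, MAX(ABS(big), ABS(expensive), ABS(smart), ABS(fast)) FROM word_ratings"):
        print(word, magnitude)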
(Answer 177302: user score 0, score 0, not accepted, available count 1. Question 177284, "SQL Absolute value across columns": score 1, 5 answers, 4,588 views, asked 2008-10-07, tags python,mysql,sql,oracle,postgresql. Topics: Database and SQL.) The question was:
I have a table that looks something like this: word big expensive smart fast dog 9 -10 -20 4 professor 2 4 40 -7 ferrari 7 50 0 48 alaska 10 0 1 0 gnat -3 0 0 0 The + and - values are associated with the word, so professor is smart and dog is not smart. Alaska is big, as a proportion of the total value associated with its entries, and the opposite is true of gnat. Is there a good way to get the absolute value of the number farthest from zero, and some token whether absolute value =/= value? Relatedly, how might I calculate whether the results for a given value are proportionately large with respect to the other values? I would write something to format the output to the effect of: "dog: not smart, probably not expensive; professor smart; ferrari: fast, expensive; alaska: big; gnat: probably small." (The formatting is not a question, just an illustration, I am stuck on the underlying queries.) Also, the rest of the program is python, so if there is any python solution with normal dbapi modules or a more abstract module, any help appreciated.
if the communication is such a problem, consider writing a 'proxy' that receives your SQL commands over the flaky connection and relays them to the MySQL server on a reliable channel (maybe running on the same box as the MySQL server). This way you have total control over failure detection and retrying.
(Answer 196308: user score 2, score 0.197375, not accepted, available count 2. Question 196217, "MySQLdb execute timeout": score 2, 2 answers, 2,995 views, asked 2008-10-12, tags python,mysql,timeout. Topics: Database and SQL.) The question was:
Sometimes in our production environment a situation occurs where the connection between our service (a Python program that uses MySQLdb) and the MySQL server is flaky: some packets are lost, some black magic happens, and .execute() on a MySQLdb Cursor object never ends (or takes a very long time to end). This is very bad because it wastes service worker threads. Sometimes it exhausts the worker pool and the service stops responding at all. So the question is: is there a way to interrupt a MySQLdb.Connection.execute operation after a given amount of time?
You need to analyse exactly what the problem is. MySQL connections should eventually timeout if the server is gone; TCP keepalives are generally enabled. You may be able to tune the OS-level TCP timeouts. If the database is "flaky", then you definitely need to investigate how. It seems unlikely that the database really is the problem, more likely that networking in between is. If you are using (some) stateful firewalls of any kind, it's possible that they're losing some of the state, thus causing otherwise good long-lived connections to go dead. You might want to consider changing the idle timeout parameter in MySQL; otherwise, a long-lived, unused connection may go "stale", where the server and client both think it's still alive, but some stateful network element in between has "forgotten" about the TCP connection. An application trying to use such a "stale" connection will have a long wait before receiving an error (but it should eventually).
(Answer 196891: user score 1, score 0.099668, not accepted, available count 2; same question 196217, "MySQLdb execute timeout", as above.)
Since Pickle can dump your object graph to a string it should be possible. Be aware though that TEXT fields in SQLite uses database encoding so you might need to convert it to a simple string before you un-pickle.
(Answer 198763: user score 2, score 0.028564, not accepted, available count 6. Question 198692, "Can I pickle a python dictionary into a sqlite3 text field?": score 40, 14 answers, 31,019 views, asked 2008-10-13, tags python,sqlite,pickle. Topics: Python Basics and Environment, Database and SQL.) The question was:
Any gotchas I should be aware of? Can I store it in a text field, or do I need to use a blob? (I'm not overly familiar with either pickle or sqlite, so I wanted to make sure I'm barking up the right tree with some of my high-level design ideas.)
Pickle has both text and binary output formats. If you use the text-based format you can store it in a TEXT field, but it'll have to be a BLOB if you use the (more efficient) binary format.
(Answer 198767: user score 5, score 0.071307, not accepted, available count 6; same question 198692, "Can I pickle a python dictionary into a sqlite3 text field?", as above.)
If a dictionary can be pickled, it can be stored in a text/blob field as well. Just be aware of dictionaries that can't be pickled, i.e. those that contain unpicklable objects.
(Answer 198770: user score 2, score 0.028564, not accepted, available count 6; same question 198692 as above.)
Yes, you can store a pickled object in a TEXT or BLOB field in an SQLite3 database, as others have explained. Just be aware that some objects cannot be pickled. The built-in container types can (dict, set, list, tuple, etc.). But some objects, such as file handles, refer to state that is external to their own data structures, and other extension types have similar problems. Since a dictionary can contain arbitrary nested data structures, it might not be pickle-able.
(Answer 198829: user score 2, score 0.028564, not accepted, available count 6; same question 198692 as above.)
SpoonMeiser is correct, you need to have a strong reason to pickle into a database. It's not difficult to write Python objects that implement persistence with SQLite. Then you can use the SQLite CLI to fiddle with the data as well. Which in my experience is worth the extra bit of work, since many debug and admin functions can be simply performed from the CLI rather than writing specific Python code. In the early stages of a project, I did what you propose and ended up re-writing with a Python class for each business object (note: I didn't say for each table!) This way the body of the application can focus on "what" needs to be done rather than "how" it is done.
(Answer 199190: user score 1, score 0.014285, not accepted, available count 6; same question 198692 as above.)
If you want to store a pickled object, you'll need to use a blob, since it is binary data. However, you can, say, base64 encode the pickled object to get a string that can be stored in a text field. Generally, though, doing this sort of thing is indicative of bad design, since you're storing opaque data you lose the ability to use SQL to do any useful manipulation on that data. Although without knowing what you're actually doing, I can't really make a moral call on it.
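A small sketch of both routes mentioned here, using only the standard library (the table and key names are made up):

    import base64
    import pickle
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE kv (key TEXT PRIMARY KEY, blob_val BLOB, text_val TEXT)")
    obj = {"a": 1, "b": [2, 3]}

    # Route 1: raw pickle bytes into a BLOB column.
    # Route 2: base64-encode the pickle so it fits into a TEXT column.
    conn.execute("INSERT INTO kv VALUES (?, ?, ?)",
                 ("example",
                  sqlite3.Binary(pickle.dumps(obj)),
                  base64.b64encode(pickle.dumps(obj)).decode("ascii")))
    conn.commit()

    blob_val, text_val = conn.execute(
        "SELECT blob_val, text_val FROM kv WHERE key = ?", ("example",)).fetchone()
    assert pickle.loads(blob_val) == obj
    assert pickle.loads(base64.b64decode(text_val)) == obj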
(Answer 198748: user score 23, score 1.2, accepted, available count 6; same question 198692 as above.)
I would go with nginx + php + xcache + postgresql
(Answer 244836: user score 2, score 0.07983, not accepted, available count 4. Question 204802, "What would you recommend for a high traffic ajax intensive website?": score 7, 5 answers, 1,670 views, asked 2008-10-15, tags php,python,lighttpd,cherrypy,high-load. Topics: Web Development, Database and SQL.) The question was:
For a website like reddit with lots of up/down votes and lots of comments per topic what should I go with? Lighttpd/Php or Lighttpd/CherryPy/Genshi/SQLAlchemy? and for database what would scale better / be fastest MySQL ( 4.1 or 5 ? ) or PostgreSQL?
Going to need more data. Jeff had a few articles on the same problems, and the answer was to wait until you hit a performance issue. To start with: who is hosting and what do they have available? What are your in-house skill sets? Are you going to be hiring an outside firm, and what do they recommend? Is this a brand new project with a team willing to learn a new framework? The second thing is to do some mockups: how is the interface going to work, and what data does it need to load and persist? The idea is to keep the traffic between the web and DB sides down, e.g. no chatty pages with lots of queries. Once you have a better idea of the data requirements and flow, work on the database design. There are plenty of rules to follow, but one of the better ones is to follow normalization rules (yeah, I'm a DB guy, why?). Now that you have a couple of pages built, run your tests. Are you having a problem? If yes, look at what it is: page serving or DB pulls? Measure, then pick a course of action.
(Answer 204854: user score 2, score 0.07983, not accepted, available count 4; same question 204802, "What would you recommend for a high traffic ajax intensive website?", as above.)
I can't speak to the MySQL/PostgreSQL question as I have limited experience with Postgres, but my Masters research project was about high-performance websites with CherryPy, and I don't think you'll be disappointed if you use CherryPy for your site. It can easily scale to thousands of simultaneous users on commodity hardware. Of course, the same could be said for PHP, and I don't know of any reasonable benchmarks comparing PHP and CherryPy performance. But if you were wondering whether CherryPy can handle a high-traffic site with a huge number of requests per second, the answer is definitely yes.
(Answer 204853: user score 8, score 1.2, accepted, available count 4; same question 204802 as above.)
On the DB question, I'd say PostgreSQL scales better and has better data integrity than MySQL. For a small site MySQL might be faster, but from what I've heard it slows significantly as the size of the database grows. (Note: I've never used MySQL for a large database, so you should probably get a second opinion about its scalability.) But PostgreSQL definitely scales well, and would be a good choice for a high traffic site.
(Answer 205425: user score 3, score 0.119427, not accepted, available count 4; same question 204802 as above.)
Yes, I was nuking out the problem. All I needed to do was check for the file and catch the IOError if it didn't exist. Thanks for all the other answers. They may come in handy in the future.
(Answer 214623: user score 0, score 0, not accepted, available count 5. Question 211501, "Using SQLite in a Python program": score 13, 8 answers, 31,269 views, asked 2008-10-17, tags python,exception,sqlite. Topics: Database and SQL.) The question was:
I have created a Python module that creates and populates several SQLite tables. Now, I want to use it in a program but I don't really know how to call it properly. All the tutorials I've found are essentially "inline", i.e. they walk through using SQLite in a linear fashion rather than how to actually use it in production. What I'm trying to do is have a method check to see if the database is already created. If so, then I can use it. If not, an exception is raised and the program will create the database. (Or use if/else statements, whichever is better). I created a test script to see if my logic is correct but it's not working. When I create the try statement, it just creates a new database rather than checking if one already exists. The next time I run the script, I get an error that the table already exists, even if I tried catching the exception. (I haven't used try/except before but figured this is a good time to learn). Are there any good tutorials for using SQLite operationally or any suggestions on how to code this? I've looked through the pysqlite tutorial and others I found but they don't address this.
Doing raw SQL is awkward in every language I've picked up. SQLAlchemy has turned out to be the easiest of them to use, because querying and committing with it is so clean and trouble-free. Here are the basic steps for actually using SQLAlchemy in your app (better details can be found in the documentation): provide table definitions and create ORM mappings; load the database; ask it to create the tables from the definitions (it won't do so if they already exist); create a session maker (optional); create a session. After creating a session, you can commit and query against the database.
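A compact sketch of those steps (the model and file names are invented; in older SQLAlchemy versions declarative_base lives in sqlalchemy.ext.declarative rather than sqlalchemy.orm):

    from sqlalchemy import Column, Integer, String, create_engine
    from sqlalchemy.orm import declarative_base, sessionmaker

    Base = declarative_base()                      # table definitions / ORM mappings

    class Item(Base):
        __tablename__ = "items"
        id = Column(Integer, primary_key=True)
        name = Column(String(100))

    engine = create_engine("sqlite:///app.db")     # load the database
    Base.metadata.create_all(engine)               # creates tables only if missing
    Session = sessionmaker(bind=engine)            # session maker

    session = Session()                            # session
    session.add(Item(name="example"))
    session.commit()
    print(session.query(Item).count())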
(Answer 211539: user score 3, score 0.07486, not accepted, available count 5; same question 211501, "Using SQLite in a Python program", as above.)
SQLite automatically creates the database file the first time you try to use it. The SQL statements for creating tables can use IF NOT EXISTS so that the commands only take effect if the table has not been created. This way you don't need to check for the database's existence beforehand: SQLite can take care of that for you. The main thing I would still be worried about is that executing CREATE TABLE IF NOT EXISTS for every web transaction (say) would be inefficient; you can avoid that by having the program keep an (in-memory) variable saying whether it has created the database this run, so it runs the CREATE TABLE script once per run. This would still allow you to delete the database and start over during debugging.
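A minimal sketch of that pattern with the standard sqlite3 module (the file and table names are placeholders):

    import sqlite3

    conn = sqlite3.connect("app.db")   # the file is created on first use
    conn.execute("""
        CREATE TABLE IF NOT EXISTS results (
            key   TEXT PRIMARY KEY,
            value REAL
        )
    """)
    conn.commit()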
(Answer 211573: user score 7, score 1, not accepted, available count 5; same question 211501 as above.)
Don't make this more complex than it needs to be. The big, independent databases have complex setup and configuration requirements; SQLite is just a file you access with SQL, which is much simpler. Do the following. Add a table to your database for "Components" or "Versions" or "Configuration" or "Release" or something administrative like that:
    CREATE TABLE REVISION (RELEASE_NUMBER CHAR(20));
In your application, connect to your database normally and execute a simple query against the revision table. Here's what can happen:
- The query fails to execute: your database doesn't exist, so execute a series of CREATE statements to build it.
- The query succeeds but returns no rows, or the release number is lower than expected: your database exists but is out of date. You need to migrate from that release to the current release; hopefully you have a sequence of DROP, CREATE and ALTER statements to do this.
- The query succeeds and the release number is the expected value: do nothing more, your database is configured correctly.
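A sketch of that check in Python (build_schema() and migrate() are hypothetical stand-ins for the CREATE and DROP/CREATE/ALTER scripts the answer mentions, and the release comparison here is deliberately simple-minded):

    import sqlite3

    EXPECTED_RELEASE = "1.2"

    def check_schema(conn, build_schema, migrate):
        try:
            row = conn.execute("SELECT RELEASE_NUMBER FROM REVISION").fetchone()
        except sqlite3.OperationalError:
            build_schema(conn)                   # no database/table yet: create it
        else:
            if row is None or row[0] < EXPECTED_RELEASE:
                migrate(conn)                    # exists but out of date
            # otherwise the schema matches the expected release; nothing to do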
(Answer 211660: user score 29, score 1, not accepted, available count 5; same question 211501 as above.)
AFAIK an SQLite database is just a file. To check whether the database exists, check for the file's existence. When you open an SQLite database, it will automatically create one if the file that backs it up is not in place. If you try to open a file as a sqlite3 database that is NOT a database, you will get: "sqlite3.DatabaseError: file is encrypted or is not a database". So check whether the file exists, and also make sure to try and catch the exception in case the file is not a sqlite3 database.
(Answer 211534: user score 13, score 1.2, accepted, available count 5; same question 211501 as above.)
There are several differences:
- All entities with the same ancestor are in the same entity group. Transactions can only affect entities inside a single entity group, and all writes to a single entity group are serialized, so throughput is limited.
- The parent entity is set on creation and is fixed. References can be changed at any time.
- With reference properties, you can only query for direct relationships, but with parent properties you can use the .ancestor() filter to find everything (directly or indirectly) descended from a given ancestor.
- Each entity has only a single parent, but can have multiple reference properties.
(Answer 216187: user score 15, score 1.2, accepted, available count 1. Question 215570, "What's the difference between a parent and a reference property in Google App Engine?": score 10, 2 answers, 1,067 views, asked 2008-10-18, tags python,api,google-app-engine. Topics: System Administration and DevOps, Web Development, Database and SQL.) The question was:
From what I understand, the parent attribute of a db.Model (typically defined/passed in the constructor call) allows you to define hierarchies in your data models. As a result, this increases the size of the entity group. However, it's not very clear to me why we would want to do that. Is this strictly for ACID compliance? I would like to see scenarios where each is best suited or more appropriate.
Maybe the best way to install pywin32 is to place it in (openofficedir)\program\python-core-2.3.4\lib\site-packages. It is easy if you have a Python 2.3 installation (with pywin32 installed) under C:\python2.3: move C:\python2.3\Lib\site-packages\ to your (openofficedir)\program\python-core-2.3.4\lib\site-packages.
(Answer 239487: user score 1, score 0.066568, not accepted, available count 1. Question 239009, "getting pywin32 to work inside open office 2.4 built in python 2.3 interpreter": score 0, 3 answers, 833 views, asked 2008-10-27, tags python,openoffice.org,pywin32,adodbapi. Topics: Database and SQL.) The question was:
I need to update data to a mssql 2005 database so I have decided to use adodbapi, which is supposed to come built into the standard installation of python 2.1.1 and greater. It needs pywin32 to work correctly and the open office python 2.3 installation does not have pywin32 built into it. It also seems like this built int python installation does not have adodbapi, as I get an error when I go import adodbapi. Any suggestions on how to get both pywin32 and adodbapi installed into this open office 2.4 python installation? thanks oh yeah I tried those ways. annoyingly nothing. So i have reverted to jython, that way I can access Open Office for its conversion capabilities along with decent database access. Thanks for the help.
I think you could look at the child's __dict__ attribute dictionary to check if the data is already there or not.
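A tiny sketch of that check (the relation is assumed to be called "children", as in the question; reading __dict__ does not trigger the lazy load):

    def children_loaded(parent):
        # An unloaded lazy relation has no entry in the instance's __dict__;
        # it only appears there once the attribute has actually been loaded.
        return "children" in parent.__dict__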
(Answer 261191: user score 5, score 1.2, accepted, available count 1. Question 258775, "How to find out if a lazy relation isn't loaded yet, with SQLAlchemy?": score 16, 3 answers, 3,701 views, asked 2008-11-03, tags python,sqlalchemy. Topics: Database and SQL.) The question was:
With SQLAlchemy, is there a way to know beforehand whether a relation would be lazy-loaded? For example, given a lazy parent->children relation and an instance X of "parent", I'd like to know if "X.children" is already loaded, without triggering the query.
you can always run LOCK TABLE tablename from another session (mysql CLI for instance). That might do the trick. It will remain locked until you release it or disconnect the session.
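A sketch of doing that from a second MySQLdb connection while the code under test runs on the first (host, credentials, and the table name are placeholders; note that a held table lock makes the other session block or time out, which may or may not surface as the exact deadlock error you want to exercise):

    import MySQLdb

    blocker = MySQLdb.connect(host="localhost", user="test", passwd="test", db="testdb")
    cur = blocker.cursor()
    cur.execute("LOCK TABLES mytable WRITE")   # hold an exclusive lock on the table
    # ... run the query you want to block from your other connection here ...
    cur.execute("UNLOCK TABLES")               # or simply disconnect to release it
    blocker.close()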
(Answer 270449: user score 1, score 0.039979, not accepted, available count 1. Question 269676, "How can I Cause a Deadlock in MySQL for Testing Purposes": score 10, 5 answers, 7,080 views, asked 2008-11-06, tags python,mysql,database,deadlock. Topics: Database and SQL.) The question was:
I want to make my Python library working with MySQLdb be able to detect deadlocks and try again. I believe I've coded a good solution, and now I want to test it. Any ideas for the simplest queries I could run using MySQLdb to create a deadlock condition would be? system info: MySQL 5.0.19 Client 5.1.11 Windows XP Python 2.4 / MySQLdb 1.2.1 p2
Your success with createTable() will depend on your existing underlying table schema / data types. In other words, how well SQLite maps to the database you choose and how SQLObject decides to use your data types. The safest option may be to create the new database by hand. Then you'll have to deal with data migration, which may be as easy as instantiating two SQLObject database connections over the same table definitions. Why not just start with the more full-featured database?
(Answer 275676: user score 2, score 0.132549, not accepted, available count 1. Question 275572, "Database change underneath SQLObject": score 1, 3 answers, 876 views, asked 2008-11-09, tags python,mysql,database,sqlite,sqlobject. Topics: Database and SQL.) The question was:
I'm starting a web project that likely should be fine with SQLite. I have SQLObject on top of it, but thinking long term here -- if this project should require a more robust (e.g. able to handle high traffic), I will need to have a transition plan ready. My questions: How easy is it to transition from one DB (SQLite) to another (MySQL or Firebird or PostGre) under SQLObject? Does SQLObject provide any tools to make such a transition easier? Is it simply take the objects I've defined and call createTable? What about having multiple SQLite databases instead? E.g. one per visitor group? Does SQLObject provide a mechanism for handling this scenario and if so, what is the mechanism to use? Thanks, Sean
This always works and requires little thinking; only patience.
1. Make a backup. Actually make a backup. Everyone skips step 1 thinking that they have a backup, but they can never find it or work with it. Don't trust any backup that you can't recover from.
2. Create a new database schema. Define your new structure from the ground up in the new schema. Ideally, you'll run a DDL script that builds the new schema. Don't have a script to build the schema? Create one and put it under version control. With SA, you can define your tables and it can build your schema for you. This is ideal, since you have your schema under version control in Python.
3. Move data.
   a. For tables which did not change structure, move data from old schema to new schema using simple INSERT/SELECT statements.
   b. For tables which did change structure, develop INSERT/SELECT scripts to move the data from old to new. Often, this can be a single SQL statement per new table. In some cases, it has to be a Python loop with two open connections.
   c. For new tables, load the data.
4. Stop using the old schema. Start using the new schema. Find every program that used the old schema and fix the configuration. Don't have a list of applications? Make one. Seriously, it's important. Applications have hard-coded DB configurations? Fix that, too, while you're at it. Either create a common config file, or use some common environment variable or something to (a) assure consistency and (b) centralize the notion of "production".
You can do this kind of procedure any time you do major surgery. It never touches the old database except to extract the data.
(Answer 301708: user score 1, score 1.2, accepted, available count 1. Question 301566, "How to update turbogears application production database": score 1, 4 answers, 889 views, asked 2008-11-19, tags python,database,postgresql,data-migration,turbogears. Topics: Web Development, Database and SQL.) The question was:
I am having a postgres production database in production (which contains a lot of Data). now I need to modify the model of the tg-app to add couple of new tables to the database. How do i do this? I am using sqlAlchemy.
I think that the header files are shipped with MySQL, just make sure you check the appropriate options when installing (I think that sources and headers are under "developer components" in the installation dialog).
(Answer 317716: user score 2, score 1.2, accepted, available count 1. Question 316484, "Problem compiling MySQLdb for Python 2.6 on Win32": score 9, 4 answers, 3,446 views, asked 2008-11-25, tags python,mysql,winapi. Topics: Database and SQL.) The question was:
I'm using Django and Python 2.6, and I want to grow my application using a MySQL backend. Problem is that there isn't a win32 package for MySQLdb on Python 2.6. Now I'm no hacker, but I thought I might compile it myself using MSVC++9 Express. But I run into a problem that the compiler quickly can't find config_win.h, which I assume is a header file for MySQL so that the MySQLdb package can know what calls it can make into MySQL. Am I right? And if so, where do I get the header files for MySQL?
we've never had an "OID" type specifically, though we've supported the concept of an implicit "OID" column on every table through the 0.4 series, primarily for the benefit of postgres. However since user-table defined OID columns are deprecated in Postgres, and we in fact never really used the OID feature that was present, we've removed this feature from the library. If a particular type is not supplied in SQLA, as an alternative to specifying a custom type, you can always use the NullType which just means SQLA doesn't know anything in particular about that type. If psycopg2 sends/receives a useful Python type for the column already, there's not really any need for a SQLA type object, save for issuing CREATE TABLE statements.
(Answer 405923: user score 3, score 1.2, accepted, available count 1. Question 359409, "What is the sqlalchemy equivalent column type for 'money' and 'OID' in Postgres?": score 8, 3 answers, 10,565 views, asked 2008-12-11, tags python,postgresql,sqlalchemy. Topics: Database and SQL.) The question was:
What is the sqlalchemy equivalent column type for 'money' and 'OID' column types in Postgres?
First step, getting the library: open a terminal and execute pip install mysql-connector-python (that is the package that provides mysql.connector). After the installation, go to the second step.
Second step, importing the library: in your Python file write: import mysql.connector
Third step, connecting to the server: conn = mysql.connector.connect(host=<your host, like localhost or 127.0.0.1>, username=<your username, like root>, password=<your password>)
Fourth step, making the cursor: a cursor makes it easy to run queries: cursor = conn.cursor()
Executing queries: call cursor.execute(query). If the query changes anything in the table, you need to call conn.commit() after executing it.
Getting values from a query: run cursor.execute('SELECT * FROM table_name') and then either iterate the cursor (for i in cursor: print(i)) or call cursor.fetchall(), which returns a list of tuples containing the values you requested, row after row.
Closing the connection: conn.close()
Handling exceptions: wrap your logic in try: ... except mysql.connector.errors.Error: ...
Using a specific database: for example, if you have an account-creation system storing data in a database named blabla, just add a database parameter to the connect() method, like mysql.connector.connect(database=<database name>); don't remove the other information like host, username and password.
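Pulling those steps together into one runnable sketch (host, credentials, and the table are placeholders; the PyPI package that provides mysql.connector is mysql-connector-python):

    import mysql.connector

    conn = mysql.connector.connect(host="127.0.0.1", user="root",
                                   password="secret", database="blabla")
    cursor = conn.cursor()

    cursor.execute("CREATE TABLE IF NOT EXISTS accounts (id INT PRIMARY KEY, name VARCHAR(50))")
    cursor.execute("INSERT INTO accounts (id, name) VALUES (%s, %s)", (1, "alice"))
    conn.commit()                          # needed because the INSERT changes the table

    cursor.execute("SELECT * FROM accounts")
    for row in cursor.fetchall():          # one tuple per row
        print(row)

    conn.close()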
1
64,762,149
1,242
false
372,885
0
How do I connect to a MySQL Database in Python?
0
1
0.008
0
0
0
25
1,369,727
2008-12-16T21:49:00.000
0
python,mysql
How do I connect to a MySQL database using a python program?
1
0
You're probably better off using Python 2.x at the moment. It's going to be a while before all Python packages are ported to 3.x, and I expect writing a library or application with 3.x at the moment would be quite frustrating.
1
385,225
36
false
384,471
0
MySQL-db lib for Python 3.x?
0
0
0
0
0
0
9
43,916
2008-12-21T13:37:00.000
0
python,mysql,python-3.x
So, looking for a mysql-db-lib that is compatible with py3k/py3.0/py3000, any ideas? Google turned up nothing.
1
0
To start with, treat the barcode input as plain old text. It has been quite a while since I worked with barcode scanners, but I doubt they have changed that much: the older ones simply piggybacked on the keyboard input, so from a programming perspective the net result was a stream of characters in the keyboard buffer; typed or scanned made no difference. If the device you are targeting differs from that, you will need to write something to deal with it before you get to the database query. If you have one of the devices to play with, plug it in, start Notepad, scan some barcodes and see what happens.
3
387,800
1
false
387,606
0
Using user input to find information in a Mysql database
0
0
0
0
0
0
4
4,847
2008-12-22T22:37:00.000
0
python,sql,user-input
I need to design a program using python that will ask the user for a barcode. Then, using this barcode, it will search a mysql to find its corresponding product. I am a bit stuck on how to get started. Does anyone have any tips for me?
1
0
A barcode is simply a graphical representation of a series of characters (alphanumeric) So if you have a method for users to enter this code (a barcode scanner), then its just an issue of querying the mysql database for the character string.
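A minimal sketch of that lookup with MySQLdb, assuming a hypothetical products table with barcode, name and price columns (credentials are placeholders):

    import MySQLdb

    conn = MySQLdb.connect(host="localhost", user="root", passwd="secret", db="shop")
    cursor = conn.cursor()

    barcode = "0123456789"   # in practice, read from the scanner's keyboard-style input
    cursor.execute("SELECT name, price FROM products WHERE barcode = %s", (barcode,))
    row = cursor.fetchone()
    print(row if row else "No product found for that barcode")

    conn.close()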
3
387,622
1
false
387,606
0
Using user input to find information in a Mysql database
0
1
0.049958
0
0
0
4
4,847
2008-12-22T22:37:00.000
0
python,sql,user-input
I need to design a program using python that will ask the user for a barcode. Then, using this barcode, it will search a mysql to find its corresponding product. I am a bit stuck on how to get started. Does anyone have any tips for me?
1
0
That is a very ambiguous question. What you want can be done in many ways depending on what you actually want to do. How are your users going to enter the bar code? Are they going to use a bar code scanner? Are they entering the bar code numbers manually? Is this going to run on a desktop/laptop computer or is it going to run on a handheld device? Is the bar code scanner storing the bar codes for later retrieval, or is it sending them directly to the computer? Will it send them through a USB cable or wirelessly?
3
387,694
1
false
387,606
0
Using user input to find information in a Mysql database
0
0
0
0
0
0
4
4,847
2008-12-22T22:37:00.000
0
python,sql,user-input
I need to design a program using python that will ask the user for a barcode. Then, using this barcode, it will search a mysql to find its corresponding product. I am a bit stuck on how to get started. Does anyone have any tips for me?
1
0
"However, opening and closing the connection with each update seems more 'neat'. " It's also a huge amount of overhead -- and there's no actual benefit. Creating and disposing of connections is relatively expensive. More importantly, what's the actual reason? How does it improve, simplify, clarify? Generally, most applications have one connection that they use from when they start to when they stop.
3
387,932
2
true
387,619
0
Mysql Connection, one or many?
0
7
1.2
0
0
0
4
1,201
2008-12-22T22:40:00.000
0
python,mysql
I'm writing a script in python which basically queries WMI and updates the information in a mysql database. One of those "write something you need" to learn to program exercises. In case something breaks in the middle of the script, for example, the remote computer turns off, it's separated out into functions. Query Some WMI data Update that to the database Query Other WMI data Update that to the database Is it better to open one mysql connection at the beginning and leave it open or close the connection after each update? It seems as though one connection would use less resources. (Although I'm just learning, so this is a complete guess.) However, opening and closing the connection with each update seems more 'neat'. Functions would be more stand alone, rather than depend on code outside that function.
1
0
I don't think there is a "better" solution. It's too early to think about resources, and since WMI is quite slow (in comparison to an SQL connection) the database is not the issue. Just make it work, and then make it better. The good thing about working with an open connection here is that the "natural" solution is to use objects and not just functions, so it will be a learning experience (in case you are learning Python and not MySQL).
3
387,735
2
false
387,619
0
Mysql Connection, one or many?
0
2
0.099668
0
0
0
4
1,201
2008-12-22T22:40:00.000
0
python,mysql
I'm writing a script in python which basically queries WMI and updates the information in a mysql database. One of those "write something you need" to learn to program exercises. In case something breaks in the middle of the script, for example, the remote computer turns off, it's separated out into functions. Query Some WMI data Update that to the database Query Other WMI data Update that to the database Is it better to open one mysql connection at the beginning and leave it open or close the connection after each update? It seems as though one connection would use less resources. (Although I'm just learning, so this is a complete guess.) However, opening and closing the connection with each update seems more 'neat'. Functions would be more stand alone, rather than depend on code outside that function.
1
0
Useful clues in S.Lott's and Igal Serban's answers. I think you should first find out your actual requirements and code accordingly. Just to mention a different strategy: some applications keep a pool of database (or whatever) connections and, for each transaction, just pull one from that pool. It seems rather obvious that you only need one connection for this kind of application, but you can still keep a pool of one connection and apply the following: whenever a database transaction is needed, the connection is pulled from the pool and returned at the end; (optional) the connection is expired (and replaced by a new one) after a certain amount of time; (optional) the connection is expired after a certain amount of usage; (optional) the pool checks (by sending an inexpensive query) that the connection is alive before handing it over to the program. This is somewhere in between the single-connection and connection-per-transaction strategies.
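A toy illustration of the "pool of one" idea; a real application would more likely use an existing pooling library, and everything here, including the expiry policy, is illustrative:

    import time
    import MySQLdb

    class OneConnectionPool(object):
        """Holds a single connection and replaces it after max_age seconds."""

        def __init__(self, max_age=300, **connect_kwargs):
            self._kwargs = connect_kwargs
            self._max_age = max_age
            self._conn = None
            self._opened = 0.0

        def get(self):
            # Open a fresh connection the first time, or when the old one has expired.
            if self._conn is None or time.time() - self._opened > self._max_age:
                if self._conn is not None:
                    self._conn.close()
                self._conn = MySQLdb.connect(**self._kwargs)
                self._opened = time.time()
            return self._conn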
3
389,364
2
false
387,619
0
Mysql Connection, one or many?
0
1
0.049958
0
0
0
4
1,201
2008-12-22T22:40:00.000
0
python,mysql
I'm writing a script in python which basically queries WMI and updates the information in a mysql database. One of those "write something you need" to learn to program exercises. In case something breaks in the middle of the script, for example, the remote computer turns off, it's separated out into functions. Query Some WMI data Update that to the database Query Other WMI data Update that to the database Is it better to open one mysql connection at the beginning and leave it open or close the connection after each update? It seems as though one connection would use less resources. (Although I'm just learning, so this is a complete guess.) However, opening and closing the connection with each update seems more 'neat'. Functions would be more stand alone, rather than depend on code outside that function.
1
0
I tried this with Excel 2007 and VBA, and it gives the correct value. 1) Paste the value into a new Excel workbook. 2) Press Alt + F11 to get to the VBA editor. 3) Press Ctrl + G to get to the immediate window. 4) In the immediate window, type ?cells("a1").Value, where "a1" is the cell into which you pasted the value. I suspect the cell contains some value or character that causes it to be interpreted this way. Post your observations here.
1
390,304
1
false
390,263
0
Interpreting Excel Currency Values
1
0
0
0
0
0
2
586
2008-12-23T22:37:00.000
0
python,excel,pywin32
I am using python to read a currency value from excel. The returned from the range.Value method is a tuple that I don't know how to parse. For example, the cell appears as $548,982, but in python the value is returned as (1, 1194857614). How can I get the numerical amount from excel or how can I convert this tuple value into the numerical value? Thanks!
1
0
Using a MERGE statement instead of a plain INSERT would solve your problem.
2
675,865
6
false
396,455
0
Python-PostgreSQL psycopg2 interface --> executemany
0
-1
-0.049958
0
0
0
4
7,742
2008-12-28T17:51:00.000
0
python,postgresql,database,psycopg
I am currently analyzing a wikipedia dump file; I am extracting a bunch of data from it using python and persisting it into a PostgreSQL db. I am always trying to make things go faster for this file is huge (18GB). In order to interface with PostgreSQL, I am using psycopg2, but this module seems to mimic many other such DBAPIs. Anyway, I have a question concerning cursor.executemany(command, values); it seems to me like executing an executemany once every 1000 values or so is better than calling cursor.execute(command % value) for each of these 5 million values (please confirm or correct me!). But, you see, I am using an executemany to INSERT 1000 rows into a table which has a UNIQUE integrity constraint; this constraint is not verified in python beforehand, for this would either require me to SELECT all the time (this seems counter productive) or require me to get more than 3 GB of RAM. All this to say that I count on Postgres to warn me when my script tried to INSERT an already existing row via catching the psycopg2.DatabaseError. When my script detects such a non-UNIQUE INSERT, it connection.rollback() (which makes ups to 1000 rows everytime, and kind of makes the executemany worthless) and then INSERTs all values one by one. Since psycopg2 is so poorly documented (as are so many great modules...), I cannot find an efficient and effective workaround. I have reduced the number of values INSERTed per executemany from 1000 to 100 in order to reduce the likeliness of a non-UNIQUE INSERT per executemany, but I am pretty certain their is a way to just tell psycopg2 to ignore these execeptions or to tell the cursor to continue the executemany. Basically, this seems like the kind of problem which has a solution so easy and popular, that all I can do is ask in order to learn about it. Thanks again!
1
0
"When my script detects such a non-UNIQUE INSERT, it connection.rollback() (which makes ups to 1000 rows everytime, and kind of makes the executemany worthless) and then INSERTs all values one by one." The question doesn't really make a lot of sense. Does EVERY block of 1,000 rows fail due to non-unique rows? Does 1 block of 1,000 rows fail (out 5,000 such blocks)? If so, then the execute many helps for 4,999 out of 5,000 and is far from "worthless". Are you worried about this non-Unique insert? Or do you have actual statistics on the number of times this happens? If you've switched from 1,000 row blocks to 100 row blocks, you can -- obviously -- determine if there's a performance advantage for 1,000 row blocks, 100 row blocks and 1 row blocks. Please actually run the actual program with actual database and different size blocks and post the numbers.
2
396,824
6
false
396,455
0
Python-PostgreSQL psycopg2 interface --> executemany
0
0
0
0
0
0
4
7,742
2008-12-28T17:51:00.000
0
python,postgresql,database,psycopg
I am currently analyzing a wikipedia dump file; I am extracting a bunch of data from it using python and persisting it into a PostgreSQL db. I am always trying to make things go faster for this file is huge (18GB). In order to interface with PostgreSQL, I am using psycopg2, but this module seems to mimic many other such DBAPIs. Anyway, I have a question concerning cursor.executemany(command, values); it seems to me like executing an executemany once every 1000 values or so is better than calling cursor.execute(command % value) for each of these 5 million values (please confirm or correct me!). But, you see, I am using an executemany to INSERT 1000 rows into a table which has a UNIQUE integrity constraint; this constraint is not verified in python beforehand, for this would either require me to SELECT all the time (this seems counter productive) or require me to get more than 3 GB of RAM. All this to say that I count on Postgres to warn me when my script tried to INSERT an already existing row via catching the psycopg2.DatabaseError. When my script detects such a non-UNIQUE INSERT, it connection.rollback() (which makes ups to 1000 rows everytime, and kind of makes the executemany worthless) and then INSERTs all values one by one. Since psycopg2 is so poorly documented (as are so many great modules...), I cannot find an efficient and effective workaround. I have reduced the number of values INSERTed per executemany from 1000 to 100 in order to reduce the likeliness of a non-UNIQUE INSERT per executemany, but I am pretty certain their is a way to just tell psycopg2 to ignore these execeptions or to tell the cursor to continue the executemany. Basically, this seems like the kind of problem which has a solution so easy and popular, that all I can do is ask in order to learn about it. Thanks again!
1
0
Here is an example of inner joining two tables based on a common field in both tables. SELECT table1.Products FROM table1 INNER JOIN table2 on table1.barcode = table2.barcode WHERE table1.Products is not null
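Run from Python through any DB-API connection it might look like the sketch below (conn is assumed to be an already-open connection). Note that a JOIN is computed per query; it does not permanently combine the stored tables.

    query = """
        SELECT table1.Products
        FROM table1
        INNER JOIN table2 ON table1.barcode = table2.barcode
        WHERE table1.Products IS NOT NULL
    """
    cursor = conn.cursor()
    cursor.execute(query)
    for (product,) in cursor.fetchall():
        print(product)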
2
403,848
0
false
403,527
0
Making a SQL Query in two tables
0
0
0
0
0
0
6
680
2008-12-31T17:20:00.000
0
python,sql
I'm wondering, is it possible to make an sql query that does the same function as 'select products where barcode in table1 = barcode in table2'. I am writing this function in a python program. Once that function is called will the table be joined permanently or just while that function is running? thanks.
1
0
Here's a way to talk yourself through table design in these cases, based on Object Role Modeling. (Yes, I realize this is only indirectly related to the question.) You have products and barcodes. Products are uniquely identified by Product Code (e.g. 'A2111'; barcodes are uniquely identified by Value (e.g. 1002155061). A Product has a Barcode. Questions: Can a product have no barcode? Can the same product have multiple barcodes? Can multiple products have the same barcode? (If you have any experience with UPC labels, you know the answer to all these is TRUE.) So you can make some assertions: A Product (code) has zero or more Barcode (value). A Barcode (value) has one or more Product (code). -- assumption: we barcodes don't have independent existence if they aren't/haven't been/won't be related to products). Which leads directly (via your ORM model) to a schema with two tables: Product ProductCode(PK) Description etc ProductBarcode ProductCode(FK) BarcodeValue -- with a two-part natural primary key, ProductCode + BarcodeValue and you tie them together as described in the other answers. Similar assertions can be used to determine which fields go into various tables in your design.
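The resulting two-table schema, expressed as generic DDL run through an open DB-API cursor (the column types are guesses; adapt them to your data):

    cursor.execute("""
        CREATE TABLE Product (
            ProductCode  VARCHAR(20) PRIMARY KEY,
            Description  VARCHAR(200)
        )
    """)
    cursor.execute("""
        CREATE TABLE ProductBarcode (
            ProductCode  VARCHAR(20) NOT NULL REFERENCES Product(ProductCode),
            BarcodeValue VARCHAR(30) NOT NULL,
            PRIMARY KEY (ProductCode, BarcodeValue)
        )
    """)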
2
403,904
0
false
403,527
0
Making a SQL Query in two tables
0
0
0
0
0
0
6
680
2008-12-31T17:20:00.000
0
python,sql
I'm wondering, is it possible to make an sql query that does the same function as 'select products where barcode in table1 = barcode in table2'. I am writing this function in a python program. Once that function is called will the table be joined permanently or just while that function is running? thanks.
1
0
For what it's worth, django uses psycopg2.
4
413,259
13
true
413,228
0
PyGreSQL vs psycopg2
0
5
1.2
0
0
0
5
15,364
2009-01-05T14:21:00.000
1
python,postgresql
What is the difference between these two apis? Which one faster, reliable using Python DB API? Upd: I see two psql drivers for Django. The first one is psycopg2. What is the second one? pygresql?
1
0
psycopg2 is partly written in C so you can expect a performance gain, but on the other hand it is a bit harder to install. PyGreSQL is written in Python only: easy to deploy, but slower.
4
413,508
13
false
413,228
0
PyGreSQL vs psycopg2
0
0
0
0
0
0
5
15,364
2009-01-05T14:21:00.000
1
python,postgresql
What is the difference between these two apis? Which one faster, reliable using Python DB API? Upd: I see two psql drivers for Django. The first one is psycopg2. What is the second one? pygresql?
1
0
"PyGreSQL is written in Python only, easy to deployed but slower." PyGreSQL contains a C-coded module, too. I haven't done speed tests, but they're not likely to be much different, as the real work will happen inside the database server.
4
592,846
13
false
413,228
0
PyGreSQL vs psycopg2
0
4
0.158649
0
0
0
5
15,364
2009-01-05T14:21:00.000
1
python,postgresql
What is the difference between these two apis? Which one faster, reliable using Python DB API? Upd: I see two psql drivers for Django. The first one is psycopg2. What is the second one? pygresql?
1
0
Licensing may be an issue for you. PyGreSQL is MIT license. Psycopg2 is GPL license. (as long as you are accessing psycopg2 in normal ways from Python, with no internal API, and no direct C calls, this shouldn't cause you any headaches, and you can release your code under whatever license you like - but I am not a lawyer).
4
413,537
13
false
413,228
0
PyGreSQL vs psycopg2
0
2
0.07983
0
0
0
5
15,364
2009-01-05T14:21:00.000
1
python,postgresql
What is the difference between these two apis? Which one faster, reliable using Python DB API? Upd: I see two psql drivers for Django. The first one is psycopg2. What is the second one? pygresql?
1
0
No. Adding indexes willy-nilly to all "slow" queries will also slow down inserts, updates and deletes. Indexes are a balancing act between fast queries and fast changes. There is no general or "right" answer. There's certainly nothing that can automate this. You have to measure the improvement across your whole application as you add and change indexes.
1
438,700
5
false
438,559
0
Is there a way to automatically generate a list of columns that need indexing?
0
4
0.379949
0
0
0
2
620
2009-01-13T10:36:00.000
1
python,mysql,database,django,django-models
The beauty of ORM lulled me into a soporific sleep. I've got an existing Django app with a lack of database indexes. Is there a way to automatically generate a list of columns that need indexing? I was thinking maybe some middleware that logs which columns are involved in WHERE clauses? but is there anything built into MySQL that might help?
1
0
Just to throw it out there... there are PHP frameworks utilizing MVC. Codeigniter does simple and yet powerful things. You can definitely separate the template layer from the logic layer.
6
494,119
2
false
439,759
0
Is a PHP, Python, PostgreSQL design suitable for a business application?
0
0
0
0
0
0
7
2,512
2009-01-13T16:47:00.000
0
php,python,postgresql
I'm looking for some quick thoughts about a business application I am looking to build. I'd like to separate the three layers of presentation, domain logic, and data using PHP, Python, and PostgreSQL, respectively. I would like to hear, possibly from other folks who have gone down this path before, if there are problems with this approach, if I am targeting the wrong tools, etc. I'm looking at PHP because it is widely used, fairly mature, and I can find ample people with skills in PHP interface design. I'm looking at Python because of the benefits of readable code, because I hear can find more Python programmers that also have subject-matter skills (in this case, finance), and it's an open source language. Plus, it seems easier to code with. I'm looking at PostgreSQL for the transaction-level features. MySQL is also an option here, but I don't need to debate this aspect. This is not a web application, although I would like to utilize a browser for the user interface. This is more of an Enterprise Application, but for a small business with moderate numbers of users (maybe 5-10) and a modest number of daily transactions. What is important is that we are able to upgrade the database or domain logic or interface separate from the other layers in the future. I'm NOT looking for a buy vs. build debate, as that's a different discussion. Thanks for any insight
1
0
I personally agree with the second and the third points in your post. As for PHP, in my opinion you can use Python for the presentation layer as well; there are many solutions (Zope, Plone, ...) based on Python.
6
439,793
2
false
439,759
0
Is a PHP, Python, PostgreSQL design suitable for a business application?
0
0
0
0
0
0
7
2,512
2009-01-13T16:47:00.000
0
php,python,postgresql
I'm looking for some quick thoughts about a business application I am looking to build. I'd like to separate the three layers of presentation, domain logic, and data using PHP, Python, and PostgreSQL, respectively. I would like to hear, possibly from other folks who have gone down this path before, if there are problems with this approach, if I am targeting the wrong tools, etc. I'm looking at PHP because it is widely used, fairly mature, and I can find ample people with skills in PHP interface design. I'm looking at Python because of the benefits of readable code, because I hear can find more Python programmers that also have subject-matter skills (in this case, finance), and it's an open source language. Plus, it seems easier to code with. I'm looking at PostgreSQL for the transaction-level features. MySQL is also an option here, but I don't need to debate this aspect. This is not a web application, although I would like to utilize a browser for the user interface. This is more of an Enterprise Application, but for a small business with moderate numbers of users (maybe 5-10) and a modest number of daily transactions. What is important is that we are able to upgrade the database or domain logic or interface separate from the other layers in the future. I'm NOT looking for a buy vs. build debate, as that's a different discussion. Thanks for any insight
1
0
Just skip PHP and use Python (with Django, as already noticed while I typed). Django already separates the layers as you mentioned. I have never used PgSQL myself, but I think it's mostly a matter of taste whether you prefer it over MySQL. It used to support more enterprise features than MySQL but I'm not sure if that's still true with MySQL 5.0 and 5.1. Transactions are supported in MySQL, anyway (you have to use the InnoDB table engine, however).
6
439,818
2
false
439,759
0
Is a PHP, Python, PostgreSQL design suitable for a business application?
0
0
0
0
0
0
7
2,512
2009-01-13T16:47:00.000
0
php,python,postgresql
I'm looking for some quick thoughts about a business application I am looking to build. I'd like to separate the three layers of presentation, domain logic, and data using PHP, Python, and PostgreSQL, respectively. I would like to hear, possibly from other folks who have gone down this path before, if there are problems with this approach, if I am targeting the wrong tools, etc. I'm looking at PHP because it is widely used, fairly mature, and I can find ample people with skills in PHP interface design. I'm looking at Python because of the benefits of readable code, because I hear can find more Python programmers that also have subject-matter skills (in this case, finance), and it's an open source language. Plus, it seems easier to code with. I'm looking at PostgreSQL for the transaction-level features. MySQL is also an option here, but I don't need to debate this aspect. This is not a web application, although I would like to utilize a browser for the user interface. This is more of an Enterprise Application, but for a small business with moderate numbers of users (maybe 5-10) and a modest number of daily transactions. What is important is that we are able to upgrade the database or domain logic or interface separate from the other layers in the future. I'm NOT looking for a buy vs. build debate, as that's a different discussion. Thanks for any insight
1
0
I can only repeat what other peoples here already said : if you choose Python for the domain layer, you won't gain anything (quite on the contrary) using PHP for the presentation layer. Others already advised Django, and that might be a pretty good choice, but there's no shortage of good Python web frameworks.
6
440,496
2
false
439,759
0
Is a PHP, Python, PostgreSQL design suitable for a business application?
0
1
0.028564
0
0
0
7
2,512
2009-01-13T16:47:00.000
0
php,python,postgresql
I'm looking for some quick thoughts about a business application I am looking to build. I'd like to separate the three layers of presentation, domain logic, and data using PHP, Python, and PostgreSQL, respectively. I would like to hear, possibly from other folks who have gone down this path before, if there are problems with this approach, if I am targeting the wrong tools, etc. I'm looking at PHP because it is widely used, fairly mature, and I can find ample people with skills in PHP interface design. I'm looking at Python because of the benefits of readable code, because I hear can find more Python programmers that also have subject-matter skills (in this case, finance), and it's an open source language. Plus, it seems easier to code with. I'm looking at PostgreSQL for the transaction-level features. MySQL is also an option here, but I don't need to debate this aspect. This is not a web application, although I would like to utilize a browser for the user interface. This is more of an Enterprise Application, but for a small business with moderate numbers of users (maybe 5-10) and a modest number of daily transactions. What is important is that we are able to upgrade the database or domain logic or interface separate from the other layers in the future. I'm NOT looking for a buy vs. build debate, as that's a different discussion. Thanks for any insight
1
0
I'm going to assume that by "business application" you mean a web application hosted in an intranet environment as opposed to some sort of SaaS application on the internet. While you're in the process of architecting your application you need to consider the existing infrastructure and infrastructure support people of your employer/customer. Also, if the company is large enough to have things such as "approved software/hardware lists," you should be aware of those. Keep in mind that some elements of the list may be downright retarded. Don't let past mistakes dictate the architecture of your app, but in cases where they are reasonably sensible I would pick my battles and stick with your enterprise standard. This can be a real pain when you pick a development stack that really works best on Unix/Linux, and then someone tries to force onto a Windows server admined by someone who's never touched anything but ASP.NET applications. Unless there is a particular PHP module that you intend to use that has no Python equivalent, I would drop PHP and use Django. If there is a compelling reason to use PHP, then I'd drop Python. I'm having difficulty imagining a scenario where you would want to use both at the same time. As for PG versus MySQL, either works. Look at what you customer already has deployed, and if they have a bunch of one and little of another, pick that. If they have existing Oracle infrastructure you should consider using it. If they are an SQL Server shop...reconsider your stack and remember to pick your battles.
6
440,118
2
false
439,759
0
Is a PHP, Python, PostgreSQL design suitable for a business application?
0
1
0.028564
0
0
0
7
2,512
2009-01-13T16:47:00.000
0
php,python,postgresql
I'm looking for some quick thoughts about a business application I am looking to build. I'd like to separate the three layers of presentation, domain logic, and data using PHP, Python, and PostgreSQL, respectively. I would like to hear, possibly from other folks who have gone down this path before, if there are problems with this approach, if I am targeting the wrong tools, etc. I'm looking at PHP because it is widely used, fairly mature, and I can find ample people with skills in PHP interface design. I'm looking at Python because of the benefits of readable code, because I hear can find more Python programmers that also have subject-matter skills (in this case, finance), and it's an open source language. Plus, it seems easier to code with. I'm looking at PostgreSQL for the transaction-level features. MySQL is also an option here, but I don't need to debate this aspect. This is not a web application, although I would like to utilize a browser for the user interface. This is more of an Enterprise Application, but for a small business with moderate numbers of users (maybe 5-10) and a modest number of daily transactions. What is important is that we are able to upgrade the database or domain logic or interface separate from the other layers in the future. I'm NOT looking for a buy vs. build debate, as that's a different discussion. Thanks for any insight
1
0
Just to address the MySQL vs PgSQL issues - it shouldn't matter. They're both more than capable of the task, and any reasonable framework should isolate you from the differences relatively well. I think it's down to what you use already, what people have most experience in, and if there's a feature in one or the other you think you'd benefit from. If you have no preference, you might want to go with MySQL purely because it's more popular for web work. This translates to more examples, easier to find help, etc. I actually prefer the philosophy of PgSQL, but this isn't a good enough reason to blow against the wind.
6
440,098
2
false
439,759
0
Is a PHP, Python, PostgreSQL design suitable for a business application?
0
0
0
0
0
0
7
2,512
2009-01-13T16:47:00.000
0
php,python,postgresql
I'm looking for some quick thoughts about a business application I am looking to build. I'd like to separate the three layers of presentation, domain logic, and data using PHP, Python, and PostgreSQL, respectively. I would like to hear, possibly from other folks who have gone down this path before, if there are problems with this approach, if I am targeting the wrong tools, etc. I'm looking at PHP because it is widely used, fairly mature, and I can find ample people with skills in PHP interface design. I'm looking at Python because of the benefits of readable code, because I hear can find more Python programmers that also have subject-matter skills (in this case, finance), and it's an open source language. Plus, it seems easier to code with. I'm looking at PostgreSQL for the transaction-level features. MySQL is also an option here, but I don't need to debate this aspect. This is not a web application, although I would like to utilize a browser for the user interface. This is more of an Enterprise Application, but for a small business with moderate numbers of users (maybe 5-10) and a modest number of daily transactions. What is important is that we are able to upgrade the database or domain logic or interface separate from the other layers in the future. I'm NOT looking for a buy vs. build debate, as that's a different discussion. Thanks for any insight
1
0
If you look at how the SQL solution you provided will be executed, it will go basically like this: (1) fetch the list of friends for the current user; (2) for each user in that list, start an index scan over their recent posts; (3) merge-join all the scans from step 2, stopping when you've retrieved enough entries. You can carry out exactly the same procedure yourself in App Engine, by using the Query instances as iterators and doing a merge join over them. You're right that this will not scale well to large numbers of friends, but it suffers from exactly the same issues the SQL implementation has; it just doesn't disguise them as well: fetching the latest 20 (for example) entries costs roughly O(n log n) work, where n is the number of friends.
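A rough sketch of that merge join in plain Python (Python 3.5+ for heapq.merge's key/reverse arguments); the per-friend datastore queries are stood in for by any iterables of posts that are already sorted newest-first:

    import heapq
    import itertools

    def latest_posts(per_friend_queries, limit=20):
        # Each element of per_friend_queries yields posts newest-first,
        # like one index scan per friend; merge lazily and stop at `limit`.
        merged = heapq.merge(*per_friend_queries,
                             key=lambda post: post["date"], reverse=True)
        return list(itertools.islice(merged, limit))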
2
446,471
13
false
445,827
1
GAE - How to live with no joins?
0
13
1
0
0
0
4
2,112
2009-01-15T06:07:00.000
1
python,google-app-engine,join,google-cloud-datastore
Example Problem: Entities: User contains name and a list of friends (User references) Blog Post contains title, content, date and Writer (User) Requirement: I want a page that displays the title and a link to the blog of the last 10 posts by a user's friend. I would also like the ability to keep paging back through older entries. SQL Solution: So in sql land it would be something like: select * from blog_post where user_id in (select friend_id from user_friend where user_id = :userId) order by date GAE solutions i can think of are: Load user, loop through the list of friends and load their latest blog posts. Finally merge all the blog posts to find the latest 10 blog entries In a blog post have a list of all users that have the writer as a friend. This would mean a simple read but would result in quota overload when adding a friend who has lots of blog posts. I don't believe either of these solutions will scale. Im sure others have hit this problem but I've searched, watched google io videos, read other's code ... What am i missing?
1
0
"Load user, loop through the list of friends and load their latest blog posts." That's all a join is -- nested loops. Some kinds of joins are loops with lookups. Most lookups are just loops; some are hashes. "Finally merge all the blog posts to find the latest 10 blog entries" That's a ORDER BY with a LIMIT. That's what the database is doing for you. I'm not sure what's not scalable about this; it's what a database does anyway.
2
446,477
13
false
445,827
1
GAE - How to live with no joins?
0
1
0.049958
0
0
0
4
2,112
2009-01-15T06:07:00.000
1
python,google-app-engine,join,google-cloud-datastore
Example Problem: Entities: User contains name and a list of friends (User references) Blog Post contains title, content, date and Writer (User) Requirement: I want a page that displays the title and a link to the blog of the last 10 posts by a user's friend. I would also like the ability to keep paging back through older entries. SQL Solution: So in sql land it would be something like: select * from blog_post where user_id in (select friend_id from user_friend where user_id = :userId) order by date GAE solutions i can think of are: Load user, loop through the list of friends and load their latest blog posts. Finally merge all the blog posts to find the latest 10 blog entries In a blog post have a list of all users that have the writer as a friend. This would mean a simple read but would result in quota overload when adding a friend who has lots of blog posts. I don't believe either of these solutions will scale. Im sure others have hit this problem but I've searched, watched google io videos, read other's code ... What am i missing?
1
0
It should be safe to do a repozo backup of the Data.fs followed by an rsync of the blobstorage directory, as long as the database doesn't get packed while those two operations are happening. This is because, at least when using blobs with FileStorage, modifications to a blob always result in the creation of a new file named based on the object id and transaction id. So if new or updated blobs are written after the Data.fs is backed up, it shouldn't be a problem, as the files referenced by the Data.fs should still be around. Deletion of a blob doesn't result in the file being removed until the database is packed, so that should be okay too. Performing a backup in a different order, or with packing during the backup, may result in a backup Data.fs that references blobs that are not included in the backup.
3
2,664,479
8
false
451,952
0
What is the correct way to backup ZODB blobs?
0
13
1
0
0
0
4
2,618
2009-01-16T20:51:00.000
0
python,plone,zope,zodb,blobstorage
I am using plone.app.blob to store large ZODB objects in a blobstorage directory. This reduces size pressure on Data.fs but I have not been able to find any advice on backing up this data. I am already backing up Data.fs by pointing a network backup tool at a directory of repozo backups. Should I simply point that tool at the blobstorage directory to backup my blobs? What if the database is being repacked or blobs are being added and deleted while the copy is taking place? Are there files in the blobstorage directory that must be copied over in a certain order?
1
0
Backing up "blobstorage" will do it. No need for a special order or anything else, it's very simple. All operations in Plone are fully transactional, so hitting the backup in the middle of a transaction should work just fine. This is why you can do live backups of the ZODB. Without knowing what file system you're on, I'd guess that it should work as intended.
3
453,942
8
true
451,952
0
What is the correct way to backup ZODB blobs?
0
3
1.2
0
0
0
4
2,618
2009-01-16T20:51:00.000
0
python,plone,zope,zodb,blobstorage
I am using plone.app.blob to store large ZODB objects in a blobstorage directory. This reduces size pressure on Data.fs but I have not been able to find any advice on backing up this data. I am already backing up Data.fs by pointing a network backup tool at a directory of repozo backups. Should I simply point that tool at the blobstorage directory to backup my blobs? What if the database is being repacked or blobs are being added and deleted while the copy is taking place? Are there files in the blobstorage directory that must be copied over in a certain order?
1
0
Your backup strategy for the FileStorage is fine. However, making a backup of any database that stores data in multiple files is never easy, as your copy has to happen with no writes to the various files. For the FileStorage a blind, stupid copy is fine, as it's just a single file. (Using repozo is even better.) In this case (with BlobStorage combined with FileStorage) I have to point to the regular backup advice: take the db offline while making a file-system copy; use snapshot tools like LVM to freeze the disk at a given point; or do a transactional export (not feasible in practice).
3
676,364
8
false
451,952
0
What is the correct way to backup ZODB blobs?
0
1
0.049958
0
0
0
4
2,618
2009-01-16T20:51:00.000
0
python,plone,zope,zodb,blobstorage
I am using plone.app.blob to store large ZODB objects in a blobstorage directory. This reduces size pressure on Data.fs but I have not been able to find any advice on backing up this data. I am already backing up Data.fs by pointing a network backup tool at a directory of repozo backups. Should I simply point that tool at the blobstorage directory to backup my blobs? What if the database is being repacked or blobs are being added and deleted while the copy is taking place? Are there files in the blobstorage directory that must be copied over in a certain order?
1
0
Go to your project directory with cd, activate your environment if it isn't already active (source bin/activate), and then run the command easy_install MySQL-python.
5
28,278,997
493
false
454,854
0
No module named MySQLdb
0
5
0.03124
0
0
0
32
804,257
2009-01-18T09:13:00.000
1
python,django,python-2.x
I am using Python version 2.5.4 and install MySQL version 5.0 and Django. Django is working fine with Python, but not MySQL. I am using it in Windows Vista.
1
0
I personally recommend using pymysql instead of the genuine MySQL connector: it provides a platform-independent interface and can be installed with pip. You can then edit the SQLAlchemy URL scheme like this: mysql+pymysql://username:passwd@host/database
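A small sketch of both routes (the credentials and database name are placeholders):

    import pymysql
    pymysql.install_as_MySQLdb()   # lets code that does "import MySQLdb" use pymysql instead

    # Direct use:
    conn = pymysql.connect(host="localhost", user="root",
                           password="secret", database="mydb")

    # Or through SQLAlchemy, with the URL scheme from the answer:
    from sqlalchemy import create_engine
    engine = create_engine("mysql+pymysql://root:secret@localhost/mydb")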
5
58,246,337
493
false
454,854
0
No module named MySQLdb
0
6
1
0
0
0
32
804,257
2009-01-18T09:13:00.000
1
python,django,python-2.x
I am using Python version 2.5.4 and install MySQL version 5.0 and Django. Django is working fine with Python, but not MySQL. I am using it in Windows Vista.
1
0
If your Python version is 3.5, do a pip install mysqlclient; the other suggestions didn't work for me.
5
38,310,817
493
false
454,854
0
No module named MySQLdb
0
93
1
0
0
0
32
804,257
2009-01-18T09:13:00.000
1
python,django,python-2.x
I am using Python version 2.5.4 and install MySQL version 5.0 and Django. Django is working fine with Python, but not MySQL. I am using it in Windows Vista.
1
0
None of the above worked for me on an Ubuntu 18.04 fresh install via docker image. The following solved it for me: apt-get install holland python3-mysqldb
5
58,825,148
493
false
454,854
0
No module named MySQLdb
0
2
0.012499
0
0
0
32
804,257
2009-01-18T09:13:00.000
1
python,django,python-2.x
I am using Python version 2.5.4 and install MySQL version 5.0 and Django. Django is working fine with Python, but not MySQL. I am using it in Windows Vista.
1
0
For CentOS 8 and Python3 $ sudo dnf install python3-mysqlclient -y
5
72,496,371
493
false
454,854
0
No module named MySQLdb
0
0
0
0
0
0
32
804,257
2009-01-18T09:13:00.000
1
python,django,python-2.x
I am using Python version 2.5.4 and install MySQL version 5.0 and Django. Django is working fine with Python, but not MySQL. I am using it in Windows Vista.
1
0
That's because parameters can only be passed to VALUES; the table name can't be parametrized. Also, you have quotes around a parametrized argument in the second query. Remove the quotes: escaping is handled by the underlying library automatically for you.
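A corrected sketch of the two statements; the table-name check with a regular expression is my own addition (a name spliced into SQL text should be validated first), and cursor stands in for the asker's self.cursor:

    import re
    import sys

    table = sys.argv[2]
    # The table name cannot be a ? parameter, so validate it and put it
    # into the SQL text directly.
    if not re.match(r"^[A-Za-z_][A-Za-z0-9_]*$", table):
        raise ValueError("unsafe table name")

    create_stmt = ("CREATE TABLE %s (login CHAR(8) PRIMARY KEY NOT NULL, "
                   "grade INTEGER NOT NULL)" % table)
    insert_stmt = "insert into asgn values (?, ?)"   # note: no quotes around the first ?

    cursor.execute(create_stmt)
    cursor.execute(insert_stmt, (sys.argv[2], sys.argv[3]))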
1
474,296
1
true
474,261
0
Python pysqlite not accepting my qmark parameterization
0
7
1.2
0
0
0
3
1,629
2009-01-23T19:55:00.000
0
python,sqlite,pysqlite,python-db-api
I think I am being a bonehead, maybe not importing the right package, but when I do... from pysqlite2 import dbapi2 as sqlite import types import re import sys ... def create_asgn(self): stmt = "CREATE TABLE ? (login CHAR(8) PRIMARY KEY NOT NULL, grade INTEGER NOT NULL)" stmt2 = "insert into asgn values ('?', ?)" self.cursor.execute(stmt, (sys.argv[2],)) self.cursor.execute(stmt2, [sys.argv[2], sys.argv[3]]) ... I get the error pysqlite2.dbapi2.OperationalError: near "?": syntax error This makes very little sense to me, as the docs show that pysqlite is qmark parametrized. I am new to python and db-api though, help me out! THANKS
1
0
I was in the exact same situation as you and went with PL/Python after giving up on PL/SQL after a while. It was a good decision, looking back. Some things that bit me were unicode issues (client encoding, byte sequence) and specific postgres data types (bytea).
2
476,089
3
false
475,302
0
PostgreSQL procedural languages: to choose?
0
1
0.066568
0
0
0
3
2,801
2009-01-24T01:38:00.000
0
python,postgresql
I have been working with PostgreSQL, playing around with Wikipedia's millions of hyperlinks and such, for 2 years now. I either do my thing directly by sending SQL commands, or I write a client side script in python to manage a million queries when this cannot be done productively (efficiently and effectively) manually. I would run my python script on my 32bit laptop and have it communicate with a $6000 64bit server running PostgreSQL; I would hence have an extra 2.10 Ghz, 3 GB of RAM, psyco and a multithreaded SQL query manager. I now realize that it is time for me to level up. I need to learn to server-side script using a procedural language (PL); I really need to reduce network traffic and its inherent serializing overhead. Now, I really do not feel like researching all the PLs. Knowing that I already know python, and that I am looking for the means between effort and language efficiency, what PL do you guys fancy I should install, learn and use, and why and how?
1
0
Why can't you run your Python on the database server? That has the fewest complexities -- you can run the program you already have.
2
475,939
3
false
475,302
0
PostgreSQL procedural languages: to choose?
0
2
0.132549
0
0
0
3
2,801
2009-01-24T01:38:00.000
0
python,postgresql
I have been working with PostgreSQL, playing around with Wikipedia's millions of hyperlinks and such, for 2 years now. I either do my thing directly by sending SQL commands, or I write a client side script in python to manage a million queries when this cannot be done productively (efficiently and effectively) manually. I would run my python script on my 32bit laptop and have it communicate with a $6000 64bit server running PostgreSQL; I would hence have an extra 2.10 Ghz, 3 GB of RAM, psyco and a multithreaded SQL query manager. I now realize that it is time for me to level up. I need to learn to server-side script using a procedural language (PL); I really need to reduce network traffic and its inherent serializing overhead. Now, I really do not feel like researching all the PLs. Knowing that I already know python, and that I am looking for the means between effort and language efficiency, what PL do you guys fancy I should install, learn and use, and why and how?
1
0
We have an O/RM that has C++ and C# (actually COM) bindings (in FOST.3), and we're putting together the Python bindings, which are new in version 4, along with Linux and Mac support.
2
496,166
7
false
482,612
0
ORM (object relational manager) solution with multiple programming language support
0
0
0
0
0
0
3
1,697
2009-01-27T08:10:00.000
0
c#,c++,python,orm
Is there a good ORM (object relational manager) solution that can use the same database from C++, C#, Python? It could also be multiple solutions, e.g. one per language, as long as they can can access the same database and use the same schema. Multi platform support is also needed. Clarification: The idea is to have one database and access this from software written in several different programming languages. Ideally this would be provided by one ORM having APIs (or bindings) in all of these languages. One other solution is to have a different ORM in each language, that use compatible schemas. However I believe that schema migration will be very hard in this setting.
1
0
With SQLAlchemy, you can use reflection to get the schema, so it should work with any of the supported engines. I've used this to migrate data from an old SQLite to Postgres.
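A minimal reflection sketch (the URL and table contents are placeholders; any supported engine works the same way):

    from sqlalchemy import create_engine, MetaData

    engine = create_engine("sqlite:///old.db")

    meta = MetaData()
    meta.reflect(bind=engine)   # read the table definitions from the live database

    for name, table in meta.tables.items():
        print(name, [c.name for c in table.columns])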
2
482,653
7
false
482,612
0
ORM (object relational manager) solution with multiple programming language support
0
1
0.066568
0
0
0
3
1,697
2009-01-27T08:10:00.000
0
c#,c++,python,orm
Is there a good ORM (object relational manager) solution that can use the same database from C++, C#, Python? It could also be multiple solutions, e.g. one per language, as long as they can can access the same database and use the same schema. Multi platform support is also needed. Clarification: The idea is to have one database and access this from software written in several different programming languages. Ideally this would be provided by one ORM having APIs (or bindings) in all of these languages. One other solution is to have a different ORM in each language, that use compatible schemas. However I believe that schema migration will be very hard in this setting.
1
0
I had the same question as the parent when using the ORM, and GHZ's link contained the answer on how it's possible. In sqlalchemy, assuming BlogPost.comments is a mapped relation to the Comments table, you can't do session.query(BlogPost).order_by(BlogPost.comments.creationDate.desc()), but you can do session.query(BlogPost).join(Comments).order_by(Comments.creationDate.desc())
1
1,227,979
3
false
492,223
0
How can I order objects according to some attribute of the child in sqlalchemy?
0
1
0.099668
0
0
0
2
595
2009-01-29T16:01:00.000
1
python,sqlalchemy
Here is the situation: I have a parent model say BlogPost. It has many Comments. What I want is the list of BlogPosts ordered by the creation date of its' Comments. I.e. the blog post which has the most newest comment should be on top of the list. Is this possible with SQLAlchemy?
1
0
I agree with your intuition that using a stored procedure is the right way to go, but then, I almost always try to implement database stuff in the database. In your proc, I would introduce some kind of logic like, say, there's only a 30% chance that returning the result will actually increment the counter. Just to increase the variability.
1
514,643
4
false
514,617
0
Random name generator strategy - help me improve it
0
1
0.099668
0
0
0
2
1,758
2009-02-05T04:51:00.000
0
python,mysql,random,web.py
I have a small project I am doing in Python using web.py. It's a name generator, using 4 "parts" of a name (firstname, middlename, anothername, surname). Each part of the name is a collection of entites in a MySQL databse (name_part (id, part, type_id), and name_part_type (id, description)). Basic stuff, I guess. My generator picks a random entry of each "type", and assembles a comical name. Right now, I am using select * from name_part where type_id=[something] order by rand() limit 1 to select a random entry of each type (so I also have 4 queries that run per pageview, I figured this was better than one fat query returning potentially hundreds of rows; if you have a suggestion for how to pull this off in one query w/o a sproc I'll listen). Obviously I want to make this more random. Actually, I want to give it better coverage, not necessarily randomness. I want to make sure it's using as many possibilities as possible. That's what I am asking in this question, what sorts of strategies can I use to give coverage over a large random sample? My idea, is to implement a counter column on each name_part, and increment it each time I use it. I would need some logic to then say like: "get a name_part that is less than the highest "counter" for this "name_part_type", unless there are none then pick a random one". I am not very good at SQL, is this kind of logic even possible? The only way I can think to do this would require up to 3 or 4 queries for each part of the name (so up to 12 queries per pageview). Can I get some input on my logic here? Am I overthinking it? This actually sounds ideal for a stored procedure... but can you guys at least help me solve how to do it without a sproc? (I don't know if I can even use a sproc with the built-in database stuff of web.py). I hope this isn't terribly dumb but thanks ahead of time. edit: Aside from my specific problem I am still curious if there are any alternate strategies I can use that may be better.
1
0
If your application uses a single Excel file containing macros that you call, I fear the answer is probably no, since, COM aside, Excel does not allow two files with the same name to be open at once (even if they are in different directories). You may be able to get around this by dynamically copying the file to another name before opening it. My Python knowledge isn't huge, but in most languages there is a way of specifying, when you create a COM object, whether you wish it to be a new object or to connect to a preexisting instance by default. Check the Python docs for something along these lines. Can you list the kinds of specific problems you are having and exactly what you are hoping to do?
1
516,983
6
false
516,946
0
Control 2 separate Excel instances by COM independently... can it be done?
1
0
0
0
0
0
3
4,391
2009-02-05T17:32:00.000
0
python,windows,excel,com
I've got a legacy application which is implemented in a number of Excel workbooks. It's not something that I have the authority to re-implement, however another application that I do maintain does need to be able to call functions in the Excel workbook. It's been given a python interface using the Win32Com library. Other processes can call functions in my python package which in turn invokes the functions I need via Win32Com. Unfortunately COM does not allow me to specify a particular COM process, so at the moment no matter how powerful my server I can only control one instance of Excel at a time on the computer. If I were to try to run more than one instance of excel there would be no way of ensuring that the python layer is bound to a specific Excel instance. I'd like to be able to run more than 1 of my excel applications on my Windows server concurrently. Is there a way to do this? For example, could I compartmentalize my environment so that I could run as many Excel _ Python combinations as my application will support?
1
0
Make sure your db connection command isn't in any kind of loop. I was getting the same error from my script until I moved my db.database() call out of my program's repeating execution loop.
3
15,046,529
22
false
519,296
0
Getting OperationalError: FATAL: sorry, too many clients already using psycopg2
0
3
0.197375
0
0
0
3
27,905
2009-02-06T06:15:00.000
0
python,postgresql,psycopg2
I am getting the error OperationalError: FATAL: sorry, too many clients already when using psycopg2. I am calling the close method on my connection instance after I am done with it. I am not sure what could be causing this, it is my first experience with python and postgresql, but I have a few years experience with php, asp.net, mysql, and sql server. EDIT: I am running this locally, if the connections are closing like they should be then I only have 1 connection open at a time. I did have a GUI open to the database but even closed I am getting this error. It is happening very shortly after I run my program. I have a function I call that returns a connection that is opened like: psycopg2.connect(connectionString) Thanks Final Edit: It was my mistake, I was recursively calling the same method on mistake that was opening the same method over and over. It has been a long day..
1
0
This error means what it says: there are too many clients connected to postgreSQL. Questions you should ask yourself: Are you the only one connected to this database? Are you running a graphical IDE? What method are you using to connect? Are you testing queries at the same time that you run the code? Any of these things could be the problem. If you are the admin, you can raise the number of allowed clients, but if a program is holding connections open, that won't help for long. There are many reasons why you could end up with too many clients connected at the same time.
3
519,304
22
true
519,296
0
Getting OperationalError: FATAL: sorry, too many clients already using psycopg2
0
15
1.2
0
0
0
3
27,905
2009-02-06T06:15:00.000
0
python,postgresql,psycopg2
I am getting the error OperationalError: FATAL: sorry, too many clients already when using psycopg2. I am calling the close method on my connection instance after I am done with it. I am not sure what could be causing this, it is my first experience with python and postgresql, but I have a few years experience with php, asp.net, mysql, and sql server. EDIT: I am running this locally, if the connections are closing like they should be then I only have 1 connection open at a time. I did have a GUI open to the database but even closed I am getting this error. It is happening very shortly after I run my program. I have a function I call that returns a connection that is opened like: psycopg2.connect(connectionString) Thanks Final Edit: It was my mistake, I was recursively calling the same method on mistake that was opening the same method over and over. It has been a long day..
1
0
It simply means that many clients are making transactions to PostgreSQL at the same time. I was running a Postgis container and Django in different docker containers. In my case, restarting both containers (the database and the Django one) solved the problem.
3
64,746,356
22
false
519,296
0
Getting OperationalError: FATAL: sorry, too many clients already using psycopg2
0
1
0.066568
0
0
0
3
27,905
2009-02-06T06:15:00.000
0
python,postgresql,psycopg2
I am getting the error OperationalError: FATAL: sorry, too many clients already when using psycopg2. I am calling the close method on my connection instance after I am done with it. I am not sure what could be causing this, it is my first experience with python and postgresql, but I have a few years experience with php, asp.net, mysql, and sql server. EDIT: I am running this locally, if the connections are closing like they should be then I only have 1 connection open at a time. I did have a GUI open to the database but even closed I am getting this error. It is happening very shortly after I run my program. I have a function I call that returns a connection that is opened like: psycopg2.connect(connectionString) Thanks Final Edit: It was my mistake, I was recursively calling the same method on mistake that was opening the same method over and over. It has been a long day..
1
0
Depending on the data rate sqlite could be exactly the correct way to do this. The entire database is locked for each write so you aren't going to scale to 1000s of simultaneous writes per second. But if you only have a few it is the safest way of assuring you don't overwrite each other.
4
524,955
9
false
524,797
0
Python, SQLite and threading
0
0
0
0
0
0
6
13,542
2009-02-07T23:18:00.000
0
python,multithreading,sqlite
I'm working on an application that will gather data through HTTP from several places, cache the data locally and then serve it through HTTP. So I was looking at the following. My application will first create several threads that will gather data at a specified interval and cache that data locally into a SQLite database. Then in the main thread start a CherryPy application that will query that SQLite database and serve the data. My problem is: how do I handle connections to the SQLite database from my threads and from the CherryPy application? If I'd do a connection per thread to the database will I also be able to create/use an in memory database?
1
0
Depending on the application the DB could be a real overhead. If we are talking about volatile data, maybe you could skip the communication via DB completely and share the data between the data gathering process and the data serving process(es) via IPC. This is not an option if the data has to be persisted, of course.
4
524,937
9
false
524,797
0
Python, SQLite and threading
0
0
0
0
0
0
6
13,542
2009-02-07T23:18:00.000
0
python,multithreading,sqlite
I'm working on an application that will gather data through HTTP from several places, cache the data locally and then serve it through HTTP. So I was looking at the following. My application will first create several threads that will gather data at a specified interval and cache that data locally into a SQLite database. Then in the main thread start a CherryPy application that will query that SQLite database and serve the data. My problem is: how do I handle connections to the SQLite database from my threads and from the CherryPy application? If I'd do a connection per thread to the database will I also be able to create/use an in memory database?
1
0
Short answer: Don't use Sqlite3 in a threaded application. Sqlite3 databases scale well for size, but rather terribly for concurrency. You will be plagued with "Database is locked" errors. If you do, you will need a connection per thread, and you have to ensure that these connections clean up after themselves. This is traditionally handled using thread-local sessions, and is performed rather well (for example) using SQLAlchemy's ScopedSession. I would use this if I were you, even if you aren't using the SQLAlchemy ORM features.
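A minimal sketch of the thread-local session pattern this answer refers to, assuming SQLAlchemy's scoped_session; the samples table and the SQLite file name are invented for illustration only.

```python
from sqlalchemy import create_engine, text
from sqlalchemy.orm import scoped_session, sessionmaker

# check_same_thread=False lets pooled SQLite connections be reused by other
# threads; scoped_session still gives each thread its own Session object.
engine = create_engine("sqlite:///cache.db",
                       connect_args={"check_same_thread": False})
with engine.begin() as conn:
    conn.execute(text("CREATE TABLE IF NOT EXISTS samples (value TEXT)"))

Session = scoped_session(sessionmaker(bind=engine))

def record_sample(value):
    session = Session()   # the calling thread always gets the same session
    try:
        session.execute(text("INSERT INTO samples (value) VALUES (:v)"),
                        {"v": value})
        session.commit()
    finally:
        Session.remove()   # discard the thread-local session when done
```

Each gathering thread can call record_sample safely because the session, and the connection behind it, is never shared across threads while it is in use.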
4
524,806
9
true
524,797
0
Python, SQLite and threading
0
8
1.2
0
0
0
6
13,542
2009-02-07T23:18:00.000
0
python,multithreading,sqlite
I'm working on an application that will gather data through HTTP from several places, cache the data locally and then serve it through HTTP. So I was looking at the following. My application will first create several threads that will gather data at a specified interval and cache that data locally into a SQLite database. Then in the main thread start a CherryPy application that will query that SQLite database and serve the data. My problem is: how do I handle connections to the SQLite database from my threads and from the CherryPy application? If I'd do a connection per thread to the database will I also be able to create/use an in memory database?
1
0
"...create several threads that will gather data at a specified interval and cache that data locally into a sqlite database. Then in the main thread start a CherryPy app that will query that sqlite db and serve the data." Don't waste a lot of time on threads. The things you're describing are simply OS processes. Just start ordinary processes to do gathering and run Cherry Py. You have no real use for concurrent threads in a single process for this. Gathering data at a specified interval -- when done with simple OS processes -- can be scheduled by the OS very simply. Cron, for example, does a great job of this. A CherryPy App, also, is an OS process, not a single thread of some larger process. Just use processes -- threads won't help you.
4
524,901
9
false
524,797
0
Python, SQLite and threading
0
1
0.033321
0
0
0
6
13,542
2009-02-07T23:18:00.000
0
python,multithreading,sqlite
I'm working on an application that will gather data through HTTP from several places, cache the data locally and then serve it through HTTP. So I was looking at the following. My application will first create several threads that will gather data at a specified interval and cache that data locally into a SQLite database. Then in the main thread start a CherryPy application that will query that SQLite database and serve the data. My problem is: how do I handle connections to the SQLite database from my threads and from the CherryPy application? If I'd do a connection per thread to the database will I also be able to create/use an in memory database?
1
0
I've also seen this error when the db file is on an NFS mounted file system.
1
6,345,495
3
false
531,711
0
python, sqlite error? db is locked? but it isnt?
0
0
0
0
0
0
4
5,791
2009-02-10T09:50:00.000
0
python,sqlite,locking
I get a "database table is locked" error in my sqlite3 db. My script is single threaded, and no other app is using the database (I did have it open once in "SQLite Database Browser.exe"). I copied the file, deleted the original (successfully), and renamed the copy, so I know no process is locking it. Yet when I run my script, everything in table B cannot be written to, while table A looks fine. What's happening? -edit- I fixed it, but I'm unsure how. I noticed the code not doing the correct things (I copied the wrong field), and after fixing it up and cleaning it, it magically started working again. -edit2- Someone else posted, so I might as well update. I think the problem was that I was trying to execute a statement while another command/cursor was in use.
1
0
"the byte overhead is significant" Why does this matter? It does the job. If you're running low on disk space, I'd be glad to sell you a 1Tb for $500. Have you run it? Is performance a problem? Can you demonstrate that the performance of serialization is the problem? "I thought of just using repr() and eval(), but is there a simple way I could accomplish this without using eval()?" Nothing simpler than repr and eval. What's wrong with eval? Is is the "someone could insert malicious code into the file where I serialized my lists" issue? Who -- specifically -- is going to find and edit this file to put in malicious code? Anything you do to secure this (i.e., encryption) removes "simple" from it.
1
532,989
7
false
532,934
0
Lightweight pickle for basic types in python?
1
0
0
0
0
0
7
2,674
2009-02-10T16:03:00.000
0
python,serialization,pickle
All I want to do is serialize and unserialize tuples of strings or ints. I looked at pickle.dumps() but the byte overhead is significant. Basically it looks like it takes up about 4x as much space as it needs to. Besides, all I need is basic types and have no need to serialize objects. marshal is a little better in terms of space but the result is full of nasty \x00 bytes. Ideally I would like the result to be human readable. I thought of just using repr() and eval(), but is there a simple way I could accomplish this without using eval()? This is getting stored in a db, not a file. Byte overhead matters because it could make the difference between requiring a TEXT column versus a varchar, and generally data compactness affects all areas of db performance.
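A hedged sketch of the repr()-based approach the question mentions, using ast.literal_eval instead of eval() so no code is ever executed during deserialization; the sample tuple is invented.

```python
import ast

row = (42, "banana", "7.5")
encoded = repr(row)                  # "(42, 'banana', '7.5')" -- human readable
decoded = ast.literal_eval(encoded)  # parses literals only; never runs code
assert decoded == row
```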
1
0
Exactly what problems are you running into? You can simply iterate over the ResultProxy object: for row in conn_or_sess_or_engine.execute(selectable_obj_or_SQLstring): do_something_with(row)
1
536,269
0
false
536,051
0
Outputting data a row at a time from mysql using sqlalchemy
0
1
0.099668
0
0
0
2
563
2009-02-11T09:19:00.000
0
python,mysql,sqlalchemy
I want to fetch data from a MySQL database using SQLAlchemy and use the data in a different class. Basically, I fetch a row at a time, use the data, fetch another row, use the data, and so on. I am running into some problems doing this. Basically, how do I output data a row at a time from a MySQL database? I have looked into all the tutorials, but they are not helping much.
1
0
Here are a couple of points for you to consider. If your data is large, reading it all into memory may be wasteful. If you need random access and not just sequential access to your data, then you'll either have to scan (at most) the entire file each time or read that table into an indexed in-memory structure like a dictionary. A list will still require some kind of scan (straight iteration, or binary search if sorted). With that said, if you don't require some of the features of a DB then don't use one, but if you just think MySQL is too heavy, then +1 on the SQLite suggestion from earlier. It gives you most of the features you'd want from a database without the concurrency overhead.
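A hedged sketch of the "indexed in-memory structure" idea, assuming a hypothetical fruit.csv file with id and name columns.

```python
import csv

def load_index(path="fruit.csv"):
    """Read the whole file once and index rows by id for O(1) random access."""
    index = {}
    with open(path, newline="") as f:
        for record in csv.DictReader(f):
            index[int(record["id"])] = record["name"]
    return index

# fruits = load_index()
# fruits[3]   # random access without rescanning the file
```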
5
558,822
0
false
557,199
0
Converting a database-driven (non-OO) python script into a non-database driven, OO-script
0
1
0.024995
0
0
0
8
330
2009-02-17T14:56:00.000
0
python,object
I have some software that is heavily dependent on MySQL, and is written in python without any class definitions. For performance reasons, and because the database is really just being used to store and retrieve large amounts of data, I'd like to convert this to an object-oriented python script that does not use the database at all. So my plan is to export the database tables to a set of files (not many -- it's a pretty simple database; it's big in that it has a lot of rows, but only a few tables, each of which has just two or three columns). Then I plan to read the data in, and have a set of functions which provide access to and operations on the data. My question is this: is there a preferred way to convert a set of database tables to classes and objects? For example, if I have a table which contains fruit, where each fruit has an id and a name, would I have a "CollectionOfFruit" class which contains a list of "Fruit" objects, or would I just have a "CollectionOfFruit" class which contains a list of tuples? Or would I just have a list of Fruit objects? I don't want to add any extra frameworks, because I want this code to be easy to transfer to different machines. So I'm really just looking for general advice on how to represent data that might more naturally be stored in database tables, in objects in Python. Alternatively, is there a good book I should read that would point me in the right direction on this?
1
0
If the data is a natural fit for database tables ("rectangular data"), why not convert it to sqlite? It's portable -- just one file to move the db around, and sqlite is available anywhere you have python (2.5 and above anyway).
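A minimal sketch of what "convert it to sqlite" could look like with the standard-library sqlite3 module; the table, columns, and rows are illustrative only.

```python
import sqlite3

rows = [(1, "apple"), (2, "pear")]

conn = sqlite3.connect("fruit.db")   # one file you can copy between machines
conn.execute("CREATE TABLE IF NOT EXISTS fruit (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT OR REPLACE INTO fruit VALUES (?, ?)", rows)
conn.commit()
conn.close()
```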
5
557,473
0
true
557,199
0
Converting a database-driven (non-OO) python script into a non-database driven, OO-script
0
5
1.2
0
0
0
8
330
2009-02-17T14:56:00.000
0
python,object
I have some software that is heavily dependent on MySQL, and is written in python without any class definitions. For performance reasons, and because the database is really just being used to store and retrieve large amounts of data, I'd like to convert this to an object-oriented python script that does not use the database at all. So my plan is to export the database tables to a set of files (not many -- it's a pretty simple database; it's big in that it has a lot of rows, but only a few tables, each of which has just two or three columns). Then I plan to read the data in, and have a set of functions which provide access to and operations on the data. My question is this: is there a preferred way to convert a set of database tables to classes and objects? For example, if I have a table which contains fruit, where each fruit has an id and a name, would I have a "CollectionOfFruit" class which contains a list of "Fruit" objects, or would I just have a "CollectionOfFruit" class which contains a list of tuples? Or would I just have a list of Fruit objects? I don't want to add any extra frameworks, because I want this code to be easy to transfer to different machines. So I'm really just looking for general advice on how to represent data that might more naturally be stored in database tables, in objects in Python. Alternatively, is there a good book I should read that would point me in the right direction on this?
1
0
You could have a Fruit class with id and name instance variables, a function to read/write the information from a file, and maybe a class variable to keep track of the number of fruits (objects) created.
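A hedged sketch of the class this answer describes; the on-disk format is assumed to be simple "id,name" lines, which is not specified in the question.

```python
class Fruit:
    count = 0                      # class variable tracking how many were created

    def __init__(self, fruit_id, name):
        self.id = fruit_id
        self.name = name
        Fruit.count += 1

    @classmethod
    def load_all(cls, path):
        """Read one Fruit per 'id,name' line of the given file."""
        with open(path) as f:
            return [cls(int(i), n.rstrip("\n"))
                    for i, n in (line.split(",", 1) for line in f)]

    def save(self, handle):
        handle.write(f"{self.id},{self.name}\n")
```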
5
557,279
0
false
557,199
0
Converting a database-driven (non-OO) python script into a non-database driven, OO-script
0
1
0.024995
0
0
0
8
330
2009-02-17T14:56:00.000
0
python,object
I have some software that is heavily dependent on MySQL, and is written in python without any class definitions. For performance reasons, and because the database is really just being used to store and retrieve large amounts of data, I'd like to convert this to an object-oriented python script that does not use the database at all. So my plan is to export the database tables to a set of files (not many -- it's a pretty simple database; it's big in that it has a lot of rows, but only a few tables, each of which has just two or three columns). Then I plan to read the data in, and have a set of functions which provide access to and operations on the data. My question is this: is there a preferred way to convert a set of database tables to classes and objects? For example, if I have a table which contains fruit, where each fruit has an id and a name, would I have a "CollectionOfFruit" class which contains a list of "Fruit" objects, or would I just have a "CollectionOfFruit" class which contains a list of tuples? Or would I just have a list of Fruit objects? I don't want to add any extra frameworks, because I want this code to be easy to transfer to different machines. So I'm really just looking for general advice on how to represent data that might more naturally be stored in database tables, in objects in Python. Alternatively, is there a good book I should read that would point me in the right direction on this?
1
0
There's no "one size fits all" answer for this -- it'll depend a lot on the data and how it's used in the application. If the data and usage are simple enough you might want to store your fruit in a dict with id as key and the rest of the data as tuples. Or not. It totally depends. If there's a guiding principle out there then it's to extract the underlying requirements of the app and then write code against those requirements.
5
557,241
0
false
557,199
0
Converting a database-driven (non-OO) python script into a non-database driven, OO-script
0
1
0.024995
0
0
0
8
330
2009-02-17T14:56:00.000
0
python,object
I have some software that is heavily dependent on MySQL, and is written in python without any class definitions. For performance reasons, and because the database is really just being used to store and retrieve large amounts of data, I'd like to convert this to an object-oriented python script that does not use the database at all. So my plan is to export the database tables to a set of files (not many -- it's a pretty simple database; it's big in that it has a lot of rows, but only a few tables, each of which has just two or three columns). Then I plan to read the data in, and have a set of functions which provide access to and operations on the data. My question is this: is there a preferred way to convert a set of database tables to classes and objects? For example, if I have a table which contains fruit, where each fruit has an id and a name, would I have a "CollectionOfFruit" class which contains a list of "Fruit" objects, or would I just have a "CollectionOfFruit" class which contains a list of tuples? Or would I just have a list of Fruit objects? I don't want to add any extra frameworks, because I want this code to be easy to transfer to different machines. So I'm really just looking for general advice on how to represent data that might more naturally be stored in database tables, in objects in Python. Alternatively, is there a good book I should read that would point me in the right direction on this?
1
0
Generally you want your Objects to absolutely match your "real world entities". Since you're starting from a database, it's not always the case that the database has any real-world fidelity, either. Some database designs are simply awful. If your database has reasonable models for Fruit, that's where you start. Get that right first. A "collection" may -- or may not -- be an artificial construct that's part of the solution algorithm, not really a proper part of the problem. Usually collections are part of the problem, and you should design those classes, also. Other times, however, the collection is an artifact of having used a database, and a simple Python list is all you need. Still other times, the collection is actually a proper mapping from some unique key value to an entity, in which case, it's a Python dictionary. And sometimes, the collection is a proper mapping from some non-unique key value to some collection of entities, in which case it's a Python collections.defaultdict(list). Start with the fundamental, real-world-like entities. Those get class definitions. Collections may use built-in Python collections or may require their own classes.
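A hedged sketch contrasting two of the collection shapes this answer distinguishes, using invented fruit records: a plain dict for a unique key and collections.defaultdict(list) for a non-unique key.

```python
from collections import defaultdict

fruits = [(1, "apple", "red"), (2, "banana", "yellow"), (3, "cherry", "red")]

by_id = {fid: (name, colour) for fid, name, colour in fruits}   # unique key -> entity

by_colour = defaultdict(list)                                   # non-unique key -> entities
for fid, name, colour in fruits:
    by_colour[colour].append(name)

# by_id[2]          -> ("banana", "yellow")
# by_colour["red"]  -> ["apple", "cherry"]
```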
5
557,291
0
false
557,199
0
Converting a database-driven (non-OO) python script into a non-database driven, OO-script
0
2
0.049958
0
0
0
8
330
2009-02-17T14:56:00.000
0
python,object
I have some software that is heavily dependent on MySQL, and is written in python without any class definitions. For performance reasons, and because the database is really just being used to store and retrieve large amounts of data, I'd like to convert this to an object-oriented python script that does not use the database at all. So my plan is to export the database tables to a set of files (not many -- it's a pretty simple database; it's big in that it has a lot of rows, but only a few tables, each of which has just two or three columns). Then I plan to read the data in, and have a set of functions which provide access to and operations on the data. My question is this: is there a preferred way to convert a set of database tables to classes and objects? For example, if I have a table which contains fruit, where each fruit has an id and a name, would I have a "CollectionOfFruit" class which contains a list of "Fruit" objects, or would I just have a "CollectionOfFruit" class which contains a list of tuples? Or would I just have a list of Fruit objects? I don't want to add any extra frameworks, because I want this code to be easy to transfer to different machines. So I'm really just looking for general advice on how to represent data that might more naturally be stored in database tables, in objects in Python. Alternatively, is there a good book I should read that would point me in the right direction on this?
1
0
If you're used to thinking that a relational database has to be huge and heavy like PostgreSQL or MySQL, then you'll be pleasantly surprised by SQLite. It is relational, very small, uses a single file, has Python bindings, requires no extra privileges, and works on Linux, Windows, and many other platforms.
1
575,197
2
false
575,172
0
portable non-relational database
0
4
0.088656
0
0
0
9
3,510
2009-02-22T16:31:00.000
0
python,non-relational-database,portable-database
I want to experiment/play around with non-relational databases, and it'd be best if the solution was: portable, meaning it doesn't require an installation (ideally, just copy-pasting the directory to someplace would make it work, though I don't mind if it requires editing some configuration files or running a configuration tool for first-time usage); accessible from Python; and works on both Windows and Linux. What can you recommend for me? Essentially, I would like to be able to install this system on a shared Linux server where I have few user privileges.
1
0
I think SQLObject is more pythonic/simpler, so if it works for you, then stick with it. SQLAlchemy takes a little more to learn, but can do more advanced things if you need that.
1
592,348
14
false
592,332
0
Any reasons not to use SQLObject over SQLAlchemy?
0
9
1
0
0
0
3
3,351
2009-02-26T20:37:00.000
0
python,orm,sqlalchemy,sqlobject
I don't expect to need much more than basic CRUD type functionality. I know that SQLAlchemy is more flexible, but the syntax etc of sqlobject just seem to be a bit easier to get up and going with.
1
0
You'd either have to use a cache, or fetch the most recent change on each request (since you can't persist objects between requests in-memory). From what you describe, it sounds as if it's being hit fairly frequently, so the cache is probably the way to go.
1
603,637
0
false
602,030
0
Store last created model's row in memory
1
0
0
0
0
0
4
127
2009-03-02T11:46:00.000
1
python,django
I am working on an Ajax game. The abstract: 2+ gamers (browsers) change a variable which is saved to the DB through JSON. All gamers are synchronized by a JavaScript timer + JSON, periodically reading that variable from the DB. In general, all changes are stored in the DB as history, but I want the most recent change duplicated in memory. So the problem is: I want one variable to be stored in memory instead of the DB.
1
0
When dealing with databases and PyQt UIs, I'll use something similar to the model-view-controller pattern to help organize and simplify the code. The view module uses/holds any QObjects that are necessary for the UI and contains simple functions/methods for updating your QtGui objects, as well as extracting input from GUI objects. The controller module performs all DB interactions; the more complex code lives here. By using MVC, you will not need to rely on the Qt library as much, and you will run into fewer problems linking Qt with Python. So I guess my suggestion is to continue using pysqlite (since that's what you are used to), but refactor your design a little so the only thing dealing with the Qt libraries is the UI. From the description of your GUI, it should be fairly straightforward.
1
608,262
1
false
608,098
0
What will I lose or gain from switching database APIs? (from pywin32 and pysqlite to QSql)
0
0
0
0
0
0
1
241
2009-03-03T20:45:00.000
0
python,qt,sqlite,pyqt4,pywin32
I am writing a Python (2.5) GUI Application that does the following: Imports from Access to an Sqlite database Saves ui form settings to an Sqlite database Currently I am using pywin32 to read Access, and pysqlite2/dbapi2 to read/write Sqlite. However, certain Qt objects don't automatically cast to Python or Sqlite equivalents when updating the Sqlite database. For example, a QDate, QDateTime, QString and others raise an error. Currently I am maintaining conversion functions. I investigated using QSql, which appears to overcome the casting problem. In addition, it is able to connect to both Access and Sqlite. These two benefits would appear to allow me to refactor my code to use less modules and not maintain my own conversion functions. What I am looking for is a list of important side-effects, performance gains/losses, functionality gains/losses that any of the SO community has experienced as a result from the switch to QSql. One functionality loss I have experienced thus far is the inability to use Access functions using the QODBC driver (e.g., 'SELECT LCASE(fieldname) from tablename' fails, as does 'SELECT FORMAT(fieldname, "General Number") from tablename')
1
0

Dataset Card for Dataset Name

Dataset Summary

This dataset appears to contain Stack Overflow questions and answers on Python database programming (PostgreSQL/psycopg2, SQLite, MySQL, SQLAlchemy, SQLObject, and related topics), together with per-answer metadata such as scores, acceptance status, answer and view counts, creation dates, and tags.

Supported Tasks and Leaderboards

[More Information Needed]

Languages

The question and answer text in the dataset is written in English.

Dataset Structure

Data Instances

[More Information Needed]

Data Fields

[More Information Needed]

Data Splits

[More Information Needed]

Dataset Creation

Curation Rationale

[More Information Needed]

Source Data

Initial Data Collection and Normalization

[More Information Needed]

Who are the source language producers?

[More Information Needed]

Annotations

Annotation process

[More Information Needed]

Who are the annotators?

[More Information Needed]

Personal and Sensitive Information

[More Information Needed]

Considerations for Using the Data

Social Impact of Dataset

[More Information Needed]

Discussion of Biases

[More Information Needed]

Other Known Limitations

[More Information Needed]

Additional Information

Dataset Curators

[More Information Needed]

Licensing Information

[More Information Needed]

Citation Information

[More Information Needed]

Contributions

[More Information Needed]
