wxODBC

Is ODBC even necessary, when we can just interact with Python's
DB API directly?

mxODBC and adodbapi both use Python's DB API 2.0 to communicate
with databases via ODBC. The odbc module in win32all uses DB API
1.0, I think.

Actually, to be more accurate (if I understand it correctly), I
should say that we can interact directly with the Python drivers
for MySQL, PostgreSQL, MS SQL Server, Oracle, etc. As long as the
drivers comply (more or less) with the DB API, it should work
fine, without the need for the ODBC wrapper, which can be a pain
to install and maintain on an end-user's system (especially if
you are deploying multi-platform apps).

I find it to be the other way around. First of all, I mainly
deploy these systems on Windows boxes. I can place the burden
on my customers to set up ODBC system data sources for the
databases I need to work with. This might not always be trivial,
but it's a standard practice for any Windows shop that has a
relational database system running. The Windows sysadmins need
to know how to do this *anyway*. They don't just run my Python
apps.

With native database drivers, I will typically need to build
drivers written in C. This means that I either need access to a
C compiler in the client environment, which they might not have,
or I need access to the correct versions of their database and
operating system in my environment. Even if we stick to MS
Windows and IBM DB2, this can be a big hassle. Will drivers I
build for DB2 7.2 on Windows 2000 work on Windows XP with DB2
8.1? Or with DB2 7.1 on Windows NT 4?

On Linux systems, for instance, where C compilers are almost
always installed, the situation is a bit different.

Programming-wise, using ODBC-compatible drivers also gives me
fewer problems with my code. I know that...

cur.executemany("INSERT INTO T (A,B,C) VALUES (?,?,?)", rows )

...should work regardless of which database server I use. With
native drivers, I might need either the syntax above, or one
of...

cur.executemany("INSERT INTO T (A,B,C) VALUES (%s,%s,%s)", rows )
cur.executemany("INSERT INTO T (A,B,C) VALUES (%(a)s,%(b)s,%(c)s)", rows )
cur.executemany("INSERT INTO T (A,B,C) VALUES (:1,:2,:3)", rows )
cur.executemany("INSERT INTO T (A,B,C) VALUES (:a,:b,:c)", rows )
cur.executemany("INSERT INTO T (A,B,C) VALUES (@a,@b,@c)", rows )

...where "rows" is a sequence of tuples or dictionaries (and
the dictionary keys look different for different drivers)...

See PEP 249, the Python Database API Specification v2.0
(https://peps.python.org/pep-0249/), and
http://www.object-craft.com.au/projects/sybase/sybase/node10.html
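Since PEP 249 requires every driver module to expose its placeholder
convention in a module-level "paramstyle" attribute, one way to keep
code portable is to pick the SQL string based on that attribute. A
minimal sketch, using the standard library's sqlite3 as a stand-in
DB API module (table T and the column names just mirror the examples
above):

```python
# Choose the placeholder syntax by inspecting the DB API module's
# paramstyle attribute (PEP 249 guarantees it exists).
import sqlite3 as db  # any DB API 2.0 compliant module would do

SQL = {
    'qmark':    "INSERT INTO T (A,B,C) VALUES (?,?,?)",
    'numeric':  "INSERT INTO T (A,B,C) VALUES (:1,:2,:3)",
    'named':    "INSERT INTO T (A,B,C) VALUES (:a,:b,:c)",
    'format':   "INSERT INTO T (A,B,C) VALUES (%s,%s,%s)",
    'pyformat': "INSERT INTO T (A,B,C) VALUES (%(a)s,%(b)s,%(c)s)",
}

conn = db.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE T (A,B,C)")

rows = [(1, 2, 3), (4, 5, 6)]
cur.executemany(SQL[db.paramstyle], rows)  # sqlite3 uses 'qmark'
```

This doesn't solve the tuple-versus-dictionary question for the
named styles, of course, but it covers the positional ones.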

I *can* always do something like
"cur.execute("INSERT INTO T (A,B,C) VALUES (%s,%s,%s)" % row)"
instead, but that has several drawbacks:
* Cursor.executemany won't work.
* The database won't work as efficiently (probably no difference
  for the insert, but it might well matter for a select).
* I need to take care of quoting and escaping manually.
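The quoting point is easy to underestimate. A small sketch, again
using sqlite3 as a stand-in driver: a value containing a single
quote breaks the interpolated statement, while the same value
passed as a bound parameter goes through untouched.

```python
# String interpolation vs. bound parameters when the data
# contains a quote character.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE T (A)")

value = "O'Brien"  # note the embedded single quote

try:
    # Interpolation produces ...VALUES ('O'Brien')... -> bad SQL
    cur.execute("INSERT INTO T (A) VALUES ('%s')" % value)
except sqlite3.OperationalError:
    pass  # malformed statement; I'd have to escape it myself

# A bound parameter needs no manual escaping:
cur.execute("INSERT INTO T (A) VALUES (?)", (value,))
```

(And with hostile rather than merely awkward input, the interpolated
version is an SQL injection hole.)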

and which performs much worse than just interacting directly
with the Python drivers.

Performance-wise, you are probably right, but my experience
is that my Python apps that use adodbapi are much faster than
many off-the-shelf DBA tools that also communicate via ODBC.
The bottlenecks seem to be elsewhere...


--
Magnus Lycka, Thinkware AB
Alvans vag 99, SE-907 50 UMEA, SWEDEN
phone: int+46 70 582 80 65, fax: int+46 70 612 80 65
http://www.thinkware.se/ mailto:magnus@thinkware.se