The original documentation describes it as follows:
class psycopg2.pool.AbstractConnectionPool(minconn, maxconn, *args, **kwargs)
Base class implementing generic key-based pooling code.
New minconn connections are created automatically. The pool will support a maximum of about maxconn connections. *args and **kwargs are passed to the connect() function.
The following methods are expected to be implemented by subclasses:
getconn(key=None)
Get a free connection and assign it to key if not None.
putconn(conn, key=None, close=False)
Put away a connection.
If close is True, discard the connection from the pool.
closeall()
Close all the connections handled by the pool.
Note that all the connections are closed, including ones eventually in use by the application.
Roughly, this means that when the pool object is created, minconn connections are created automatically, and the pool will support at most maxconn connections in total.
The pool then provides three methods: getconn, putconn, and closeall.

getconn
    Gets a free connection; with the optional key argument, returns the connection assigned to that key.
putconn
    Returns a connection to the pool; the optional key argument is the counterpart of the one passed to getconn.
closeall
    Closes all connections.
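The key-based semantics above can be sketched with a toy pool that mimics the AbstractConnectionPool interface. This is a minimal sketch for illustration only: the FakeConnection class and the connect stand-in are assumptions, not part of psycopg2, and real pools also handle thread safety and error cases.

```python
class FakeConnection:
    """Stand-in for a real psycopg2 connection (assumption for illustration)."""
    def __init__(self):
        self.closed = False

    def close(self):
        self.closed = True


class ToyPool:
    """Minimal key-based pool mimicking AbstractConnectionPool's interface."""
    def __init__(self, minconn, maxconn, connect=FakeConnection):
        self._connect = connect
        self.maxconn = maxconn
        self._pool = [connect() for _ in range(minconn)]  # minconn pre-created connections
        self._used = {}  # key -> connection currently handed out

    def getconn(self, key=None):
        if key in self._used:              # the same key gets the same connection back
            return self._used[key]
        if len(self._used) >= self.maxconn:
            raise RuntimeError("pool exhausted")
        conn = self._pool.pop() if self._pool else self._connect()
        self._used[key] = conn
        return conn

    def putconn(self, conn, key=None, close=False):
        del self._used[key]
        if close:
            conn.close()                   # discard instead of recycling
        else:
            self._pool.append(conn)        # return to the free list

    def closeall(self):
        for conn in self._pool + list(self._used.values()):
            conn.close()                   # note: closes even in-use connections


pool = ToyPool(2, 5)
c1 = pool.getconn("a")
assert pool.getconn("a") is c1  # key maps back to the same connection
pool.putconn(c1, "a")
pool.closeall()
```

The key mechanism is what makes the round-robin pattern in the pooled benchmark below work: each key pins a connection while it is checked out.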
Plain connect, 10,000 queries:
import database
from time import time

t = time()
n = 10000
db = database.PSQL()
while n:
    db.get_conn()
    data = db.query(table="vshop_order",
                    columns=["id", "order_no", "state"],
                    order_by="-id",
                    limit=1)
    n -= 1
print(time() - t)
$: 138.07099604606628
Using the connection pool:
import database
from time import time

db = database.PSQL()
lst = [str(i) for i in range(20)]
t = time()
n = 10000
while n:
    key = lst.pop(0)
    db.get_conn(key)
    data = db.query(table="vshop_order",
                    columns=["id", "order_no", "state"],
                    order_by="-id",
                    limit=1)
    n -= 1
    db.put_conn(key)
    lst.append(key)
print(time() - t)
$: 8.982805013656616
The effect is obvious; across repeated test runs the speedup is consistently around 15x.
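For reference, the ratio of the two timings printed above works out to just over 15x, matching the repeated runs:

```python
plain_connect = 138.07099604606628  # seconds: 10000 queries, reconnecting each time
pooled = 8.982805013656616          # seconds: 10000 queries via the connection pool
print(f"speedup: {plain_connect / pooled:.1f}x")  # speedup: 15.4x
```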
By the way, a word on the test environment:
1
welsmann 2017-03-31 14:08:42 +08:00
....isn't that exactly what connection pools are for....
3
glasslion 2017-03-31 15:54:15 +08:00
PostgreSQL connection pooling is usually done with middleware such as PgBouncer or Pgpool.
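As a point of reference (not from the thread): a minimal PgBouncer setup sits between the application and PostgreSQL and pools connections transparently. All values below are illustrative assumptions, not a recommended configuration:

```ini
[databases]
; applications connect to port 6432 and PgBouncer forwards to the real server
mydb = host=127.0.0.1 port=5432 dbname=mydb

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
pool_mode = session
max_client_conn = 100
default_pool_size = 20
```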
5
stabc 2017-03-31 16:17:08 +08:00
I don't really understand this concept: is this the performance difference between running 10,000 queries and connecting to the database 10,000 times?
7
1069401249 2017-03-31 18:37:32 +08:00
You probably just haven't hit a performance bottleneck. There aren't many truly high-concurrency products out there, so most teams haven't had to optimize for this yet...
8
dikT OP @1069401249 Yeah, where would we even find that much high concurrency ( ͡° ͜ʖ ͡°)