Basic Usage of FastAPI + aiosqlite + databases for Building a Server
Overview
This article is mainly an introduction for people who have never used these frameworks; there is nothing particularly advanced in it.
Connection Pooling
First, let me address the question that worried me most: do we actually need a connection pool?
I found an issue in the aiosqlite repository in which the author explains why a connection pool is not that important when using a sqlite database. This is not to say pooling is useless, only that for lightweight use you need not worry about it. I have reproduced the original text below.
Generally speaking, the other DB libraries are pooling connections because a) making a network connection takes a significant amount of time between TCP handshake, etc, so having connections already made and waiting is faster; and b) because network bandwidth is limited, so preventing too many simultaneous connections can help ensure better network utilization.
However, since sqlite by definition is a local database, either on-disk or in memory, the benefits from connection pooling are minimal: there is no network delay to opening a file on disk or allocating memory, and disk bandwidth is much higher (and better managed by the kernel). “Connections” are lightweight compared to mysql or postgres, and can generally be opened or closed at will. The biggest cost to a new connection will likely be spinning up another thread.
I added a simple perf test to measure speed of creating connections, and on my M1 Mac Mini with Python 3.10.2, I was able to open ~4700 connections per second from a file, or ~5300/s for in-memory DBs:
```
(.venv) jreese@mordin ~/workspace/aiosqlite main » python -m unittest aiosqlite.tests.perf.PerfTest -k test_connection
```
When it comes to concurrency, you can simply create more connections with aiosqlite, without needing a connection pool. Pooling connections could potentially still help in that regard if you expect to be making a very large number of concurrent requests and want to limit the number of background threads, though you will likely end up with contention on the limited number of connections in the pool instead.
My general suggestion for most use cases with aiosqlite is to just keep the code simple, and open a new connection using a context manager anywhere in the code you need to talk to the db. This also helps keep transactions isolated, and prevents a wide class of bugs around managing queries on shared connection threads.
In short, with a local file-based database like sqlite, a connection pool is far less necessary than with a database accessed over the network, such as MySQL. So you can freely create database connections whenever you need them.
Dependencies
All of them can be installed via pip install.
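Assuming the standard package names on PyPI, the whole set can be installed in one line (uvicorn is included here because it is used later to run the server):

```shell
pip install fastapi aiosqlite databases uvicorn
```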
Practice
First, if you look at the source of databases/core.py, you will notice the following lines:
```python
async def __aenter__(self) -> "Database":
    await self.connect()
    return self

async def __aexit__(self, exc_type=None, exc_value=None, traceback=None) -> None:
    await self.disconnect()
```
This means we can manage a databases.Database object with an asynchronous context manager, i.e. async with.
Suppose we need to store texts and fetch one at random by keyword. We need a table with id and text columns. For convenience, I created a utility class to manage this.
```python
import databases
```
For simplicity, this project writes raw SQL directly instead of using sqlalchemy; if your data model is complex, avoid hand-written SQL.
FastAPI
With the utility class covered, the next step is integrating it into FastAPI. Here we use dependency injection to obtain an instance of the utility class Database.
```python
from pydantic import BaseModel
```
Then run it with uvicorn, and you have a simple server; basically all CRUD operations can be implemented this way.
```
uvicorn server:app --port 8080
```