

This post is mainly meant as a quick refresher for myself the next time I use Python 3's logging module, so the explanation will not go into much detail. If this is your first contact with the module, I recommend reading these first: the Logging HOWTO and a guide to the python3 logging module. The first link is the official documentation and is good for getting started quickly; the second is a post by another blogger and is quite well written.

With that said, let's get into how to use the logging module.

The logging module defines five levels: CRITICAL, ERROR, WARNING, INFO, and DEBUG. Basic usage looks like this:

import logging

logging.info("Info")
logging.warning("warning")
logging.error("error")
logging.critical("critical")

The output is as follows:

/home/fire/PyVenv/web_env/bin/python3.9 /home/fire/work_project/test.py
WARNING:root:warning
ERROR:root:error
CRITICAL:root:critical

As you can see, the info message was not printed. That is because of something called the "log level": levels are ranked, the default level is WARNING, and since INFO ranks below WARNING the message is filtered out. The ranking of the levels is: CRITICAL > ERROR > WARNING > INFO > DEBUG.
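If you do want the root logger to emit INFO or DEBUG messages, you can lower its threshold with logging.basicConfig before the first logging call. A minimal sketch:

import logging

# Lower the root logger's threshold from the default WARNING to DEBUG,
# so everything from DEBUG upward gets printed.
logging.basicConfig(level=logging.DEBUG)

logging.debug("debug")   # now visible
logging.info("Info")     # now visible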

With the basics out of the way, let's implement automatic log rotation based on file size:

import logging
from logging import handlers


# ulog: a logger used to record user activity
# getLogger returns a Logger object; all logging operations go through it
ulog = logging.getLogger("root.user")
# Set the log level
ulog.setLevel(logging.INFO)

# Configure ulog
# A handler can be thought of as a tool that changes the Logger object's behavior.
# There are many kinds of handlers; the one used here rolls the log over into a new
# file once the current file reaches the size limit.
# filename: user.log, mode: 'a', max size: 1024 * 1024 bytes; older logs are rotated
# into user.log.1, user.log.2, user.log.3 in turn.
uhandler = handlers.RotatingFileHandler(filename="user.log", mode='a', maxBytes=1024 * 1024, backupCount=3)
# Set the log format; see the documentation for the details
uformat = logging.Formatter(fmt="%(asctime)s %(levelname)s %(filename)s %(funcName)s:line %(lineno)d %(message)s", datefmt='%Y-%m-%d %H:%M:%S')
# Attach the format to the handler
uhandler.setFormatter(uformat)
# Add the handler to the logger
ulog.addHandler(uhandler)

ulog.info("Hey!")
ulog.info("Hello, %s.", "Jack")

Running the program above produces the following log entries:

2021-06-07 20:12:15 INFO test.py <module>:line 45 Hey!
2021-06-07 20:12:15 INFO test.py <module>:line 46 Hello, Jack.
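A Logger can hold more than one handler, so the same records can also be echoed to the console by attaching a StreamHandler alongside the RotatingFileHandler. A minimal sketch, reusing the ulog and uformat objects from the code above:

import sys

# StreamHandler writes records to a stream; here that stream is standard output.
console = logging.StreamHandler(stream=sys.stdout)
console.setFormatter(uformat)
ulog.addHandler(console)

ulog.info("This line goes to both user.log and the console.")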

If the backupCount parameter leaves you puzzled, I have helpfully copied the docstring of RotatingFileHandler here:

class RotatingFileHandler(BaseRotatingHandler):
    """
    Handler for logging to a set of files, which switches from one file
    to the next when the current file reaches a certain size.
    """
    def __init__(self, filename, mode='a', maxBytes=0, backupCount=0,
                 encoding=None, delay=False, errors=None):
        """
        Open the specified file and use it as the stream for logging.

        By default, the file grows indefinitely. You can specify particular
        values of maxBytes and backupCount to allow the file to rollover at
        a predetermined size.

        Rollover occurs whenever the current log file is nearly maxBytes in
        length. If backupCount is >= 1, the system will successively create
        new files with the same pathname as the base file, but with extensions
        ".1", ".2" etc. appended to it. For example, with a backupCount of 5
        and a base file name of "app.log", you would get "app.log",
        "app.log.1", "app.log.2", ... through to "app.log.5". The file being
        written to is always "app.log" - when it gets filled up, it is closed
        and renamed to "app.log.1", and if files "app.log.1", "app.log.2" etc.
        exist, then they are renamed to "app.log.2", "app.log.3" etc.
        respectively.

        If maxBytes is zero, rollover never occurs.
        """

That's all for this post.
