Log Collection with the Latest EFK Stack (Elasticsearch + FileBeat + Kibana)


Table of Contents

  • 1. EFK Overview
  • 2. Prerequisites
  • 3. Download & Install FileBeat 8.15
  • 4. Write the FileBeat Configuration File
  • 5. Start FileBeat
  • 6. Generate Simulated Real-Time Log Data
  • 7. Verify the Indices (Data Streams) Were Created
  • 8. Create a Data View
  • 9. Explore the Data View
  • 10. Filter Collected Logs with KQL
  • 11. Configure Log Retention (Bonus)

1. EFK Overview

EFK is a log-collection stack: FileBeat ships log files, Elasticsearch stores and indexes them, and Kibana visualizes them. See the official FileBeat documentation: https://www.elastic.co/guide/en/beats/filebeat/8.15/filebeat-overview.html


2. Prerequisites

Elasticsearch and Kibana must be installed first. See my previous article for the setup: "elasticsearch 8.15 high-availability cluster setup (with authentication & Kibana)".

3. Download & Install FileBeat 8.15

Download: https://www.elastic.co/downloads/past-releases/filebeat-8-15-0
After downloading, upload the archive to your server.

# Create the target directory
mkdir -p /opt/software/
# Extract the archive into it
tar -zxvf filebeat-8.15.0-linux-x86_64.tar.gz -C /opt/software/
# Switch to the FileBeat installation directory
cd /opt/software/filebeat-8.15.0-linux-x86_64
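
To confirm the installation works, you can print the version with FileBeat's built-in subcommand:

./filebeat version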

4. Write the FileBeat Configuration File

Note: FileBeat's default configuration file is filebeat.yml in the installation directory. Here I create a separate configuration file and point FileBeat at it on startup; use whichever approach suits your situation.

# Create a config directory
mkdir config
# config/filebeat_2_es.yml does not exist yet; it is created when you save and exit
vim config/filebeat_2_es.yml

The full contents of the filebeat_2_es.yml configuration file:

filebeat.inputs:			# input configuration
  - type: filestream		# (required) the input type is filestream
    id: nginx-access-log-1	# (required) must be globally unique
    enabled: true			# enable this input
    paths:
      - /tmp/logs/nginx_access.log*		# files to watch; * is a wildcard, so e.g. nginx_access.log.1 is also watched
    tags: ["nginx-access-log"]			# (optional) user-defined tags, used below for index routing and later for filtering in Kibana
    fields:
      my_server_ip: 192.168.25.31   # (optional) user-defined field, added to every collected event
    fields_under_root: true		 	# (optional) put the custom fields at the top level of the event instead of nesting them under "fields"
    #exclude_lines: ['^DBG']  		# (optional) drop lines that start with DBG
    #include_lines: ['^ERR', '^WARN']   # (optional) only lines starting with "ERR" or "WARN" will be collected
    
  - type: filestream
    id: nginx-error-log-1
    enabled: true
    paths:
      - /tmp/logs/nginx_error.log*
    tags: ["nginx-error-log"]
    fields:
      my_server_ip: 192.168.25.31
    fields_under_root: true
    #exclude_lines: ['^DBG']  # drop lines that start with DBG
    include_lines: ['\[error\]']   # only lines containing "[error]" will be collected

  - type: filestream
    id: elasticsearch-log-1
    enabled: true
    paths:
      - /tmp/logs/elasticsearch.log*
    tags: ["elasticsearch-log"]
    fields:
      my_server_ip: 192.168.25.31
    fields_under_root: true
    parsers:
      - multiline:			# multiline merging: a Java exception usually spans many lines, which should be collected as one event. See: https://www.elastic.co/guide/en/beats/filebeat/8.15/multiline-examples.html
          type: pattern
          pattern: '^\['	# every new log line starts with "["
          negate: true		# lines that do NOT match the pattern are merged; see the multiline-examples doc above
          match: after		# they are appended after the matching line; see the multiline-examples doc above
#output.console:
#  pretty: true
output.elasticsearch:
  enabled: true
  hosts: ["http://192.168.25.31:9200", "http://192.168.25.32:9200", "http://192.168.25.33:9200"]
  username: "elastic"
  password: "123456"
  indices:
    - index: "log-software-nginx-access-%{+yyyy.MM.dd}" 
      when.contains:
        tags: "nginx-access-log"
    - index: "log-software-nginx-error-%{+yyyy.MM.dd}" 
      when.contains:
        tags: "nginx-error-log"
    - index: "log-software-elasticsearch-%{+yyyy.MM.dd}" 
      when.contains:
        tags: "elasticsearch-log"
# ILM (index lifecycle management) must be disabled here; otherwise the indices conditions above and the setup.template.pattern below will NOT take effect
setup.ilm.enabled: false
# The default index name is "filebeat". To use the custom index names above, set a template name and pattern; the pattern must match those index names
setup.template.name: "log"
setup.template.pattern: "log*"
setup.template.overwrite: false
setup.template.settings:
  index.number_of_shards: 3		# number of primary shards per index in ES
  index.number_of_replicas: 1	# number of replicas of each shard in ES

5. Start FileBeat

./filebeat -e -c config/filebeat_2_es.yml
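
You can also sanity-check the configuration file and the connection to Elasticsearch first, using FileBeat's built-in test subcommands:

./filebeat test config -c config/filebeat_2_es.yml
./filebeat test output -c config/filebeat_2_es.yml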

Tip: FileBeat collects both existing file contents and data appended to files in real time. Once a file's contents have been collected, FileBeat will not collect them again. If you want FileBeat to re-collect already-collected files on startup, run "rm -rf /opt/software/filebeat-8.15.0-linux-x86_64/data/*" before starting it. The data directory holds FileBeat's record of what it has already collected, so once it is deleted FileBeat has no memory of previous runs and re-reads the existing files from the beginning.
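
A minimal reset sequence based on this tip (assuming the installation path from section 3 and a foreground FileBeat process):

# stop FileBeat first (Ctrl+C if it is running in the foreground)
# delete the registry so the previous collection state is forgotten
rm -rf /opt/software/filebeat-8.15.0-linux-x86_64/data/*
# start again; existing files are re-collected from the beginning
./filebeat -e -c config/filebeat_2_es.yml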

6. Generate Simulated Real-Time Log Data

The inputs configured above watch /tmp/logs/, so create the sample files there first:

mkdir -p /tmp/logs && cd /tmp/logs

Simulated Nginx access log:

cat > nginx_access.log << 'EOF'
192.168.25.83 - - [31/Jan/2025:21:42:22 +0000] "\x03\x00\x00/*\xE0\x00\x00\x00\x00\x00Cookie: mstshash=Administr" 400 157 "-" "-" "-"
192.168.25.83 - - [01/Feb/2025:11:18:18 +0000] "GET / HTTP/1.1" 400 255 "-" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:130.0) Gecko/20100101 Firefox/130.0" "-"

EOF
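
To simulate logs arriving in real time while FileBeat is running, append further lines to the file; the log line below is made up for illustration:

echo '192.168.25.83 - - [06/Feb/2025:13:00:00 +0000] "GET /index.html HTTP/1.1" 200 612 "-" "curl/8.0" "-"' >> /tmp/logs/nginx_access.log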

Simulated Nginx error log:

cat > nginx_error.log << 'EOF'
2025/01/13 19:15:52 [error] 8#8: *634461 "/usr/local/nginx/html/index.html" is not found (2: No such file or directory), client: 172.168.110.83, server: localhost, request: "GET / HTTP/1.1", host: "192.168.25.10:80"
2025/01/17 17:37:05 [error] 10#10: *770125 "/usr/local/nginx/html/index.html" is not found (2: No such file or directory), client: 172.168.110.83, server: localhost, request: "GET / HTTP/1.1", host: "192.168.25.10:80"
I am not an error line; let's see whether I get filtered out
 [error] I am an error message
 [error] I am error message 2
EOF

Simulated ES log (contains exceptions, to verify later that multi-line exceptions are merged into one event):

cat > elasticsearch.log << 'EOF'
[2025-02-02T01:20:03,049][INFO ][o.e.c.m.MetadataIndexTemplateService] [node-2] adding index template [metrics-apm.transaction.1m@template] for index patterns [metrics-apm.transaction.1m-*]
[2025-02-02T01:20:03,113][INFO ][o.e.c.m.MetadataIndexTemplateService] [node-2] adding index template [metrics-apm.service_destination.60m@template] for index patterns [metrics-apm.service_destination.60m-*]
[2025-02-02T01:20:03,186][INFO ][o.e.c.m.MetadataIndexTemplateService] [node-2] adding index template [metrics-apm.service_transaction.60m@template] for index patterns [metrics-apm.service_transaction.60m-*]
[2025-02-02T01:20:03,232][INFO ][o.e.c.m.MetadataIndexTemplateService] [node-2] adding index template [traces-apm.rum@template] for index patterns [traces-apm.rum-*]
[2025-02-02T01:20:03,319][INFO ][o.e.c.m.MetadataIndexTemplateService] [node-2] adding index template [metrics-apm.service_destination.10m@template] for index patterns [metrics-apm.service_destination.10m-*]
[2025-02-02T01:20:03,383][INFO ][o.e.c.m.MetadataIndexTemplateService] [node-2] adding index template [metrics-apm.service_transaction.10m@template] for index patterns [metrics-apm.service_transaction.10m-*]
[2025-02-02T01:20:03,501][INFO ][o.e.c.m.MetadataIndexTemplateService] [node-2] adding index template [metrics-apm.transaction.60m@template] for index patterns [metrics-apm.transaction.60m-*]
[2025-02-02T01:20:03,582][INFO ][o.e.c.m.MetadataIndexTemplateService] [node-2] adding index template [metrics-apm.app@template] for index patterns [metrics-apm.app.*-*]
[2025-02-02T01:20:03,677][INFO ][o.e.c.m.MetadataIndexTemplateService] [node-2] adding index template [traces-apm@template] for index patterns [traces-apm-*]
[2025-02-02T01:20:03,735][INFO ][o.e.c.m.MetadataIndexTemplateService] [node-2] adding index template [logs-apm.app@template] for index patterns [logs-apm.app.*-*]
[2025-02-02T01:20:03,850][INFO ][o.e.c.m.MetadataIndexTemplateService] [node-2] adding index template [traces-apm.sampled@template] for index patterns [traces-apm.sampled-*]
[2025-02-02T01:20:03,943][INFO ][o.e.c.m.MetadataIndexTemplateService] [node-2] adding index template [metrics-apm.service_destination.1m@template] for index patterns [metrics-apm.service_destination.1m-*]
[2025-02-02T01:20:04,317][ERROR][o.e.x.c.t.IndexTemplateRegistry] [node-2] error adding ingest pipeline template [ent-search-generic-ingestion] for [enterprise_search]
java.lang.IllegalStateException: Ingest info is empty
        at org.elasticsearch.ingest.IngestService.validatePipeline(IngestService.java:648) ~[elasticsearch-8.15.0.jar:?]
        at org.elasticsearch.ingest.IngestService.validatePipelineRequest(IngestService.java:465) ~[elasticsearch-8.15.0.jar:?]
        at org.elasticsearch.ingest.IngestService.lambda$putPipeline$5(IngestService.java:448) ~[elasticsearch-8.15.0.jar:?]
        at org.elasticsearch.action.ActionListenerImplementations$ResponseWrappingActionListener.onResponse(ActionListenerImplementations.java:245) ~[elasticsearch-8.15.0.jar:?]
        at org.elasticsearch.action.support.ContextPreservingActionListener.onResponse(ContextPreservingActionListener.java:32) ~[elasticsearch-8.15.0.jar:?]
        at org.elasticsearch.tasks.TaskManager$1.onResponse(TaskManager.java:202) ~[elasticsearch-8.15.0.jar:?]
        at org.elasticsearch.tasks.TaskManager$1.onResponse(TaskManager.java:196) ~[elasticsearch-8.15.0.jar:?]
        at org.elasticsearch.action.ActionListenerImplementations$RunBeforeActionListener.onResponse(ActionListenerImplementations.java:307) ~[elasticsearch-8.15.0.jar:?]
        at org.elasticsearch.action.support.ContextPreservingActionListener.onResponse(ContextPreservingActionListener.java:32) ~[elasticsearch-8.15.0.jar:?]
        at org.elasticsearch.action.ActionListenerImplementations$MappedActionListener.onResponse(ActionListenerImplementations.java:95) ~[elasticsearch-8.15.0.jar:?]
        at org.elasticsearch.action.ActionListener.respondAndRelease(ActionListener.java:367) ~[elasticsearch-8.15.0.jar:?]
        at org.elasticsearch.action.support.nodes.TransportNodesAction.lambda$newResponseAsync$2(TransportNodesAction.java:215) ~[elasticsearch-8.15.0.jar:?]
        at org.elasticsearch.action.ActionListener.run(ActionListener.java:444) ~[elasticsearch-8.15.0.jar:?]
        at org.elasticsearch.action.support.nodes.TransportNodesAction.newResponseAsync(TransportNodesAction.java:215) ~[elasticsearch-8.15.0.jar:?]
        at org.elasticsearch.action.support.nodes.TransportNodesAction$1.lambda$onCompletion$4(TransportNodesAction.java:166) ~[elasticsearch-8.15.0.jar:?]
        at org.elasticsearch.action.support.nodes.TransportNodesAction.lambda$doExecute$0(TransportNodesAction.java:178) ~[elasticsearch-8.15.0.jar:?]
        at org.elasticsearch.action.ActionListenerImplementations$ResponseWrappingActionListener.onResponse(ActionListenerImplementations.java:245) ~[elasticsearch-8.15.0.jar:?]
        at org.elasticsearch.action.support.ThreadedActionListener$1.doRun(ThreadedActionListener.java:39) ~[elasticsearch-8.15.0.jar:?]
        at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:984) ~[elasticsearch-8.15.0.jar:?]
        at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:26) ~[elasticsearch-8.15.0.jar:?]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144) ~[?:?]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642) ~[?:?]
        at java.lang.Thread.run(Thread.java:1570) ~[?:?]
[2025-02-02T01:20:04,395][ERROR][o.e.x.c.t.IndexTemplateRegistry] [node-2] error adding ingest pipeline template [behavioral_analytics-events-final_pipeline] for [enterprise_search]
java.lang.IllegalStateException: Ingest info is empty
        at org.elasticsearch.ingest.IngestService.validatePipeline(IngestService.java:648) ~[elasticsearch-8.15.0.jar:?]
        at org.elasticsearch.ingest.IngestService.validatePipelineRequest(IngestService.java:465) ~[elasticsearch-8.15.0.jar:?]
        at org.elasticsearch.ingest.IngestService.lambda$putPipeline$5(IngestService.java:448) ~[elasticsearch-8.15.0.jar:?]
        at org.elasticsearch.action.ActionListenerImplementations$ResponseWrappingActionListener.onResponse(ActionListenerImplementations.java:245) ~[elasticsearch-8.15.0.jar:?]
        at org.elasticsearch.action.support.ContextPreservingActionListener.onResponse(ContextPreservingActionListener.java:32) ~[elasticsearch-8.15.0.jar:?]
        at org.elasticsearch.tasks.TaskManager$1.onResponse(TaskManager.java:202) ~[elasticsearch-8.15.0.jar:?]
        at org.elasticsearch.tasks.TaskManager$1.onResponse(TaskManager.java:196) ~[elasticsearch-8.15.0.jar:?]
        at org.elasticsearch.action.ActionListenerImplementations$RunBeforeActionListener.onResponse(ActionListenerImplementations.java:307) ~[elasticsearch-8.15.0.jar:?]
        at org.elasticsearch.action.support.ContextPreservingActionListener.onResponse(ContextPreservingActionListener.java:32) ~[elasticsearch-8.15.0.jar:?]
        at org.elasticsearch.action.ActionListenerImplementations$MappedActionListener.onResponse(ActionListenerImplementations.java:95) ~[elasticsearch-8.15.0.jar:?]
        at org.elasticsearch.action.ActionListener.respondAndRelease(ActionListener.java:367) ~[elasticsearch-8.15.0.jar:?]
        at org.elasticsearch.action.support.nodes.TransportNodesAction.lambda$newResponseAsync$2(TransportNodesAction.java:215) ~[elasticsearch-8.15.0.jar:?]
        at org.elasticsearch.action.ActionListener.run(ActionListener.java:444) ~[elasticsearch-8.15.0.jar:?]
        at org.elasticsearch.action.support.nodes.TransportNodesAction.newResponseAsync(TransportNodesAction.java:215) ~[elasticsearch-8.15.0.jar:?]
        at org.elasticsearch.action.support.nodes.TransportNodesAction$1.lambda$onCompletion$4(TransportNodesAction.java:166) ~[elasticsearch-8.15.0.jar:?]
        at org.elasticsearch.action.support.nodes.TransportNodesAction.lambda$doExecute$0(TransportNodesAction.java:178) ~[elasticsearch-8.15.0.jar:?]
        at org.elasticsearch.action.ActionListenerImplementations$ResponseWrappingActionListener.onResponse(ActionListenerImplementations.java:245) ~[elasticsearch-8.15.0.jar:?]
        at org.elasticsearch.action.support.ThreadedActionListener$1.doRun(ThreadedActionListener.java:39) ~[elasticsearch-8.15.0.jar:?]
        at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:984) ~[elasticsearch-8.15.0.jar:?]
        at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:26) ~[elasticsearch-8.15.0.jar:?]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144) ~[?:?]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642) ~[?:?]
        at java.lang.Thread.run(Thread.java:1570) ~[?:?]
[2025-02-02T01:20:04,399][ERROR][o.e.x.c.t.IndexTemplateRegistry] [node-2] error adding ingest pipeline template [search-default-ingestion] for [enterprise_search]
java.lang.IllegalStateException: Ingest info is empty
        at org.elasticsearch.ingest.IngestService.validatePipeline(IngestService.java:648) ~[elasticsearch-8.15.0.jar:?]
        at org.elasticsearch.ingest.IngestService.validatePipelineRequest(IngestService.java:465) ~[elasticsearch-8.15.0.jar:?]
        at org.elasticsearch.ingest.IngestService.lambda$putPipeline$5(IngestService.java:448) ~[elasticsearch-8.15.0.jar:?]
        at org.elasticsearch.action.ActionListenerImplementations$ResponseWrappingActionListener.onResponse(ActionListenerImplementations.java:245) ~[elasticsearch-8.15.0.jar:?]
        at org.elasticsearch.action.support.ContextPreservingActionListener.onResponse(ContextPreservingActionListener.java:32) ~[elasticsearch-8.15.0.jar:?]
        at org.elasticsearch.tasks.TaskManager$1.onResponse(TaskManager.java:202) ~[elasticsearch-8.15.0.jar:?]
        at org.elasticsearch.tasks.TaskManager$1.onResponse(TaskManager.java:196) ~[elasticsearch-8.15.0.jar:?]
        at org.elasticsearch.action.ActionListenerImplementations$RunBeforeActionListener.onResponse(ActionListenerImplementations.java:307) ~[elasticsearch-8.15.0.jar:?]
        at org.elasticsearch.action.support.ContextPreservingActionListener.onResponse(ContextPreservingActionListener.java:32) ~[elasticsearch-8.15.0.jar:?]
        at org.elasticsearch.action.ActionListenerImplementations$MappedActionListener.onResponse(ActionListenerImplementations.java:95) ~[elasticsearch-8.15.0.jar:?]
        at org.elasticsearch.action.ActionListener.respondAndRelease(ActionListener.java:367) ~[elasticsearch-8.15.0.jar:?]
        at org.elasticsearch.action.support.nodes.TransportNodesAction.lambda$newResponseAsync$2(TransportNodesAction.java:215) ~[elasticsearch-8.15.0.jar:?]
        at org.elasticsearch.action.ActionListener.run(ActionListener.java:444) ~[elasticsearch-8.15.0.jar:?]
        at org.elasticsearch.action.support.nodes.TransportNodesAction.newResponseAsync(TransportNodesAction.java:215) ~[elasticsearch-8.15.0.jar:?]
        at org.elasticsearch.action.support.nodes.TransportNodesAction$1.lambda$onCompletion$4(TransportNodesAction.java:166) ~[elasticsearch-8.15.0.jar:?]
        at org.elasticsearch.action.support.nodes.TransportNodesAction.lambda$doExecute$0(TransportNodesAction.java:178) ~[elasticsearch-8.15.0.jar:?]
        at org.elasticsearch.action.ActionListenerImplementations$ResponseWrappingActionListener.onResponse(ActionListenerImplementations.java:245) ~[elasticsearch-8.15.0.jar:?]
        at org.elasticsearch.action.support.ThreadedActionListener$1.doRun(ThreadedActionListener.java:39) ~[elasticsearch-8.15.0.jar:?]
        at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:984) ~[elasticsearch-8.15.0.jar:?]
        at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:26) ~[elasticsearch-8.15.0.jar:?]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144) ~[?:?]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642) ~[?:?]
        at java.lang.Thread.run(Thread.java:1570) ~[?:?]
EOF
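
Once FileBeat has shipped these files, a quick way to confirm that events arrived is the standard Elasticsearch _count API (credentials taken from the FileBeat config above):

curl -u elastic:123456 "http://192.168.25.31:9200/log-software-*/_count?pretty"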

7. Verify the Indices (Data Streams) Were Created

Log in to Kibana at: http://<your-Kibana-server-IP>:5601/

Click "Stack Management" in the left-hand menu.

Under "Stack Management", open "Index Management" and switch to the "Data Streams" tab.
You can see that the indices specified in the configuration file have been created as corresponding data streams.
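As an alternative to the UI, the standard Elasticsearch data stream API can list them from the command line:

curl -u elastic:123456 "http://192.168.25.31:9200/_data_stream/log-software-*?pretty"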


8. Create a Data View

Menu location: Stack Management --> Data Views

# When creating the data view, you can use the following values:
Name: Software Logs
Index pattern: log-software*
Timestamp field: @timestamp

Note: choose the index pattern according to your needs. For example, log-software-nginx-access-* would create a data view containing only the Nginx access logs, while log-software* matches all of the software logs, i.e. all three data streams created above.
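
If you prefer the command line, Kibana 8.x also exposes a data views API; a sketch, assuming Kibana runs on 192.168.25.31:5601 and using the values above:

curl -u elastic:123456 -X POST "http://192.168.25.31:5601/api/data_views/data_view" \
  -H 'kbn-xsrf: true' -H 'Content-Type: application/json' \
  -d '{"data_view": {"title": "log-software*", "name": "Software Logs", "timeFieldName": "@timestamp"}}'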

9. Explore the Data View

Menu location: Discover

Check whether the multi-line ES exceptions were merged into single events: as configured by the multiline parser, all the lines of one exception now appear together in a single log event.

10. Filter Collected Logs with KQL

For KQL syntax, see the official Kibana documentation.

Example 1: filter for the ES logs.
Query:

tags : "elasticsearch-log"

Example 2: filter for ES logs that also contain "error".
Query:

tags : "elasticsearch-log" and message:error
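The custom fields and tags from the FileBeat configuration can be combined the same way; for example, to show only the Nginx error logs from one host:

my_server_ip : "192.168.25.31" and tags : "nginx-error-log"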

11. Configure Log Retention (Bonus)

The FileBeat configuration above defines an index template named "log" (setup.template.name). By default, the indices and data streams created from that template retain their log data forever, so we need to set a retention period in the index template (shown below via the UI).
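
As an alternative to the UI steps below, recent Elasticsearch versions also expose a data stream lifecycle API; a sketch that would set a 7-day retention directly on the existing data streams (the retention value here is arbitrary):

curl -u elastic:123456 -X PUT "http://192.168.25.31:9200/_data_stream/log-software-*/_lifecycle" \
  -H 'Content-Type: application/json' \
  -d '{"data_retention": "7d"}'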

Edit the index template:
open the "log" index template, enable "Data retention", set the desired number of days, and click through the remaining steps to save.


Next, delete the data streams that were created automatically earlier.
On the FileBeat host, run rm -rf data/* and start FileBeat again to re-collect the log files; the data streams will then be recreated automatically. Make sure setup.template.overwrite is still set to false in the FileBeat configuration file, so that FileBeat does not overwrite the index template you just edited.
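
Deleting the old data streams can also be done with the standard data stream API; note that this removes their backing indices too, so all previously collected data is lost:

curl -u elastic:123456 -X DELETE "http://192.168.25.31:9200/_data_stream/log-software-*"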

Back in Kibana's Index Management, the automatically recreated data streams now show a "Data retention" value, matching the number of days set in the index template.

That's it. All done!

