愛招飛 IsoFace Help Manual

# Analyzing and Predicting Optimal Equipment Efficiency

# 1. Description

  1. Load the most recently trained model weights for subsequent analysis and prediction.
  2. Based on the analysis range provided by the user, write the prediction results to a file.
  3. Upload the result file to FastWeb and update the prediction result information.

# 2. Designing the Python Program

  The example Python program is as follows:

# Analysis and prediction

import torch
import torch.nn as nn
import json
import requests
import csv
import random
import datetime
import os
import logging
from logging.handlers import TimedRotatingFileHandler

fastweb_url = 'http://192.168.0.201:8803'

# Create the directory if it does not already exist
def create_directory_if_not_exists(directory_path):
    if not os.path.exists(directory_path):
        os.makedirs(directory_path)

# Define the neural network model
class NeuralNetwork(nn.Module):
    def __init__(self):
        super(NeuralNetwork, self).__init__()
        self.hidden_layers = nn.Sequential(
            nn.Linear(4, 100),
            nn.ReLU()
        )
        for i in range(9):
            self.hidden_layers.add_module(f'hidden_{i+1}', nn.Linear(100, 100))
            self.hidden_layers.add_module(f'relu_{i+1}', nn.ReLU())
        self.output_layer = nn.Linear(100, 1)

    def forward(self, x):
        x = self.hidden_layers(x)
        x = self.output_layer(x)
        return x.squeeze(-1)

# Split the dataset into training and validation sets
def split_dataset(data, split_ratio):
    random.shuffle(data)
    split_index = int(len(data) * split_ratio)
    train_data = data[:split_index]
    validate_data = data[split_index:]
    return train_data, validate_data

# Save the dataset as a CSV file
def save_dataset_to_csv(filename, data):
    with open(filename, 'w', newline='') as csvfile:
        writer = csv.writer(csvfile)
        writer.writerow(['kp', 'ki', 'kd', 'setpoint', 'sumWeight'])
        writer.writerows(data)

# Load the dataset from a CSV file
def load_dataset_from_csv(filename):
    data = []
    with open(filename, 'r') as csvfile:
        reader = csv.reader(csvfile)
        next(reader)  # skip the header row
        for row in reader:
            data.append([float(value) for value in row])
    return data

# Analysis / prediction entry point
def main():
    # Ensure the log directory exists before attaching the file handler
    create_directory_if_not_exists('log/')
    # Configure logging (clear any handlers left over from a previous run)
    logger = logging.getLogger('__dcc_pid_predict__')
    if logger.hasHandlers():
        logger.handlers.clear()
    log_filename = 'log/dcc_pid_predict.log'
    log_formatter = logging.Formatter('%(asctime)s - %(levelname)s - %(message)s')
    log_handler = TimedRotatingFileHandler(log_filename, when="D", interval=1, backupCount=7)
    log_handler.suffix = "%Y-%m-%d.log"
    log_handler.encoding = "utf-8"
    log_handler.setFormatter(log_formatter)
    logger.setLevel(logging.DEBUG)
    logger.addHandler(log_handler)

    # Generate an automatic serial number from the current date and time,
    # e.g. 20230515120000
    current_datetime = datetime.datetime.now()
    auto_number = current_datetime.strftime("%Y%m%d%H%M%S")

    create_directory_if_not_exists('log/')
    create_directory_if_not_exists('data/')
    create_directory_if_not_exists('model/')

    # Use GPU acceleration when available
    device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

    # Locate the most recently created model weight file
    model_dir = "model/"
    latest_file = max(
        (os.path.join(model_dir, f) for f in os.listdir(model_dir) if f.endswith(".pt")),
        key=os.path.getctime,
        default=None
    )

    if latest_file is None:
        logger.error("No .pt model file found in model/")
        return
    model_path = latest_file
    logger.info(model_path)

    # Load the trained model weights
    model = NeuralNetwork().to(device)
    model.load_state_dict(torch.load(model_path, map_location=device))
    model.eval()
    logger.info(f"Model loaded: {model_path}")
    try:
        # input_value is a preset-data variable injected by TaskRunner;
        # its value is a JSON string describing the prediction range
        params = json.loads(input_value.value)
        guid = params["guid"]
        # Ranges and step sizes for the values to predict
        kp_min = params["kp_min"]
        kp_max = params["kp_max"]
        kp_step = params["kp_step"]
        ki_min = params["ki_min"]
        ki_max = params["ki_max"]
        ki_step = params["ki_step"]
        kd_min = params["kd_min"]
        kd_max = params["kd_max"]
        kd_step = params["kd_step"]
        setpoint_min = params["setpoint_min"]
        setpoint_max = params["setpoint_max"]
        setpoint_step = params["setpoint_step"]
        # Build the candidate values for each parameter; a collapsed range
        # (min == max) yields a single fixed value
        def make_range(vmin, vmax, step):
            if vmin == vmax:
                return torch.tensor([vmin], dtype=torch.float32)
            return torch.arange(vmin, vmax + step, step, dtype=torch.float32)

        kp_values = make_range(kp_min, kp_max, kp_step)
        ki_values = make_range(ki_min, ki_max, ki_step)
        kd_values = make_range(kd_min, kd_max, kd_step)
        setpoint_values = make_range(setpoint_min, setpoint_max, setpoint_step)
        # Build the input tensor: every combination of the candidate values
        input_data = torch.cartesian_prod(kp_values, ki_values, kd_values, setpoint_values).to(device)
        # Run the model to obtain predictions
        with torch.inference_mode():
            predictions = model(input_data)
        # Append the predicted sumweight values as a fifth column
        sumweight_values = predictions.flatten()
        result_tensor = torch.cat((input_data, sumweight_values.unsqueeze(1)), dim=1)
        # Save the result dataset to a CSV file
        csv_file = f"data/{params['eqname']}_{auto_number}.csv"
        with open(csv_file, 'w', newline='') as file:
            writer = csv.writer(file)
            writer.writerow(['kp', 'ki', 'kd', 'setpoint', 'sumweight'])  # header row
            writer.writerows(result_tensor.tolist())  # data rows

        logger.info(f"Dataset saved to {csv_file}")

        csv_filename = f"{params['eqname']}_{auto_number}.csv"
                            
        # Find the minimum predicted sumweight and its corresponding
        # kp, ki, kd and setpoint values
        min_sumweight, min_index = torch.min(result_tensor[:, 4], dim=0)
        min_kp = result_tensor[min_index, 0]
        min_ki = result_tensor[min_index, 1]
        min_kd = result_tensor[min_index, 2]
        min_setpoint = result_tensor[min_index, 3]

        # Update the prediction log record on FastWeb
        url = fastweb_url + "/?restapi=pid_update_predictlog"
        data = {"guid": guid, "isfinish": True,
                "min_kp": min_kp.item(), "min_ki": min_ki.item(),
                "min_kd": min_kd.item(), "min_setpoint": min_setpoint.item(),
                "min_sumweight": min_sumweight.item(),
                "csv_file": csv_filename, "model_path": ""}
        data = json.dumps(data)
        logger.info(data)
        response = requests.post(url, data=data)
        if response.status_code == 200:
            logger.info("Request succeeded")

        # Upload the result file to FastWeb's temp/ directory
        with open(csv_file, 'rb') as f:
            files = {
                'fileName': (csv_filename, f, 'application/octet-stream'),
                'filePath': (None, 'temp/')
            }
            file_params = {'restapi': 'uploadfiles'}
            response = requests.post(fastweb_url, params=file_params, files=files)

        os.remove(csv_file)

        # Send a WebSocket message over HTTP to signal that prediction is complete
        data = json.dumps({"username": params['username'], "action": "callback", "tag": params['tag'],
            "data": {"callbackcomponent": "WebHomeFrame", "callbackeventname": "update",
            "callbackparams": [{"paramname": "messagetype", "paramvalue": "success"},
                               {"paramname": "title", "paramvalue": "success"},
                               {"paramname": "message", "paramvalue": "Optimal equipment efficiency prediction has completed"}]}})

        input_value.value = 'Prediction completed'

        url = fastweb_url + "/?restapi=sendwsmsg"
        response = requests.post(url, data=data)
        if response.status_code == 200:
            logger.info("Request succeeded")


        # System names to preserve (e.g. __name__)
        keep = {'__name__', '__doc__', '__package__', '__loader__', '__spec__', '__builtins__'}

        # Delete all other globals so state does not accumulate
        # between TaskRunner runs
        for name in list(globals().keys()):
            if name not in keep:
                del globals()[name]

    except Exception as e:
        logger.error(e)


if __name__ == "__main__":
    main()
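The weight lookup in the program selects the newest `.pt` file by creation time with `os.path.getctime`. A self-contained sketch of the same pattern, using a temporary directory with dummy files in place of `model/`:

```python
import os
import tempfile
import time

# Throwaway directory with dummy files standing in for model/
model_dir = tempfile.mkdtemp()
for name in ["a.pt", "b.txt", "c.pt"]:
    open(os.path.join(model_dir, name), "w").close()
    time.sleep(0.05)  # ensure distinct creation times

# Pick the most recently created .pt file, ignoring other extensions
latest = max(
    (os.path.join(model_dir, f) for f in os.listdir(model_dir) if f.endswith(".pt")),
    key=os.path.getctime,
    default=None,  # None when the directory holds no .pt file
)
print(os.path.basename(latest))  # c.pt
```

The `default=None` argument is what lets the program detect the "no trained model yet" case instead of raising `ValueError` on an empty sequence.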

  Save the above program as preset data, following the style shown below.

  The parameters defined in the above program are described as follows:

  • Parameter name: input_value.
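The value carried by `input_value` is a JSON string describing the prediction range. The example below is illustrative (sample values only, not production settings):

```python
import json

# Illustrative example of the JSON string carried by input_value
params_json = json.dumps({
    "username": "admin", "tag": "0",
    "guid": "45156B2E-8EDC-41E5-BE2F-030A10A2ECE4",
    "eqname": "esp32", "periodid": 1,
    "kp_min": 5, "kp_max": 30, "kp_step": 0.1,
    "ki_min": 5, "ki_max": 25, "ki_step": 0.1,
    "kd_min": 0, "kd_max": 0, "kd_step": 0.1,
    "setpoint_min": 30.7, "setpoint_max": 30.7, "setpoint_step": 0.1,
})

# The program parses it the same way: json.loads(input_value.value)
params = json.loads(params_json)
print(params["eqname"], params["kp_max"])  # esp32 30
```

Setting a parameter's `min` equal to its `max` (as with `kd` and `setpoint` above) fixes that parameter to a single value, so the prediction grid only varies the remaining parameters.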

# 3. Invoking Execution

  You can use the FastWeb 數控中心-設備最佳運轉效益-PID智能分析助手 (Numerical Control Center - Optimal Equipment Efficiency - PID Intelligent Analysis Assistant) to invoke the Python script that runs the model analysis. Configure the TaskRunner address to call, then click [分析預測] (Analyze & Predict) on the analysis screen to start the model prediction. Once prediction completes, you can see the record of this run together with its prediction results; click a record to view its chart information.
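Each prediction run produces a CSV with columns `kp`, `ki`, `kd`, `setpoint`, `sumweight`, and the result highlighted in the UI corresponds to the row with the minimum `sumweight`. A minimal sketch of extracting that row from such a file offline (the sample data here is illustrative):

```python
import csv
import io

# Illustrative sample in the same format the program writes
sample = """kp,ki,kd,setpoint,sumweight
5.0,5.0,0.0,30.7,12.5
5.1,5.0,0.0,30.7,9.8
5.2,5.0,0.0,30.7,11.2
"""

# Pick the row with the smallest predicted sumweight
reader = csv.DictReader(io.StringIO(sample))
best = min(reader, key=lambda row: float(row["sumweight"]))
print(best["kp"], best["sumweight"])  # 5.1 9.8
```

This is the same selection the program performs with `torch.min` on the result tensor before posting the minimum to `pid_update_predictlog`.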


Copyright © 2021-2025 愛招飛IsoFace | ALL Rights Reserved