
CNC Machine Tool Cutter Life Prediction


Background
Tool failure can degrade workpiece surface roughness and dimensional accuracy, or, in more severe cases, lead to scrapped workpieces or machine damage. An over-maintenance strategy, on the other hand, wastes the tool's remaining life and causes unnecessary downtime for tool changes. Being able to predict the remaining life of a tool therefore helps optimize maintenance scheduling and reduce tool procurement costs.

Data Introduction
A dynamometer, vibration sensors on the three axes, and an acoustic sensor were mounted on a high-speed CNC machine. The process parameters were set to: spindle speed 10400 RPM, feed rate 1555 mm/min, radial depth of cut 0.125 mm, axial depth of cut 0.2 mm, with a sampling rate of 50 kHz. Through a data acquisition card, eight data items were collected: X-axis cutting force, Y-axis cutting force, Z-axis cutting force, X-axis vibration, Y-axis vibration, Z-axis vibration, acoustic signal RMS, and the raw acoustic signal. The tool wear after each cutting cycle was also recorded in units of 10^-3 mm. These data are used to predict the remaining life of a 6 mm ball-nose tungsten carbide cutter.
The data files have been uploaded to my downloads:
Data files

Data Description
The data are split into six groups, c1…c6, where c1, c4, and c6 are the training sets and c2, c3, and c5 are the test sets. Each group contains about 300 separate .csv files, one per cutting cycle (a small loading sketch follows the column list). The columns are:
Column 1: X-axis cutting force (N)
Column 2: Y-axis cutting force (N)
Column 3: Z-axis cutting force (N)
Column 4: X-axis vibration (g)
Column 5: Y-axis vibration (g)
Column 6: Z-axis vibration (g)
Column 7: Acoustic signal RMS (V)
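
As an illustration only (not part of the original post), each cycle's high-frequency recording could be condensed into a handful of per-channel statistics before modelling; the column names and the example file path below are hypothetical and simply follow the c1…c6 layout described above.

import numpy as np
import pandas as pd

COLUMNS = ['force_x', 'force_y', 'force_z', 'vib_x', 'vib_y', 'vib_z', 'ae_rms']


def cycle_features(csv_path):
    # Reduce one 50 kHz cutting-cycle recording to a few summary statistics per channel.
    signals = pd.read_csv(csv_path, header=None, names=COLUMNS)
    feats = {}
    for col in COLUMNS:
        x = signals[col].to_numpy()
        feats[col + '_mean'] = x.mean()
        feats[col + '_std'] = x.std()
        feats[col + '_rms'] = np.sqrt(np.mean(x ** 2))
        feats[col + '_peak'] = np.abs(x).max()
    return feats


# Example (hypothetical path): features = cycle_features('c1/c_1_001.csv')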

Original dataset source: 工业大数据产业创新平台 (Industrial Big Data Industry Innovation Platform)
You need to register and log in, then download from the dataset page.

The platform covers a variety of industry scenarios, including machining and manufacturing, rail transit, energy and power, and semiconductors, with data collected at several levels: component, equipment, and production line.

The basic approach is as follows:

  1. Join the training data with its labels
  2. Normalize the training set and the test set together
  3. Feature selection: filter out unimportant features (steps 2-3 are sketched right after this list)
  4. Blend extremely randomized trees, a neural network, ridge regression, LightGBM, and other models to improve accuracy
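
A minimal sketch of steps 2 and 3, assuming per-cycle feature matrices have already been built; the scaler choice and the importance-based cut-off are illustrative assumptions, not settings from the post.

import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.preprocessing import MinMaxScaler


def normalize_and_select(X_train, y_train, X_test, keep_ratio=0.8):
    # Step 2: fit a single scaler on the training and test features together.
    scaler = MinMaxScaler().fit(np.vstack([X_train, X_test]))
    X_train_s, X_test_s = scaler.transform(X_train), scaler.transform(X_test)

    # Step 3: rank features with a quick random forest and keep the most important ones.
    rf = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_train_s, y_train)
    order = np.argsort(rf.feature_importances_)[::-1]
    keep = order[:max(1, int(keep_ratio * X_train.shape[1]))]
    return X_train_s[:, keep], X_test_s[:, keep], keep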

Possible improvements:

  • Try a time-series model such as an LSTM (a minimal sketch follows this list)
  • Spend more effort analyzing and refining the feature engineering, and check whether the training and test sets follow the same distribution
  • Learn more about the CNC machining domain; background knowledge may suggest additional features
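
One possible way to try the LSTM idea from the first bullet, assuming per-cycle feature vectors ordered by cycle number; the window length and layer sizes here are illustrative guesses, not tuned values.

import numpy as np
from tensorflow.keras.layers import LSTM, Dense
from tensorflow.keras.models import Sequential


def make_windows(features, targets, window=10):
    # Slide a fixed-length window over consecutive cutting cycles.
    X = np.stack([features[i:i + window] for i in range(len(features) - window)])
    y = targets[window:]
    return X, y


def build_lstm(window, n_features):
    # Regress the wear (or remaining life) of the last cycle in each window.
    model = Sequential([
        LSTM(64, input_shape=(window, n_features)),
        Dense(32, activation='relu'),
        Dense(1),
    ])
    model.compile(optimizer='adam', loss='mse')
    return model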

Data processing code:

import os

import pandas as pd

from utils.read_write import pdReadCsv

os.chdir(r'E:\项目文件\刀具寿命预测\\')


def join_data():
    # Merge the raw per-cycle data with the wear labels on the shared File_No column.
    label_file = 'Dataset_Remaining_service_life_of_machine_tool_download_wear_2020_09_05.csv'
    label = pdReadCsv(label_file, ',')
    download_file = 'Dataset_Remaining_service_life_of_machine_tool_download_raw_2020_09_05.csv'
    download = pdReadCsv(download_file, ',')
    download_label = pd.merge(download, label, on='File_No')
    download_label.to_csv('download_label.csv')


join_data()
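
The code imports pdReadCsv and writeOneCsv from the author's own utils.read_write module, which is not included in the post. They are presumably thin wrappers around pandas and the csv module; a minimal stand-in, under that assumption, could look like this:

import csv

import pandas as pd


def pdReadCsv(file, sep):
    # Assumed behavior: read a delimited text file into a DataFrame.
    return pd.read_csv(file, sep=sep)


def writeOneCsv(row, csv_file):
    # Assumed behavior: append a single row to a CSV log file.
    with open(csv_file, 'a', newline='', encoding='utf-8') as f:
        csv.writer(f).writerow(row)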

Model file:

import os

from sklearn.metrics import mean_absolute_error, mean_squared_error
from lightgbm import LGBMRegressor
from sklearn.ensemble import RandomForestRegressor, ExtraTreesRegressor
from sklearn.model_selection import GridSearchCV

from utils.read_write import writeOneCsv, pdReadCsv

os.chdir(r'E:\项目文件\刀具寿命预测\\')


def get_train():
    # Load the joined feature/label table: the first six columns are features,
    # the last column is the wear label.
    # file = 'train_label.csv'
    file = 'download_label.csv'
    train = pdReadCsv(file, ',')
    return train.values[:, 0:6], train.values[:, -1:].ravel()


def build_model_rf(x_train, y_train):
    # Random forest with a small grid search over depth and tree count.
    estimator = RandomForestRegressor(criterion='mse')
    param_grid = {
        'max_depth': range(33, 35, 9),
        'n_estimators': range(73, 77, 9),
    }
    model = GridSearchCV(estimator, param_grid, cv=3)
    model.fit(x_train, y_train)
    print('rf')
    print(model.best_params_)
    writeParams('rf', model.best_params_)
    return model


def build_model_etr(x_train, y_train):
    # Extremely randomized trees regression; n_estimators is the number of trees in ExtraTreesRegressor.
    estimator = ExtraTreesRegressor(criterion='mse')
    param_grid = {
        'max_depth': range(33, 39, 9),
        'n_estimators': range(96, 99, 9),
    }
    model = GridSearchCV(estimator, param_grid)
    model.fit(x_train, y_train)
    print('etr')
    print(model.best_params_)
    writeParams('etr', model.best_params_)
    return model


def build_model_lgb(x_train, y_train):
    # LightGBM regressor with a small grid search over leaf and tree counts.
    estimator = LGBMRegressor()
    param_grid = {
        'learning_rate': [0.1],
        'n_estimators': range(77, 78, 9),
        'num_leaves': range(59, 66, 9)
    }
    gbm = GridSearchCV(estimator, param_grid)
    gbm.fit(x_train, y_train.ravel())
    print('lgb')
    print(gbm.best_params_)
    writeParams('lgb', gbm.best_params_)
    return gbm


def scatter_line(y_val, y_pre):
    # Scatter the actual values and overlay the predictions for a quick visual check.
    import matplotlib.pyplot as plt
    xx = range(0, len(y_val))
    plt.scatter(xx, y_val, color="red", label="Sample Point", linewidth=3)
    plt.plot(xx, y_pre, color="orange", label="Fitting Line", linewidth=2)
    plt.legend()
    plt.show()


def score_model(train, test, predict, model, data_type):
    # Log R^2, MAE, and MSE for the given data split.
    score = model.score(train, test)
    print(data_type + ",R^2,", round(score, 6))
    writeOneCsv(['stacking', data_type, 'R^2', round(score, 6)], '调参记录.csv')
    mae = mean_absolute_error(test, predict)
    print(data_type + ',MAE,', mae)
    writeOneCsv(['stacking', data_type, 'MAE', mae], '调参记录.csv')
    mse = mean_squared_error(test, predict)
    print(data_type + ",MSE,", mse)
    writeOneCsv(['stacking', data_type, 'MSE', mse], '调参记录.csv')


def writeParams(model, best):
    # Record the best hyper-parameters found by the grid search.
    if model == 'lgb':
        writeOneCsv([model, best['num_leaves'], best['n_estimators'], best['learning_rate']], '调参记录.csv')
    else:
        writeOneCsv([model, best['max_depth'], best['n_estimators'], 0], '调参记录.csv')


def write_mse(model, data_type, mse):
    writeOneCsv([model, data_type, 'mse', mse], '调参记录.csv')

Algorithm application file:

#!/usr/bin/env python
# coding=utf-8
import warnings

from sklearn.model_selection import train_test_split
from Remaining_service_life.data_model import get_train, build_model_lgb, build_model_etr, build_model_rf, write_mse, \
    score_model

warnings.filterwarnings("ignore", "(?s).*MATPLOTLIBDATA.*", category=UserWarning)

import pandas as pd
from sklearn.metrics import mean_squared_error

X_data, Y_data = get_train()
# test = get_test()
x_train, x_val, y_train, y_val = train_test_split(X_data, Y_data, test_size=0.2, random_state=20)
model_lgb = build_model_lgb(x_train, y_train)
val_lgb = model_lgb.predict(x_val)
model_etr = build_model_etr(x_train, y_train)
val_etr = model_etr.predict(x_val)
model_rf = build_model_rf(x_train, y_train)
val_rf = model_rf.predict(x_val)
# Stacking level 1: collect the base models' predictions on the training set
train_etr_pred = model_etr.predict(x_train)
print('etr训练集,mse:', mean_squared_error(y_train, train_etr_pred))
write_mse('etr', '训练集', mean_squared_error(y_train, train_etr_pred))
train_lgb_pred = model_lgb.predict(x_train)
print('lgb训练集,mse:', mean_squared_error(y_train, train_lgb_pred))
write_mse('lgb', '训练集', mean_squared_error(y_train, train_lgb_pred))
train_rf_pred = model_rf.predict(x_train)
print('rf训练集,mse:', mean_squared_error(y_train, train_rf_pred))
write_mse('rf', '训练集', mean_squared_error(y_train, train_rf_pred))

Stacking_X_train = pd.DataFrame()
Stacking_X_train['Method_1'] = train_rf_pred
Stacking_X_train['Method_2'] = train_lgb_pred
Stacking_X_train['Method_3'] = train_etr_pred

Stacking_X_val = pd.DataFrame()
Stacking_X_val['Method_1'] = val_rf
Stacking_X_val['Method_2'] = val_lgb
Stacking_X_val['Method_3'] = val_etr

# Level 2: train the meta-model on the base models' predictions
model_Stacking = build_model_etr(Stacking_X_train, y_train)

train_pre_Stacking = model_Stacking.predict(Stacking_X_train)
score_model(Stacking_X_train, y_train, train_pre_Stacking, model_Stacking, '训练集')
val_pre_Stacking = model_Stacking.predict(Stacking_X_val)
score_model(Stacking_X_val, y_val, val_pre_Stacking, model_Stacking, '验证集')


Feel free to get in touch to discuss innovative applications of industrial big data.


Reposted from: https://blog.csdn.net/qq_30803353/article/details/111711604