Table of Contents
0. Model Download
1. Cleaning Up the C Drive
2. Configuring the Environment
3. Prerequisites Before Running the Project
(1) Set the path for your own project; it must be re-set every time the virtual environment (tensorflow115) is activated
(2) Run setup; this also has to be redone if the project folder is moved
4. Building the Training Set
(1) Design a folder layout; an 80% train / 20% test photo split works well
(2) Use labelImg to produce the xml files
(3) Convert xml to csv
(4) Convert csv to tfrecord
(5) Create a training folder under object_detection and a labelmap.pbtxt inside it
5. Model Configuration
1. Set num_classes to the number of your classes
2. Set batch_size according to your machine's capability (minimum 1)
3. Set the maximum number of training steps
4. Update the paths
6. Starting Training
7. Monitoring Training
8. Exporting the Model
9. Testing the Model
10. Finding Models Trained by Others
0. Model Download
GitHub - tensorflow/models: Models and examples built with TensorFlow
Detection/faster_rcnn_inception_v2_coco_2018_01_28 at master · librahfacebook/Detection · GitHub
1. Cleaning Up the C Drive
Change conda's default envs_dirs and pkgs_dirs so that environments and packages are no longer stored on the C drive.
Recommended post (will be removed on request if it infringes): http://t.csdnimg.cn/Ki52N
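As a minimal sketch (the drive letter and folder names below are assumptions; substitute your own), the relevant entries in the .condarc file, usually found at C:\Users\<your username>\.condarc, look like this:
envs_dirs:
  - D:\anaconda3\envs
pkgs_dirs:
  - D:\anaconda3\pkgs
The same result can be achieved with conda config --add envs_dirs D:\anaconda3\envs and conda config --add pkgs_dirs D:\anaconda3\pkgs; see the recommended post above for a full walkthrough.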
2. Configuring the Environment
(1) Create a virtual environment with Anaconda
Press Win+R and open cmd, or open the Anaconda Prompt application (this guide uses the CPU; if you are training on a GPU, Anaconda Prompt is recommended, since conda will install the libraries needed to run on the GPU for you).
conda create -n tensorflow115 python=3.6
(2) After the environment is created, enter the following commands one by one
conda activate tensorflow115
conda install tensorflow=1.15.0
conda install -c anaconda protobuf
If this command fails with a 404 error, try switching to a mirror source.
Find the .condarc file and open it.
Copy the following mirror configuration into that file:
channels:
- https://mirrors.sjtug.sjtu.edu.cn/anaconda/pkgs/main/
- https://mirrors.sjtug.sjtu.edu.cn/anaconda/pkgs/free/
- defaults
ssl_verify: true
show_channel_urls: true
pip install pillow
pip install lxml
pip install jupyter
pip install matplotlib
pip install pandas
pip install opencv-python==4.3.0.38
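After the installs finish, a quick sanity check (a suggestion, not part of the original steps) is to confirm the TensorFlow version inside the activated environment:
python -c "import tensorflow as tf; print(tf.__version__)"
This should print 1.15.0.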
3. Prerequisites Before Running the Project
(1) Set the path for your own project. It has to be re-set every time you activate the virtual environment (tensorflow115).
set PYTHONPATH=G:\BaiduNetdiskDownload\Tensorflow+FasterRCNN+KITTI\models;G:\BaiduNetdiskDownload\Tensorflow+FasterRCNN+KITTI\models\research;G:\BaiduNetdiskDownload\Tensorflow+FasterRCNN+KITTI\models\research\slim
(2) Run setup
If the project folder is moved to a different location, this also has to be redone.
In the command window, change into the research directory
and enter:
python setup.py build
python setup.py install
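To confirm that the object_detection package is now importable, a quick check (a suggestion, not part of the original steps) is:
python -c "from object_detection.utils import dataset_util; print('object_detection OK')"
If the import fails because the *_pb2.py files under object_detection\protos are missing from your copy of the repo, the .proto files still need to be compiled (this is what the protobuf package installed in Section 2 is for); from the research directory run:
protoc object_detection/protos/*.proto --python_out=.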
4. Building the Training Set
(1) Design a folder layout. For the overall photo split, 80% train / 20% test works well; a small script for doing the split automatically is sketched below.
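A minimal sketch of the 80/20 split (the folder names images, images/train and images/test are assumptions chosen to match the paths used later in this guide; adjust as needed):
import os
import random
import shutil

SRC = 'images'  # folder holding all the photos before splitting (assumption)
random.seed(0)  # fixed seed so the split is reproducible

files = [f for f in os.listdir(SRC) if f.lower().endswith(('.jpg', '.jpeg', '.png'))]
random.shuffle(files)
split_at = int(len(files) * 0.8)  # 80% train, 20% test

for subset, names in (('train', files[:split_at]), ('test', files[split_at:])):
    os.makedirs(os.path.join(SRC, subset), exist_ok=True)
    for name in names:
        shutil.move(os.path.join(SRC, name), os.path.join(SRC, subset, name))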
(2) Use labelImg to produce the xml files
pip install labelimg
To launch labelImg, run:
labelimg
Keyboard shortcuts:
Ctrl + u  choose the directory of images to annotate;
Ctrl + r  choose the directory where the finished label files are saved;
Ctrl + s  save the current annotation (saved automatically in auto-save mode);
Ctrl + d  duplicate the current label and bounding box;
Ctrl + Shift + d  delete the current image;
Space  mark the current image as verified;
w  start drawing a bounding box;
d  next image;
a  previous image;
del  delete the selected bounding box;
Ctrl + +  zoom in;
Ctrl + -  zoom out;
↑→↓←  move the selected bounding box;
(3) Convert xml to csv
Adjust the paths in the script below to match your setup.
import os
import glob
import pandas as pd
import xml.etree.ElementTree as ET


def xml_to_csv(path):
    """Collect every <object> from the labelImg XML files in `path` into one DataFrame."""
    xml_list = []
    for xml_file in glob.glob(path + '/*.xml'):
        tree = ET.parse(xml_file)
        root = tree.getroot()
        for member in root.findall('object'):
            value = (root.find('filename').text,
                     int(root.find('size')[0].text),   # width
                     int(root.find('size')[1].text),   # height
                     member[0].text,                   # class name
                     int(member[4][0].text),           # xmin
                     int(member[4][1].text),           # ymin
                     int(member[4][2].text),           # xmax
                     int(member[4][3].text))           # ymax
            xml_list.append(value)
    column_name = ['filename', 'width', 'height', 'class', 'xmin', 'ymin', 'xmax', 'ymax']
    xml_df = pd.DataFrame(xml_list, columns=column_name)
    return xml_df


def main():
    for folder in ['train', 'test']:
        image_path = os.path.join(os.getcwd(), ('images/' + folder))
        xml_df = xml_to_csv(image_path)
        xml_df.to_csv(('images/' + folder + '_labels.csv'), index=None)
        print('Successfully converted xml to csv.')


main()
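Assuming the script is saved as xml_to_csv.py (the filename is an assumption) in the object_detection directory, with the labeled photos and their xml files in images/train and images/test, run it with python xml_to_csv.py; it writes images/train_labels.csv and images/test_labels.csv, which the next step consumes.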
(4) Convert csv to tfrecord
"""
Usage:
# From tensorflow/models/
# Create train data:
python generate_tfrecord.py --csv_input=images/train_labels.csv --image_dir=images/train --output_path=train.record
# Create test data:
python generate_tfrecord.py --csv_input=images/test_labels.csv --image_dir=images/test --output_path=test.record
"""
from __future__ import division
from __future__ import print_function
from __future__ import absolute_import
import os
import io
import pandas as pd
import tensorflow as tf
from PIL import Image
from object_detection.utils import dataset_util
from collections import namedtuple, OrderedDict
os.environ['TF_CPP_MIN_LOG_LEVEL']='2'
flags = tf.app.flags
flags.DEFINE_string('csv_input', '', 'Path to the CSV input')
flags.DEFINE_string('image_dir', '', 'Path to the image directory')
flags.DEFINE_string('output_path', '', 'Path to output TFRecord')
FLAGS = flags.FLAGS
# TO-DO replace this with label map
def class_text_to_int(row_label):
if row_label == 'Car':
return 1
if row_label == 'Van':
return 2
if row_label == 'Truck':
return 3
if row_label == 'Pedestrian':
return 4
if row_label == 'Person_sitting':
return 5
if row_label == 'Cyclist':
return 6
if row_label == 'Tram':
return 7
if row_label == 'Misc':
return 8
else:
return 0
def split(df, group):
data = namedtuple('data', ['filename', 'object'])
gb = df.groupby(group)
return [data(filename, gb.get_group(x)) for filename, x in zip(gb.groups.keys(), gb.groups)]
def create_tf_example(group, path):
with tf.gfile.GFile(os.path.join(path, '{}'.format(group.filename)), 'rb') as fid:
encoded_jpg = fid.read()
encoded_jpg_io = io.BytesIO(encoded_jpg)
image = Image.open(encoded_jpg_io)
width, height = image.size
filename = group.filename.encode('utf8')
image_format = b'jpg'
xmins = []
xmaxs = []
ymins = []
ymaxs = []
classes_text = []
classes = []
for index, row in group.object.iterrows():
xmins.append(row['xmin'] / width)
xmaxs.append(row['xmax'] / width)
ymins.append(row['ymin'] / height)
ymaxs.append(row['ymax'] / height)
classes_text.append(row['class'].encode('utf8'))
classes.append(class_text_to_int(row['class']))
tf_example = tf.train.Example(features=tf.train.Features(feature={
'image/height': dataset_util.int64_feature(height),
'image/width': dataset_util.int64_feature(width),
'image/filename': dataset_util.bytes_feature(filename),
'image/source_id': dataset_util.bytes_feature(filename),
'image/encoded': dataset_util.bytes_feature(encoded_jpg),
'image/format': dataset_util.bytes_feature(image_format),
'image/object/bbox/xmin': dataset_util.float_list_feature(xmins),
'image/object/bbox/xmax': dataset_util.float_list_feature(xmaxs),
'image/object/bbox/ymin': dataset_util.float_list_feature(ymins),
'image/object/bbox/ymax': dataset_util.float_list_feature(ymaxs),
'image/object/class/text': dataset_util.bytes_list_feature(classes_text),
'image/object/class/label': dataset_util.int64_list_feature(classes),
}))
return tf_example
def main(_):
writer = tf.python_io.TFRecordWriter(FLAGS.output_path)
path = os.path.join(os.getcwd(), FLAGS.image_dir)
examples = pd.read_csv(FLAGS.csv_input)
grouped = split(examples, 'filename')
for group in grouped:
tf_example = create_tf_example(group, path)
writer.write(tf_example.SerializeToString())
writer.close()
output_path = os.path.join(os.getcwd(), FLAGS.output_path)
print('Successfully created the TFRecords: {}'.format(output_path))
if __name__ == '__main__':
tf.app.run()
Modify class_text_to_int above so the labels match your own classes.
After making the changes, run the following commands:
python generate_tfrecord.py --csv_input=images/train_labels.csv --image_dir=images/train --output_path=train.record
python generate_tfrecord.py --csv_input=images/test_labels.csv --image_dir=images/test --output_path=test.record
(5) In the object_detection directory create a folder named training, and inside it create a file named labelmap.pbtxt
Format of labelmap.pbtxt:
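The original post shows the format as a screenshot. As an illustration, a labelmap.pbtxt matching the KITTI classes used in generate_tfrecord.py above would look like this (the id numbers must agree with class_text_to_int exactly):
item {
  id: 1
  name: 'Car'
}
item {
  id: 2
  name: 'Van'
}
# ...one item block per class, in the same order as class_text_to_int, ending with:
item {
  id: 8
  name: 'Misc'
}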
5. Model Configuration
In the object_detection\samples\configs folder, find the .config file for your model and copy it into the training folder you just created.
Taking the faster_rcnn_inception_v2_coco_2018_01_28 model as an example, copy faster_rcnn_inception_v2_coco.config into the training folder (the commands later in this guide reference training/faster_rcnn_inception_v2_pets.config; pass whatever filename you actually placed in training/ to --pipeline_config_path).
1. Set num_classes to the number of classes you have.
2. Set batch_size according to your machine's capability; the minimum is 1.
3. Set the maximum number of training steps (num_steps).
4. Update the paths (the fine_tune_checkpoint, the train/test .record files and labelmap.pbtxt). A sketch of the relevant fields is shown below.
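As a sketch only (the values and paths are placeholders; the field names follow the standard TF1 object detection .config format, so their exact positions in your file will differ), the parts to edit look roughly like this:
model {
  faster_rcnn {
    num_classes: 8        # 1. number of classes in labelmap.pbtxt
    ...
  }
}
train_config {
  batch_size: 1           # 2. lower it if you run out of memory
  num_steps: 20000        # 3. maximum number of training steps
  fine_tune_checkpoint: "G:/.../faster_rcnn_inception_v2_coco_2018_01_28/model.ckpt"   # 4. paths
  ...
}
train_input_reader {
  tf_record_input_reader {
    input_path: "G:/.../object_detection/train.record"
  }
  label_map_path: "G:/.../object_detection/training/labelmap.pbtxt"
}
eval_input_reader {
  tf_record_input_reader {
    input_path: "G:/.../object_detection/test.record"
  }
  label_map_path: "G:/.../object_detection/training/labelmap.pbtxt"
}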
6. Starting Training
Since we are training on the CPU, one place in the code has to be modified so that it runs on the CPU.
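The original post shows this change as a screenshot. As an assumption (not necessarily the exact edit the author made), one common way to force TensorFlow onto the CPU is to hide the GPUs near the top of train.py:
import os
os.environ['CUDA_VISIBLE_DEVICES'] = '-1'  # hide all GPUs so TensorFlow falls back to the CPU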
Start training:
python train.py --logtostderr --train_dir=training/ --pipeline_config_path=training/faster_rcnn_inception_v2_pets.config
If training aborts with an error at this point, it is most likely a tensorflow-estimator version mismatch; install the matching version:
pip install tensorflow-estimator==1.15.0
7. Monitoring Training
Open another command window and, from the object_detection directory, run
tensorboard --logdir=training
(point --logdir at the train_dir used in the training command), then open the address TensorBoard prints, normally http://localhost:6006, in a browser.
8. Exporting the Model
python export_inference_graph.py --input_type image_tensor --pipeline_config_path training/faster_rcnn_inception_v2_pets.config --trained_checkpoint_prefix training/model.ckpt-4446 --output_directory my_detection_v1
Here my_detection_v1 is an empty folder you create yourself in the current directory, and 4446 should be replaced with the step number of the newest model.ckpt-XXXX checkpoint in training/.
If you get an error such as
Current thread 0x000045d8 (most recent call first):
the fix is to reinstall TensorFlow:
pip install tensorflow==1.15.0
and then repeat the prerequisite operations from Section 3.
9. Testing the Model
Open Object_detection_image.py (this Python file is in object_detection).
Modify the following items so that each one matches your own setup.
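The original post shows the exact lines as a screenshot. If your Object_detection_image.py follows the widely used EdjeElectronics tutorial script (an assumption), the variables to edit already exist inside the file and look roughly like this; only the values need to change:
MODEL_NAME = 'my_detection_v1'   # folder produced by export_inference_graph.py
IMAGE_NAME = 'test1.jpg'         # image to run detection on (assumed name)
PATH_TO_CKPT = os.path.join(CWD_PATH, MODEL_NAME, 'frozen_inference_graph.pb')
PATH_TO_LABELS = os.path.join(CWD_PATH, 'training', 'labelmap.pbtxt')
NUM_CLASSES = 8                  # must match labelmap.pbtxt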
Finally, enter the following command:
python Object_detection_image.py
10. Finding Models Trained by Others
models/research/object_detection/g3doc/tf1_detection_zoo.md at master · tensorflow/models · GitHub