MaskRCNN Semantic Segmentation Algorithm and Keras Implementation




Highlights

R-CNN family comparison:
- Pre-R-CNN CNN detection: slide a detection window across the image and classify each crop one by one, then go from detection to segmentation.
- RCNN: selective search over the whole image proposes roughly 2000 regions, each of which is run through the CNN and segmented separately.
- Fast RCNN: a single CNN pass produces a feature map, and the ~2000 selective-search regions are cropped from that feature map instead of from the image.
- Faster RCNN: a single CNN pass produces a feature map, and an RPN on the feature map proposes all candidate anchors.
- Mask RCNN: adds FPN and RoI Align (bilinear interpolation), and gives each RoI its own mask so the mask loss is computed per RoI.

Mask RCNN key points:
- The feature map is extracted in a single CNN pass, which reduces computation.
- The RPN proposing all anchor boxes, combined with FPN, improves recall and accuracy.
- RoI Align replaces the nearest-neighbor rounding of RoI Pooling with bilinear interpolation, removing the quantization error introduced by scaling (see the sampling sketch after this list).
- The loss is the sum of classification, bounding-box (detection), and mask (segmentation) terms; each RoI computes its mask loss only against the mask of its own class, so the mask loss is decoupled from classification, which makes the loss more precise and better reflects what is actually learned.
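To make the RoI Align point concrete, here is a simplified NumPy sketch (not the matterport implementation) of sampling a feature map at fractional coordinates with bilinear interpolation; the feature map and box below are toy values, and each output bin is represented by a single sample at its center.

import numpy as np

def bilinear_sample(feature, y, x):
    # Sample a 2-D feature map at a fractional (y, x) location.
    y0, x0 = int(np.floor(y)), int(np.floor(x))
    y1, x1 = min(y0 + 1, feature.shape[0] - 1), min(x0 + 1, feature.shape[1] - 1)
    dy, dx = y - y0, x - x0
    return (feature[y0, x0] * (1 - dy) * (1 - dx) +
            feature[y0, x1] * (1 - dy) * dx +
            feature[y1, x0] * dy * (1 - dx) +
            feature[y1, x1] * dy * dx)

def roi_align(feature, box, out_size=7):
    # Pool a box (y1, x1, y2, x2, in feature-map coordinates) to out_size x out_size
    # by bilinear sampling at each bin center -- no rounding of coordinates.
    y1, x1, y2, x2 = box
    ys = np.linspace(y1, y2, out_size * 2 + 1)[1::2]   # bin centers along y
    xs = np.linspace(x1, x2, out_size * 2 + 1)[1::2]   # bin centers along x
    return np.array([[bilinear_sample(feature, y, x) for x in xs] for y in ys])

feature = np.random.rand(32, 32)                       # toy feature map
pooled = roi_align(feature, (3.2, 4.7, 17.9, 21.3))    # fractional box, no quantization
print(pooled.shape)                                    # (7, 7)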

The YOLO family likewise uses a single CNN pass to produce a feature map and regresses anchor-based boxes on it (an RPN-style anchor proposal scheme), and additionally applies k-means clustering to ground-truth boxes, evaluated by average IoU (AvgIoU), to choose a suitable number of anchors (a sketch of this clustering follows below).
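As a rough illustration of that k-means anchor selection (a simplified sketch, not the YOLO source; the box data here is random), cluster (width, height) pairs using 1 - IoU as the distance and report the average IoU for a few anchor counts.

import numpy as np

def iou_wh(boxes, centroids):
    # IoU between (w, h) pairs, assuming all boxes share the same top-left corner.
    w = np.minimum(boxes[:, None, 0], centroids[None, :, 0])
    h = np.minimum(boxes[:, None, 1], centroids[None, :, 1])
    inter = w * h
    union = (boxes[:, 0] * boxes[:, 1])[:, None] + (centroids[:, 0] * centroids[:, 1])[None, :] - inter
    return inter / union

def kmeans_anchors(boxes, k, iters=100):
    centroids = boxes[np.random.choice(len(boxes), k, replace=False)]
    for _ in range(iters):
        # Assign each box to the closest centroid; distance = 1 - IoU, so take max IoU.
        assign = np.argmax(iou_wh(boxes, centroids), axis=1)
        new = np.array([boxes[assign == i].mean(axis=0) if np.any(assign == i) else centroids[i]
                        for i in range(k)])
        if np.allclose(new, centroids):
            break
        centroids = new
    avg_iou = iou_wh(boxes, centroids).max(axis=1).mean()
    return centroids, avg_iou

boxes = np.random.rand(500, 2) * 300 + 10   # toy (w, h) pairs in pixels
for k in (3, 6, 9):
    _, avg_iou = kmeans_anchors(boxes, k)
    print(k, round(float(avg_iou), 3))      # AvgIoU rises as k grows; pick the knee point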

MaskRCNN by Keras:

1. Project installation (from GitHub: https://github.com/matterport/Mask_RCNN)

# Install matterport Mask_RCNN
git clone https://github.com/matterport/Mask_RCNN.git
pip3 install -r requirements.txt
python3 setup.py install

# Download the pre-trained backbone weights
wget https://github.com/fchollet/deep-learning-models/releases/download/v0.2/resnet50_weights_tf_dim_ordering_tf_kernels_notop.h5

Environment used:
CUDA 11.0
tensorflow==1.13.1
tensorflow-gpu==1.14.0
keras==2.0.8
numpy==1.21.4

2. matterport Mask_RCNN root directory structure: the parts used below are mrcnn/ (config.py, model.py, utils.py, visualize.py) and samples/coco/coco.py.

3. Building a COCO-format dataset (the example uses the Kvasir-SEG dataset; the per-image labels and boxes are read from raw/kvasir_bboxes.json):

import os, json, shutil

path = '/home/zhangyp/data/segmentation/Kvasir-SEG/'

# Read the source annotation info (label and bounding box for each image)
dict_targets, classes = {}, []
with open(path + 'raw/kvasir_bboxes.json', 'r', encoding='utf-8') as json_file:
    json_data = json.load(json_file)
for instance in json_data:
    info = json_data[instance]
    H, W, label = info['height'], info['width'], info['bbox'][0]['label']
    if label not in classes:
        classes.append(label)
    x1, y1 = info['bbox'][0]['xmin'], info['bbox'][0]['ymin']
    x2, y2 = info['bbox'][0]['xmax'], info['bbox'][0]['ymax']
    dict_targets[instance] = {'boxes': [x1, y1, x2, y2], 'labels': label, 'masks': [H, W]}

# Build the COCO-format dataset (copied images plus instances_<task>.json annotation files)
for task in ['train', 'test', 'val']:
    list_imgs = os.listdir(path + task + '/images/')
    list_images, list_annotations, list_categories = [], [], []
    img_id, box_id = 1, 1
    for img in list_imgs:
        shutil.copy(path + task + '/images/' + img, path + 'coco/' + task + '/')
        instance = img[:-4]
        dict_info = dict_targets[instance]
        x1, y1, x2, y2 = dict_info['boxes']
        H, W, label = dict_info['masks'][0], dict_info['masks'][1], dict_info['labels']
        category = {'supercategory': label, 'id': classes.index(label) + 1, 'name': label}
        list_images.append({'file_name': img, 'id': img_id, 'height': H, 'width': W})
        # COCO 'bbox' is [x, y, box_width, box_height]; 'segmentation' is the box as a polygon
        list_annotations.append({'segmentation': [[x1, y1, x2, y1, x2, y2, x1, y2]],
                                 'area': (x2 - x1) * (y2 - y1),
                                 'iscrowd': 0,
                                 'image_id': img_id,
                                 'bbox': [x1, y1, x2 - x1, y2 - y1],
                                 'category_id': classes.index(label) + 1,
                                 'id': box_id})
        if category not in list_categories:
            list_categories.append(category)
        box_id += 1
        img_id += 1
    with open(path + 'coco/annotations/instances_' + task + '.json', 'w') as save_file:
        json.dump({'images': list_images, 'annotations': list_annotations,
                   'categories': list_categories}, save_file)
print('finished!')
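Before moving on, it is worth sanity-checking the generated annotation files. A minimal sketch, assuming the same Kvasir-SEG directory layout as the conversion script above, that loads each split with pycocotools and prints its counts:

from pycocotools.coco import COCO

path = '/home/zhangyp/data/segmentation/Kvasir-SEG/coco/'   # same root as above
for task in ['train', 'val', 'test']:
    coco = COCO(path + 'annotations/instances_' + task + '.json')
    cats = [c['name'] for c in coco.loadCats(coco.getCatIds())]
    # A well-formed file should report one image id per copied image and one box per image
    print(task, 'images:', len(coco.getImgIds()),
          'annotations:', len(coco.getAnnIds()), 'categories:', cats)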

4. Training and inference

In the samples/coco/ directory, adapt coco.py into your own script My.py (the modified parameters are marked with #####):

'''
Usage:
train:    python3 My.py train --dataset=/path/to/coco --model=/path/to/weights.h5
evaluate: python3 My.py evaluate --dataset=/path/to/coco --model=/path/to/weights.h5
'''
import os, sys, time, imgaug, zipfile, urllib.request, shutil
import numpy as np
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval
from pycocotools import mask as maskUtils

# Root directory of the project
ROOT_DIR = os.path.abspath("../../")

# Import Mask RCNN
sys.path.append(ROOT_DIR)  # To find local version of the library
from mrcnn.config import Config
from mrcnn import model as modellib, utils
import cv2

# Path to trained weights file
COCO_MODEL_PATH = os.path.join(ROOT_DIR, "mask_rcnn_coco.h5")

# Directory to save logs and model checkpoints, if not provided
# through the command line argument --logs
DEFAULT_LOGS_DIR = os.path.join(ROOT_DIR, "logs")
DEFAULT_DATASET_YEAR = ""

############################################################
#  Configurations
############################################################


class CocoConfig(Config):
    """Configuration for training on MS COCO.
    Derives from the base Config class and overrides values specific
    to the COCO dataset.
    """
    # Give the configuration a recognizable name
    NAME = "coco"

    # We use a GPU with 12GB memory, which can fit two images.
    # Adjust down if you use a smaller GPU.
    IMAGES_PER_GPU = 4      ##### i.e. the batch size (per GPU)
    STEPS_PER_EPOCH = 80    ##### training steps per epoch
    EPOCH = 50              ##### base number of epochs, reused by the training schedule below
    VALIDATION_STEPS = 50   ##### validation steps at the end of each epoch

    # Uncomment to train on 8 GPUs (default is 1)
    # GPU_COUNT = 8

    # Number of classes (including background)
    # NUM_CLASSES = 1 + 80  ##### this dataset has a single class, so 1 + 1 (background is always counted)
    NUM_CLASSES = 1 + 1


############################################################
#  Dataset
############################################################

class CocoDataset(utils.Dataset):
    def load_coco(self, dataset_dir, subset, year=DEFAULT_DATASET_YEAR, class_ids=None,
                  class_map=None, return_coco=False, auto_download=False):
        if auto_download is True:
            self.auto_download(dataset_dir, subset, year)

        coco = COCO("{}/annotations/instances_{}{}.json".format(dataset_dir, subset, year))
        if subset == "minival" or subset == "valminusminival":
            subset = "val"
        image_dir = "{}/{}{}".format(dataset_dir, subset, year)

        # Load all classes or a subset?
        if not class_ids:
            # All classes
            class_ids = sorted(coco.getCatIds())

        # All images or a subset?
        if class_ids:
            image_ids = []
            for id in class_ids:
                image_ids.extend(list(coco.getImgIds(catIds=[id])))
            # Remove duplicates
            image_ids = list(set(image_ids))
        else:
            # All images
            image_ids = list(coco.imgs.keys())

        # Add classes
        for i in class_ids:
            self.add_class("coco", i, coco.loadCats(i)[0]["name"])

        # Add images
        for i in image_ids:
            self.add_image(
                "coco", image_id=i,
                path=os.path.join(image_dir, coco.imgs[i]['file_name']),
                width=coco.imgs[i]["width"],
                height=coco.imgs[i]["height"],
                annotations=coco.loadAnns(coco.getAnnIds(
                    imgIds=[i], catIds=class_ids, iscrowd=None)))
        if return_coco:
            return coco

    def auto_download(self, dataDir, dataType, dataYear):
        # Setup paths and file names
        if dataType == "minival" or dataType == "valminusminival":
            imgDir = "{}/{}{}".format(dataDir, "val", dataYear)
            imgZipFile = "{}/{}{}.zip".format(dataDir, "val", dataYear)
            imgURL = "http://images.cocodataset.org/zips/{}{}.zip".format("val", dataYear)
        else:
            imgDir = "{}/{}{}".format(dataDir, dataType, dataYear)
            imgZipFile = "{}/{}{}.zip".format(dataDir, dataType, dataYear)
            imgURL = "http://images.cocodataset.org/zips/{}{}.zip".format(dataType, dataYear)
        # print("Image paths:"); print(imgDir); print(imgZipFile); print(imgURL)

        # Create main folder if it doesn't exist yet
        if not os.path.exists(dataDir):
            os.makedirs(dataDir)

        # Download images if not available locally
        if not os.path.exists(imgDir):
            os.makedirs(imgDir)
            print("Downloading images to " + imgZipFile + " ...")
            with urllib.request.urlopen(imgURL) as resp, open(imgZipFile, 'wb') as out:
                shutil.copyfileobj(resp, out)
            print("... done downloading.")
            print("Unzipping " + imgZipFile)
            with zipfile.ZipFile(imgZipFile, "r") as zip_ref:
                zip_ref.extractall(dataDir)
            print("... done unzipping")
        print("Will use images in " + imgDir)

        # Setup annotations data paths
        annDir = "{}/annotations".format(dataDir)
        if dataType == "minival":
            annZipFile = "{}/instances_minival2014.json.zip".format(dataDir)
            annFile = "{}/instances_minival2014.json".format(annDir)
            annURL = "https://dl.dropboxusercontent.com/s/o43o90bna78omob/instances_minival2014.json.zip?dl=0"
            unZipDir = annDir
        elif dataType == "valminusminival":
            annZipFile = "{}/instances_valminusminival2014.json.zip".format(dataDir)
            annFile = "{}/instances_valminusminival2014.json".format(annDir)
            annURL = "https://dl.dropboxusercontent.com/s/s3tw5zcg7395368/instances_valminusminival2014.json.zip?dl=0"
            unZipDir = annDir
        else:
            annZipFile = "{}/annotations_trainval{}.zip".format(dataDir, dataYear)
            annFile = "{}/instances_{}{}.json".format(annDir, dataType, dataYear)
            annURL = "http://images.cocodataset.org/annotations/annotations_trainval{}.zip".format(dataYear)
            unZipDir = dataDir
        # print("Annotations paths:"); print(annDir); print(annFile); print(annZipFile); print(annURL)

        # Download annotations if not available locally
        if not os.path.exists(annDir):
            os.makedirs(annDir)
        if not os.path.exists(annFile):
            if not os.path.exists(annZipFile):
                print("Downloading zipped annotations to " + annZipFile + " ...")
                with urllib.request.urlopen(annURL) as resp, open(annZipFile, 'wb') as out:
                    shutil.copyfileobj(resp, out)
                print("... done downloading.")
            print("Unzipping " + annZipFile)
            with zipfile.ZipFile(annZipFile, "r") as zip_ref:
                zip_ref.extractall(unZipDir)
            print("... done unzipping")
        print("Will use annotations in " + annFile)

    def load_mask(self, image_id):
        # If not a COCO image, delegate to parent class.
        image_info = self.image_info[image_id]
        if image_info["source"] != "coco":
            return super(CocoDataset, self).load_mask(image_id)

        instance_masks = []
        class_ids = []
        annotations = self.image_info[image_id]["annotations"]
        # Build mask of shape [height, width, instance_count] and list
        # of class IDs that correspond to each channel of the mask.
        for annotation in annotations:
            class_id = self.map_source_class_id(
                "coco.{}".format(annotation['category_id']))
            if class_id:
                m = self.annToMask(annotation, image_info["height"],
                                   image_info["width"])
                # Some objects are so small that they're less than 1 pixel area
                # and end up rounded out. Skip those objects.
                if m.max() < 1:
                    continue
                # Is it a crowd? If so, use a negative class ID.
                if annotation['iscrowd']:
                    # Use negative class ID for crowds
                    class_id *= -1
                    # For crowd masks, annToMask() sometimes returns a mask
                    # smaller than the given dimensions. If so, resize it.
                    if m.shape[0] != image_info["height"] or m.shape[1] != image_info["width"]:
                        m = np.ones([image_info["height"], image_info["width"]], dtype=bool)
                instance_masks.append(m)
                class_ids.append(class_id)

        # Pack instance masks into an array
        if class_ids:
            mask = np.stack(instance_masks, axis=2).astype(bool)
            class_ids = np.array(class_ids, dtype=np.int32)
            return mask, class_ids
        else:
            # Call super class to return an empty mask
            return super(CocoDataset, self).load_mask(image_id)

    def image_reference(self, image_id):
        """Return a link to the image in the COCO Website."""
        info = self.image_info[image_id]
        if info["source"] == "coco":
            return "http://cocodataset.org/#explore?id={}".format(info["id"])
        else:
            super(CocoDataset, self).image_reference(image_id)

    # The following two functions are from pycocotools with a few changes.

    def annToRLE(self, ann, height, width):
        segm = ann['segmentation']
        if isinstance(segm, list):
            # polygon -- a single object might consist of multiple parts
            # we merge all parts into one mask rle code
            rles = maskUtils.frPyObjects(segm, height, width)
            rle = maskUtils.merge(rles)
        elif isinstance(segm['counts'], list):
            # uncompressed RLE
            rle = maskUtils.frPyObjects(segm, height, width)
        else:
            # rle
            rle = ann['segmentation']
        return rle

    def annToMask(self, ann, height, width):
        rle = self.annToRLE(ann, height, width)
        m = maskUtils.decode(rle)
        return m


############################################################
#  COCO Evaluation
############################################################

def build_coco_results(dataset, image_ids, rois, class_ids, scores, masks):
    # If no results, return an empty list
    if rois is None:
        return []

    results = []
    for image_id in image_ids:
        # Loop through detections
        for i in range(rois.shape[0]):
            class_id = class_ids[i]
            score = scores[i]
            bbox = np.around(rois[i], 1)
            mask = masks[:, :, i]

            result = {
                "image_id": image_id,
                "category_id": dataset.get_source_class_id(class_id, "coco"),
                "bbox": [bbox[1], bbox[0], bbox[3] - bbox[1], bbox[2] - bbox[0]],
                "score": score,
                "segmentation": maskUtils.encode(np.asfortranarray(mask))
            }
            results.append(result)
    return results


def evaluate_coco(model, dataset, coco, eval_type="bbox", limit=0, image_ids=None):
    # Pick COCO images from the dataset
    image_ids = image_ids or dataset.image_ids

    # Limit to a subset
    if limit:
        image_ids = image_ids[:limit]

    # Get corresponding COCO image IDs.
    coco_image_ids = [dataset.image_info[id]["id"] for id in image_ids]
    coco_image_files = [dataset.image_info[id]["path"].split('/')[-1] for id in image_ids]

    t_prediction = 0
    t_start = time.time()

    results = []
    for i, image_id in enumerate(image_ids):
        # Load image
        image = dataset.load_image(image_id)

        # Run detection
        t = time.time()
        r = model.detect([image], verbose=0)[0]
        rois, ids, scores = r["rois"], r["class_ids"], r["scores"]

        input_cp = image.copy()
        classes = [1]                         ##### list of label (class) ids
        colors = [(0, 0, 255), (0, 255, 0)]   ##### list of colors, one per label
        # r["rois"] are boxes in [y1, x1, y2, x2] order; draw each box in its class color
        for index, roi in enumerate(rois):
            y1, x1, y2, x2 = roi.astype(int)
            input_cp = cv2.rectangle(input_cp, (x1, y1), (x2, y2),
                                     colors[classes.index(ids[index])], 2)
        name = coco_image_files[i]            # filename of the current image (same order as image_ids)
        cv2.imwrite('/home/zhangyp/data/segmentation/Kvasir-SEG/coco/result/' + name, input_cp)  ##### where to save the visualized detections
        t_prediction += (time.time() - t)

        # Convert results to COCO format
        # Cast masks to uint8 because COCO tools errors out on bool
        image_results = build_coco_results(dataset, coco_image_ids[i:i + 1],
                                           r["rois"], r["class_ids"],
                                           r["scores"],
                                           r["masks"].astype(np.uint8))
        results.extend(image_results)

    # Load results. This modifies results with additional attributes.
    coco_results = coco.loadRes(results)

    # Evaluate
    cocoEval = COCOeval(coco, coco_results, eval_type)
    cocoEval.params.imgIds = coco_image_ids
    cocoEval.evaluate()
    cocoEval.accumulate()
    cocoEval.summarize()

    print("Prediction time: {}. Average {}/image".format(
        t_prediction, t_prediction / len(image_ids)))
    print("Total time: ", time.time() - t_start)


############################################################
#  Training
############################################################

os.environ["CUDA_VISIBLE_DEVICES"] = "4"   ##### select the GPU to run on

if __name__ == '__main__':
    import argparse

    # Parse command line arguments
    parser = argparse.ArgumentParser(
        description='Train Mask R-CNN on MS COCO.')
    parser.add_argument("command",             ##### 'train' or 'evaluate'
                        metavar="",
                        help="'train' or 'evaluate' on MS COCO")
    parser.add_argument('--dataset', required=True,   ##### dataset directory
                        metavar="/path/to/coco",
                        help='Directory of the MS-COCO dataset')
    parser.add_argument('--year', required=False,
                        default=DEFAULT_DATASET_YEAR,
                        metavar="",
                        help='Year of the MS-COCO dataset (2014 or 2017) (default=2014)')
    parser.add_argument('--model', required=True,      ##### path to the model weights
                        metavar="/path/to/weights.h5",
                        help="Path to weights .h5 file or 'coco'")
    parser.add_argument('--logs', required=False,
                        default=DEFAULT_LOGS_DIR,
                        metavar="/path/to/logs/",
                        help='Logs and checkpoints directory (default=logs/)')
    parser.add_argument('--limit', required=False,
                        default=500,
                        metavar="",
                        help='Images to use for evaluation (default=500)')
    parser.add_argument('--download', required=False,
                        default=False,
                        metavar="",
                        help='Automatically download and unzip MS-COCO files (default=False)',
                        type=bool)
    args = parser.parse_args()
    print("Command: ", args.command)
    print("Model: ", args.model)
    print("Dataset: ", args.dataset)
    print("Year: ", args.year)
    print("Logs: ", args.logs)
    print("Auto Download: ", args.download)

    # Configurations
    if args.command == "train":
        config = CocoConfig()
    else:
        class InferenceConfig(CocoConfig):
            # Set batch size to 1 since we'll be running inference on
            # one image at a time. Batch size = GPU_COUNT * IMAGES_PER_GPU
            GPU_COUNT = 1
            IMAGES_PER_GPU = 1
            DETECTION_MIN_CONFIDENCE = 0
        config = InferenceConfig()
    config.display()

    # Create model
    if args.command == "train":
        model = modellib.MaskRCNN(mode="training", config=config,
                                  model_dir=args.logs)
    else:
        model = modellib.MaskRCNN(mode="inference", config=config,
                                  model_dir=args.logs)

    # Select weights file to load
    if args.model.lower() == "coco":
        model_path = COCO_MODEL_PATH
    elif args.model.lower() == "last":
        # Find last trained weights
        model_path = model.find_last()
    elif args.model.lower() == "imagenet":
        # Start from ImageNet trained weights
        model_path = model.get_imagenet_weights()
    else:
        model_path = args.model

    # Load weights
    print("Loading weights ", model_path)
    model.load_weights(model_path, by_name=True)

    # Train or evaluate
    if args.command == "train":
        # Training dataset
        dataset_train = CocoDataset()
        dataset_train.load_coco(args.dataset, "train", year=args.year, auto_download=args.download)
        dataset_train.prepare()

        # Validation dataset (with the empty default year this resolves to the "val" split)
        dataset_val = CocoDataset()
        val_type = "val" if args.year in '2017' else "minival"
        dataset_val.load_coco(args.dataset, val_type, year=args.year, auto_download=args.download)
        dataset_val.prepare()

        # Image Augmentation
        # Right/Left flip 50% of the time
        augmentation = imgaug.augmenters.Fliplr(0.5)

        # *** This training schedule is an example. Update to your needs ***

        # Training - Stage 1
        print("Training network heads")
        model.train(dataset_train, dataset_val,
                    learning_rate=config.LEARNING_RATE,
                    epochs=config.EPOCH,
                    layers='heads',
                    augmentation=augmentation)

        # Training - Stage 2
        # Finetune layers from ResNet stage 4 and up
        print("Fine tune Resnet stage 4 and up")
        model.train(dataset_train, dataset_val,
                    learning_rate=config.LEARNING_RATE,
                    epochs=config.EPOCH * 3,
                    layers='4+',
                    augmentation=augmentation)

        # Training - Stage 3
        # Fine tune all layers
        print("Fine tune all layers")
        model.train(dataset_train, dataset_val,
                    learning_rate=config.LEARNING_RATE / 10,
                    epochs=config.EPOCH * 4,
                    layers='all',
                    augmentation=augmentation)

    elif args.command == "evaluate":
        # Validation dataset
        dataset_val = CocoDataset()
        val_type = "val" if args.year in '2017' else "minival"
        coco = dataset_val.load_coco(args.dataset, val_type, year=args.year,
                                     return_coco=True, auto_download=args.download)
        dataset_val.prepare()
        print("Running COCO evaluation on {} images.".format(args.limit))
        evaluate_coco(model, dataset_val, coco, "bbox", limit=int(args.limit))
    else:
        print("'{}' is not recognized. "
              "Use 'train' or 'evaluate'".format(args.command))
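For quick spot checks outside the COCO evaluation loop, the matterport API can also be driven directly. A minimal sketch, assuming it is run from samples/coco/ with My.py importable; the weight and image paths are placeholders and the class-name list is an assumption for this single-class dataset.

import os, sys
import skimage.io

ROOT_DIR = os.path.abspath("../../")
sys.path.append(ROOT_DIR)
from mrcnn import model as modellib
from mrcnn import visualize
from My import CocoConfig                     # the config defined in My.py above (assumption)

class InferenceConfig(CocoConfig):
    GPU_COUNT = 1
    IMAGES_PER_GPU = 1                        # one image at a time

model = modellib.MaskRCNN(mode="inference", config=InferenceConfig(), model_dir="logs")
model.load_weights("/path/to/weights.h5", by_name=True)      # placeholder path

image = skimage.io.imread("/path/to/image.jpg")              # placeholder path
r = model.detect([image], verbose=1)[0]
visualize.display_instances(image, r['rois'], r['masks'], r['class_ids'],
                            ['BG', 'polyp'], r['scores'])    # class names are an assumption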


