Image Tampering Detection Based on Deep Learning


Table of Contents

  • Why detect image tampering
  • Types of image tampering
  • Training and test results for different tampering types
    • Code framework and common experiment parameters
    • Gaussian blur
    • Gaussian noise
    • Median filtering
    • Double JPEG compression
    • Brightness
    • Contrast
    • Experiment summary

Why detect image tampering

In security and forensic applications, images are an important source of leads and evidence, but in the age of ubiquitous Photoshop, not every image can serve that role: generally the image must not have been tampered with. After all, none of us wants our face showing up at a crime scene for no reason, or even swapped onto a suspect; nor do we want the key wording in a photographed contract altered against our interests.

Also, with beauty filters everywhere, perhaps some people have a need for "de-beautification"? After all, some of us would rather not be fooled by retouched photos.

Types of image tampering

In practice, image tampering comes in at least the following types:

  • Type 1: modification of image content, such as the Photoshop face swapping or contract rewording mentioned above
  • Type 2: operations that indirectly suggest Type 1 tampering. These include median filtering, smoothing, blurring, or added noise applied to cover up Type 1 traces, as well as the double JPEG compression introduced by re-saving the image. All of these are classical digital image processing operations; beyond them there is one cover-up that is very hard to detect: recapturing, i.e. opening the modified image on a monitor and photographing it again, which leaves no obvious digital-processing "fingerprints"
  • Type 3: arguably "beautification" is also a kind of tampering, but beauty filters rarely show up in forensic settings, so this article does not cover them

Ultimately our goal is always to detect Type 1 tampering, but it is very hard and requires deep expertise in forensics, photography, and imaging. In traditional image forensics, for example, judgments are made via noise consistency, geometric consistency, illumination consistency, and so on. In practice, one has to traverse every suspicious region, and every suspicious region must be checked against each of these methods, which is extremely time-consuming and labor-intensive.

With deep learning, one could in principle use image segmentation to directly segment out the tampered region. However, once you actually try to train such a model you find it is genuinely difficult, because the training data is very hard to produce. There are at least two ways to build such data, each with its own drawbacks:

  • One is to splice images together algorithmically, more or less at random, with some data augmentation on top. This can generate unlimited data, but all of it looks blatantly fake, and a model trained on it usually cannot handle carefully Photoshopped images
  • The other is to create data by manual Photoshop work. The resulting data quality can be high, but the throughput is far too low to be feasible for training a model.

For these reasons, we often first check whether the image has undergone Type 2 tampering, which is technically easier to detect; if it has, the image probably deserves closer scrutiny.

The rest of this article focuses on Type 2 tampering. Type 1 is a hard problem that cannot be cracked by one person at home with a 1050 Ti, so this article will not go there... If you are really interested in Type 1 detection, take a look at Adobe's 2019 creativity conference.

Detecting the Type 2 tampering described above with traditional methods is actually somewhat difficult, especially for a highly nonlinear operation like median filtering. With deep learning, however, brute force truly works wonders and the problem is solved almost casually. Some related experiments follow.

Training and test results for different tampering types

Before reading on, be aware that image tampering detection is an adversarial game against humans, so it is hard to pin down what "solved" means.

This differs from generic CV tasks such as license plate recognition or face recognition, where reaching a certain standard means the technology can be rolled out commercially. License plates can be counterfeited and faces raise liveness and mask issues, but the problem categories there are few, and established solutions or techniques exist for each of them.

So this article only scratches the surface, with some simple experiments on the Type 2 tampering described above.

Code framework and common experiment parameters

The code consists of three files:

1. util.py contains helper functions, covering some of the tampering operations and the routine for randomly extracting image patches for training; the underlying principles are explained in these two articles:

  • Image degradation methods and their Python implementations
  • Randomly extracting multiple patches from an image

2. generate_train_test_data.py builds the training data. Since this is just a simple experiment, the amount of data is small and can be loaded into memory at once, so the data is saved in numpy's .npy format. The data consists of 60 training images and 30 test images, all shot casually with phones (beauty mode off); three phone models were used: Honor 10, Honor 30, and Mate 30. The training patch size is 28×28; about 300k patches were cropped for the training set and about 150k for the test set. The part of this file that generates tampered_image can be modified to test the various tampering types. Unless stated otherwise, the logs in the experiments below reflect the tampering parameters as they appear in the code.

3. train.py handles training, including Dataset creation, model definition, and the training and test loops. Since the experiment is simple, everything lives in one file. Hyperparameters:

  • Network: 6 convolution layers in a very simple VGG style, with one pooling layer after every 2 conv layers
  • Optimizer: Adam
  • Number of epochs: 10
  • Learning rate: unless stated otherwise, 1e-4 for the first 5 epochs and 1e-5 for the last 5
  • batch_size: 50

The contents of each code file follow. If you want to run the code, note two things:

  • Shoot the data yourself with a phone and put the original full-size images into two folders
  • Update the path-related variables

util.py

# -*- coding: utf-8 -*-
import cv2
import numpy as np


def uniform_random(low, high, shape=None):
    """Get uniform random number(s) between low and high. A single number
    is returned if shape is None."""
    return np.random.random(shape) * (high - low) + low


def add_gaussian_noise(image, mean_ratio, std_ratio, noise_num_ratio=1.0):
    """Add Gaussian noise to an image read by OpenCV, shape [H, W, C].

    mean_ratio and std_ratio are the mean and std of the Gaussian noise,
    expressed as ratios of the channel-wise image mean; noise_num_ratio is
    the ratio of noisy pixels to all pixels, in [0, 1]. Returns the noisy
    image.
    """
    if std_ratio < 0:
        raise ValueError('std_ratio must >= 0.0')
    if not 0.0 <= noise_num_ratio <= 1.0:
        raise ValueError('noise_num_ratio must between [0, 1]')
    # get noise shape and channel number
    noise_shape = get_noise_shape(image)
    channel = noise_shape[2]
    # compute channel-wise mean and std
    image_mean = np.array(cv2.mean(image)[:channel])
    mean = image_mean * mean_ratio
    std = image_mean * std_ratio
    # generate noise
    noise = np.random.normal(mean, std, noise_shape)
    noisy_image = image.copy().astype(np.float32)
    if noisy_image.ndim == 2:
        noisy_image = noisy_image[..., np.newaxis]  # add channel axis
    # add noise according to noise_num_ratio
    if noise_num_ratio >= 1.0:
        noisy_image[:, :, :channel] += noise
    else:
        row, col = get_noise_index(image, noise_num_ratio)
        noisy_image[row, col, :channel] += noise[row, col, ...]
    # post processing
    noisy_image = float_to_uint8(noisy_image, scale=1.0)
    noisy_image = np.squeeze(noisy_image)
    return noisy_image


def float_to_uint8(image, scale=255.0):
    """Scale a float image by `scale`, round, clip to [0, 255] and cast to
    uint8."""
    image_uint8 = np.clip(np.round(image * scale), 0, 255).astype(np.uint8)
    return image_uint8


def get_noise_index(image, noise_num_ratio):
    """Randomly pick (row, col) indexes for a certain ratio of noisy pixels."""
    image_height, image_width = image.shape[0:2]
    noise_num = int(np.round(image_height * image_width * noise_num_ratio))
    row = np.random.randint(0, image_height, noise_num)
    col = np.random.randint(0, image_width, noise_num)
    return row, col


def get_noise_shape(image):
    """Get noise shape (height, width, channel) according to image shape.
    Grayscale images get one channel; alpha channels are ignored."""
    if not (image.ndim == 2 or image.ndim == 3):
        raise ValueError('image ndim must be 2 or 3')
    height, width = image.shape[:2]
    if image.ndim == 2:
        channel = 1
    else:
        channel = image.shape[2]
        if channel >= 4:
            channel = 3
    noise_shape = (height, width, channel)
    return noise_shape


def jpeg_compression(image, quality_factor):
    """Apply JPEG compression in memory, without saving to disk.

    quality_factor is the JPEG quality in [0, 100]; a higher value means a
    higher quality image. Returns the compressed image.
    """
    compression_factor = int(quality_factor)
    compression_param = [cv2.IMWRITE_JPEG_QUALITY, compression_factor]
    image_encode = cv2.imencode('.jpg', image, compression_param)[1]
    jpeg_image = cv2.imdecode(image_encode, -1)
    return jpeg_image


def get_random_patch_bboxes(image, bbox_size, stride, jitter, roi_bbox=None):
    """Generate random patch bounding boxes for an image around a ROI region.

    bbox_size, stride (between adjacent boxes, before jitter) and jitter may
    each be a single number or an (x, y) pair; roi_bbox is
    [xmin, ymin, xmax, ymax] and defaults to the whole image. Returns an
    n x 4 array whose rows are [xmin, ymin, xmax, ymax].
    """
    height, width = image.shape[:2]
    bbox_size = _process_geometry_param(bbox_size, min_value=1)
    stride = _process_geometry_param(stride, min_value=1)
    jitter = _process_geometry_param(jitter, min_value=0)
    if bbox_size[0] > width or bbox_size[1] > height:
        raise ValueError('bbox_size must be <= image size')
    if roi_bbox is None:
        roi_bbox = [0, 0, width, height]
    # tl is for top-left, br is for bottom-right
    tl_x, tl_y = _get_top_left_points(roi_bbox, bbox_size, stride, jitter)
    br_x = tl_x + bbox_size[0]
    br_y = tl_y + bbox_size[1]
    # shrink bottom-right points to avoid exceeding image border
    br_x[br_x > width] = width
    br_y[br_y > height] = height
    # shrink top-left points to avoid exceeding image border
    tl_x = br_x - bbox_size[0]
    tl_y = br_y - bbox_size[1]
    tl_x[tl_x < 0] = 0
    tl_y[tl_y < 0] = 0
    # compute bottom-right points again
    br_x = tl_x + bbox_size[0]
    br_y = tl_y + bbox_size[1]
    patch_bboxes = np.concatenate((tl_x, tl_y, br_x, br_y), axis=1)
    return patch_bboxes


def _process_geometry_param(param, min_value):
    """Normalize a geometry parameter to a pair of ints and check that both
    values are >= min_value."""
    if isinstance(param, (int, float)) or \
            isinstance(param, np.ndarray) and param.size == 1:
        param = int(np.round(param))
        param = [param, param]
    else:
        if len(param) != 2:
            raise ValueError('param must be one digit or two digits')
        param = [int(np.round(param[0])), int(np.round(param[1]))]
    # check data range using min_value
    if not (param[0] >= min_value and param[1] >= min_value):
        raise ValueError('param must be >= min_value (%d)' % min_value)
    return param


def _get_top_left_points(roi_bbox, bbox_size, stride, jitter):
    """Generate evenly distributed top-left points for the patch boxes, then
    jitter them. Returns n x 1 arrays of x and y coordinates."""
    xmin, ymin, xmax, ymax = roi_bbox
    roi_width = xmax - xmin
    roi_height = ymax - ymin
    # get the offset between the first top-left point of patch box and the
    # top-left point of roi_bbox
    offset_x = np.arange(0, roi_width, stride[0])[-1] + bbox_size[0]
    offset_y = np.arange(0, roi_height, stride[1])[-1] + bbox_size[1]
    offset_x = (offset_x - roi_width) // 2
    offset_y = (offset_y - roi_height) // 2
    # get the coordinates of all top-left points
    tl_x = np.arange(xmin, xmax, stride[0]) - offset_x
    tl_y = np.arange(ymin, ymax, stride[1]) - offset_y
    tl_x, tl_y = np.meshgrid(tl_x, tl_y)
    tl_x = np.reshape(tl_x, [-1, 1])
    tl_y = np.reshape(tl_y, [-1, 1])
    # jitter the coordinates of all top-left points
    tl_x += np.random.randint(-jitter[0], jitter[0] + 1, size=tl_x.shape)
    tl_y += np.random.randint(-jitter[1], jitter[1] + 1, size=tl_y.shape)
    return tl_x, tl_y

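As a quick sanity check of util.py (this snippet is my own and not one of the three original files; the image path is a placeholder), get_random_patch_bboxes can be exercised like this:

import cv2
from util import get_random_patch_bboxes

image = cv2.imread('your_photo.jpg')  # placeholder path, any photo works
bboxes = get_random_patch_bboxes(image, bbox_size=28, stride=64, jitter=32)
print(bboxes.shape)  # (n, 4), each row is [xmin, ymin, xmax, ymax]
for xmin, ymin, xmax, ymax in bboxes[:5]:
    print(image[ymin:ymax, xmin:xmax].shape)  # (28, 28, 3)
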
generate_train_test_data.py

# -*- coding: utf-8 -*-
import os
import cv2
import numpy as np
from util import uniform_random
from util import get_random_patch_bboxes
from util import jpeg_compression
from util import add_gaussian_noise

ROOT_FOLDER_TRAIN = r'F:\Forensic\train'
ROOT_FOLDER_TEST = r'F:\Forensic\test'
OUTPUT_FOLDER = r'F:\Forensic\noise'
PATCH_SHAPE = (28, 28)
STRIDE = (64, 64)
JITTER = (32, 32)


def make_data(root_folder, phase='train'):
    """Make image patches and the corresponding labels, then save them to
    disk. Half of the patches are original, the other half are tampered.

    root_folder holds the original full-size images; phase is 'train' or
    'test'.
    """
    files = os.listdir(root_folder)
    # make data
    real_patches = []
    tampered_patches = []
    for i, file in enumerate(files):
        print(i + 1, file)
        image = cv2.imread(os.path.join(root_folder, file))
        # the following part can be modified to generate other types
        # of tampered_image
        ''' Gaussian blur '''
        ksize = np.random.choice([3, 5, 7, 9], size=2)
        ksize = tuple(ksize)
        tampered_image = cv2.GaussianBlur(
            image, ksize,
            sigmaX=uniform_random(1.0, 3.0),
            sigmaY=uniform_random(1.0, 3.0))
        ''' Gaussian noise '''
        # tampered_image = add_gaussian_noise(
        #     image,
        #     mean_ratio=0.0,
        #     std_ratio=uniform_random(0.01, 0.3))
        ''' median blur '''
        # ksize = np.random.choice([3, 5, 7, 9])
        # tampered_image = cv2.medianBlur(image, ksize=ksize)
        ''' JPEG compression '''
        # tampered_image = jpeg_compression(image, uniform_random(50, 95))
        ''' brightness '''
        # brightness = uniform_random(-25, 25)
        # tampered_image = np.float64(image) + brightness
        # tampered_image = np.clip(np.round(tampered_image), 0, 255)
        # tampered_image = np.uint8(tampered_image)
        ''' contrast '''
        # contrast = uniform_random(0.75, 1.33)
        # tampered_image = np.float64(image) * contrast
        # tampered_image = np.clip(np.round(tampered_image), 0, 255)
        # tampered_image = np.uint8(tampered_image)
        patch_bboxes = get_random_patch_bboxes(
            image, PATCH_SHAPE, STRIDE, JITTER)
        tampered_patch_bboxes = get_random_patch_bboxes(
            image, PATCH_SHAPE, STRIDE, JITTER)
        for bbox in patch_bboxes:
            xmin, ymin, xmax, ymax = bbox
            real_patches.append(image[ymin:ymax, xmin:xmax])
        for bbox in tampered_patch_bboxes:
            xmin, ymin, xmax, ymax = bbox
            tampered_patches.append(tampered_image[ymin:ymax, xmin:xmax])
    real_patches = np.array(real_patches)
    tampered_patches = np.array(tampered_patches)
    real_labels = np.ones(shape=real_patches.shape[0], dtype=np.int64)
    tampered_labels = np.zeros(shape=tampered_patches.shape[0], dtype=np.int64)
    patches = np.concatenate((real_patches, tampered_patches), axis=0)
    patches = patches.transpose([0, 3, 1, 2])  # NHWC -> NCHW
    labels = np.concatenate((real_labels, tampered_labels))
    # save data
    os.makedirs(OUTPUT_FOLDER, exist_ok=True)
    np.save(os.path.join(OUTPUT_FOLDER, '%s_data.npy' % phase), patches)
    np.save(os.path.join(OUTPUT_FOLDER, '%s_label.npy' % phase), labels)
    print('Total number of %s samples is %d' % (phase, labels.shape[0]))


if __name__ == '__main__':
    make_data(ROOT_FOLDER_TRAIN, 'train')
    make_data(ROOT_FOLDER_TEST, 'test')

train.py

# -*- coding: utf-8 -*-
import os
import time
import numpy as np
import torch
import torch.nn as nn
from torch.utils.data import Dataset
from torch.utils.data import DataLoader
import torchsummary

DEVICE = torch.device("cuda" if torch.cuda.is_available() else "cpu")
EPOCH = 10
TRAIN_BATCH_SIZE = 50
TEST_BATCH_SIZE = 32
BASE_CHANNEL = 32
INPUT_CHANNEL = 3
INPUT_SIZE = 28
TRAIN_DATA_FILE = r'F:\Forensic\noise\train_data.npy'
TRAIN_LABEL_FILE = r'F:\Forensic\noise\train_label.npy'
TEST_DATA_FILE = r'F:\Forensic\noise\test_data.npy'
TEST_LABEL_FILE = r'F:\Forensic\noise\test_label.npy'
MODEL_FOLDER = r'.\saved_model'


def update_learning_rate(optimizer, epoch):
    """Update learning rate stepwise: 1e-4 for the first 5 epochs, 1e-5
    afterwards."""
    learning_rate = 1e-4
    if epoch > 5:
        learning_rate = 1e-5
    for param_group in optimizer.param_groups:
        param_group['lr'] = learning_rate


class Model(nn.Module):
    """6-layer plain model for forensic classification"""

    def __init__(self, input_ch, num_classes, base_ch):
        super(Model, self).__init__()
        self.num_classes = num_classes
        self.base_ch = base_ch
        self.feature_length = base_ch * 4
        self.net = nn.Sequential(
            nn.Conv2d(input_ch, base_ch, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(base_ch, base_ch, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=2, stride=2),
            nn.Conv2d(base_ch, base_ch * 2, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(base_ch * 2, base_ch * 2, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=2, stride=2),
            nn.Conv2d(base_ch * 2, self.feature_length, kernel_size=3,
                      padding=1),
            nn.ReLU(),
            nn.Conv2d(self.feature_length, self.feature_length, kernel_size=3,
                      padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(output_size=(1, 1))
        )
        self.fc = nn.Linear(in_features=self.feature_length,
                            out_features=num_classes)

    def forward(self, input):
        output = self.net(input)
        output = output.view(-1, self.feature_length)
        output = self.fc(output)
        return output


class ForensicDataset(Dataset):
    """Pytorch dataset for train and test"""

    def __init__(self, data, label):
        super().__init__()
        self.data = data
        self.label = label
        self.num = len(label)

    def __len__(self):
        return self.num

    def __getitem__(self, index):
        data = self.data[index]
        label = self.label[index]
        return data, label


def load_dataset():
    """Load train and test dataset"""
    # load train dataset
    data = np.load(TRAIN_DATA_FILE).astype(np.float32)
    label = np.load(TRAIN_LABEL_FILE).astype(np.int64)
    data = torch.from_numpy(data)
    label = torch.from_numpy(label)
    train_dataset = ForensicDataset(data, label)
    # load test dataset
    data = np.load(TEST_DATA_FILE).astype(np.float32)
    label = np.load(TEST_LABEL_FILE).astype(np.int64)
    data = torch.from_numpy(data)
    label = torch.from_numpy(label)
    test_dataset = ForensicDataset(data, label)
    return train_dataset, test_dataset


if __name__ == '__main__':
    time_beg = time.time()
    train_dataset, test_dataset = load_dataset()
    train_loader = DataLoader(dataset=train_dataset,
                              batch_size=TRAIN_BATCH_SIZE,
                              shuffle=True)
    test_loader = DataLoader(dataset=test_dataset,
                             batch_size=TEST_BATCH_SIZE,
                             shuffle=False)
    model = Model(input_ch=INPUT_CHANNEL, num_classes=2,
                  base_ch=BASE_CHANNEL).to(DEVICE)
    torchsummary.summary(
        model, input_size=(INPUT_CHANNEL, INPUT_SIZE, INPUT_SIZE))
    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.Adam(model.parameters())

    train_loss = []
    for ep in range(1, EPOCH + 1):
        update_learning_rate(optimizer, ep)
        # ----------------- train -----------------
        model.train()
        time_beg_epoch = time.time()
        loss_recorder = []
        for data, classes in train_loader:
            data, classes = data.to(DEVICE), classes.to(DEVICE)
            optimizer.zero_grad()
            output = model(data)
            loss = criterion(output, classes)
            loss.backward()
            optimizer.step()
            loss_recorder.append(loss.item())
            time_cost = time.time() - time_beg_epoch
            print('\rEpoch: %d, Loss: %0.4f, Time cost (s): %0.2f' % (
                ep, loss_recorder[-1], time_cost), end='')
        # print train info after one epoch
        train_loss.append(loss_recorder)
        mean_loss_epoch = torch.mean(torch.Tensor(loss_recorder))
        time_cost_epoch = time.time() - time_beg_epoch
        print('\rEpoch: %d, Mean loss: %0.4f, Epoch time cost (s): %0.2f' % (
            ep, mean_loss_epoch.item(), time_cost_epoch), end='')
        # save model
        os.makedirs(MODEL_FOLDER, exist_ok=True)
        model_filename = os.path.join(MODEL_FOLDER, 'epoch_%d.pth' % ep)
        torch.save(model.state_dict(), model_filename)
        # ----------------- test -----------------
        model.eval()
        correct = 0
        total = 0
        with torch.no_grad():
            for data, classes in test_loader:
                data, classes = data.to(DEVICE), classes.to(DEVICE)
                output = model(data)
                _, predicted = torch.max(output.data, 1)
                total += classes.size(0)
                correct += (predicted == classes).sum().item()
        print(', Test accuracy: %0.4f' % (correct / total))
    print('Total time cost: ', time.time() - time_beg)
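
train.py only reports patch-level accuracy. For completeness, below is a rough inference sketch (my own addition, not part of the original post; the checkpoint and image paths are placeholders) that scores all patches of one image and aggregates them with a simple vote:

import cv2
import numpy as np
import torch
from util import get_random_patch_bboxes
from train import Model, INPUT_CHANNEL, BASE_CHANNEL

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model = Model(input_ch=INPUT_CHANNEL, num_classes=2, base_ch=BASE_CHANNEL).to(device)
model.load_state_dict(torch.load(r'.\saved_model\epoch_10.pth', map_location=device))
model.eval()

image = cv2.imread('suspect.jpg')  # placeholder path
bboxes = get_random_patch_bboxes(image, bbox_size=28, stride=64, jitter=32)
patches = np.stack([image[y0:y1, x0:x1] for x0, y0, x1, y1 in bboxes])
patches = torch.from_numpy(patches.transpose(0, 3, 1, 2).astype(np.float32)).to(device)

with torch.no_grad():  # for large images, split patches into batches
    pred = model(patches).argmax(dim=1)  # 1 = original, 0 = tampered, as labeled above
tampered_ratio = (pred == 0).float().mean().item()
print('fraction of patches flagged as tampered: %0.3f' % tampered_ratio)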

Gaussian blur

As you can see, Gaussian blur is easy to detect: accuracy casually reaches 0.99+.

Log:

Epoch: 1, Mean loss: 0.3753, Epoch time cost (s): 59.67, Test accuracy: 0.9501
Epoch: 2, Mean loss: 0.0936, Epoch time cost (s): 58.75, Test accuracy: 0.9768
Epoch: 3, Mean loss: 0.0380, Epoch time cost (s): 58.66, Test accuracy: 0.9874
Epoch: 4, Mean loss: 0.0254, Epoch time cost (s): 58.72, Test accuracy: 0.9902
Epoch: 5, Mean loss: 0.0217, Epoch time cost (s): 58.69, Test accuracy: 0.9735
Epoch: 6, Mean loss: 0.0116, Epoch time cost (s): 58.67, Test accuracy: 0.9929
Epoch: 7, Mean loss: 0.0091, Epoch time cost (s): 60.25, Test accuracy: 0.9935
Epoch: 8, Mean loss: 0.0082, Epoch time cost (s): 62.64, Test accuracy: 0.9934
Epoch: 9, Mean loss: 0.0076, Epoch time cost (s): 62.41, Test accuracy: 0.9933
Epoch: 10, Mean loss: 0.0071, Epoch time cost (s): 59.13, Test accuracy: 0.9940

Gaussian noise

Gaussian noise is extremely easy to detect; accuracy climbs past 0.99 with no effort at all.

Log:

Epoch: 1, Mean loss: 0.1213, Epoch time cost (s): 58.44, Test accuracy: 0.9740
Epoch: 2, Mean loss: 0.0447, Epoch time cost (s): 58.80, Test accuracy: 0.9562
Epoch: 3, Mean loss: 0.0272, Epoch time cost (s): 58.91, Test accuracy: 0.9867
Epoch: 4, Mean loss: 0.0170, Epoch time cost (s): 59.00, Test accuracy: 0.9885
Epoch: 5, Mean loss: 0.0071, Epoch time cost (s): 58.94, Test accuracy: 0.9760
Epoch: 6, Mean loss: 0.0014, Epoch time cost (s): 58.97, Test accuracy: 0.9942
Epoch: 7, Mean loss: 0.0006, Epoch time cost (s): 59.03, Test accuracy: 0.9928
Epoch: 8, Mean loss: 0.0005, Epoch time cost (s): 58.99, Test accuracy: 0.9933
Epoch: 9, Mean loss: 0.0004, Epoch time cost (s): 59.05, Test accuracy: 0.9952
Epoch: 10, Mean loss: 0.0004, Epoch time cost (s): 58.71, Test accuracy: 0.9968

Median filtering

Median filtering is also fairly easy to detect. The best accuracy stops just short of 0.99, but with a bit more data and a few more runs for luck it would not be hard to get there.

Median filtering is a strongly nonlinear operation that is actually quite hard to detect with traditional methods, yet a neural network handles it almost casually.

Log:

Epoch: 1, Mean loss: 0.4308, Epoch time cost (s): 59.61, Test accuracy: 0.8943
Epoch: 2, Mean loss: 0.1859, Epoch time cost (s): 58.92, Test accuracy: 0.9280
Epoch: 3, Mean loss: 0.1213, Epoch time cost (s): 59.03, Test accuracy: 0.9467
Epoch: 4, Mean loss: 0.0848, Epoch time cost (s): 59.04, Test accuracy: 0.9460
Epoch: 5, Mean loss: 0.0587, Epoch time cost (s): 59.03, Test accuracy: 0.9645
Epoch: 6, Mean loss: 0.0269, Epoch time cost (s): 59.00, Test accuracy: 0.9813
Epoch: 7, Mean loss: 0.0209, Epoch time cost (s): 59.27, Test accuracy: 0.9822
Epoch: 8, Mean loss: 0.0185, Epoch time cost (s): 59.06, Test accuracy: 0.9857
Epoch: 9, Mean loss: 0.0170, Epoch time cost (s): 59.00, Test accuracy: 0.9854
Epoch: 10, Mean loss: 0.0156, Epoch time cost (s): 59.02, Test accuracy: 0.9763

Double JPEG compression

JPEG compression is somewhat harder to detect. The learning rate schedule for this experiment differs from the others: I warmed up at 1e-4 for 2 epochs, used 1e-3 for epochs 3-7, 1e-4 for epochs 8-9, and 1e-5 for the final epoch. After several runs I found that using only 1e-4 and 1e-5 caps accuracy at around 0.90+. (There is no deep theory here, just trial and error, though one rule of thumb applies: early in training we want the largest learning rate that does not diverge, so the network can cover a wider search space.) A sketch of this schedule is shown below.
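
Concretely, the warm-up schedule reads as follows; this is my transcription of the description above (the function name update_learning_rate_jpeg is mine), meant to replace update_learning_rate in train.py for this experiment only:

def update_learning_rate_jpeg(optimizer, epoch):
    # warm up at 1e-4 for epochs 1-2, raise to 1e-3 for epochs 3-7,
    # drop back to 1e-4 for epochs 8-9, and finish epoch 10 at 1e-5
    if epoch <= 2:
        learning_rate = 1e-4
    elif epoch <= 7:
        learning_rate = 1e-3
    elif epoch <= 9:
        learning_rate = 1e-4
    else:
        learning_rate = 1e-5
    for param_group in optimizer.param_groups:
        param_group['lr'] = learning_rate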

Although double JPEG compression is slightly harder to detect, accuracy still reached 0.95+, which is decent.

Log:

Epoch: 1, Mean loss: 0.6933, Epoch time cost (s): 58.97, Test accuracy: 0.5056
Epoch: 2, Mean loss: 0.5764, Epoch time cost (s): 58.88, Test accuracy: 0.7660
Epoch: 3, Mean loss: 0.3430, Epoch time cost (s): 58.83, Test accuracy: 0.7949
Epoch: 4, Mean loss: 0.1980, Epoch time cost (s): 58.88, Test accuracy: 0.8683
Epoch: 5, Mean loss: 0.1609, Epoch time cost (s): 58.88, Test accuracy: 0.9193
Epoch: 6, Mean loss: 0.1489, Epoch time cost (s): 58.85, Test accuracy: 0.9333
Epoch: 7, Mean loss: 0.1268, Epoch time cost (s): 58.81, Test accuracy: 0.9380
Epoch: 8, Mean loss: 0.0825, Epoch time cost (s): 58.95, Test accuracy: 0.9528
Epoch: 9, Mean loss: 0.0744, Epoch time cost (s): 59.06, Test accuracy: 0.9536
Epoch: 10, Mean loss: 0.0626, Epoch time cost (s): 58.83, Test accuracy: 0.9545

Brightness

Brightness and contrast can be discussed together. There are many ways to modify them: it can be done directly in RGB space, but the more common practice is to convert to a space such as YUV or Lab first. Here we simply operate in RGB space, with the formula

$$tampered\_image = \alpha \cdot image + \beta$$

where $\alpha$ adjusts contrast and $\beta$ adjusts brightness.
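
For reference, a brightness shift in YUV space (the more common practice mentioned above) might look like the sketch below; this is for illustration only and is not used in the experiments here:

import cv2
import numpy as np

def adjust_brightness_yuv(image, beta):
    # shift only the luma (Y) channel by beta, leaving chroma untouched
    yuv = cv2.cvtColor(image, cv2.COLOR_BGR2YUV).astype(np.float64)
    yuv[..., 0] = np.clip(yuv[..., 0] + beta, 0, 255)
    return cv2.cvtColor(yuv.astype(np.uint8), cv2.COLOR_YUV2BGR)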

Two value ranges for $\beta$ were tried here, and detection accuracy was very low in both. One point must be kept in mind: this is a binary classification problem, so 50% accuracy means pure guessing, i.e. no detection ability at all. The accuracies in the logs below are only slightly above 50%. No visualization analysis was done, but judging from the two training runs, the margin above 50% is most likely due to pixels entering the saturated region of the uint8 range: when $\beta$ is very negative or very large, many values are clipped to 0 or 255, and that is what gets detected. In such cases the tampering is also easy to spot with the naked eye, so we can essentially conclude that the neural network is helpless against brightness tampering. A quick check of the clipping hypothesis is sketched below.
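
The hypothesis is easy to probe with a few lines (my own check, not part of the original experiments): measure the fraction of pixel values that the shift pushes outside [0, 255]:

import numpy as np

def clipped_fraction(image, beta):
    # fraction of values that would saturate to 0 or 255 after the shift
    shifted = np.float64(image) + beta
    return float(np.mean((shifted < 0) | (shifted > 255)))

For a large |beta| on a bright or dark image this fraction is clearly nonzero, and the resulting flat runs of 0/255 values are exactly the kind of patch statistic a CNN can latch onto.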

Log for $\beta \in [-50, 50]$:

Epoch: 1, Mean loss: 0.6558, Epoch time cost (s): 59.06, Test accuracy: 0.5207
Epoch: 2, Mean loss: 0.6231, Epoch time cost (s): 58.89, Test accuracy: 0.5444
Epoch: 3, Mean loss: 0.6063, Epoch time cost (s): 58.95, Test accuracy: 0.5833
Epoch: 4, Mean loss: 0.5933, Epoch time cost (s): 58.97, Test accuracy: 0.5988
Epoch: 5, Mean loss: 0.5839, Epoch time cost (s): 58.95, Test accuracy: 0.5981
Epoch: 6, Mean loss: 0.5628, Epoch time cost (s): 58.88, Test accuracy: 0.6009
Epoch: 7, Mean loss: 0.5582, Epoch time cost (s): 58.95, Test accuracy: 0.6037
Epoch: 8, Mean loss: 0.5556, Epoch time cost (s): 58.92, Test accuracy: 0.6018
Epoch: 9, Mean loss: 0.5535, Epoch time cost (s): 59.17, Test accuracy: 0.6007
Epoch: 10, Mean loss: 0.5515, Epoch time cost (s): 60.49, Test accuracy: 0.6016

Log for $\beta \in [-25, 25]$:

Epoch: 1, Mean loss: 0.6765, Epoch time cost (s): 59.06, Test accuracy: 0.5201
Epoch: 2, Mean loss: 0.6618, Epoch time cost (s): 58.56, Test accuracy: 0.5219
Epoch: 3, Mean loss: 0.6505, Epoch time cost (s): 58.81, Test accuracy: 0.5259
Epoch: 4, Mean loss: 0.6425, Epoch time cost (s): 58.94, Test accuracy: 0.5289
Epoch: 5, Mean loss: 0.6350, Epoch time cost (s): 58.85, Test accuracy: 0.5378
Epoch: 6, Mean loss: 0.6199, Epoch time cost (s): 58.75, Test accuracy: 0.5464
Epoch: 7, Mean loss: 0.6157, Epoch time cost (s): 58.70, Test accuracy: 0.5483
Epoch: 8, Mean loss: 0.6135, Epoch time cost (s): 58.75, Test accuracy: 0.5475
Epoch: 9, Mean loss: 0.6117, Epoch time cost (s): 58.88, Test accuracy: 0.5478
Epoch: 10, Mean loss: 0.6100, Epoch time cost (s): 58.41, Test accuracy: 0.5498

Contrast

Same conclusion as for brightness: the neural network is helpless against this type of tampering.

Log:

Epoch: 1, Mean loss: 0.6914, Epoch time cost (s): 59.21, Test accuracy: 0.4888
Epoch: 2, Mean loss: 0.6782, Epoch time cost (s): 58.85, Test accuracy: 0.5637
Epoch: 3, Mean loss: 0.6682, Epoch time cost (s): 58.85, Test accuracy: 0.5439
Epoch: 4, Mean loss: 0.6622, Epoch time cost (s): 58.89, Test accuracy: 0.5502
Epoch: 5, Mean loss: 0.6562, Epoch time cost (s): 58.78, Test accuracy: 0.5383
Epoch: 6, Mean loss: 0.6400, Epoch time cost (s): 58.87, Test accuracy: 0.5725
Epoch: 7, Mean loss: 0.6361, Epoch time cost (s): 58.92, Test accuracy: 0.5743
Epoch: 8, Mean loss: 0.6335, Epoch time cost (s): 58.80, Test accuracy: 0.5721
Epoch: 9, Mean loss: 0.6312, Epoch time cost (s): 58.78, Test accuracy: 0.5781
Epoch: 10, Mean loss: 0.6293, Epoch time cost (s): 58.81, Test accuracy: 0.5798

Experiment summary

A few subjective takeaways from the experiments above; they may hold some truth or may be wrong, so read them casually:

  • A CNN is inherently a neighborhood operator on image regions: it has a receptive field and therefore processes a certain spatial extent. If the tampering also involves neighborhood operations, such as blurring or JPEG compression, a CNN finds it easy to detect (see the receptive-field sketch after this list).
  • For the same neighborhood-related reason, if the image patch carries distinctive statistics, such as a noise distribution, the tampering is also easy to detect.
  • But if the tampering acts pixel by pixel and leaves no statistical signature in the result, as with contrast and brightness changes, the CNN may well be helpless.

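To back up the receptive-field point, here is a back-of-envelope computation for the 6-conv network in train.py (my own sketch, using the standard recursion r_out = r_in + (k - 1) * j, where j is the accumulated stride):

layers = [(3, 1), (3, 1), (2, 2),  # conv, conv, pool
          (3, 1), (3, 1), (2, 2),  # conv, conv, pool
          (3, 1), (3, 1)]          # conv, conv
r, j = 1, 1
for k, s in layers:
    r += (k - 1) * j  # receptive field grows by (kernel - 1) * jump
    j *= s            # jump doubles at every stride-2 pooling
print(r)  # 32

Since 32 > 28, every unit feeding the global average pooling sees the whole 28×28 patch, which is consistent with the network picking up neighborhood statistics across the entire patch.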