Neural Network Study Notes 26: Reproducing the EfficientNet Model in Keras


  • Preface
  • What is the EfficientNet model
    • Features of the EfficientNet model
    • Structure of the EfficientNet network
  • Implementation of the EfficientNet network
  • Image prediction

Preface

In 2019 Google released EfficientNet, which builds on earlier networks to shrink the parameter count dramatically while improving prediction accuracy. It is simply too strong, so of course I had to follow along and reproduce it!

What is the EfficientNet model

In 2019 Google released EfficientNet. As its name suggests, this network is very efficient. How should we understand "efficient"? Consider the development of convolutional neural networks:
From the early VGG16 up to today's Xception, people gradually realized that improving a network's performance is not just a matter of stacking more layers; what matters more is that:

1. The network must be trainable and able to converge.
2. The parameter count should be small, which makes training easier and faster.
3. The network structure should be innovative, so that it learns more important features.

EfficientNet does all three well: it uses fewer parameters (which helps training and speed) while achieving top recognition accuracy (learning the most important features).

Features of the EfficientNet model

EfficientNet has a distinctive design that draws on other excellent neural networks. Classic networks improve performance in three ways:
1. Use residual connections to increase the depth of the network, extracting features with a deeper network.
2. Change the number of feature channels extracted at each layer, obtaining more features and increasing the width.
3. Increase the resolution of the input image, so that the network can learn and express richer information, which helps accuracy.

EfficientNet combines these three ideas and scales them jointly, starting from a baseline model. (MobileNet, for example, scales its model with a width multiplier α, where different values of α give different accuracies and α = 1 is the baseline; ResNet likewise has a baseline model and derives its variants by changing the depth of the network.) EfficientNet adjusts depth, width, and input resolution together to arrive at an excellent network design.
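As a tiny illustration of such a width multiplier, here is a hypothetical sketch (the channel counts are made up; only the idea of scaling by α matters):

# Hypothetical example of a width multiplier: alpha scales every
# layer's channel count, and alpha = 1 is the baseline model.
base_channels = [32, 64, 128, 256]
alpha = 0.75
print([int(c * alpha) for c in base_channels])  # [24, 48, 96, 192]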

The results achieved by EfficientNet are as follows:
[Figure: performance of EfficientNet]
In the EfficientNet model, a fixed set of scaling coefficients is used to scale the network depth, width, and resolution uniformly.
Suppose we want to use 2^N times the computational resources: we can simply scale the network depth by α^N, the width by β^N, and the image size by γ^N, where α, β, γ are constant coefficients determined by a small grid search on the original small model.
The figure below illustrates EfficientNet's design idea of expanding the network along all three dimensions at once.
[Figure: compound scaling of depth, width, and resolution]
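As a minimal sketch of this compound scaling: the constants α = 1.2, β = 1.1, γ = 1.15 below are the grid-searched values reported in the EfficientNet paper; they satisfy α · β² · γ² ≈ 2, so each step of the compound coefficient roughly doubles the required computation.

# Compound scaling: one coefficient phi expands depth, width and
# resolution together. Constants are from the EfficientNet paper.
alpha, beta, gamma = 1.2, 1.1, 1.15  # alpha * beta**2 * gamma**2 ≈ 2

def compound_scale(phi):
    """Return (depth, width, resolution) multipliers for coefficient phi."""
    return alpha ** phi, beta ** phi, gamma ** phi

for phi in range(3):
    d, w, r = compound_scale(phi)
    print('phi=%d: depth x%.2f, width x%.2f, resolution x%.2f' % (phi, d, w, r))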

Structure of the EfficientNet network

EfficientNet consists of a Stem + 16 Blocks + Conv2D + GlobalAveragePooling2D + Dense. Its core is the 16 Blocks; the rest of the structure is not very different from a conventional convolutional neural network.
Shown here is the structure of EfficientNet-B0, the baseline of the EfficientNet family:
[Figure: structure of EfficientNet-B0]
The parameters of each Block are as follows:

DEFAULT_BLOCKS_ARGS = [
    {'kernel_size': 3, 'repeats': 1, 'filters_in': 32, 'filters_out': 16,
     'expand_ratio': 1, 'id_skip': True, 'strides': 1, 'se_ratio': 0.25},
    {'kernel_size': 3, 'repeats': 2, 'filters_in': 16, 'filters_out': 24,
     'expand_ratio': 6, 'id_skip': True, 'strides': 2, 'se_ratio': 0.25},
    {'kernel_size': 5, 'repeats': 2, 'filters_in': 24, 'filters_out': 40,
     'expand_ratio': 6, 'id_skip': True, 'strides': 2, 'se_ratio': 0.25},
    {'kernel_size': 3, 'repeats': 3, 'filters_in': 40, 'filters_out': 80,
     'expand_ratio': 6, 'id_skip': True, 'strides': 2, 'se_ratio': 0.25},
    {'kernel_size': 5, 'repeats': 3, 'filters_in': 80, 'filters_out': 112,
     'expand_ratio': 6, 'id_skip': True, 'strides': 1, 'se_ratio': 0.25},
    {'kernel_size': 5, 'repeats': 4, 'filters_in': 112, 'filters_out': 192,
     'expand_ratio': 6, 'id_skip': True, 'strides': 2, 'se_ratio': 0.25},
    {'kernel_size': 3, 'repeats': 1, 'filters_in': 192, 'filters_out': 320,
     'expand_ratio': 6, 'id_skip': True, 'strides': 1, 'se_ratio': 0.25}
]
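A quick check that the repeats of these seven stages add up to the 16 Blocks mentioned above:

# 1 + 2 + 2 + 3 + 3 + 4 + 1 = 16 blocks in total
print(sum(args['repeats'] for args in DEFAULT_BLOCKS_ARGS))  # 16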

EfficientNet-B0 is built from 1 Stem plus 16 large Blocks, which are grouped into stages of 1, 2, 2, 3, 3, 4 and 1 Blocks. The general structure of a Block is shown below. Its overall design combines the inverted residuals structure with a residual connection: a 1x1 convolution raises the number of channels before the 3x3 or 5x5 convolution, a channel attention mechanism is added after the 3x3 or 5x5 convolution, and finally a 1x1 convolution lowers the number of channels again before a large residual connection is added.
[Figure: structure of an EfficientNet Block]
The implementation of a Block is as follows:

def block(inputs, activation_fn=tf.nn.swish, drop_rate=0., name='',
          filters_in=32, filters_out=16, kernel_size=3, strides=1,
          expand_ratio=1, se_ratio=0., id_skip=True):
    bn_axis = 3
    # Number of channels after expansion
    filters = filters_in * expand_ratio

    # Inverted residuals
    # Part 1: 1x1 convolution to raise the channel count
    if expand_ratio != 1:
        x = layers.Conv2D(filters, 1,
                          padding='same',
                          use_bias=False,
                          kernel_initializer=CONV_KERNEL_INITIALIZER,
                          name=name + 'expand_conv')(inputs)
        x = layers.BatchNormalization(axis=bn_axis, name=name + 'expand_bn')(x)
        x = layers.Activation(activation_fn, name=name + 'expand_activation')(x)
    else:
        x = inputs

    # Explicit zero-padding when the stride is 2
    if strides == 2:
        x = layers.ZeroPadding2D(padding=correct_pad(x, kernel_size),
                                 name=name + 'dwconv_pad')(x)
        conv_pad = 'valid'
    else:
        conv_pad = 'same'

    # Part 2: depthwise convolution, one 3x3 (or 5x5) filter per channel
    x = layers.DepthwiseConv2D(kernel_size,
                               strides=strides,
                               padding=conv_pad,
                               use_bias=False,
                               depthwise_initializer=CONV_KERNEL_INITIALIZER,
                               name=name + 'dwconv')(x)
    x = layers.BatchNormalization(axis=bn_axis, name=name + 'bn')(x)
    x = layers.Activation(activation_fn, name=name + 'activation')(x)

    # Squeeze-and-excitation: squeeze then expand, used as per-channel weights
    if 0 < se_ratio <= 1:
        filters_se = max(1, int(filters_in * se_ratio))
        se = layers.GlobalAveragePooling2D(name=name + 'se_squeeze')(x)
        se = layers.Reshape((1, 1, filters), name=name + 'se_reshape')(se)
        se = layers.Conv2D(filters_se, 1,
                           padding='same',
                           activation=activation_fn,
                           kernel_initializer=CONV_KERNEL_INITIALIZER,
                           name=name + 'se_reduce')(se)
        se = layers.Conv2D(filters, 1,
                           padding='same',
                           activation='sigmoid',
                           kernel_initializer=CONV_KERNEL_INITIALIZER,
                           name=name + 'se_expand')(se)
        x = layers.multiply([x, se], name=name + 'se_excite')

    # Part 3: 1x1 convolution to compress the channels again
    x = layers.Conv2D(filters_out, 1,
                      padding='same',
                      use_bias=False,
                      kernel_initializer=CONV_KERNEL_INITIALIZER,
                      name=name + 'project_conv')(x)
    x = layers.BatchNormalization(axis=bn_axis, name=name + 'project_bn')(x)

    # Residual connection (only when the shapes match)
    if id_skip and strides == 1 and filters_in == filters_out:
        if drop_rate > 0:
            x = layers.Dropout(drop_rate,
                               noise_shape=(None, 1, 1, 1),
                               name=name + 'drop')(x)
        x = layers.add([x, inputs], name=name + 'add')
    return x
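As a small smoke test (a sketch, assuming the imports, correct_pad and CONV_KERNEL_INITIALIZER from the full listing below are in scope), a single Block can be applied to a dummy input; with strides=1 and filters_in != filters_out, no residual connection is added:

# Hypothetical smoke test for block(); with strides=1, correct_pad is not needed.
inp = layers.Input(shape=(112, 112, 16))
out = block(inp, filters_in=16, filters_out=24, expand_ratio=6,
            kernel_size=3, strides=1, se_ratio=0.25, name='test_')
print(out.shape)  # expected: (None, 112, 112, 24)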

Implementation of the EfficientNet network

#-------------------------------------------------------------#
#   The network part of EfficientNet
#-------------------------------------------------------------#
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import math
from copy import deepcopy

import numpy as np
import tensorflow as tf
from keras import backend
from keras import layers
from keras.models import Model
from keras.applications.imagenet_utils import decode_predictions
from keras.utils.data_utils import get_file
from keras.preprocessing import image

# Default parameters for downloading the pretrained weights
BASE_WEIGHTS_PATH = (
    'https://github.com/Callidior/keras-applications/'
    'releases/download/efficientnet/')
WEIGHTS_HASHES = {
    'b0': ('e9e877068bd0af75e0a36691e03c072c',
           '345255ed8048c2f22c793070a9c1a130'),
    'b1': ('8f83b9aecab222a9a2480219843049a1',
           'b20160ab7b79b7a92897fcb33d52cc61'),
    'b2': ('b6185fdcd190285d516936c09dceeaa4',
           'c6e46333e8cddfa702f4d8b8b6340d70'),
    'b3': ('b2db0f8aac7c553657abb2cb46dcbfbb',
           'e0cf8654fad9d3625190e30d70d0c17d'),
    'b4': ('ab314d28135fe552e2f9312b31da6926',
           'b46702e4754d2022d62897e0618edc7b'),
    'b5': ('8d60b903aff50b09c6acf8eaba098e09',
           '0a839ac36e46552a881f2975aaab442f'),
    'b6': ('a967457886eac4f5ab44139bdd827920',
           '375a35c17ef70d46f9c664b03b4437f2'),
    'b7': ('e964fd6e26e9a4c144bcb811f2a10f20',
           'd55674cc46b805f4382d18bc08ed43c1')
}

# Parameters of each Block
DEFAULT_BLOCKS_ARGS = [
    {'kernel_size': 3, 'repeats': 1, 'filters_in': 32, 'filters_out': 16,
     'expand_ratio': 1, 'id_skip': True, 'strides': 1, 'se_ratio': 0.25},
    {'kernel_size': 3, 'repeats': 2, 'filters_in': 16, 'filters_out': 24,
     'expand_ratio': 6, 'id_skip': True, 'strides': 2, 'se_ratio': 0.25},
    {'kernel_size': 5, 'repeats': 2, 'filters_in': 24, 'filters_out': 40,
     'expand_ratio': 6, 'id_skip': True, 'strides': 2, 'se_ratio': 0.25},
    {'kernel_size': 3, 'repeats': 3, 'filters_in': 40, 'filters_out': 80,
     'expand_ratio': 6, 'id_skip': True, 'strides': 2, 'se_ratio': 0.25},
    {'kernel_size': 5, 'repeats': 3, 'filters_in': 80, 'filters_out': 112,
     'expand_ratio': 6, 'id_skip': True, 'strides': 1, 'se_ratio': 0.25},
    {'kernel_size': 5, 'repeats': 4, 'filters_in': 112, 'filters_out': 192,
     'expand_ratio': 6, 'id_skip': True, 'strides': 2, 'se_ratio': 0.25},
    {'kernel_size': 3, 'repeats': 1, 'filters_in': 192, 'filters_out': 320,
     'expand_ratio': 6, 'id_skip': True, 'strides': 1, 'se_ratio': 0.25}
]

# The two kernel initializers
CONV_KERNEL_INITIALIZER = {
    'class_name': 'VarianceScaling',
    'config': {
        'scale': 2.0,
        'mode': 'fan_out',
        'distribution': 'normal'
    }
}
DENSE_KERNEL_INITIALIZER = {
    'class_name': 'VarianceScaling',
    'config': {
        'scale': 1. / 3.,
        'mode': 'fan_out',
        'distribution': 'uniform'
    }
}

def correct_pad(inputs, kernel_size):
    # Defined locally (following keras_applications) so the file does not
    # depend on a particular keras version; the original article imported it
    # from keras.applications. Returns the explicit zero-padding needed before
    # a strided convolution with 'valid' padding (channels_last layout).
    input_size = backend.int_shape(inputs)[1:3]
    if isinstance(kernel_size, int):
        kernel_size = (kernel_size, kernel_size)
    if input_size[0] is None:
        adjust = (1, 1)
    else:
        adjust = (1 - input_size[0] % 2, 1 - input_size[1] % 2)
    correct = (kernel_size[0] // 2, kernel_size[1] // 2)
    return ((correct[0] - adjust[0], correct[0]),
            (correct[1] - adjust[1], correct[1]))

def block(inputs, activation_fn=tf.nn.swish, drop_rate=0., name='',
          filters_in=32, filters_out=16, kernel_size=3, strides=1,
          expand_ratio=1, se_ratio=0., id_skip=True):
    bn_axis = 3
    # Number of channels after expansion
    filters = filters_in * expand_ratio

    # Inverted residuals
    # Part 1: 1x1 convolution to raise the channel count
    if expand_ratio != 1:
        x = layers.Conv2D(filters, 1,
                          padding='same',
                          use_bias=False,
                          kernel_initializer=CONV_KERNEL_INITIALIZER,
                          name=name + 'expand_conv')(inputs)
        x = layers.BatchNormalization(axis=bn_axis, name=name + 'expand_bn')(x)
        x = layers.Activation(activation_fn, name=name + 'expand_activation')(x)
    else:
        x = inputs

    # Explicit zero-padding when the stride is 2
    if strides == 2:
        x = layers.ZeroPadding2D(padding=correct_pad(x, kernel_size),
                                 name=name + 'dwconv_pad')(x)
        conv_pad = 'valid'
    else:
        conv_pad = 'same'

    # Part 2: depthwise convolution, one 3x3 (or 5x5) filter per channel
    x = layers.DepthwiseConv2D(kernel_size,
                               strides=strides,
                               padding=conv_pad,
                               use_bias=False,
                               depthwise_initializer=CONV_KERNEL_INITIALIZER,
                               name=name + 'dwconv')(x)
    x = layers.BatchNormalization(axis=bn_axis, name=name + 'bn')(x)
    x = layers.Activation(activation_fn, name=name + 'activation')(x)

    # Squeeze-and-excitation: squeeze then expand, used as per-channel weights
    if 0 < se_ratio <= 1:
        filters_se = max(1, int(filters_in * se_ratio))
        se = layers.GlobalAveragePooling2D(name=name + 'se_squeeze')(x)
        se = layers.Reshape((1, 1, filters), name=name + 'se_reshape')(se)
        se = layers.Conv2D(filters_se, 1,
                           padding='same',
                           activation=activation_fn,
                           kernel_initializer=CONV_KERNEL_INITIALIZER,
                           name=name + 'se_reduce')(se)
        se = layers.Conv2D(filters, 1,
                           padding='same',
                           activation='sigmoid',
                           kernel_initializer=CONV_KERNEL_INITIALIZER,
                           name=name + 'se_expand')(se)
        x = layers.multiply([x, se], name=name + 'se_excite')

    # Part 3: 1x1 convolution to compress the channels again
    x = layers.Conv2D(filters_out, 1,
                      padding='same',
                      use_bias=False,
                      kernel_initializer=CONV_KERNEL_INITIALIZER,
                      name=name + 'project_conv')(x)
    x = layers.BatchNormalization(axis=bn_axis, name=name + 'project_bn')(x)

    # Residual connection (only when the shapes match)
    if id_skip and strides == 1 and filters_in == filters_out:
        if drop_rate > 0:
            x = layers.Dropout(drop_rate,
                               noise_shape=(None, 1, 1, 1),
                               name=name + 'drop')(x)
        x = layers.add([x, inputs], name=name + 'add')
    return x

def EfficientNet(width_coefficient,
                 depth_coefficient,
                 default_size,
                 dropout_rate=0.2,
                 drop_connect_rate=0.2,
                 depth_divisor=8,
                 activation_fn=tf.nn.swish,
                 blocks_args=DEFAULT_BLOCKS_ARGS,
                 model_name='efficientnet',
                 weights='imagenet',
                 input_tensor=None,
                 input_shape=None,
                 pooling=None,
                 classes=1000,
                 **kwargs):
    # Default to the native resolution of this variant
    # (the original article hard-coded [416, 416, 3] here, which conflicts
    # with the 224x224 prediction example below)
    if input_shape is None:
        input_shape = [default_size, default_size, 3]
    img_input = layers.Input(tensor=input_tensor, shape=input_shape)
    bn_axis = 3

    # Make sure the number of filters is divisible by 8
    def round_filters(filters, divisor=depth_divisor):
        """Round the number of filters based on the width multiplier."""
        filters *= width_coefficient
        new_filters = max(divisor, int(filters + divisor / 2) // divisor * divisor)
        # Make sure that rounding down does not lose more than 10%.
        if new_filters < 0.9 * filters:
            new_filters += divisor
        return int(new_filters)

    # Number of repeats, rounded up
    def round_repeats(repeats):
        return int(math.ceil(depth_coefficient * repeats))

    # Build the stem
    x = img_input
    x = layers.ZeroPadding2D(padding=correct_pad(x, 3),
                             name='stem_conv_pad')(x)
    x = layers.Conv2D(round_filters(32), 3,
                      strides=2,
                      padding='valid',
                      use_bias=False,
                      kernel_initializer=CONV_KERNEL_INITIALIZER,
                      name='stem_conv')(x)
    x = layers.BatchNormalization(axis=bn_axis, name='stem_bn')(x)
    x = layers.Activation(activation_fn, name='stem_activation')(x)

    # Build the blocks
    # Copy the arguments so the defaults are not modified in place
    blocks_args = deepcopy(blocks_args)
    b = 0
    # Total number of blocks
    blocks = float(sum(args['repeats'] for args in blocks_args))
    for (i, args) in enumerate(blocks_args):
        assert args['repeats'] > 0
        args['filters_in'] = round_filters(args['filters_in'])
        args['filters_out'] = round_filters(args['filters_out'])
        for j in range(round_repeats(args.pop('repeats'))):
            if j > 0:
                args['strides'] = 1
                args['filters_in'] = args['filters_out']
            x = block(x, activation_fn, drop_connect_rate * b / blocks,
                      name='block{}{}_'.format(i + 1, chr(j + 97)), **args)
            b += 1

    # Final layers
    x = layers.Conv2D(round_filters(1280), 1,
                      padding='same',
                      use_bias=False,
                      kernel_initializer=CONV_KERNEL_INITIALIZER,
                      name='top_conv')(x)
    x = layers.BatchNormalization(axis=bn_axis, name='top_bn')(x)
    x = layers.Activation(activation_fn, name='top_activation')(x)
    # Use GlobalAveragePooling2D in place of a flattened fully connected layer
    x = layers.GlobalAveragePooling2D(name='avg_pool')(x)
    if dropout_rate > 0:
        x = layers.Dropout(dropout_rate, name='top_dropout')(x)
    x = layers.Dense(classes,
                     activation='softmax',
                     kernel_initializer=DENSE_KERNEL_INITIALIZER,
                     name='probs')(x)

    # Build the model
    inputs = img_input
    model = Model(inputs, x, name=model_name)

    # Load weights.
    if weights == 'imagenet':
        file_suff = '_weights_tf_dim_ordering_tf_kernels_autoaugment.h5'
        file_hash = WEIGHTS_HASHES[model_name[-2:]][0]
        file_name = model_name + file_suff
        weights_path = get_file(file_name, BASE_WEIGHTS_PATH + file_name,
                                cache_subdir='models',
                                file_hash=file_hash)
        model.load_weights(weights_path)
    elif weights is not None:
        model.load_weights(weights)
    return model

def EfficientNetB0(weights='imagenet',
                   input_tensor=None,
                   input_shape=None,
                   pooling=None,
                   classes=1000,
                   **kwargs):
    return EfficientNet(1.0, 1.0, 224, 0.2,
                        model_name='efficientnet-b0',
                        weights=weights,
                        input_tensor=input_tensor, input_shape=input_shape,
                        pooling=pooling, classes=classes,
                        **kwargs)

def EfficientNetB1(weights='imagenet',
                   input_tensor=None,
                   input_shape=None,
                   pooling=None,
                   classes=1000,
                   **kwargs):
    return EfficientNet(1.0, 1.1, 240, 0.2,
                        model_name='efficientnet-b1',
                        weights=weights,
                        input_tensor=input_tensor, input_shape=input_shape,
                        pooling=pooling, classes=classes,
                        **kwargs)

def EfficientNetB2(weights='imagenet',
                   input_tensor=None,
                   input_shape=None,
                   pooling=None,
                   classes=1000,
                   **kwargs):
    return EfficientNet(1.1, 1.2, 260, 0.3,
                        model_name='efficientnet-b2',
                        weights=weights,
                        input_tensor=input_tensor, input_shape=input_shape,
                        pooling=pooling, classes=classes,
                        **kwargs)

def EfficientNetB3(weights='imagenet',
                   input_tensor=None,
                   input_shape=None,
                   pooling=None,
                   classes=1000,
                   **kwargs):
    return EfficientNet(1.2, 1.4, 300, 0.3,
                        model_name='efficientnet-b3',
                        weights=weights,
                        input_tensor=input_tensor, input_shape=input_shape,
                        pooling=pooling, classes=classes,
                        **kwargs)

def EfficientNetB4(weights='imagenet',
                   input_tensor=None,
                   input_shape=None,
                   pooling=None,
                   classes=1000,
                   **kwargs):
    return EfficientNet(1.4, 1.8, 380, 0.4,
                        model_name='efficientnet-b4',
                        weights=weights,
                        input_tensor=input_tensor, input_shape=input_shape,
                        pooling=pooling, classes=classes,
                        **kwargs)

def EfficientNetB5(weights='imagenet',
                   input_tensor=None,
                   input_shape=None,
                   pooling=None,
                   classes=1000,
                   **kwargs):
    return EfficientNet(1.6, 2.2, 456, 0.4,
                        model_name='efficientnet-b5',
                        weights=weights,
                        input_tensor=input_tensor, input_shape=input_shape,
                        pooling=pooling, classes=classes,
                        **kwargs)

def EfficientNetB6(weights='imagenet',
                   input_tensor=None,
                   input_shape=None,
                   pooling=None,
                   classes=1000,
                   **kwargs):
    return EfficientNet(1.8, 2.6, 528, 0.5,
                        model_name='efficientnet-b6',
                        weights=weights,
                        input_tensor=input_tensor, input_shape=input_shape,
                        pooling=pooling, classes=classes,
                        **kwargs)

def EfficientNetB7(weights='imagenet',
                   input_tensor=None,
                   input_shape=None,
                   pooling=None,
                   classes=1000,
                   **kwargs):
    return EfficientNet(2.0, 3.1, 600, 0.5,
                        model_name='efficientnet-b7',
                        weights=weights,
                        input_tensor=input_tensor, input_shape=input_shape,
                        pooling=pooling, classes=classes,
                        **kwargs)
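A minimal sanity check of the code above (weights=None skips the download; the parameter count for B0 should roughly match the 5.3M quoted in the paper):

# Build EfficientNet-B0 without pretrained weights and inspect its size.
model = EfficientNetB0(weights=None)
print(model.count_params())  # about 5.3 million parameters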

Image prediction

Once the network has been built, the following code can be used to run a prediction. The preprocess_input function maps pixel values from the range [0, 255] to [-1, 1].

# Preprocess the image: scale pixel values from [0, 255] to [-1, 1]
def preprocess_input(x):
    x /= 255.
    x -= 0.5
    x *= 2.
    return x

if __name__ == '__main__':
    model = EfficientNetB0()
    model.summary()

    img_path = 'elephant.jpg'
    img = image.load_img(img_path, target_size=(224, 224))
    x = image.img_to_array(img)
    x = np.expand_dims(x, axis=0)
    x = preprocess_input(x)
    print('Input image shape:', x.shape)

    preds = model.predict(x)
    print(np.argmax(preds))
    print('Predicted:', decode_predictions(preds, 1))

The pretrained EfficientNet weights needed for prediction are downloaded automatically the first time the script runs; on Windows the downloaded file ends up in the C:\Users\Administrator\.keras\models folder (under the home directory of the current user).
EfficientNet models of different sizes can be obtained through the functions EfficientNetB0, EfficientNetB1, ..., EfficientNetB6 and EfficientNetB7.
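For example, B3 can be used the same way as B0 above; note that each variant has its own native resolution (300x300 for B3, as set by default_size in the code above):

# Hypothetical example: prediction with EfficientNet-B3 at 300x300.
model_b3 = EfficientNetB3()
img = image.load_img('elephant.jpg', target_size=(300, 300))
x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))
print('Predicted:', decode_predictions(model_b3.predict(x), 1))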

For reference, here are the error rates and sizes of the pretrained weights (Top-1 and Top-5 are single-crop ImageNet error rates in %; 10-5 is the Top-5 error rate with 10-crop evaluation; Size and Stem appear to be the parameter counts of the full model and of the model without the classification top):

Model            Top-1    Top-5    10-5     Size     Stem
EfficientNet-B0  22.810   6.508    5.858    5.3M     4.0M
EfficientNet-B1  20.866   5.552    5.050    7.9M     6.6M
EfficientNet-B2  19.820   5.054    4.538    9.2M     7.8M
EfficientNet-B3  18.422   4.324    3.902    12.3M    10.8M
EfficientNet-B4  17.040   3.740    3.344    19.5M    17.7M
EfficientNet-B5  16.298   3.290    3.114    30.6M    28.5M
EfficientNet-B6  15.918   3.102    2.916    43.3M    41.0M
EfficientNet-B7  15.570   3.160    2.906    66.7M    64.1M
