
Table 13 Performance rates in which FractalDB was better than the ImageNet pre-trained model on C10/C100/IN100/P30 fine-tuning

From: Pre-Training Without Natural Images

Dataset  Category (rates, %)
C10
C100     bee (89 vs 87), chair (92 vs 89), keyboard (95 vs 93), maple tree (72 vs 71), motorcycle (99 vs 95), orchid (92 vs 90), pine tree (70 vs 69)
IN100    Kerry blue terrier (88 vs 87), marmot (92 vs 90), giant panda (92 vs 91), television (80 vs 79), dough (64 vs 62), valley (94 vs 93)
P30      cliff (64 vs 62), mountain (40 vs 27), skyscraper (85 vs 84), tundra (79 vs 77)
  1. Rates are listed as FractalDB-1k pre-trained accuracy versus ImageNet pre-trained accuracy (%)
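The per-category margins in the table can be tabulated programmatically. The sketch below simply transcribes the rates reported above into a dictionary and ranks the improvements; the structure and names are illustrative and not taken from the authors' code:

```python
# Accuracy (%) per category where FractalDB-1k pre-training beat
# ImageNet pre-training, transcribed from Table 13 as (fractal, imagenet).
rates = {
    "C100": {
        "bee": (89, 87), "chair": (92, 89), "keyboard": (95, 93),
        "maple tree": (72, 71), "motorcycle": (99, 95),
        "orchid": (92, 90), "pine tree": (70, 69),
    },
    "IN100": {
        "Kerry blue terrier": (88, 87), "marmot": (92, 90),
        "giant panda": (92, 91), "television": (80, 79),
        "dough": (64, 62), "valley": (94, 93),
    },
    "P30": {
        "cliff": (64, 62), "mountain": (40, 27),
        "skyscraper": (85, 84), "tundra": (79, 77),
    },
}

# Margin (percentage points) by which FractalDB-1k outperformed ImageNet.
margins = {
    (dataset, category): fractal - imagenet
    for dataset, categories in rates.items()
    for category, (fractal, imagenet) in categories.items()
}

best = max(margins, key=margins.get)
print(best, margins[best])  # → ('P30', 'mountain') 13
```

Ranking the margins this way makes it easy to see that the gains are mostly 1-4 percentage points, with the P30 "mountain" category as the clear outlier at +13.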