Human Protein Atlas Image

Human Protein Atlas Image:

This dataset contains images of proteins in the human body, available from the Human Protein Atlas Image Classification Competition on Kaggle or from the Human Protein Atlas page https://www.proteinatlas.org/cell. The dataset may be used for the Kaggle competition, and for research, education, and other non-commercial purposes. Please refer to the competition rules on Kaggle for more information about the Terms of Use and the rules regarding the dataset: https://www.kaggle.com/c/human-protein-atlas-image-classification/rules.

Here is some information regarding this dataset:

  • Number of classes: 28 categories, encoded as integers 0 to 27, each referring to a human protein.

  • Separate data files are available for training and testing in three resolutions: 512×512 PNG, 2048×2048 TIFF, and 3072×3072 TIFF
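For orientation only, here is a minimal Python sketch that turns per-image class labels into 28-dimensional binary target vectors. It assumes a Kaggle-style train.csv with Id and Target columns, where Target holds one or more space-separated class integers; the column names and file layout are assumptions to verify against the files you actually download.

    import csv
    import numpy as np

    NUM_CLASSES = 28  # classes are the integers 0..27

    def encode_targets(csv_path="train.csv"):
        """Map each image id to a 28-dimensional binary label vector.

        Assumes a Kaggle-style CSV with 'Id' and 'Target' columns, where
        'Target' contains one or more space-separated class integers.
        """
        ids, targets = [], []
        with open(csv_path, newline="") as f:
            for row in csv.DictReader(f):
                vec = np.zeros(NUM_CLASSES, dtype=np.float32)
                for label in row["Target"].split():
                    vec[int(label)] = 1.0
                ids.append(row["Id"])
                targets.append(vec)
        return ids, np.stack(targets)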

If you use this dataset:

Make sure to use the dataset for non-commercial purposes only.

keywords: Vision, Image, Biology and Health, Classification, Protein, Cell, Object Detection

COIL-100

COIL-100:

This dataset contains color images of objects captured every 5 degrees over a full 360-degree rotation. The dataset was collected by the Center for Research on Intelligent Systems at the Department of Computer Science, Columbia University. This dataset was used in a real-time object recognition study.

Here is some information regarding this dataset:

  • Number of images in the dataset: 7200 images

  • Number of classes: 100 object categories each with 72 poses

  • Image resolution: 128×128

More information can be found in the technical report cited below, or on the Kaggle page https://www.kaggle.com/jessicali9530/coil100/home.

The main page for the dataset can be found on http://www1.cs.columbia.edu/CAVE/software/softlib/coil-100.php.
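As a small illustration, the snippet below recovers the object id and viewing angle encoded in a COIL-100 filename; the obj<ID>__<angle> naming convention is an assumption based on common distributions of the dataset, so verify it against the archive you download.

    import re

    # Assumed filename convention, e.g. "obj42__85.png" = object 42 viewed at 85 degrees.
    COIL_NAME = re.compile(r"obj(?P<obj>\d+)__(?P<angle>\d+)\.(?:png|ppm)$")

    def parse_coil_filename(name):
        """Return (object_id, angle_in_degrees) for a COIL-100 image filename."""
        match = COIL_NAME.search(name)
        if match is None:
            raise ValueError(f"unexpected COIL-100 filename: {name}")
        return int(match.group("obj")), int(match.group("angle"))

    # 100 objects x 72 poses (one image every 5 degrees) = 7200 images in total.
    print(parse_coil_filename("obj1__0.png"))     # (1, 0)
    print(parse_coil_filename("obj100__355.png")) # (100, 355)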

If you use this dataset:

Please make sure to use the dataset for non-commercial research purposes only (Terms of Use).

Please refer to the technical report below and cite:

S. A. Nene, S. K. Nayar and H. Murase, Columbia Object Image Library (COIL-100), Technical Report CUCS-006-96, February 1996.

keywords: Vision, Image, Classification, Object Detection, Rotation

LFW: Labeled Faces in the Wild

LFW: Labeled Faces in the Wild:

This dataset contains labeled face images collected from the web, with the names of the people in the images as the labels. Some of these people appear in two or more images in the dataset. This dataset is designed for studying the problem of unconstrained face recognition and face verification. The original LFW dataset is available for download along with 3 sets of aligned images (funneled images, LFW-a, deep funneled).

Here is some information regarding this dataset:

  • Number of images in the dataset: 13,000 images (10-fold cross-validation is recommended; training and test splits can be downloaded from the dataset page)

  • Number of identities: 5749

  • Image resolution: 250×250

More details and links for download can be found on the dataset page http://vis-www.cs.umass.edu/lfw/.
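For quick experiments, scikit-learn ships downloaders for LFW (it fetches the funneled images by default); a minimal sketch:

    from sklearn.datasets import fetch_lfw_people, fetch_lfw_pairs

    # Identification-style data: one sample per image, the person's name as the label.
    people = fetch_lfw_people(min_faces_per_person=20, resize=0.5)
    print(people.images.shape)       # (n_samples, height, width), grayscale by default
    print(len(people.target_names))  # number of identities with at least 20 images

    # Verification-style data: image pairs labeled same person / different person.
    pairs = fetch_lfw_pairs(subset="train")
    print(pairs.pairs.shape)         # (n_pairs, 2, height, width)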

If you use any of these versions of the LFW image dataset:

Please make sure to cite the paper:

G. B. Huang, M. Ramesh, T. Berg, E. Learned-Miller, Labeled Faces in the Wild: A Database for Studying Face Recognition in Unconstrained Environments. University of Massachusetts, Amherst, Technical Report 07-49, October 2007.

If you use the LFW images aligned by deep funneling:

Please make sure to cite the paper:

G. B. Huang, M. Mattar, H. Lee, E. Learned-Miller, Learning to Align from Scratch. Advances in Neural Information Processing Systems (NIPS), 2012.

If you use the LFW images aligned by funneling:

Please make sure to cite the paper:

G. B. Huang, V. Jain, E. Learned-Miller, Unsupervised Joint Alignment of Complex Images. International Conference on Computer Vision (ICCV), 2007.

keywords: Vision, Image, Face, Object Detection, Segmentation, In the Wild

COCO

COCO:

This dataset contains images suitable for object detection and segmentation. It provides 5 annotation types, for Object Detection, Keypoint Detection, Stuff Segmentation, Panoptic Segmentation, and Image Captioning, all explained in detail in the data format section of the dataset page (http://cocodataset.org/#format-data).

Here is some information regarding the latest version of this dataset:

  • Number of images in the dataset: 330,000 images, of which more than 200,000 are labeled (split roughly in half between training and validation+test)

  • Number of classes: 80 object categories, 91 stuff categories

  • Image resolution: 640×480

More details and links for download can be found on the dataset and challenge pages http://cocodataset.org/#home and http://cocodataset.org/#overview.
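The annotations are JSON files that are typically read through the official pycocotools API; below is a minimal sketch (the annotation file name assumes the standard 2017 release layout and may differ for other versions):

    from pycocotools.coco import COCO

    # Load the instance annotations (object detection / segmentation) for one split.
    coco = COCO("annotations/instances_val2017.json")

    # Look up a category, then the images and annotations that contain it.
    cat_ids = coco.getCatIds(catNms=["person"])
    img_ids = coco.getImgIds(catIds=cat_ids)
    img_info = coco.loadImgs(img_ids[0])[0]  # dict with file_name, width, height, ...
    ann_ids = coco.getAnnIds(imgIds=img_info["id"], catIds=cat_ids, iscrowd=None)
    anns = coco.loadAnns(ann_ids)            # each has bbox, segmentation, category_id
    print(img_info["file_name"], len(anns), "person annotations")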

If you use this dataset:

Please make sure to read Terms of Use available on http://cocodataset.org/#termsofuse.

Please make sure to cite the paper:

T. Lin, M. Maire, S. Belongie, L. Bourdev, R. Girshick, J. Hays, P. Perona, D. Ramanan, C. Zitnick, and P. Dollár, Microsoft COCO: Common Objects in Context. European Conference on Computer Vision (ECCV), 2014.

keywords: Vision, Image, Object Detection, Segmentation

SUN

SUN:

This dataset contains thousands of color images for scene recognition, provided by Princeton University. The images cover environmental scenes, places, and objects. To create the dataset, the WordNet English dictionary was used to find nouns that complete the sentences “I am in a (place)” or “Let’s go to the (place)”, and the samples were then manually categorized. The number of images per category varies, with a minimum of 100 images per category for the SUN397 version.

Different versions of the dataset are available. Here is some information about the SUN397 version:

  • Number of images in the dataset: 16,873

  • Number of classes: 397 (Abbey, Access_road, etc.)

Here is some information regarding the latest version of this dataset:

  • Number of images in the dataset: 131,067

  • Number of classes: 908 scene categories and 3819 object categories

More details and download links are available on the dataset pages https://vision.princeton.edu/projects/2010/SUN/ and https://groups.csail.mit.edu/vision/SUN/. Recommended training and testing splits are also available on these pages.
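For convenience, recent torchvision releases include a ready-made SUN397 loader (added around torchvision 0.12; the download flag and class attributes below are assumptions worth double-checking against your installed version):

    import torchvision
    from torchvision import transforms

    # Load the SUN397 benchmark; each sample is (image tensor, scene class index).
    dataset = torchvision.datasets.SUN397(
        root="data/sun397",
        transform=transforms.ToTensor(),
        download=True,  # fetches the full archive (tens of GB); omit if already on disk
    )
    print(len(dataset), "images in", len(dataset.classes), "scene categories")
    image, label = dataset[0]
    print(image.shape, dataset.classes[label])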

If you use this dataset, make sure to cite these two papers:

J. Xiao, J. Hays, K. Ehinger, A. Oliva, and A. Torralba, SUN Database: Large-scale Scene Recognition from Abbey to Zoo. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2010.

J. Xiao, K. A. Ehinger, J. Hays, A. Torralba, and A. Oliva, SUN Database: Exploring a Large Collection of Scene Categories. International Journal of Computer Vision (IJCV), 2014.

keywords: Vision, Image, Classification, Scene, Object Detection

LSUN

LSUN:

This dataset contains millions of color images of scenes and objects, far more than the ImageNet dataset. The labels were produced by human annotators working in conjunction with several different image classification models. The images build on the parent databases PASCAL VOC 2012 and 10 Million Images for 10 Scene Categories.

Here is some information regarding the LSUN dataset:

  • Number of images in the dataset: More than 59 million and still growing

  • Number of classes: 10 scene categories and 20 object categories

  1. Scene categories (bedroom, bridge, church_outdoor, classroom, conference_room, dining_room, kitchen, living_room, restaurant, tower)

  2. Object categories (airplane, bicycle, bird, boat, bottle, bus, car, cat, chair, cow, dining_table, dog, horse, motorbike, person, potted_plant, sheep, sofa, train, tv-monitor)

The dataset can be downloaded either from GitHub https://github.com/fyu/lsun or via the category lists on http://tigress-web.princeton.edu/~fy/lsun/public/release/. More details are available on the dataset page http://www.yf.io/p/lsun.
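The per-category archives are LMDB databases, and torchvision provides a reader for them; the sketch below assumes the bedroom_train archive has already been downloaded and unpacked under data/lsun, and that the lmdb Python package is installed:

    import torchvision
    from torchvision import transforms

    # Each LSUN category/split is a separate LMDB database, e.g. bedroom_train_lmdb.
    dataset = torchvision.datasets.LSUN(
        root="data/lsun",
        classes=["bedroom_train"],
        transform=transforms.Compose([
            transforms.Resize(256),
            transforms.CenterCrop(256),
            transforms.ToTensor(),
        ]),
    )
    image, label = dataset[0]
    print(len(dataset), tuple(image.shape), label)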

If you use this dataset, make sure to cite the paper:

Fisher Yu, Ari Seff, Yinda Zhang, Shuran Song, Thomas Funkhouser and Jianxiong Xiao, LSUN: Construction of a Large-scale Image Dataset using Deep Learning with Humans in the Loop. CoRR abs/1506.03365, 2015.

keywords: Vision, Image, Classification, Scene, Object Detection