Content-based image retrieval (CBIR) is the task of searching for digital images in a large database based on extracted features such as the color, texture and shape of the image. Most research in CBIR has been carried out with complete queries that were present in the database. This paper investigates the utility of CBIR techniques for the retrieval of incomplete and distorted queries. Studies were made in two categories of query: complete and incomplete. A query image is considered distorted or incomplete if it has missing information, undesirable objects, blurring, or noise due to disturbance at the time of image acquisition. Color (the hue, saturation and value (HSV) color space model) and shape (moment invariants and the Fourier descriptor) features are used to represent the image. The algorithm was tested on a database consisting of 1875 images. The results show that the retrieval accuracy of incomplete queries is greatly improved by fusing color and shape features, giving a precision of 79.87%. MATLAB® 7.01 and its Image Processing Toolbox were used to implement the algorithm.
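The paper's MATLAB implementation is not reproduced here. As an illustrative sketch only (the function names `hsv_histogram` and `fuse`, the bin counts, and the weighted-concatenation fusion are our own assumptions, not the paper's), a quantized HSV colour histogram and a simple fusion of colour and shape feature vectors might look like:

```python
import colorsys

def hsv_histogram(pixels, h_bins=8, s_bins=3, v_bins=3):
    """Quantized HSV colour histogram, a common CBIR colour feature.

    `pixels` is an iterable of (r, g, b) tuples with components in [0, 1].
    Returns a normalised histogram of length h_bins * s_bins * v_bins.
    """
    hist = [0] * (h_bins * s_bins * v_bins)
    n = 0
    for r, g, b in pixels:
        h, s, v = colorsys.rgb_to_hsv(r, g, b)
        hi = min(int(h * h_bins), h_bins - 1)
        si = min(int(s * s_bins), s_bins - 1)
        vi = min(int(v * v_bins), v_bins - 1)
        hist[hi * s_bins * v_bins + si * v_bins + vi] += 1
        n += 1
    return [c / n for c in hist]

def fuse(color_feat, shape_feat, w=0.5):
    """Naive fusion: weighted concatenation of colour and shape features."""
    return [w * x for x in color_feat] + [(1 - w) * x for x in shape_feat]
```

A fused vector like this can then be compared with any standard distance measure during retrieval.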
Several problems remain, including retrieval of features based on location within an image, the extension of two-dimensional features to three dimensions, and appropriate segmentation of video images. Although research in higher-order CBIR is underway, current systems are not capable of retrieving all instances of horses based on the shape, color, or texture of a single instance of a horse in a query. For example, a shape-based query depicting a side view of a horse does not retrieve images of horses from behind or above. Research in object recognition conducted by Forsythe et al. (1997) has sought to develop techniques for modeling a class of objects and identifying the defining attributes and features for that class. Rorvig has examined the use of human judgments to train the system to recognize patterns of user-defined similarity for automatic identification of image classes. Chang et al. (1998) also utilized users' relevance judgments to refine searches and to assign semantic keywords to images that can be used by subsequent users to query the system.
performs well compared to other descriptors when images have a mostly uniform color distribution. In most image categories, color moments also show better performance . A CBIR system for iris matching uses color, texture and shape as feature descriptors . Another system, Sketch4Match, uses color as the feature descriptor for matching a sketch with the corresponding image . CBIR gives better results where the application contains more visual content than semantic content. For example, CBIR is suitable for queries in medical diagnosis involving comparison of an X-ray picture with previous cases. But suppose a query demands retrieval of images of “cancerous tissues/cells”, or “a man holding a shooting camera”; these are examples where text outweighs visual content and it is not clear what kind of image should be used. Here CBIR fails because visual features cannot fully represent semantic concepts.
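For reference, the first three colour moments alluded to above (mean, standard deviation and skewness) can be computed per colour channel. This is a generic sketch, not the cited system's code, and the function name is our own:

```python
import math

def color_moments(channel):
    """First three colour moments of one channel (a list of pixel values).

    Returns (mean, standard deviation, skewness), where skewness is the
    signed cube root of the third central moment, as commonly used in CBIR.
    """
    n = len(channel)
    mean = sum(channel) / n
    var = sum((v - mean) ** 2 for v in channel) / n
    std = math.sqrt(var)
    third = sum((v - mean) ** 3 for v in channel) / n
    skew = math.copysign(abs(third) ** (1 / 3), third)
    return mean, std, skew
```

The three moments per channel give a compact nine-dimensional colour descriptor for an RGB image.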
Other features used in recognizing plants are extracted from the veins of the leaf (venation). Nam et al.  used shape and venation for leaf image retrieval. They used the MPP (Minimum Perimeter Polygons) algorithm to represent the shape of the leaf, and the type of venation as the venation feature. Li et al.  used ICA (Independent Component Analysis) to extract leaf veins. Another approach was taken by Wu et al. , who performed morphological opening on the grayscale image with flat, disk-shaped structuring elements of radius 1, 2, 3, and 4. Then, by subtracting the remaining image from the margin, a vein-like image is obtained. This operation is quite simple and fast.
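One plausible reading of Wu et al.'s vein-enhancement step is a grayscale top-hat: open the image with a disk structuring element and subtract the opened image from the original, so that thin bright structures such as veins survive the subtraction. The following pure-Python sketch illustrates the idea for a single radius; the function names are hypothetical, and a real implementation would use an image-processing library:

```python
def disk_offsets(r):
    """Pixel offsets of a flat, disk-shaped structuring element of radius r."""
    return [(dy, dx) for dy in range(-r, r + 1) for dx in range(-r, r + 1)
            if dy * dy + dx * dx <= r * r]

def morph(img, offs, op):
    """Apply op (min = erosion, max = dilation) over the neighbourhood."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [img[y + dy][x + dx] for dy, dx in offs
                    if 0 <= y + dy < h and 0 <= x + dx < w]
            out[y][x] = op(vals)
    return out

def vein_response(gray, radius):
    """Grayscale top-hat: original minus opening (erosion then dilation).

    Structures thinner than the disk are removed by the opening, so they
    stand out in the difference image, which resembles the leaf veins.
    """
    offs = disk_offsets(radius)
    opened = morph(morph(gray, offs, min), offs, max)
    return [[g - o for g, o in zip(grow, orow)]
            for grow, orow in zip(gray, opened)]
```

Repeating this for radii 1 to 4 and combining the responses mirrors the multi-radius scheme described in the text.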
Image retrieval systems attempt to search through a database to find images that are perceptually similar to a query image. CBIR is an important alternative and complement to traditional text-based image searching and can greatly enhance the accuracy of the information returned. It aims to develop an efficient visual-content-based technique to search, browse and retrieve relevant images from large-scale digital image collections. Most CBIR techniques automatically extract low-level features (e.g. color, texture, shapes and layout of objects) and measure the similarity among images by comparing the feature differences.
A content-based image retrieval system using the colour and texture features of selected sub-blocks and the global colour and shape features of the image is proposed. The colour features are extracted from histograms of the quantized HSV color space, the texture features from the GLCM, and the shape features from the EHD. A modified IRM algorithm is used for computing the minimum distance between the selected sub-blocks of the query image and the candidate images in the database. Unlike most sub-block-based methods, which compare all the sub-blocks of the query image with those of the candidate images, our system involves only selected sub-blocks in the similarity measurement, thus reducing the number of comparisons and the computational cost. Experimental results also show that the proposed method provides better retrieval results than some existing methods. Future work aims at selecting sub-blocks based on their saliency in the image to improve retrieval precision. The proposed method also has to be evaluated on various databases to test its robustness.
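A heavily simplified stand-in for the sub-block matching described above: each selected query sub-block is matched to its nearest candidate sub-block, and the mean of these minimum distances is used as the image-to-image score. Block selection and the actual modified-IRM weighting are omitted, and the function names are our own:

```python
def block_distance(fa, fb):
    """Euclidean distance between two sub-block feature vectors."""
    return sum((a - b) ** 2 for a, b in zip(fa, fb)) ** 0.5

def min_block_match(query_blocks, candidate_blocks):
    """Mean over query sub-blocks of the distance to the nearest
    candidate sub-block; only the selected query blocks participate,
    which is what reduces the number of comparisons."""
    return sum(min(block_distance(q, c) for c in candidate_blocks)
               for q in query_blocks) / len(query_blocks)
```

Ranking the database by this score gives a coarse approximation of the region-matching stage.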
In general, current research primarily focuses on image retrieval based on low-level features (color, texture, shape, etc.) (Wang M et al., 2013). Color features tend to be closely associated with the objects or scenes in images and depend less on image size, direction and viewpoint. Texture features can express the abundant visual information of images well and are appreciated in the machine vision field. Thus color and texture features are widely used in the CBIR community because of these characteristics. In addition, retrieval methods using a combination of color and texture features are drawing increasing attention. Nevertheless, because of the semantic gap between low-level features and high-level semantics, the aforementioned strategies tend to obtain unsatisfactory results, particularly for remote sensing images with multiple objects.
Texture is the property of surfaces that describes visual patterns: a visual pattern in which a large number of visible elements are densely and evenly arranged, where a texture element is a uniform-intensity region of simple shape that is repeated. The co-occurrence matrix method is used for texture-based feature extraction. Texture represented by pixels gives the relative brightness of consecutive pixels and captures the degree of contrast, regularity, coarseness and directionality, which classifies textures as 'smooth', 'rough', etc. We divide the image into 5 × 5 blocks and compute texture features using Gabor-wavelet filters in each block. The merit of texture-based features is that they can be effectively applied to applications in which texture information is salient in videos. However, these features are unavailable in non-texture video images. Depending on the texture of the foreground and background regions in a given video, we obtain trajectories from both parts of the input video and from the database videos.
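As a minimal illustration of the co-occurrence-matrix idea (not the Gabor-wavelet features actually computed per block), the following computes a normalized GLCM for horizontally adjacent pixels and the contrast statistic that helps separate 'smooth' from 'rough' textures; the function names are illustrative:

```python
def glcm(gray, levels=4):
    """Grey-level co-occurrence matrix for horizontally adjacent pixels.

    `gray` is a 2-D list of integer grey levels in [0, levels).
    Returns the matrix normalised so its entries sum to 1.
    """
    m = [[0] * levels for _ in range(levels)]
    for row in gray:
        for a, b in zip(row, row[1:]):
            m[a][b] += 1
    total = sum(sum(r) for r in m) or 1
    return [[v / total for v in r] for r in m]

def contrast(m):
    """GLCM contrast: weights co-occurrences by squared grey-level
    difference, so it is 0 for perfectly smooth regions and large for
    rough ones with frequent intensity jumps."""
    return sum(((i - j) ** 2) * m[i][j]
               for i in range(len(m)) for j in range(len(m)))
```

Other standard GLCM statistics (energy, homogeneity, correlation) follow the same pattern over the matrix entries.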
images and their signatures are stored in order to ease the search procedure (indexing step). Finally, in the third step, an algorithm performs a search in the database in order to return the images most similar to the input (matching and retrieval step). A relevance feedback approach is commonly applied by visual inspection of the resulting images [16,18]. The feature extraction step of a CBIR system is especially critical and, as described in , its methods can be divided into two main classes: (1) global-feature-based methods, such as those proposed in [4,10,8,6], to cite just a few, and (2) local-feature-based methods, for example the ones proposed in [11,1,8,2]. According to , a major shift has been observed from global feature representations, such as color histograms and global shape descriptors, to local features, such as salient points, region-based features and spatial model features. This shift is related to the fact that the image domain is too deep for global features to reduce the semantic gap. Local features, in turn, often correspond to meaningful image components, such as rigid objects, making the association of semantics with image portions more straightforward. In  and , two well-known local feature detectors are described: the Scale-Invariant Feature Transform (SIFT) and Speeded Up Robust Features (SURF), respectively. The SURF detector is partly inspired by the SIFT descriptor, while being several times faster than SIFT. Both works have contributed significantly to the development of novel local features that are robust to scale, rotation and illumination variations.
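Once local descriptors such as SIFT or SURF have been extracted, matching is commonly done with Lowe's ratio test: a query descriptor is accepted only if its nearest database descriptor is clearly closer than the second nearest. A brute-force sketch (real systems use approximate nearest-neighbour indexes, and descriptors here are plain lists of floats):

```python
def ratio_test_match(desc_q, desc_db, ratio=0.75):
    """Match query descriptors to database descriptors with the ratio test.

    Returns (query_index, db_index) pairs that pass the test. Distances
    are compared squared, so the ratio threshold is squared as well.
    """
    matches = []
    for i, q in enumerate(desc_q):
        dists = sorted((sum((a - b) ** 2 for a, b in zip(q, d)), j)
                       for j, d in enumerate(desc_db))
        if len(dists) >= 2 and dists[0][0] < (ratio ** 2) * dists[1][0]:
            matches.append((i, dists[0][1]))
    return matches
```

The test discards ambiguous matches, which is why it works well for the scale- and rotation-robust descriptors discussed above.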
A typical content-based image retrieval system can be described by Figure 1, where the visual contents of the images in the database are described and extracted as multi-dimensional feature vectors. These feature vectors form a feature database. To retrieve images, users provide the retrieval system with a query image, for which feature vectors are then built. The distances between the feature vectors of the query image and those of the database images are computed for similarity. In this paper we use color, shape and texture features, and the retrieval experiments show that multi-feature retrieval gives better visual results than single-feature retrieval, i.e. better retrieval results. The results of our experiments are then compared with those of another approach. This comparison clearly shows that our approach performs better than the approach used by Rao et al. in their research.
Darshana Mistry  has proposed that in content-based image retrieval, images are retrieved based on color, texture and shape (low-level perception). There is a gap between user semantics (high-level perception) and low-level perception. Relevance feedback (RF) learns the association between high-level semantics and low-level features. Bayesian methods, nearest-neighbour search, log-based RF and the Support Vector Machine (SVM) are methods of relevance feedback. The Bayesian method is easy to understand but does not support fast access. Among the surveyed relevance feedback methods, SVM is the best because of structural risk minimization. Using relevance feedback with SVM, the results reflect user perception more efficiently. SVM classification could be even better if the feature vector used were more relevant to the images.
Image retrieval is one of the newer and most active problems in the image processing area. The main aim of image retrieval is retrieving the images most similar to the query image from a huge database. To achieve this aim, researchers try to define the most effective and meaningful features with which to compare images. Various methods have been proposed for image retrieval. In  a method based on energy compaction is given. In a recent work, Ershad  described a method based on primitive pattern units. In  some features based on texture analysis are given. Vasconcelos  has proposed a method based on shape features. In a large set of the methods presented so far, color histogram analysis of the image is used to define features. One approach that has produced desirable results is color-based image retrieval (CBIR) . In this method, to compare the similarity of two images, the histograms of the two images are first extracted individually in each color channel (red, green and blue) and then compared by a criterion such as the Euclidean distance . Equation (1) expresses this comparison.
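In the spirit of Equation (1), the per-channel histogram extraction and Euclidean comparison can be sketched as follows (the function names and the bin count are illustrative assumptions, not taken from the cited works):

```python
def rgb_histograms(pixels, bins=8):
    """Per-channel normalised histograms of an image.

    `pixels` is a list of (r, g, b) tuples with integer values in [0, 255].
    Returns three histograms, one per colour channel.
    """
    hists = [[0] * bins for _ in range(3)]
    for px in pixels:
        for c, v in enumerate(px):
            hists[c][min(v * bins // 256, bins - 1)] += 1
    n = len(pixels)
    return [[h / n for h in ch] for ch in hists]

def hist_distance(h1, h2):
    """Euclidean distance over the concatenated channel histograms,
    mirroring the Equation (1)-style comparison in the text."""
    return sum((a - b) ** 2
               for ch1, ch2 in zip(h1, h2)
               for a, b in zip(ch1, ch2)) ** 0.5
```

Retrieval then amounts to ranking database images by this distance to the query histogram.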
The need to find a desired image in a collection is shared by many professional groups, including journalists, design engineers and art historians. While the requirements of image users can vary considerably, it can be useful to characterize image queries into three levels of abstraction: primitive features such as color or texture, logical features such as the identity of the objects shown, and abstract attributes such as the significance of the scenes depicted. While CBIR systems currently operate effectively only at the lowest of these levels, most users demand higher levels of retrieval. Because global descriptor values are used to extract images, the retrieval process is much faster than the conventional approach to image retrieval.
This study aims to develop classification techniques for benthic habitat mapping, which may interest the remote sensing and marine research communities. Benthic habitat mapping is an important tool for studying trends in local landscape changes, anthropogenic disturbances of benthic organisms, and climate change. Various uses of an area can be efficiently planned with prior knowledge of certain habitats and their changing tendencies, especially since coastal areas are very dynamic with regard to their locations. The Directory of Remote Sensing Applications for Coral Reef Management (2010) sets out certain requirements for data archiving and for the imagery used to create benthic habitat maps.
To investigate this, in a sixth experiment we asked caregivers to teach their 17- to 22-month-old children the names of two objects novel to the children. Parents were not told the experimental hypothesis, nor was the use of space mentioned in any way. An experimenter later tested the children to determine whether they had learned the object names. Videotapes of parent-child interactions were coded to determine the spatial consistency of the objects while on the table, in the parent's hands, or in the child's hands, and to record all naming events. Overall, parents spontaneously maintained a consistent spatial arrangement of the two objects during the social interaction, consistently holding the objects in different hands .75 of the time (i.e., the binoculars in the left hand, the spring in the right hand), and maintaining spatial consistency .65 of the time even when only one object was being presented (e.g., holding and presenting the spring always with the right hand). Interestingly, parents differed in the degree to which they maintained this spatial consistency: the proportion of time during which the two objects were in two different hands ranged from .32–1.0 and the spatial consistency ranged from .43–.86 (see Fig. 4 for data from two caregiver-child pairs). Critically, this mattered for children's learning of the object names: the more consistently a parent kept the spatial location of the objects, the better her child did on the later comprehension task, r = .69, p < .001, two-tailed (see Fig. 5). Children's learning was not correlated with age (r = .23, ns), total number of times parents
Because the FCM algorithm is very sensitive to the number of cluster centers, manual initialization of the cluster centers often produces significant errors, and can even yield results opposite to the actual ones. The FCM algorithm also struggles with irregular data sets, so the data sets must be quite regular. To address these problems, we first use information entropy to initialize the cluster centers and determine their number, which reduces some of the errors. We also improve the algorithm by introducing weighting parameters and then combine it with a merging strategy: large clumps are divided into small clusters, and the various small clusters are merged according to the merging conditions, so that irregular data sets can be clustered. The document similarity measure is shown below.
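The document similarity measure itself is not reproduced in this excerpt. A standard choice for document clustering is cosine similarity between term-frequency vectors, sketched here under that assumption (the representation as plain dicts is ours):

```python
import math

def cosine_similarity(d1, d2):
    """Cosine similarity between two term-frequency vectors given as
    {term: count} dicts: the dot product divided by the product of the
    vector norms, so identical documents score 1 and documents with no
    shared terms score 0."""
    dot = sum(d1[t] * d2.get(t, 0) for t in d1)
    n1 = math.sqrt(sum(v * v for v in d1.values()))
    n2 = math.sqrt(sum(v * v for v in d2.values()))
    return dot / (n1 * n2) if n1 and n2 else 0.0
```

A measure of this form can drive both the cluster-merging conditions and the final cluster assignments.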
Recently, with the greater demand for the digital world, the security of digital images has become more and more important, since communication of digital products over open networks occurs more and more frequently [2, 3, 6]. Surveys of existing work on image encryption have also given general guidelines about cryptography and concluded that all the techniques are useful for real-time image encryption. The techniques described in those studies can provide security functions and an overall visual check, which might be suitable in some applications, ensuring that no one can access an image while it is being transferred over an open network.
Abstract—Visual information has been used extensively in multimedia, medical imaging and numerous other applications. Managing this visual information is challenging because the quantity of available data is very large. Current utilization of available medical databases is limited by difficulties in retrieving the necessary images from a huge repository. In this paper we propose a novel image matching scheme using the Discrete Sine Transform and decision tree classification techniques for classifying a given input medical image. Our system classifies images based on their similarity.
Quellec et al.  proposed a CBIR method for diagnosis in medical fields. In this method, images are indexed generically, without extracting domain-specific features: a signature is built for each image from its wavelet transform. These signatures characterize the wavelet coefficient distribution in each decomposition subband. A distance measure compares two image signatures and retrieves the images most similar to a query image submitted by a physician. To retrieve relevant images from a medical database, the signatures and distance measure should be related to medical image interpretation. Consequently, the system requires enough freedom to be tuned to any pathology or imaging modality. The proposed scheme uses a custom decomposition scheme to adapt the wavelet basis within the lifting scheme framework, with weights introduced between subbands. All parameters are tuned by an optimization procedure, using the medical grading of database images to define performance measures. The system is assessed on two medical image databases, one for diabetic retinopathy follow-up and another for mammography screening, as well as a general-purpose database. The results are promising: mean precisions of 56.50%, 70.91% and 96.10% are achieved for the three databases when the system returns five images.
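The role of the inter-subband weights can be pictured as a weighted distance between per-subband signature values. This toy sketch (the names and the absolute-difference form are our assumptions, not Quellec et al.'s actual measure) shows where the tuned weights enter the comparison:

```python
def weighted_signature_distance(sig_q, sig_db, weights):
    """Distance between two wavelet signatures given as one value per
    decomposition subband; each subband's contribution is scaled by its
    tuned weight, so the optimization procedure can emphasise the
    subbands most relevant to a given pathology."""
    return sum(w * abs(a - b) for w, a, b in zip(weights, sig_q, sig_db))
```

Tuning `weights` per database is what lets one generic indexing scheme adapt to retinopathy, mammography, or general-purpose images.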
A comprehensive musical database of all tones that can be played by the Nay and Shabbaba was initially prepared and used in the following analysis. This database consists of 37 Nay tones and 14 Shabbaba tones. Each of these reference tones is 5 s long and is carefully played and recorded to ensure maximum quality. Each tone is then segmented into 100 samples, each 50 ms long. Cross-correlation and Fourier analysis are then applied to the obtained samples. The results indicate that Fourier analysis is more accurate and simpler than cross-correlation for pitch detection. These results also agree with findings previously reported in references , . It was mentioned that cross-correlation in AMT is usually used for the separation process, i.e. separating the sounds of different instruments, whereas Fourier analysis is more accurate for segmentation of a single instrument's sound, especially for the flute.
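A naive Fourier-based pitch estimate over one segment illustrates why this analysis suits single-instrument segmentation: take the frequency bin with the largest magnitude. This pure-Python sketch is an illustration, not the authors' code; a real system would use an FFT and interpolate between bins:

```python
import cmath

def dominant_frequency(samples, sample_rate):
    """Return the frequency (Hz) of the DFT bin with the largest
    magnitude, DC excluded. For a clean flute tone this bin tracks the
    fundamental pitch of the segment."""
    n = len(samples)
    best_k, best_mag = 1, 0.0
    for k in range(1, n // 2):
        s = sum(samples[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                for t in range(n))
        if abs(s) > best_mag:
            best_k, best_mag = k, abs(s)
    return best_k * sample_rate / n
```

Applying this to each 50 ms sample of a tone yields a per-segment pitch track for the database comparison.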