θ(l1, l2) computes the smallest intersecting angle between lines l1 and l2. f(θ(l1, l2)) is a nonlinear penalty function that maps an angle to a scalar. It is necessary to ignore small angle variations but penalize heavily on large deviations. The quadratic function f(θ) = θ²/W is used, where W is a weight to be determined by a training process. The designs of the parallel displacement d∥ and the perpendicular displacement d⊥ can be illustrated with a simplified example of two parallel lines, l1 and l2. d∥(l1, l2) is defined as the minimum displacement to align either the left end points or the right end points of the two lines. d⊥ is simply the vertical distance between the two lines. Normally, l1 and l2 would not be parallel, but one can rotate the shorter line about its midpoint to the desired orientation before computing d∥(l1, l2) and d⊥.
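As a minimal sketch of these components (the segment representation and the weight value W below are illustrative assumptions, since the original notation was garbled in extraction), the angle penalty and the two displacements for the simplified case of two horizontal parallel segments could look like:

```python
def angle_penalty(theta_deg, W=20.0):
    """Quadratic penalty f(theta) = theta**2 / W: tolerates small angle
    deviations but grows quickly for large ones (theta in degrees)."""
    return theta_deg ** 2 / W

def parallel_perpendicular_displacement(l1, l2):
    """l1, l2: ((x_left, x_right), y) for two horizontal parallel segments.
    d_parallel is the minimum shift aligning either the left or the right
    end points; d_perpendicular is the vertical distance between the lines."""
    (x1l, x1r), y1 = l1
    (x2l, x2r), y2 = l2
    d_par = min(abs(x1l - x2l), abs(x1r - x2r))
    d_perp = abs(y1 - y2)
    return d_par, d_perp
```

For non-parallel segments, the shorter one would first be rotated about its midpoint, as described above, before these two quantities are computed.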
Overall, our results suggest that infants are capable of recognizing the invariant aspects of a face, thus creating a robust face representation even in the presence of an add-on hat. However, when the distracting item serves a discriminative function, i.e. the novelty of the hat competes with the novelty of the face, the hat captures the infant's visual attention, interfering with the amount of looking time spent on face information. The findings from the current study shed light on how faces and objects are processed when they are simultaneously presented in the same visual scene, contributing to an understanding of how an infant responds to the multiple and composite information available in its surrounding environment, which often differs significantly from the artificial stimuli employed to examine the emergence and development of cognitive processes in the first months of life.
This chapter is crucial for a better understanding of the processes and choices behind the development of the practical section of this research. Every user interface must be carefully created; the user should not need to study in depth the services a mobile application provides, as the main focus is straightforward accessibility and memorability. It must be visually hierarchized for superior perception, without the user expending any effort thinking about it. “As a user, I should never have to devote a millisecond of thought to whether things are clickable – or not” (Krug, 2014, p. 14). The same applies to the manoeuvrability of a mobile application: the author must make it as self-explanatory as possible. Whenever users are frustrated, they will simply leave the app and eventually give up, hence the emphasis on gaining superior knowledge through a blueprinting of services and user testing. This ultimately leads to a sample analysis of how users perceive each category, subcategory, choice of font, colour, positioning, fluidity throughout the interface, comprehension and so on. All of this exploration proves indispensable to the iteration process the researcher conducts. There are no average users, and the creation of personas, further established throughout the investigation, will ensure that a figurative sample of skeleton personas is accounted for, presenting circumstantial scenarios for an understanding of individual process flows and an improved product for each idealization. The author must: (1) create a clear visual hierarchy on the wireframes, organized and prioritized in a way everyone is able to grasp; (2) break the interface into clearly delineated areas; (3) make self-evident what can be clicked, swiped, scrolled and typed into; (4) minimize the confusion a visual hierarchy may cause through too much information.
considered as two totally different image templates in the rigid-body sense. In order to overcome this "surface deformity", the elastic matching algorithm trains a 2D mesh-like neural network to model the face surface. If the mesh (the deformable template) is successfully trained, it is possible to "correct" the expression changes when performing recognition (Lades, 1993). Another way to deal with facial expression changes is, instead of using the whole facial area for the recognition task, to use only the "significant facial region". The significant facial region is a square area close to the center of the human face. It contains both eyes and the nose, but excludes the mouth and ears. A study shows that facial expression and hairstyle changes have less influence on the significant facial region, and yet the face is still recognizable by viewing only this region (Lin, 1997). Figure 4 shows the significant facial region.
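As an illustration, cropping such a region given only a face image might look like the sketch below; the proportions (size fraction and upward offset) are assumptions for illustration, not values from Lin (1997):

```python
import numpy as np

def significant_facial_region(face, frac=0.6, v_offset=0.1):
    """Crop a square region near the centre of a face image intended to
    cover the eyes and nose while excluding the mouth and ears.
    frac and v_offset are illustrative assumptions: the square side is a
    fraction of the image size, shifted upward so the mouth falls outside."""
    h, w = face.shape[:2]
    side = int(min(h, w) * frac)
    cy = int(h * (0.5 - v_offset))  # centre row, shifted upward
    cx = w // 2
    top = max(0, cy - side // 2)
    left = max(0, cx - side // 2)
    return face[top:top + side, left:left + side]
```

Recognition would then be run on this crop instead of the full face image, reducing the influence of expression and hairstyle changes.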
In the preprocessing stage, both the face image and the finger vein pattern are preprocessed. Among the various feature extraction techniques available, Local Binary Pattern (LBP) is the most commonly used in face biometrics. However, the extracted feature consists of both uniform and non-uniform patterns and has undesirable characteristics, such as high dimensionality, partial correlation and unwanted noise, that produce an irregular distribution in texture classification. The increase in feature length reduces the accuracy of the output. To overcome these drawbacks of LBP, another method called ULBP (Uniform Local Binary Pattern) is used.
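The uniformity test behind ULBP can be sketched directly: a code is kept as "uniform" when its circular bit string has at most two 0-to-1 or 1-to-0 transitions, and all remaining codes are pooled into a single bin, which shrinks the feature length.

```python
def is_uniform(code, bits=8):
    """True if the circular bit string of `code` has at most two
    0->1 / 1->0 transitions (e.g. 00111000 is uniform, 01010101 is not)."""
    transitions = 0
    for i in range(bits):
        a = (code >> i) & 1
        b = (code >> ((i + 1) % bits)) & 1
        transitions += a != b
    return transitions <= 2

# Of the 256 possible 8-bit codes, only 58 are uniform, so the ULBP
# histogram needs 59 bins (58 uniform + 1 pooled) instead of 256.
uniform_count = sum(is_uniform(c) for c in range(256))
```

This reduction from 256 to 59 bins is what removes most of the high-dimensional, noisy non-uniform patterns mentioned above.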
Graph matching is another method used to recognize faces. M. Lades et al. presented a dynamic link structure for distortion-invariant object recognition, which employed elastic graph matching to find the closest stored graph. This dynamic link architecture is an extension of neural networks. Faces are represented as graphs, with nodes positioned at fiducial points (i.e., eyes, nose, etc.) and edges labeled with two-dimensional (2-D) distance vectors. Each node contains a set of 40 complex Gabor wavelet coefficients (phase and amplitude) at different scales and orientations. These are called "jets". Recognition is based on labeled graphs. A jet describes a small patch of grey values in an image I(x) around a given pixel x = (x, y). Each node is labeled with a jet and each edge with a distance. Graph matching, that is, dynamic link matching, is superior to other recognition techniques in terms of rotation invariance, but the matching process is complex and computationally expensive.
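A jet can be sketched as below; the frequency ladder, kernel size and envelope width are illustrative assumptions, not the parameters used by Lades et al., and the pixel is assumed to lie far enough from the image border:

```python
import numpy as np

def gabor_kernel(size, sigma, freq, theta):
    """Complex Gabor kernel: Gaussian envelope times a complex plane wave
    oriented at angle theta with spatial frequency freq."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return envelope * np.exp(1j * 2 * np.pi * freq * xr)

def jet(image, px, py, scales=5, orientations=8, size=11):
    """40 complex Gabor responses (5 scales x 8 orientations) describing
    the grey-value patch around pixel (px, py)."""
    half = size // 2
    patch = image[py - half:py + half + 1, px - half:px + half + 1]
    coeffs = []
    for s in range(scales):
        freq = 0.4 / (2 ** s)  # assumed frequency ladder, one octave apart
        for o in range(orientations):
            theta = o * np.pi / orientations
            k = gabor_kernel(size, sigma=size / 4, freq=freq, theta=theta)
            coeffs.append(np.sum(patch * k))
    return np.array(coeffs)  # amplitude: np.abs(jet), phase: np.angle(jet)
```

Matching then compares jets at corresponding graph nodes, typically via a normalized dot product of their amplitudes.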
Blobs, Binary Linked Objects, are groups of pixels that share the same label due to their connectivity in a binary image. After a blob analysis, all pixels that belong to the same object share a unique label, so every blob can be identified by this label. Blob analysis creates a list of all the blobs in the image, along with global features of each one: area, perimeter length, compactness and center of mass. After this stage, the image contains blobs that represent skin areas of the original image. The user's hand may be located using the global features available for every blob, but the system must have been informed whether the user is right- or left-handed. Most likely, the two largest blobs will be the user's hand and face, so it is assumed that the hand corresponds to the rightmost of these blobs for a right-handed user and vice versa.
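A minimal sketch of blob labeling with 4-connectivity, computing the area and center of mass of each blob (perimeter and compactness are omitted for brevity):

```python
from collections import deque

def label_blobs(binary):
    """Label 4-connected foreground pixels of a binary image (list of rows
    of 0/1).  Returns the label image and a dict of per-blob features."""
    h, w = len(binary), len(binary[0])
    labels = [[0] * w for _ in range(h)]
    blobs = {}
    next_label = 0
    for sy in range(h):
        for sx in range(w):
            if binary[sy][sx] and not labels[sy][sx]:
                next_label += 1
                labels[sy][sx] = next_label
                q = deque([(sy, sx)])
                area, cy, cx = 0, 0.0, 0.0
                while q:  # breadth-first flood fill of one blob
                    y, x = q.popleft()
                    area += 1
                    cy += y
                    cx += x
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if 0 <= ny < h and 0 <= nx < w and binary[ny][nx] and not labels[ny][nx]:
                            labels[ny][nx] = next_label
                            q.append((ny, nx))
                blobs[next_label] = {"area": area, "centroid": (cy / area, cx / area)}
    return labels, blobs
```

With these features in hand, the hand/face rule from the text amounts to taking the two largest blobs and, for a right-handed user, choosing the one with the larger centroid x-coordinate.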
These traditional methods can be categorized into two different groups: holistic approaches and local approaches. The holistic group can be further separated into linear and nonlinear projection methods. Many applications demonstrate good results with linear projection methods such as principal component analysis (PCA) (Turk & Pentland, 1991), independent component analysis (ICA) (Bartlett, Movellan, & Sejnowski, 2002), linear discriminant analysis (LDA) (Belhumeur, Hespanha, & Kriegman, 1997) and the linear regression classifier (LRC) (Naseem, Togneri, & Bennamoun, 2010). However, due to changes in lighting, facial expression, and other aspects, these methods cannot always classify faces correctly. The main cause is the complex variability of non-convex, nonlinearly distributed face patterns. To handle such cases, nonlinear methods may be applied, such as kernel PCA, kernel LDA (KLDA) (Lu, Plataniotis, & Venetsanopoulos, 2003) or locally linear embedding (LLE) (He, Yan, Hu, Niyogi, & Zhang, 2005). These nonlinear methods use kernel techniques to map face images into a higher-dimensional space in which the face manifold becomes approximately linear, which makes traditional linear methods applicable. But while there is a strong theoretical basis for kernel methods, in practice they do not yield a significant improvement over linear methods. In contrast, linear approximations of nonlinear projection methods inherit both the simplicity of linear methods and the ability of nonlinear methods to handle complex data; among these, LLE and LPP (Xiaofei & Partha, 2003) are worth highlighting.
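As a hedged sketch of the holistic linear route, the PCA (eigenfaces) step can be written in a few lines of NumPy; the SVD of the centred data matrix is one standard way to obtain the principal directions:

```python
import numpy as np

def pca_eigenfaces(X, k):
    """X: (n_samples, n_pixels) matrix of flattened face images.
    Returns the mean face and the top-k eigenfaces (principal directions).
    The rows of Vt from the SVD of the centred data are the eigenvectors
    of the sample covariance matrix."""
    mean = X.mean(axis=0)
    Xc = X - mean
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return mean, Vt[:k]

def project(x, mean, eigenfaces):
    """Low-dimensional feature: coordinates of a face in eigenface space."""
    return (x - mean) @ eigenfaces.T
```

Classification then compares these low-dimensional projections (e.g. by nearest neighbour), which is exactly where lighting and expression changes degrade the purely linear methods described above.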
In neural network training, the most commonly used algorithms are versions of the backpropagation algorithm developed by Rumelhart et al. (1986). The well-known limitations of gradient search techniques applied to complex nonlinear optimization problems have often resulted in inconsistent and unpredictable performance. These algorithms typically start at a randomly chosen point (a set of weights) and then adjust the weights to move in the direction that causes the error to decrease most rapidly. They work well when there is a smooth transition toward the point of minimum error, but the error surface of a neural network is not smooth: it is characterized by hills and valleys that can cause techniques such as backpropagation to become trapped in a local minimum.
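The trap can be illustrated on a toy one-dimensional "error surface" with two valleys; depending on the starting point, plain gradient descent settles into different minima, mirroring the sensitivity to the randomly chosen initial weights:

```python
def grad_descent(x, lr=0.05, steps=2000):
    """Gradient descent on f(x) = x**4 - 3*x**2 + x, which has a shallow
    local minimum near x = 1.13 and a deeper global minimum near x = -1.30."""
    for _ in range(steps):
        x -= lr * (4 * x**3 - 6 * x + 1)  # f'(x)
    return x
```

Starting at x = 1.0 the iterate converges to the shallow valley near 1.13; starting at x = -1.0 it reaches the deeper valley near -1.30, even though both runs use the same update rule.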
Information on the Internet is increasing rapidly day by day, and it is applied for different purposes in decision support systems. Web mining, an application of data mining, is used for this task. Web mining is classified into three categories: Web Usage Mining (WUM), Web Content Mining (WCM), and Web Structure Mining (WSM). Among these, WUM is applied to usage data and is widely used by organizations to study the behaviour of their web users. In WUM, users' web logs are collected and analyzed to infer useful information. In the present scenario, every organization relies on its website for the growth of its business. Organizations collect data from their web servers to analyze the behaviour and investigate the interests of their users. The ability to track user browsing behaviour down to individual mouse clicks has brought the vendor and the end customer closer than ever before; it is now possible for a vendor to personalize his product message for an individual customer.
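A minimal sketch of the first WUM step, parsing one line of a web server access log, might look like this; the field layout assumed here is the standard Common Log Format, which a given server configuration may extend:

```python
import re

# Common Log Format: host ident authuser [date] "request" status bytes
LOG_PATTERN = re.compile(
    r'(?P<host>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<path>\S+) \S+" (?P<status>\d{3}) (?P<size>\d+|-)'
)

def parse_line(line):
    """Return the fields of one access-log line as a dict, or None if the
    line does not match the assumed Common Log Format."""
    m = LOG_PATTERN.match(line)
    return m.groupdict() if m else None
```

Aggregating the parsed records per host or per path is then the starting point for the behavioural analysis described above.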
In the signal processing context, the wavelet transform is often referred to as sub-band filtering, and the resulting coefficients describe the features of the underlying image locally in both frequency and space, making wavelets an ideal choice for sparse approximations of functions. Locality in space follows from their compact support, while locality in frequency follows from their smoothness (decay towards high frequencies) and vanishing moments (decay towards low frequencies). Therefore, 3D wavelet-based object modeling techniques have recently appeared as an attractive tool in computer vision. However, traditional 2D wavelet methods cannot be directly extended to 3D computer vision environments, for two main reasons. First, wavelet representations are not translation invariant [5, 32]. Second, the sensors used in 3D vision provide data in a way that is difficult to analyze with standard wavelet decompositions: most 3D sensing techniques provide sparse measurements that are irregularly spread over the object's external surface. This matters because sampling irregularity prevents the straightforward extension of 1D or 2D wavelet techniques.
Principal Component Analysis (PCA) is a classical feature extraction and data representation technique widely used in pattern recognition, and one of the most successful techniques in face recognition. However, it has the drawback of high computational cost, especially for large databases. This paper conducts a study to optimize the time complexity of PCA (eigenfaces) without affecting recognition performance. The authors minimize the number of participating eigenvectors, which consequently decreases the computational time. A comparison is made between the recognition time of the original algorithm and that of the enhanced algorithm. The performance of the original and the enhanced proposed algorithms is tested on the face94 face database. Experimental results show that the recognition time is reduced by 35% by applying the proposed enhanced algorithm. DET curves are used to illustrate the experimental results.
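One hedged way to minimize the number of participating eigenvectors is to keep only enough leading eigenvectors to retain a chosen fraction of the total variance; the 95% threshold below is an illustrative assumption, not the criterion used in the paper:

```python
import numpy as np

def select_k(eigenvalues, energy=0.95):
    """Smallest k such that the k largest eigenvalues (assumed sorted in
    descending order) retain `energy` of the total variance.  Projecting
    onto fewer eigenvectors shortens recognition time proportionally."""
    ratios = np.cumsum(eigenvalues) / np.sum(eigenvalues)
    return int(np.searchsorted(ratios, energy) + 1)
```

Recognition then projects faces onto only these k eigenvectors, trading a small, controlled loss of variance for a shorter comparison time.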
Objectives: To identify nurses' recognition of the Acute Pain diagnosis, through its defining characteristics, in a coronary intensive care unit, and to correlate their interventions with the nomenclature established by the classification of suggested and optional nursing interventions. Method: Qualitative, descriptive study, conducted with thirteen nurses from the Coronary Care Unit of a hospital in Rio de Janeiro. Results: Of the eighteen defining characteristics described by NANDA, only seven are recognized by the nurses interviewed. Among the twenty-seven possible interventions described in the Nursing Interventions Classification (NIC), only seven are recognized, and the majority centered on the administration of analgesics, which ties nursing actions to the biomedical model. Conclusion: It is necessary that nurses reflect on the patient's individual needs, valuing care that involves a complex set of actions beyond biomedical care, grounded in comfort measures that promote the reduction of this symptom and consequently an improvement in the quality of care. Descriptors: Nursing, Nursing process, Pain, Cardiology.
The wavelet transform can be used to analyse time series that contain non-stationary power at many different frequencies. Wavelets are mathematical functions that cut data up into different frequency components and then study each component with a resolution matched to its scale. The wavelet transform is a multi-resolution image decomposition tool that provides a variety of channels representing the image features in different frequency sub-bands at multiple scales, and it is a well-known technique for analyzing signals. When decomposition is performed, the approximation and detail components can be separated. The Daubechies wavelet (db2), decomposed up to five levels, has been used here for image fusion. These wavelets are used because they are real and continuous in nature and have the least root-mean-square (RMS) error compared to other wavelets.
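A single decomposition level can be sketched with the Haar wavelet, used here instead of db2 purely for brevity; db2 follows the same pattern with longer filters, and applying the step recursively to the approximation gives the five-level decomposition mentioned above:

```python
import numpy as np

def haar2d(img):
    """One level of a 2-D Haar wavelet decomposition of an image with even
    dimensions: returns the approximation (LL) and the horizontal, vertical
    and diagonal detail sub-bands (LH, HL, HH)."""
    a = img[::2, :] + img[1::2, :]   # low-pass along rows
    d = img[::2, :] - img[1::2, :]   # high-pass along rows
    LL = (a[:, ::2] + a[:, 1::2]) / 4   # LL holds local 2x2 averages
    LH = (a[:, ::2] - a[:, 1::2]) / 4
    HL = (d[:, ::2] + d[:, 1::2]) / 4
    HH = (d[:, ::2] - d[:, 1::2]) / 4
    return LL, LH, HL, HH
```

For fusion, the corresponding sub-bands of the two source images are combined (e.g. by averaging approximations and taking the larger detail coefficients) before inverting the transform.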
Techniques for recognition of a person are based on (i) physiological characteristics such as fingerprint, face, iris, retinal blood vessel patterns, hand geometry, vascular pattern, and DNA, and (ii) behavioral characteristics such as voice, signature and keystroke dynamics. Verification of a person using biometrics is more secure since biometric parameters are parts of the human body and hence cannot be stolen or modified, in contrast to traditional credentials such as Personal Identification Numbers (PINs), passwords, smartcards, etc. Face recognition is a nonintrusive method, and facial images are the most common biometric characteristic used by humans for personal recognition. The popular approaches to face recognition are based on either (i) the location and shape of facial attributes such as the eyes, eyebrows, nose, lips and chin, and their spatial relationships, or (ii) the overall analysis of the face image that represents a face as a weighted combination of a number of canonical faces. Some face recognition systems are commercially available and their performance is reasonably good, but they impose restrictions on variations such as illumination, expression, pose and occlusion.
Abstract: Most doors are controlled by persons using keys, security cards, passwords or patterns. The aim of this paper is to improve door security at sensitive locations by using face detection and recognition. The face is a complex multidimensional structure and needs good computing techniques for detection and recognition. This paper comprises mainly three subsystems: face detection, face recognition and automatic door access control. Face detection is the process of detecting the region of the face in an image. The face is detected using the Viola–Jones method, and face recognition is implemented using Principal Component Analysis (PCA). Face recognition based on PCA is generally referred to as the eigenfaces method. If a face is recognized, it is known; otherwise, it is unknown. The door opens automatically for a known person on command of the microcontroller; for an unknown person, an alarm rings. Since PCA reduces the dimensions of face images without losing important features, facial images of many persons can be stored in the database. Even when many training images are used, computational efficiency does not decrease significantly. Therefore, face recognition using PCA can be more useful for a door security system than other face recognition schemes.
Nowadays, various feature extraction approaches are employed for face recognition. Among these, Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA) and their extensions [11–15] have been well studied and widely utilized to extract low-dimensional features from high-dimensional face images. However, since recent studies have shown that high-dimensional face images likely reside on a nonlinear manifold, many manifold learning methods such as Isometric Feature Mapping (ISOMAP), Local Linear Embedding (LLE), Laplacian Eigenmap (LE) and their extensions have also been proposed for face recognition. Although the aforementioned feature extraction algorithms work well, they all belong to the subspace-based methods and can only extract holistic features of face images, which may make them unstable to local variations such as expression, occlusion, and misalignment. As a result, local descriptors such as the Local Binary Pattern (LBP) have attracted more and more attention for their robustness to local distortions [20, 21]. The LBP operator is a texture descriptor which describes the neighboring changes around each pixel. It has been successfully used in face recognition applications due to its invariance to changes of illumination and expression in face images and its computational efficiency. Considering the advantages of LBP in face recognition, many LBP variants have been proposed. In LGBP, GVLBP and HGPP, instead of directly using the pixel intensities to compute the LBP features, multi-scale and multi-orientation Gabor filters were employed to encode the face images; the LBP histogram was then obtained from the encoded images. Zhao et al. first extracted gradient information from the face image using the Sobel operator and then applied LBP to the gradient images for feature extraction. LBP has also been adopted to extract features for representation-based classification techniques.
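The basic LBP operator on a 3×3 neighborhood can be sketched as follows; the clockwise bit ordering starting at the top-left neighbour is one common convention, assumed here:

```python
import numpy as np

def lbp_code(patch):
    """8-bit LBP code of the centre pixel of a 3x3 patch: each neighbour
    contributes a 1 if its value is >= the centre value, read clockwise
    starting from the top-left pixel."""
    c = patch[1, 1]
    neighbours = [patch[0, 0], patch[0, 1], patch[0, 2], patch[1, 2],
                  patch[2, 2], patch[2, 1], patch[2, 0], patch[1, 0]]
    code = 0
    for bit, n in enumerate(neighbours):
        code |= int(n >= c) << bit
    return code
```

Because the code depends only on the sign of the differences from the centre pixel, any monotonic illumination change leaves it unchanged, which is the invariance property noted above; a histogram of the codes over image regions forms the final descriptor.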
In  and , some researchers combined LBP with SRC for face recognition. In their methods, the LBP features were first extracted from the face images; then, SRC was utilized for classification. Kang et al. employed LBP to extract local features of the face images so that the performance of kernel SRC could be improved . In , Lee also used Gabor-LBP features for face image representation in SRC.
For face detection, the Bayesian Tangent Shape Model (BTSM) is used. The system then contains two models: one describes the prior shape distribution in tangent shape space, and the other is a likelihood model in image shape space. Based on these two models, the posterior distribution of the model parameters can be derived. Replacing the face requires outlines of the facial profile and structures. For face alignment, 2D landmark points from BTSM (as mentioned above) are used. To identify an appropriate source pose and expression, the system allows the user to select the best candidate for each frame independently in the target video. To achieve this, the system splits the target video into several clusters of frames; this process is referred to as clustering. After clustering, the user needs to select an appropriate frame for each cluster. The best candidate face must have the most similar pose and expression.
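The clustering step could be sketched with a minimal k-means over per-frame feature vectors (e.g. pose or expression parameters); this is a stand-in illustration, not necessarily the algorithm the original system uses, and the evenly spaced initialization is an assumption made for determinism:

```python
import numpy as np

def kmeans(features, k, iters=20):
    """Minimal k-means: groups frame feature vectors into k clusters so
    that one candidate frame can then be selected per cluster.
    Centers start at evenly spaced frames (a deterministic assumption)."""
    idx = np.linspace(0, len(features) - 1, k).astype(int)
    centers = features[idx].astype(float).copy()
    for _ in range(iters):
        # distance of every frame to every center, then nearest assignment
        dists = np.linalg.norm(features[:, None, :] - centers[None, :, :], axis=2)
        assign = dists.argmin(axis=1)
        for j in range(k):
            if np.any(assign == j):
                centers[j] = features[assign == j].mean(axis=0)
    return assign, centers
```

The user would then inspect one representative frame per cluster and pick the candidate whose pose and expression best match the source.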
Dahm and Yongsheng Gao explained that most face recognition techniques have focused on 2D–2D or 3D–3D comparison; only a few explore the idea of cross-dimensional matching. Their paper offered a new face recognition approach that uses cross-dimensional matching to solve the problem of pose invariance. Their approach applies a Gabor representation during comparison to allow for variations in texture, illumination, expression and pose. Kernel scaling was used to shrink comparison time during the branching search, which establishes the facial pose of input images. In their paper, they present a novel face recognition approach that utilizes 3D data to overcome changes in facial pose while remaining non-intrusive. To realize this, they use 3D textured head models in the gallery, as gallery data is generally taken from cooperative subjects (e.g. identification photos, mug shots). For the query, they use 2D images, which can be taken from passive cameras such as ceiling-mounted surveillance cameras. Together this gives a cross-dimensional approach combined with a non-intrusive nature.
We use a set of features that are derivatives of the Haar basis functions. Haar-like features can be computed at any scale or location in constant time. Rectangular features are primitive compared to alternatives such as steerable filters (which support texture analysis and image compression), but they provide a rich image representation that supports effective learning. There are over 180,000 rectangle features associated with each image, far more than the number of pixels. Our aim, however, is to combine a very small number of these features to form an efficient classifier. To support this, we design a weak learning algorithm: for every feature, the weak learner determines the optimal threshold classification function.
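The constant-time evaluation rests on the integral image, which this sketch illustrates; the two-rectangle feature at the end is one illustrative example of a Haar-like feature:

```python
import numpy as np

def integral_image(img):
    """ii[y, x] = sum of img[:y, :x], padded with a leading row and column
    of zeros so rectangle sums need no boundary checks."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1))
    ii[1:, 1:] = img.cumsum(axis=0).cumsum(axis=1)
    return ii

def rect_sum(ii, top, left, height, width):
    """Sum over any rectangle in constant time: four array references."""
    return (ii[top + height, left + width] - ii[top, left + width]
            - ii[top + height, left] + ii[top, left])

def haar_two_rect(ii, top, left, h, w):
    """Two-rectangle Haar-like feature: difference between the sums of the
    left and right halves of a window (w assumed even)."""
    half = w // 2
    return rect_sum(ii, top, left, h, half) - rect_sum(ii, top, left + half, h, half)
```

Because every rectangle sum costs four lookups regardless of its size, all 180,000+ features can be evaluated at any scale or location in constant time per feature, which is what makes the exhaustive weak-learner search feasible.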