It is important to mention that one of the problems of the KNORA-E algorithm is the computational cost of reducing the neighborhood. When none of the base classifiers correctly classifies all the neighbors, the neighborhood is reduced and the algorithm computes again. This becomes a problem when there are many noisy patterns in the dataset: the algorithm needs to reduce the neighborhood often, which considerably increases the computational time. Using the ENN filter and the adaptive k-NN, fewer noisy patterns are selected as neighbors (some are eliminated by the ENN and some are not selected by the adaptive k-NN rule). Therefore, the number of times the KNORA-E algorithm needs to reduce the neighborhood decreases considerably. The ENN rule also eliminates some patterns from the validation set, which contributes to decreasing the cost of the nearest-neighbor computation. Table 4.4 shows the processing time (the time to process the whole database) obtained by the KNORA-E algorithm and by DES-FA. On most datasets the processing time is much lower, which is explained by the better quality of the selected region of competence.
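The reduction loop described above can be sketched as follows. This is a minimal illustration, not the thesis implementation: the boolean "oracle" matrix layout and function name are assumptions.

```python
# Minimal sketch of KNORA-E's neighborhood-reduction loop (illustrative;
# names and the data layout are assumptions, not the actual implementation).
import numpy as np

def knora_e_select(neighbors_correct):
    """Select ensemble members for one query.

    neighbors_correct: boolean matrix of shape (k, n_classifiers);
    entry (i, j) is True when classifier j correctly labels the i-th
    nearest validation neighbor of the query.
    Returns the indices of the selected classifiers.
    """
    k = neighbors_correct.shape[0]
    while k > 0:
        # A classifier is kept only if it is correct on ALL k neighbors.
        oracle = neighbors_correct[:k].all(axis=0)
        if oracle.any():
            return np.flatnonzero(oracle)
        k -= 1  # no perfect classifier: shrink the neighborhood and retry
    # Fallback: no classifier is correct even on the single nearest
    # neighbor; keep the whole pool.
    return np.arange(neighbors_correct.shape[1])
```

Each failed iteration of the `while` loop is one neighborhood reduction; noisy neighbors make the all-correct test fail more often, which is exactly the cost the ENN filter and adaptive k-NN reduce.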
Keywords are simple to specify, but their efficiency as seeds for user-generated content may vary greatly. Rather than leading users into more complex conceptual formulations, for example by asking for keyword pairs or by proposing ontologies, we think we may considerably improve the selection process by framing user keywords in a broader place context defined by physical location, organizational context, type of setting, or even the overall set of user-generated keywords. In previous work we argued that these contextual elements per se were not enough to support adaptive content selection, because it was far from obvious what type of association rules could be created between them and particular content. That was part of the motivation for exploring keywords as an alternative and more intentional path for context-aware displays. This work has confirmed that keywords are normally efficient as content selectors, but it has also shown that they are not always reliable as representatives of the concepts that people had in mind when proposing them. As a result, we conclude that the combination of the two may offer the most promising approach. Even simple contextual clues may be enough to disambiguate ambiguous keywords and to provide an interpretation aligned with the nature of the place.
Abstract— In public display systems, determining what to present and when is a central feature. Although several adaptive scheduling alternatives have been explored, which make the display sensitive to some type of external variable, they are still very dependent on the user in their behavior, content-specific in their nature, and very rigid in their adaptation to the social environment, failing to provide visitors of the place with appropriate, rich, and personalized information according to their interests and expectations. There is a need for solutions that successfully integrate the wealth of dynamic web sources, as providers of situated and updated content, with the social and contextual environment around the display, so as to present the most appropriate content at every moment and thus improve the utility of the system. In this paper, we present a recommender system for public situated displays that is able to autonomously select relevant content from Internet sources using a keyword-based place model as input. Based on external relevance criteria, the system finds and pre-selects only those sources that are most relevant, and an adaptive scheduling algorithm continuously selects content that is relevant, timely, in accordance with the place model, sensitive to immediate indications of interest, and balanced so as to serve the broad range of interests of the target population. To evaluate this system we carried out two partial experiments. The results showed that keyword-based shared place models, jointly with content-specific relevance models, are a simple and valid approach to user-generated content for public displays.
In our effort to implement an adaptive classification system, we accomplish three major goals. The first is to develop a method for searching for optimum values of SVM hyperparameters over time. We face two main challenges in this endeavor: (1) overcoming common difficulties involving optimization processes, such as the presence of multimodality or discontinuities in the parameter search space, and (2) quickly identifying optimum solutions that fit both historical data and new, incoming data. If we do not meet these challenges, the processes for searching hyperparameters over sequences of datasets could perform poorly or be very time-consuming. To tackle these two issues, we first study the SVM model selection task as a dynamic optimization problem, considering a gradual learning context in which the system can be tested with respect to different levels of uncertainty. In particular, we introduce a Particle Swarm Optimization-based framework which combines the power of Swarm Intelligence Theory with the conventional grid-search method to progressively identify and evaluate potential solutions for gradually updated training datasets. The key idea is to obtain optimal solutions via re-evaluations of previous solutions (adapted grid search) or via new dynamic re-optimization processes (dynamic Particle Swarm Optimization, or DPSO). Experimental results demonstrate that the proposed method outperforms the traditional approaches while saving considerable computational time. This framework was presented in [57, 55].
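The kind of swarm search described above can be sketched with a minimal PSO loop. Everything here is illustrative: the quadratic stand-in objective, the (log C, log gamma)-style bounds, and the swarm constants are assumptions, not the cited framework's parameters.

```python
# Minimal particle-swarm sketch minimizing a stand-in objective over a
# 2-D search space, standing in for cross-validation error over
# (log2 C, log2 gamma). Illustrative only; not the framework of [57, 55].
import random

def pso(objective, bounds, n_particles=20, iters=50,
        w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                 # personal best positions
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d],
                                    bounds[d][0]), bounds[d][1])
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Smooth stand-in for validation error, with its optimum at (1, -2).
err = lambda p: (p[0] - 1.0) ** 2 + (p[1] + 2.0) ** 2
best, best_err = pso(err, [(-5, 15), (-15, 3)])
```

In the dynamic (DPSO) setting, the same swarm state would be retained and the objective re-evaluated whenever the training dataset is updated, rather than restarting the search from scratch.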
Other authors have used GP for the same purpose of transforming the original input features to provide more appropriate representations for ML classifiers. Smith and Bull [SB05] use GP for feature construction followed by feature selection to improve the accuracy of the C4.5 decision-tree classifier. They test their approach on known datasets and conclude that it “provides marked improvement in a number of cases”. Lin and Bhanu [LB05] use co-evolutionary genetic programming (CEGP) to improve the performance of object recognition. CEGP algorithms are an extension of GP in which several subpopulations are maintained; each subpopulation evolves a partial solution, and a complete solution is obtained by combining partial solutions. In their approach, individuals in subpopulations are “composite operators”, represented by binary trees whose terminal nodes are the original features and whose function nodes are domain-independent operators. A “composite operator vector” is thereby evolved cooperatively and applied to the original features to produce “composite feature vectors”, which are then used for classification. They point out that the original features can either be very simple or incorporate an expert’s domain knowledge. Their results show that CEGP can evolve composite features which lead to better classification performance. Neshatian and Zhang [NZA12] also use GP for feature construction to improve the accuracy of symbolic classifiers. They use an entropy-based fitness measure to evaluate the constructed features according to how well they discriminate between classes. Their results reveal “consistent improvement in learning performance” on benchmark problems. Ahmed et al. [AZPX14] likewise use a GP approach to feature construction for biomarker identification in mass spectrometry (MS) datasets. Biomarker identification means detecting the features which discriminate between classes.
They point out that this is a difficult task for most MS datasets, because the number of features is much larger than the number of samples, and that feature construction can address this problem by reducing the dimensionality of the inputs. Their method produces nonlinear high-level features from low-level features and is shown to improve classification performance when tested on a number of MS datasets.
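The “composite operator” idea above amounts to evaluating an expression tree over the original features to obtain one constructed feature. The tree encoding and operator set below are assumptions made for illustration; a real GP system would evolve such trees rather than hard-code one.

```python
# Illustrative sketch of a "composite operator": an expression tree over
# the original features, evaluated to produce one constructed feature.
# The tuple encoding and operator set are assumptions for illustration.
import operator

OPS = {'+': operator.add, '-': operator.sub, '*': operator.mul}

def evaluate(tree, features):
    """tree: either a feature index (leaf) or (op, left, right)."""
    if isinstance(tree, int):
        return features[tree]
    op, left, right = tree
    return OPS[op](evaluate(left, features), evaluate(right, features))

# Hypothetical evolved feature: f0 * (f1 - f2)
tree = ('*', 0, ('-', 1, 2))
value = evaluate(tree, [2.0, 5.0, 3.0])  # 2 * (5 - 3) = 4.0
```

A GP fitness function (e.g. the entropy measure of Neshatian and Zhang) would score each such tree by how well its output separates the classes.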
Although it is also based on the SDN paradigm, the solution of Cofano et al. (2017) places great focus on the ABR algorithm, similarly to some other solutions mentioned earlier in this chapter. The authors used the SDN paradigm to come up with a new concept called the Video Control Plane (VCP). The VCP has some similarities with the conventional control plane, but it is oriented to video streaming and aims to enforce Video Quality Fairness (VQF). To reach an optimal solution, they compared two possible approaches in an SDN network: one that allocates network bandwidth slices to video flows and another that guides the video players in the bitrate selection. On the ABR side, they compared three algorithms to determine the impact at the client side: Conventional, PANDA, and Elastic. Conventional is a rate-based algorithm that uses bandwidth estimation. PANDA is also rate-based, but it follows a probe-and-adapt scheme that increments the bitrate to probe the available bandwidth. Elastic is a level-based algorithm that controls the playout buffer length by varying the video bitrate. Their evaluation experiments also use only one network type.
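A rate-based rule of the “Conventional” kind can be sketched in a few lines: pick the highest available bitrate that fits under the estimated throughput. The bitrate ladder and safety margin below are hypothetical values for illustration, not taken from the cited work.

```python
# Sketch of a rate-based ABR decision: choose the highest encoding
# bitrate below a safety margin of the estimated throughput.
# The ladder and margin are illustrative assumptions.
def select_bitrate(estimated_kbps, ladder, margin=0.9):
    """Return the highest ladder bitrate not exceeding margin * estimate."""
    feasible = [b for b in sorted(ladder) if b <= margin * estimated_kbps]
    return feasible[-1] if feasible else min(ladder)

LADDER = [300, 750, 1500, 3000, 6000]  # kbps, hypothetical encoding ladder
```

A probe-and-adapt algorithm such as PANDA differs in that it deliberately requests slightly above the current rate to discover headroom, and a level-based algorithm such as Elastic would replace the throughput estimate with the playout-buffer level as the controlled variable.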
as sound as it is, relegates the problem of selecting an appropriate timeout to a secondary plane, because its value has no effect on the safety of the protocol (and liveness requires only that this value eventually becomes sufficiently large). Thus, this problem is often dismissed as being ‘merely’ an engineering decision, where a fixed timeout is conservatively selected based on ad hoc approaches or on empirical observations of the network. However, while in any soundly designed protocol correctness is not dependent on specific timeout values, performance is. Too small a timeout will raise many false positives (if failure detection is involved) or cause too much contention (if retransmission is used). Too large a timeout will hinder a quick recovery from failures. Therefore, to reason about performance, a crucial issue is the characterization of the temporal behavior of the network on which the consensus protocol is executed. Local Area Networks (LANs) constitute a very favorable environment from a temporal perspective. Network delays are small (on the order of microseconds), very stable across different LANs, independent of the communicating nodes, and not much affected by contention. On the other hand, in large-scale networks (WANs) delays are much higher and they strongly depend on the locations of the specific end points. Moreover, delays are not so stable over time, in particular because routes may change dynamically and global load fluctuations also have some impact on observed delays. The problem becomes even more relevant when considering operation in wireless environments. Network delays are also much larger than in LANs, but in addition they are strongly exposed to the effects of contention (Acharya et al., 2008; Jardosh et al., 2005). Varying the number of nodes that actively execute a distributed protocol and communicate within a single-hop distance has a clear impact on observable delays. Consequently, correct timeout selection, in particular one involving dynamic adaptation, becomes more relevant in these environments.
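One well-known instance of such dynamic adaptation is the smoothed round-trip-time estimator standardized for TCP retransmission timeouts (Jacobson/Karels, RFC 6298). The sketch below applies that textbook scheme to delay samples of a message exchange; the gains and initialization are the standard values, shown only as an illustration of adapting a timeout to observed delays.

```python
# Adaptive timeout via exponentially smoothed RTT and deviation,
# in the style of TCP's retransmission timer (RFC 6298). Shown as an
# illustration of dynamic timeout adaptation, not as part of any
# specific consensus protocol.
class AdaptiveTimeout:
    def __init__(self, alpha=0.125, beta=0.25):
        self.alpha, self.beta = alpha, beta
        self.srtt = None   # smoothed round-trip time
        self.rttvar = 0.0  # smoothed mean deviation

    def sample(self, rtt):
        """Feed one measured round-trip time (seconds)."""
        if self.srtt is None:
            self.srtt, self.rttvar = rtt, rtt / 2
        else:
            self.rttvar = ((1 - self.beta) * self.rttvar
                           + self.beta * abs(self.srtt - rtt))
            self.srtt = (1 - self.alpha) * self.srtt + self.alpha * rtt

    def timeout(self):
        # Timeout = SRTT + 4 * RTTVAR, the usual safety margin.
        return self.srtt + 4 * self.rttvar
```

In a stable LAN the deviation term shrinks and the timeout converges close to the actual delay; in a contended wireless network the deviation term keeps the timeout conservatively above the fluctuating delays.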
The project ‘Numerical thinking and flexible calculation: critical issues’ aims to study students’ conceptual knowledge associated with the understanding of the different levels of learning numbers and operations. We follow the idea, proposed by several authors, that flexibility refers to the ability to manipulate numbers as mathematical objects which can be decomposed and recomposed in multiple ways, using different symbolisms for the same object (Gravemeijer, 2004; Gray & Tall, 1994). The project plan is based on a qualitative and interpretative methodology (Denzin & Lincoln, 2005) with a design research approach (Gravemeijer & Cobb, 2006). This article focuses on the preparation of a teaching experiment centered on the flexible learning of multiplication. It describes the analysis of a clinical interview in which Pedro (9 years old) solves the task ‘Prawn skewers’. It illustrates how we identify and describe Pedro’s conceptual knowledge associated with the different levels of understanding of numbers and multiplication/division, and analyzes if and how this knowledge facilitates adaptive thinking and flexible calculation.
To address these issues, several identity concepts have emerged that promote identity models with stronger authentication and improved user privacy. Emerging identity frameworks such as OpenID or SAML envision identity management systems as an additional layer on top of the OSI stack (Fig. 1.1). In practice, such solutions allow users to select which virtual identity they wish to use when contacting different services. Ideally, different user identities would be unlinkable, in order to preserve user privacy. While these frameworks have succeeded in providing users with increased control over their identity, their work is confined to the application layer and neglects the fact that the underlying layers do not provide the privacy guarantees that identity privacy requires: user identities can still be linked through the identifiers used in the lower layers.
In order to determine the parameters that most accurately characterize the channel considered, the mean squared error between the experimental and simulated data was minimized. It was noticed that the experimental data are affected by an offset due to incorrect calibration of the water-level sensors; for this reason, the sensor offsets were estimated as well. Another reason for the differences between the experimental data and the model output is that the SIMULINK model does not completely represent the channel in the segment upstream of the gates, which has a geometry different from the one modeled.
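For a constant sensor offset, the MSE-minimizing estimate has a simple closed form: it is the mean residual between measured and simulated levels. The snippet below illustrates only this offset-estimation step, with invented data; it does not reproduce the SIMULINK channel model or the full joint parameter fit.

```python
# Illustration of estimating a constant sensor offset by minimizing the
# mean squared error between measured and simulated water levels.
# The data are invented; only the offset-estimation step is shown.
import numpy as np

def fit_offset(measured, simulated):
    """The offset minimizing sum((measured - simulated - c)^2) is the
    mean residual."""
    return float(np.mean(np.asarray(measured) - np.asarray(simulated)))

sim = np.array([1.00, 1.10, 1.25, 1.30])   # model output (m)
meas = sim + 0.07                          # measurements with a 0.07 m miscalibration
offset = fit_offset(meas, sim)             # recovers ~0.07, up to floating point
```

In the full problem, this offset would be estimated jointly with the channel parameters, since both enter the same residual.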
The application of the FARM model, although not considered a limitation, should be seen only as indicative of the benefits and impacts that may lie in the application of Pacific oyster aquaculture in the estuary. If further developments occur in Portuguese aquaculture, more specifically in the Sado estuary, more efficient and complex models must be applied (e.g. the inclusion of Portuguese oyster individual and production models) in order to properly evaluate the environmental and socio-economic effects. In short, the thematic map showed the most important criteria for aquaculture site selection. However, alternative variables and/or classifications might produce different outcomes. The application of FARM illustrates the value of dynamic models in providing detailed information on oyster-culture feasibility. This is illustrated by the contrast between test farms A and B: the former is at a location where extensive intertidal aquaculture will provide significant profit and associated ecosystem services, whereas the latter will not be viable, despite being flagged as suitable by the GIS tool.
As shown in Figure 2, the system consists of several units, all of which are implemented in the system. The web-camera input might be produced by a motorized pan-and-tilt camera simulating a moving eye; here we adopt a stationary camera image and only simulate the eye movements. The workspace that the eye can explore is an area of 352×288 RGB pixels. We convert the input frames into HSV, which enlarges the color contrast in the spatial dimension. After obtaining the statistical histogram and designating the attention region, we obtain the probability map (PM); we adopt normalization and initialization to search the attention regions on each frame. The saliency map is a key step of our flowchart: it combines a number of visual feature maps into a single map that assigns a saliency value to every location in the visual field (Itti, Koch and Niebur, 1998). Each feature map is typically the result of applying some simple visual operator to the input image; for example, a feature map could consist of an activity pattern that codes for intensity, color, motion, or some other visual cue. As a result of summing the different feature maps, the saliency map codes for locations in the image that are salient in many features.
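The summation step can be sketched as follows: each feature map is normalized to a common range and the maps are summed into one saliency map. The random maps below merely stand in for intensity/color/motion channels; this is an illustration of the combination step only, not the full Itti-Koch model (which also uses center-surround filtering and iterative normalization).

```python
# Minimal sketch of combining feature maps into a saliency map:
# normalize each map to [0, 1], then sum. Random maps stand in for
# intensity/color/motion channels; illustrative only.
import numpy as np

def normalize(m):
    lo, hi = m.min(), m.max()
    return (m - lo) / (hi - lo) if hi > lo else np.zeros_like(m)

def saliency(feature_maps):
    return sum(normalize(m) for m in feature_maps)

rng = np.random.default_rng(0)
maps = [rng.random((288, 352)) for _ in range(3)]  # H x W, matching the 352x288 frame
sal = saliency(maps)  # high values: locations salient in many features
```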
Web frameworks can be of great use when developing a dynamic web application, as they can significantly reduce the overhead associated with designing, implementing, and testing certain key components such as database access and page templating. However, when developing smaller applications, they can actually increase the overhead. In a mid-sized application, the time spent getting familiar with the framework is usually easily compensated by the built-in functionality, but in smaller and more focused applications the cost of learning the framework can easily outgrow the resources necessary to develop the base components from scratch.
Unlike the previously cited papers, we propose a specific architecture that allows real-time stream-processing analysis based on the support of a historical database and incoming honeypot data. Hence, our architecture learns and adapts to new attacks captured by the honeypots and monitors legitimate user behavior to detect anomalies. Moreover, we implement our detection methods using stream processing, to detect threats in real time and to scale using distributed processing. Our architecture prevents threats in a fast and scalable way by providing real-time, accurate detection of known and zero-day attacks through automated classification and anomaly-detection methods. In this work, we implement five classification methods, two of them with real-time training, adapting without manual intervention to new threats. In addition to these methods, we also propose two anomaly-detection algorithms that are likewise trained in real time. Therefore, our work presents a solid threat-detection approach, since it handles known attacks, learns new attacks through honeypot data, and also monitors normal usage behavior to detect anomalies that are potential threats. Moreover, to efficiently protect the network, our proposal benefits from fast stream processing, rapid machine-learning analysis of the first few packets of each flow, and, thanks to the software-defined network features, prompt blocking of threats even when the attacker changes its IP address. We use SDN to mirror the traffic to the sensor elements, thereby avoiding any delay in legitimate user communication. Our threat-detection architecture sends alerts to the controller, which is able to block the source IP in all network switches. If the attacker changes its IP, however, this rule becomes ineffective. Therefore, we propose a scheme to protect the network against these attacks, based on the time between the alerts.
The intuition behind this method is that when two alerts arrive within a short period, there is a possibility of a spoofed-IP threat.
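The alert-interval heuristic can be sketched in a few lines. The threshold value and function name below are assumptions made for illustration; the actual scheme would tune the threshold to the network's alert patterns.

```python
# Sketch of the alert-interval heuristic: if two alerts arrive closer
# together than a threshold, flag a possible spoofed-IP attack rather
# than relying only on per-address blocking. Threshold is illustrative.
def spoofing_suspected(alert_times, threshold=1.0):
    """alert_times: ascending alert timestamps in seconds."""
    return any(later - earlier < threshold
               for earlier, later in zip(alert_times, alert_times[1:]))
```

When this predicate fires, the controller could fall back to a broader countermeasure (e.g. rate-limiting the affected switch port) instead of installing another easily evaded per-IP drop rule.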
In the study of a cell, when the activity of the genes and the concentrations of the proteins stabilize around some values, we call that configuration a steady state. We highlight that more than one steady state may exist in the same regulatory system. The existence of steady states is one of the most studied properties of biological models. Steady states may represent either convergent or cyclic behaviors, and their existence, as well as their study, may be useful to better understand the regulatory system of the cell. However, the cell contains several components, and even the study of an isolated module of a cell with 10 components would create a digraph with 2^10 states (since each state is a possible combination of 0’s and 1’s for each component of the cell). This makes it computationally infeasible to study complex regulatory systems, which usually contain far more than 10 components. Thereby, it is usually necessary to apply some model reduction before proceeding with the analysis of one of these graphs.
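The state-space blow-up and the brute-force search for steady states can be made concrete with a tiny synchronous Boolean network. The 3-gene update rules below are invented for illustration; with only 3 components the 2^3 = 8 states can still be enumerated, which is exactly what becomes infeasible at realistic sizes.

```python
# Tiny illustration of the 2^n state space and of finding steady states
# (fixed points) by brute force in a synchronous Boolean network.
# The 3-gene update rules are invented for illustration.
from itertools import product

def step(state):
    a, b, c = state
    # hypothetical rules: a and b activate each other; c follows a OR b
    return (b, a, int(a or b))

states = list(product([0, 1], repeat=3))      # 2**3 = 8 states
steady = [s for s in states if step(s) == s]  # fixed points of the update
# here: the all-off and all-on configurations are the two steady states
```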
An f-I curve, defined as the mean firing rate in response to a stationary mean current input, is one of the simplest ways to characterize how a neuron transforms a stimulus into a spike-train output as a function of the magnitude of a single stimulus parameter. Recently, the dependence of f-I curves on other input statistics, such as the variance, has been examined: the slope of the f-I curve, or gain, is modulated in diverse ways in response to different intensities of added noise [1–4]. This enables multiplicative control of the neuronal gain by the level of background synaptic activity: changing the level of the background synaptic activity is equivalent to changing the variance of the noisy balanced excitatory and inhibitory input current to the soma, which modulates the gain of the f-I curve. It has been demonstrated that such somatic gain modulation, combined with saturation in the dendrites, can lead to multiplicative gain control in a single neuron by background inputs. From a computational perspective, the sensitivity of the firing rate to the mean or the variance can be thought of as distinguishing the neuron’s function as either an integrator (greater sensitivity to the mean) or a differentiator/coincidence detector (greater sensitivity to fluctuations, as quantified by the variance) [3,6,7].
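As a concrete example of an f-I curve, the noiseless leaky integrate-and-fire (LIF) neuron admits a closed-form rate as a function of a constant input current, so no simulation is needed. The parameter values below are illustrative, and this deterministic curve shows only the mean-current dependence; the variance-dependent gain modulation discussed above would require adding noise to the input.

```python
# f-I curve of a noiseless leaky integrate-and-fire neuron: for constant
# input current I the inter-spike interval has a closed form, obtained by
# integrating tau*dv/dt = -v + R*I from v_reset to v_th. Parameters are
# illustrative.
import math

def lif_rate(I, tau=0.02, v_th=1.0, v_reset=0.0, R=1.0):
    """Mean firing rate (Hz) of a deterministic LIF for constant input I."""
    if R * I <= v_th:
        return 0.0  # subthreshold: the neuron never reaches threshold
    t_spike = tau * math.log((R * I - v_reset) / (R * I - v_th))
    return 1.0 / t_spike

fi_curve = [(I, lif_rate(I)) for I in (0.5, 1.1, 1.5, 2.0)]
```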
As one may notice, further experiments concerning the Least Squares Method are not performed, since it was already studied in [Silva et al. 2018]; its appearance in this section is summarized by the results obtained in that work. Also, neither the Extended Kalman Filter nor the Unscented Kalman Filter is tested in its 12-state form, which also estimates the inertia-tensor components. The main reason these methods were not tested is that they require an input torque, and the only available momentum-exchange devices, the reaction wheels and the magnetorquers, cannot provide the required torque magnitude. Magnetorquers are used in slower-dynamics experiments compared with reaction wheels, since they provide torques of low magnitude. On the other hand, the reaction wheels, which would be the best actuator candidate, also cannot provide an adequate torque level, taking into account the resolution of the gyroscopes in the IMU, which is 0.01 rad/s, and the signal-to-noise ratio of this sensor. To illustrate this, it is possible to estimate the reachable angular velocity of the testbed when the reaction wheels saturate at maximum velocity by applying the angular-momentum conservation principle, which in this case states that
At the beginning, it is usual to try to imagine something that no one has ever done before. Something we believe could solve most of the problems in our area. Then we realize that most of the ideas we have had were already thought of… probably years ago. This is why one is required to do an extensive review of the literature: to learn about all those previous ideas⁶ and try to focus on the specific problems yet to be solved. But how exactly are we supposed to build on top of those ideas? Perhaps in a branch such as mathematics⁷, one could point to others’ proofs and theories and then move on to our own. But in engineering, as the branch of science concerned with the design, building, and usage of structures, one must have those previous structures. And when they are not readily available, we are faced with a dilemma: either we restrict ourselves to building prototypes — proofs of concept — and leave the burden of integration to someone else, or we must get our hands “dirty” and start from scratch. Looking back, I cannot be absolutely sure the former was the right choice, given the available timespan, but it surely was the best way to make this work usable in settings beyond academia. Novelty should thus be regarded as advancing the state of the art, taking care to stand on the shoulders of giants.