Wireless body area networks (WBANs) have received considerable attention from both academia and industry due to the increasing need for ubiquitous computing in eHealth applications, continuous advances in the miniaturization of electronic devices, and ultra-low-power wireless technologies. In these networks, various sensors are attached to clothes or to the human body, or even implanted under the skin, for real-time health monitoring of patients in order to improve their independent daily lives. The energy constraints of sensors and the vital, large amount of data collected by WBAN nodes require powerful and secure storage, as well as a query processing mechanism that takes into account both real-time and energy constraints. This paper addresses these challenges and proposes a new architecture that combines cloud-based WBANs with statistical modeling techniques in order to provide a secure storage infrastructure and to optimize real-time user query processing in terms of energy minimization and query latency. Such a statistical model provides good approximate answers to queries with a given probabilistic confidence. Furthermore, the combination of the model with the cloud-based WBAN allows a query processing algorithm that uses the error tolerance and the probabilistic confidence interval as query execution criteria. The performance analysis and the experiments, based on both real and synthetic data sets, demonstrate that the new architecture and its underlying algorithm optimize real-time query processing to achieve minimal energy consumption and query latency, and provide a secure and powerful storage infrastructure.
Products are the electronic devices (e.g., machines). Field devices are sensors that gather data. Control devices are the main components of a machine, such as PLCs. Stations analyse data in real time to monitor the system. Work centers may control the production state, requesting new components and restoring the production goals. Enterprise refers to business management, including planning and control, statistics, marketing, and sales. Connected world is the communication between stakeholders, making information sharing possible (Carvalho, 2018).
Innovative green tools for monitoring degradation processes are currently in the spotlight. Sensors, as compared to traditional sampling and analysis procedures, can provide a fast response on the output data in a continuous, safe, and cost-effective way, and may therefore play a role in monitoring contaminants' dynamics. Among sensors, the electronic tongue (e-tongue) is gaining special attention for liquid matrices. An e-tongue is a multi-sensory system formed by an array of sensors with low-selectivity thin-film or sensorial layers, combined with advanced mathematical procedures for signal processing based on pattern recognition and/or multivariate data analysis. E-tongues have proved to be suitable devices for monitoring aqueous environmental matrices contaminated with EOCs [22,23]. Some examples are Campos et al. (2012), who developed a voltammetric e-tongue (a set of noble (Au, Pt, Rh, Ir, and Ag) and non-noble (Ni, Co, and Cu) electrodes) for the prediction of concentration levels of soluble chemical oxygen demand, soluble biological oxygen demand, ammonia, orthophosphate, sulphate, acetic acid, and alkalinity in influent and effluent wastewater. Years later, Cetó et al. (2015) used a voltammetric bio e-tongue for the simultaneous monitoring of catechol, m-cresol, and guaiacol mixtures in wastewater. Thus, one of the most interesting aspects motivating the development of e-tongues is their potential for real-time parallel monitoring of multiple species and multi-analyte determination in a single sample analysis [26,27]. The working electrodes in the e-tongue array can be covered with films (coatings), which improves the sensitivity of the electrical measurements. The ability to tune the composition of nanostructured thin films allows for an improvement in the sensor's intrinsic (chemical or physical) properties for sensing applications. The layer-by-layer (LbL) nano-assembly technique, where the
To meet the present stringent emission norms, spark ignition engines employ electronically controlled fuel injection systems, generally termed Electronic Fuel Injection (EFI) systems. The fuel is injected into the throttle body or into the inlet manifold through an electronic fuel injector, which is controlled by an Electronic Control Unit (ECU). The quantity of fuel injected plays a vital role in the performance and emission characteristics of spark ignition engines. This paper deals with the static and dynamic fuel injection characteristics of two gasoline fuel injectors. The effect of different injection parameters, such as fuel injection pressure, injection duration, supply voltage to the injector, and engine speed, on the quantity of fuel injected has been studied for the two injectors. The injection dead time and its variation with respect to fuel pressure and supply voltage to the injector have been analyzed. Based on the analysis of the results, an empirical formula has been obtained to determine the dynamic fuel injection quantity from the static fuel injection characteristics, and its predictions were compared with the measured values. It is found that the empirical formula developed in this work gives reasonably good results and can therefore be used with confidence for predicting the dynamic characteristics of any given injector from its static injection characteristics.
The metrics observed in Table 1 are capacity and execution latency, extracted using Storm's UI daemon. The former refers to the processing capacity of the bolts deployed in the Storm topology: the closer the value is to "1.0", the closer the bolt is to running as fast as it can, which is useful to verify whether the parallelism of the topology needs to be adjusted. The latter refers to how long each tuple takes on average (in milliseconds) to be executed by the respective bolt. The Logger Bolt was excluded because, during testing, it only quickly logged the model updates, so its metrics were very close to zero and not relevant for comparison. An additional metric extracted was the complete topology latency, i.e. the time it takes each tuple, on average, to be fully processed and acknowledged by the entire topology. Its values were 11.452 ms, 13.441 ms and 16.222 ms for 75, 150 and 300 agents, respectively. These results will be discussed in Section 6.
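The capacity metric described above can be reproduced from the raw bolt statistics. A minimal sketch, assuming Storm's documented formula (executed count times average execute latency, divided by the measurement window, which defaults to 10 minutes); the function name and sample numbers are illustrative:

```python
def bolt_capacity(executed: int, avg_execute_latency_ms: float,
                  window_ms: int = 600_000) -> float:
    """Approximate Storm's UI 'capacity' metric: the fraction of the
    measurement window the bolt spent executing tuples. Values near 1.0
    mean the bolt is saturated and its parallelism may need adjusting."""
    return (executed * avg_execute_latency_ms) / window_ms

# e.g. a bolt that executed 50,000 tuples at 6 ms each over the
# default 10-minute window is at 50% capacity
print(bolt_capacity(50_000, 6.0))  # → 0.5
```

This makes explicit why capacity, not latency alone, signals the need for more executors: a slow bolt that rarely receives tuples still has low capacity.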
In Germany the situation with our cooperation partner Ericsson is different. Ericsson operates the CML network for T-Mobile, but does not store the CML performance reports. Hence, in our initial approach, we used small data loggers directly at the CML towers to record the RX-level, which is available at the analog output of the automatic gain control (AGC) of older hardware (Chwala et al., 2012). The TX-level was constant
of such predominant importance in our species that elaborate symbolic mechanisms have emerged to support the execution of this function. The evolutionary processes have partly been biological and partly cultural. Eccles (in Popper and Eccles, 1987) paints a picture of divergent specialization between the two hemispheres of the brain, according to which the 'minor' (usually the right) hemisphere plays roles central to run-time problem-solving, involving pattern-handling and spatial and social orientation. Yet this hemisphere almost wholly lacks capabilities of symbolic reasoning, notably those associated with language and logic. The dominant (usually left) hemisphere, by contrast, not only
fluently handles the decipherment of linguistic and logical expressions, but is also the clearing-house for reports on subgoal attainment during problem-solving. Eccles argues that, although 'consciousness' is also manifested by the right hemisphere in the sense of a diffuse awareness, the focussed and organized forms of goal-oriented awareness which we associate with 'self' are functions of the left brain. More recently the possibility has been aired in neurobiological circles (see Benjamin Libet's observations and associated discussion in Behavioural and Brain Sciences, 1988-89) that the seat of consciousness acts more as a news room than as a planning headquarters, putting a coherent retrospective gloss on the consequences of decision. The decisions themselves, in this model, emanate from activities localized elsewhere. An elaboration of this view has recently been developed by Dennett (1992). Whatever the neural nature of functions (1) and (2) above, modern brain science sees them as operationally and topographically distinct. In such a view, the mechanisms of (2) face a serious problem. Modules specialized to symbolic reporting must interface with dissimilar, even alien, architectures if explanations of the 'self's problem-solving decisions are to be generated. When required to support the more intuitive field of real-time skills, the brain's explanation module tends to fail, or resorts, when pressed by the dialogue-elicitation specialist, to confabulation.
initiated, by injecting McCoy's 5a modified complete growth medium at a flow rate that avoids wash-out of the cells while allowing for sufficient medium renewal, as seen in Figure 1-B. Once 70-80% cell confluence is reached (ca. 48 h), a suspension of fluorescently labeled phages is injected into the microfluidic chip. As a model, an M13 bacteriophage genetically engineered to display a VHH-based antibody, anti-CXCR4, was used, owing to its binding to the CXCR4 chemokine receptor expressed by the tumor cells. As a negative control, a VCSM13 helper phage was used, since it is not genetically engineered to display any protein conferring specificity for this cell line; no fluorescence was registered for this control. By monitoring the fluorescence over time it is possible to calculate the flux of phages captured by the cells under these conditions and to estimate the total number of phages captured, as seen in Figure 2. Under these fluidic conditions, and for the duration of the experiment, it is safe to assume that all phages were captured.
Over the last 20 years, the amount of stored data amenable to processing has been increasing in very diverse areas. This explosive growth, together with the opportunities that arise as a consequence of it, led to the emergence of the term Big Data. Big Data essentially covers large volumes of data, possibly with little structure and requiring real-time processing. These characteristics have raised challenges across the various tasks of the typical data-processing pipeline, such as acquisition, storage, and analysis. The ability to perform these tasks efficiently has been the subject of study by both industry and the academic community, opening doors to value creation. Another area of notable evolution is the use of behavioural biometrics, which has become increasingly prominent in different scenarios, such as healthcare or security. One of the goals of this work is the management of the data-processing pipeline of a large-scale application in the area of behavioural biometrics, making it possible to obtain real-time metrics on the data (enabling its monitoring) and to automatically classify records of fatigue in human-machine interaction (at large scale).
All 5 extracts corresponding to the refining steps (oil samples) also tested positive by real-time PCR for the lectin gene, confirming the previous results obtained in the qualitative PCR. These results contradict other authors, who demonstrated that only the neutralized phase presents traces of DNA when high quantities of neutralized oil (> 250 g) are used. It was also possible to obtain positive results for the amplification of the specific RR construct in crude, neutralized and deodorized oils. These results are in complete agreement with the previous qualitative PCR assays (Figure 3A). The RR amplification for the neutralized oil was below the limit of quantification, which is why the %GM for this sample was not obtained. The samples corresponding to the washed and bleached oils gave negative results, as in the previous PCR amplifications, probably due to the higher level of DNA degradation caused by the instability of the samples when stored prior to analysis. The crude and deodorized oils also showed high proportions of RR soybean, 83.9% and 61.4%, respectively. These findings have never been reported before and represent a great achievement for the detection of GMOs in vegetable oils.
Many systems have been proposed for connecting sensors and sensor networks. We will use the Sensor Andrew middleware as the basis of our system, and we intend to build a system that facilitates the sharing of, and access to, sensor data. More specifically, we want the system to be able to: manage devices, i.e. register sensors and actuators, allowing persons or corporations to monitor their devices and define who has access to the information; present data to users in the form of charts, giving a better interpretation and meaning to the data; support groups, allowing sets of permissions and accesses; record sensor data when requested, enabling long-term monitoring; and support user policies, which verify certain sensor conditions and execute a set of actions, including calls to remote functions in actuators.
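The user policies mentioned last can be pictured as condition-action rules evaluated on incoming sensor readings. A minimal sketch, not part of Sensor Andrew itself; the `Policy` class, the threshold rule, and the fan actuator are all illustrative assumptions:

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Policy:
    """A user policy: when `condition` holds for a sensor reading,
    run all registered actions (e.g. remote calls to actuators)."""
    condition: Callable[[float], bool]
    actions: List[Callable[[float], None]] = field(default_factory=list)

    def evaluate(self, reading: float) -> bool:
        """Return True (and fire the actions) if the condition matched."""
        if self.condition(reading):
            for action in self.actions:
                action(reading)
            return True
        return False

# Illustrative use: turn on a fan when temperature exceeds 30 °C
triggered = []
fan_on = lambda r: triggered.append(f"fan_on at {r} °C")
policy = Policy(condition=lambda t: t > 30.0, actions=[fan_on])
policy.evaluate(25.0)   # condition false: no action fired
policy.evaluate(31.5)   # condition true: fan_on fires
```

A real deployment would replace the lambda actuator with the middleware's remote-call mechanism, but the condition-action shape stays the same.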
cost). This led to a trend towards integrating separate systems as subsystems of a more complex mixed-criticality system on a common computing platform. Such a system features the coexistence of different classes of real-time requirements (SRT and HRT) and subsystems that may be developed by different teams with different levels of assurance. The classical approach, often called federated, was to host each of these subsystems in separate communicating nodes with dedicated resources, with the consequent added weight and cost of computing hardware and cables. The added complexity of the system propagates to its development, testing, validation and maintenance activities. Designing such complex systems around the notion of component, thus allowing component-based analysis, brings several benefits, some specific to real-time systems (Lipari et al., 2005; Lorente et al., 2006). In the case of different classes of real-time, the advantages of keeping the SRT and HRT parts of the system logically separated (and analyzing them as such) are twofold. On the one hand, separate analysis allows fulfilling the HRT requirements of such components without imposing unnecessary pessimism on the analysis of the SRT components. On the other hand, with appropriate design considerations, the tardiness permitted to the SRT components will not void the timeliness of the HRT components (Abeni & Buttazzo, 1998). One such design approach is time and space partitioning (TSP). Each component is hosted in a logical separation and containment unit, called a partition. In a TSP system, the various onboard functions are integrated in a shared computing platform while being logically separated into partitions. Robust temporal and spatial partitioning means that partitions do not mutually interfere in terms of fulfillment of real-time requirements and addressing-space encapsulation.
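Temporal partitioning of this kind is commonly realized as a cyclic major frame of fixed, non-overlapping partition windows. A minimal sketch of expanding such a frame into a timeline (the partition names and window lengths are illustrative, not taken from any particular system):

```python
from typing import List, Tuple

def build_timeline(frame: List[Tuple[str, int]]) -> List[Tuple[int, int, str]]:
    """Expand a major frame, given as (partition, window-length-ms) pairs,
    into (start, end, partition) slots. Windows are laid out back to back
    and never overlap, which is the temporal-partitioning guarantee: an
    SRT partition overrunning its work cannot steal an HRT window."""
    t, timeline = 0, []
    for name, duration_ms in frame:
        timeline.append((t, t + duration_ms, name))
        t += duration_ms
    return timeline

# Illustrative frame: an HRT control partition interleaved with SRT telemetry
frame = [("HRT_control", 20), ("SRT_telemetry", 10), ("HRT_control", 20)]
print(build_timeline(frame))
# → [(0, 20, 'HRT_control'), (20, 30, 'SRT_telemetry'), (30, 50, 'HRT_control')]
```

The frame then repeats indefinitely; spatial partitioning (address-space isolation) is enforced separately by the platform and is not modeled here.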
not be computationally tractable. Consequently, we apply computational methods that iteratively search for a "better" solution according to a given strategy. Among the many metaheuristics widely used across scientific and application domains, we decided in favour of simulated annealing by Kirkpatrick et al., which is very popular for tackling combinatorial problems. Inspired by the annealing technique in metallurgy, simulated annealing attempts to replace the current solution of the problem with another candidate solution (often randomly obtained) at each iteration. A candidate solution that improves on the current one is always accepted. However, occasionally the algorithm will also accept a "worse" candidate solution, with a probability given by an acceptance function. This function takes as parameters a variable T (also called "the temperature") and the difference between the utilities of the current solution and the candidate solution. Higher temperatures and a lower reduction in utility make it likelier that such a candidate solution will be accepted. Occasionally accepting "worse" solutions helps avoid the pitfall of getting stuck at a local optimum of the optimization problem. As the iterations proceed, T is decreased according to a given "annealing schedule".
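The acceptance rule described above can be made concrete. A minimal sketch minimizing a toy objective; the objective, the neighbour move, and the geometric cooling schedule are illustrative choices, not the paper's actual problem:

```python
import math
import random

def simulated_annealing(energy, neighbour, x0, t0=10.0, cooling=0.95, iters=2000):
    """Generic simulated annealing: always accept an improving candidate;
    accept a worse one with probability exp(-delta / T), where delta is
    the increase in energy and T the current temperature, which shrinks
    geometrically (the "annealing schedule")."""
    random.seed(42)                      # deterministic for the example
    current, best = x0, x0
    t = t0
    for _ in range(iters):
        cand = neighbour(current)
        delta = energy(cand) - energy(current)
        if delta <= 0 or random.random() < math.exp(-delta / t):
            current = cand
            if energy(current) < energy(best):
                best = current           # keep the best solution ever seen
        t *= cooling
    return best

# Toy problem: minimize x^2 starting far from the optimum at 0
best = simulated_annealing(lambda x: x * x,
                           lambda x: x + random.uniform(-1, 1),
                           x0=50.0)
print(best)  # much closer to 0 than the starting point
```

Note that the text's "utility" (to be maximized) is expressed here as an energy to be minimized; the two formulations are equivalent up to a sign.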
Abstract The number of traffic accidents in Brazil has reached alarming levels, and is currently one of the leading causes of death in the country. With the number of vehicles on the roads increasing rapidly, these problems will tend to worsen. Consequently, huge investments in resources to increase road safety will be required. The vertical R-19 system for optical character recognition of regulatory traffic signs (maximum speed limits) according to Brazilian standards developed in this work uses a camera positioned at the front of the vehicle, facing forward, so that images of traffic signs can be captured, enabling the use of image processing and analysis techniques for sign detection. This paper proposes the detection and recognition of speed limit signs based on a cascade of boosted classifiers working with Haar-like features. The recognition of the detected sign is achieved with the Optimum-Path Forest classifier (OPF), Support Vector Machines (SVM), Multi-layer Perceptron (MLP), k-Nearest Neighbor (kNN), Extreme Learning Machine (ELM), Least Mean Squares (LMS), and Least Squares (LS) machine learning techniques.
Considering the evolution of communication, there is a possibility of interacting with, and acquiring information from, groups of people who are passing through or living in risk areas. Data can be collected in alternative ways to help address a crucial problem in flood forecasting models: the availability of water level and rainfall gauges at all points of interest. Citizen Science is a very broad concept that has been applied in several areas; its core meaning is usually the involvement of citizens in the collection of data and knowledge for scientific research. The concept embraces both the collection of scientific data with the active participation of volunteers in research hypotheses and issues, and less active roles such as acting as "human sensors" (Roy et al., 2012). Goodchild (2007) coined the term VGI (Volunteered Geographic Information) for digital geospatial data generated by common citizens. Leyh et al. (2017) emphasise that VGI is one of the practices of Citizen Science, where information provided by volunteers must necessarily be georeferenced. Leyh et al. (2016) raised an important question about how collaborative data can be distributed to be used as raw data or as input for hydrological models. Data used in disaster prevention should presumably be updated as soon as possible and be easily available; the internet is considered the most efficient way to make these data available quickly (Rathore et al., 2016).
As seen previously, the concept of time is tightly interwoven with Complex Event Processing systems. This causes problems when processing data in real time because, in practice, the objects that create the events (for example, sensors) may have communication problems, causing events to arrive at the Complex Event Processing system with wrong payloads and out of order (i.e., not ordered by time) [JAF+06]. The latter causes problems for event-processing systems, forcing the system to wait a certain amount of time for the arrival of possibly delayed events. However, the longer the system waits, the higher its latency, which may remove the advantage of the real-time processing that Complex Event Processing systems provide. There is therefore a trade-off between having a more correct answer (because the system waits longer for out-of-order events) and having a faster answer.
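The wait-for-late-events trade-off can be sketched as a reordering buffer that holds events until a watermark (the newest timestamp seen minus a maximum allowed delay) passes them; the `max_delay` knob is exactly the latency-versus-correctness trade-off described above. A minimal illustrative sketch, not tied to any particular CEP engine:

```python
import heapq
from typing import Any, List, Tuple

class ReorderBuffer:
    """Buffer out-of-order events and release them in timestamp order once
    the watermark (max seen timestamp - max_delay) has passed them.
    A larger max_delay tolerates later events (more correct output) but
    adds latency; a smaller one answers faster but may drop late events."""
    def __init__(self, max_delay: float):
        self.max_delay = max_delay
        self._heap: List[Tuple[float, Any]] = []   # min-heap on timestamp
        self._max_ts = float("-inf")

    def push(self, ts: float, payload: Any) -> List[Tuple[float, Any]]:
        """Add an event; return whatever the watermark now allows out."""
        heapq.heappush(self._heap, (ts, payload))
        self._max_ts = max(self._max_ts, ts)
        watermark = self._max_ts - self.max_delay
        released = []
        while self._heap and self._heap[0][0] <= watermark:
            released.append(heapq.heappop(self._heap))
        return released

buf = ReorderBuffer(max_delay=5)
buf.push(10, "a")          # nothing released yet
buf.push(8, "late")        # arrived out of order, still buffered
print(buf.push(16, "b"))   # watermark 11 releases (8, 'late') then (10, 'a')
```

Production engines use essentially this idea under names such as watermarks or allowed lateness, usually with an extra rule for events that arrive after their watermark has already passed.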
For total microbes: F-5’-CGG CAA CGA GCG CAA CCC-3’ and R-5’-CCA TTG TAG CAC GTG TGT AGC C-3’ (Denman & McSweeney, 2006); for Lactobacillus: F-5’-CAT CCA GTG CAA ACC TAA GAG-3’ and R-5’-GAT CCG CTT GCC TTC GCA-3’ (Wang et al., 1996); for Escherichia coli: F-5’-GTG TGA TAT CTA CCC GCT TCG C-3’ and R-5’-AGA ACG CTT TGT GGT TAA TCA GGA-3’ (Frahm & Obst, 2003); for the Enterococcus genus: F-5’-CCC TTA TTG TTA GTT GCC ATC ATT-3’ and R-5’-ACT CGT TGT ACT TCC CAT TGT-3’ (Rinttila et al., 2004); and for Enterobacter: F-5’-CAT TGA CGT TAC CCG CAG AAG AAG C-3’ and R-5’-CTC TAC GAG ACT CAA GCT TGC-3’ (Bartosch et al., 2004). Real-time PCR was performed on a BioRad CFX96 Touch (BioRad, USA) using optical-grade plates. The PCR reaction was performed in a total volume of 25 µL using the iQ™ SYBR Green Supermix (BioRad, USA). Each reaction included 12.5 µL SYBR Green Supermix, 1 µL of each primer, 1 µL of DNA sample and 9.5 µL H2O. The reaction conditions for DNA amplification were 94°C for 5 min, followed by 40 cycles of 94°C for 20 s; 55°C, 58°C, or 60°C for 30 s (for total microbes, Lactobacillus, and the other bacteria, respectively); and 72°C for 20 s. To confirm the specificity of amplification, melting curve analysis was carried out
First of all, I would like to thank my supervisor, Prof. Dr. António Abelha, for his availability, encouragement, and guidance throughout the development of a project as important to me as this dissertation. Similarly, I would like to thank Prof. Dr. José Machado for his confidence and patience over the year. I want to express my most sincere gratitude for every opportunity and piece of knowledge provided, indispensable to broadening my experience, but above all for giving me the freedom to explore and learn from my own decisions. I am indebted to all the clinicians of the Gabinete Coordenador de Colheita e Transplantação at Centro Hospitalar do Porto, Hospital de Santo António, EPE, who kindly shared their valuable time, insights and knowledge, and without whom this dissertation could not have been concluded. At each stage of this project, whether at the very beginning or in the final phase, every meeting was relevant in discussing and providing valuable input so the project could move forward.
As a discrete state space is needed for the compositional process and the possible value interval is large for some of these tables, measures with many possible values need to be segmented into a more manageable number of states. Otherwise, a rather useless sparse matrix would be created if, for example, 20 recorded notes were added to a table with 128 possible states, as is the case with velocity or pitch. With that in mind, the duration values were segmented into 200-millisecond-wide blocks which, unless the piece presents some unusually long notes, reduces the state space to a more reasonable average of 15 states. Such segments provide a middle ground between a large sparse matrix and large bins that would bundle very different notes into the same category. Following a similar logic, the velocity transition matrix was segmented into 8-integer-wide states (recall that velocity ranges from 0 to 127), but since velocity value 0 is meaningless in this context, only 15 states are needed. Since there is a static 12-state progression table to help the generative process, making bins out of pitch values makes no musical sense, and a common MIDI controller such as the one used for development only has 61 to 88 keys, so the pitch transition matrix was left complete with all possible states. Note also that a piece of music generally uses only a small subset of these states. In sum, the following probability tables are considered adequate:
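The binning described above can be sketched directly. Bin widths follow the text (200 ms for duration, 8 integers for velocity); the helper names and the transition-count accumulator are illustrative:

```python
from collections import defaultdict

def duration_bin(duration_ms: int, width_ms: int = 200) -> int:
    """Map a note duration to a 200 ms wide state."""
    return duration_ms // width_ms

def velocity_bin(velocity: int, width: int = 8) -> int:
    """Map MIDI velocity (1-127; 0 means note-off and is excluded)
    to an 8-integer-wide state."""
    return (velocity - 1) // width

def transition_counts(states):
    """Accumulate first-order transition counts between successive states;
    normalizing each row would yield the probability tables mentioned."""
    counts = defaultdict(lambda: defaultdict(int))
    for a, b in zip(states, states[1:]):
        counts[a][b] += 1
    return counts

durations = [180, 430, 390, 950]          # note durations in ms
states = [duration_bin(d) for d in durations]
print(states)                             # → [0, 2, 1, 4]
print(dict(transition_counts(states)[2])) # → {1: 1}
```

Because only states actually observed get rows, the resulting tables stay compact even though the nominal state space (e.g. 128 pitches) is large, which is the sparsity point made above.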