Towards the optimization of these platforms, nanostructured materials have been explored [28,75]. One widely applied example is the nanowire (NW), which operates like a typical FET-based sensor, except that the analyte affects the current flow through the whole diameter of the NW rather than just at the surface, as is the case with planar FETs, rendering NWs far more sensitive. The main disadvantage of nanowire-based oligonucleotide sensors is that they are complex and expensive to fabricate. However, nanowire-based FED devices are highly sensitive, promise fast response times, and can be fabricated in arrays for multiplexed detection in small-volume platforms. Nanowire sensors are most commonly fabricated from silicon, but other materials have been gaining momentum, e.g., gold, gallium nitride, graphene oxide and carbon nanotubes [29–38,75–77].
In pH sensing, whether with bare pH sensors or pH-based enzyme-functionalized sensors, the Debye length is of little importance, since the diminutive size of hydrogen ions allows them to easily reach the sensor surface and react chemically with amphoteric sites at the oxide’s surface, thus changing the surface potential. Consequently, the contribution of the electrochemical double layer is constant, and the generally accepted detection mechanism for field-effect pH sensing is mainly described by the site-binding theory. However, the field-effect detection of charged biomolecules cannot be treated as adsorbed charge in the site-binding model because of their large volume, so many additional factors need to be considered. In fact, the detection mechanism is not yet completely clear and, due to the complexity of the variables involved, a complete model for the field-effect detection of biomolecules based on their intrinsic charge is still lacking (Landheer 2005; Poghossian 2005; Shinwari 2007).
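As general background (the standard site-binding/double-layer formulation, stated here for reference rather than taken from the cited works), the pH sensitivity of the surface potential predicted by the site-binding model is commonly written as

$$\frac{\partial \psi_0}{\partial \mathrm{pH}_B} = -2.303\,\frac{kT}{q}\,\alpha, \qquad \alpha = \left(\frac{2.303\,kT\,C_{DL}}{q^{2}\,\beta_{int}} + 1\right)^{-1}$$

where $\psi_0$ is the oxide surface potential, $\mathrm{pH}_B$ the bulk pH, $\beta_{int}$ the intrinsic buffer capacity of the amphoteric surface sites, and $C_{DL}$ the double-layer capacitance; for $\alpha \to 1$ the response approaches the Nernstian limit of about 59 mV/pH at 298 K.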
Abstract: The electromechanical impedance (EMI) technique is considered to be one of the most promising methods for developing structural health monitoring (SHM) systems. This technique is simple to implement and uses small and inexpensive piezoelectric sensors. However, practical problems have hindered its application to real-world structures, and temperature effects have been cited in the literature as critical problems. In this paper, we present an experimental study of the effect of temperature on the electrical impedance of the piezoelectric sensors used in the EMI technique. We used 5H PZT (lead zirconate titanate) ceramic sensors, which are commonly used in the EMI technique. The experimental results showed that the temperature effects were strongly frequency-dependent, which may motivate future research in the SHM field.
Nowadays, printed electronics (PE) has become a hot topic, since there is strong interest in replacing conventional silicon-based technology with simpler processing techniques in some low-cost consumer electronics. Comparing the two technologies (conventional and PE), there are some clear advantages for the latter, related to factors such as reduced fabrication costs associated with the manufacturing processes, less wasted material, and the benefits of the individual printing techniques (flexographic, screen, rotary-screen, inkjet and offset printing). From these advantages other opportunities arise, such as the use of flexible substrates (e.g. polyethylene terephthalate (PET) or even paper) to design ultra-cost-effective, flexible, thin and environmentally friendly electronic devices. Additionally, they enable large-area printing on flexible substrates by roll-to-roll (R2R) processes, surpassing, in terms of throughput, the standard silicon wafer technologies available. Apart from that, printing techniques significantly reduce the complexity of the manufacturing process, since lithography and etching steps are no longer necessary for circuit production. Regarding market needs, IDTechEx Research predicts that conductive inks and pastes will reach gross market revenues of $2.8 bn in 2020 and $3 bn in 2025, respectively.
Notwithstanding the positive response behavior of the thin-film sensors in the presence of TCS molecules in different aqueous matrices, both individually and in an array, which indicates that we are on the right track for TCS detection, there is a need to evaluate the stability of the thin films in order to confirm that the positive results are due to the presence of TCS in the aqueous matrix and not to the loss of polyelectrolyte molecules from the films by desorption. It is known that LbL film stability is related to the electrostatic interactions, on the order of one hundred kJ/mol, between the oppositely ionized groups of cationic and anionic polyelectrolyte molecules [39,40]. Electrostatic interactions therefore play a vital role in the adsorption of these molecular bilayers, but they are also strongly pH-dependent, since the degree of ionization of each polyelectrolyte is a function of pH [39,40]. Bearing in mind the goal of developing a sensor dedicated to detecting a specific molecule in complex media that may present different pH values, the electrostatic interactions must be taken into account. Moreover, the salts and/or other elements present in the environmental matrices change the degree of ionization of the LbL films and of the molecules involved, thereby affecting the electrical properties (e.g. impedance) of the sensor. Additionally, the ionic elements present in the matrices can be adsorbed onto the polyelectrolyte layers, changing the electrical properties of the sensor. Thus, in order to ascertain which sensor holds the best features for the detection of TCS, with no loss/desorption of thin-film layers and no irreversible adsorption of TCS onto the thin film, the adsorbed amount on the different thin films was analyzed before and after immersing them in the different TCS aqueous matrices. The adsorbed amount per unit of area can be easily es-
Field Of View (FOV) scalability – To support the richer and more flexible interaction functionalities that arise in LF imaging applications, a new scalability concept, named FOV scalability, and a novel Field Of View Scalable Light Field Coding (FOVS-LFC) solution are proposed. Taking advantage of the 4D radiance distribution, FOV scalability progressively supports richer forms of the same LF content by hierarchically organizing the angular information of the captured LF data. More specifically, the base layer contains a subset of the LF raw data with a narrower FOV, which can be used to render a 2D version of the content with very limited rendering functionalities. On top of the base layer, one or more enhancement layers are defined to represent the information necessary to obtain a more immersive LF visualization with a wider FOV. Therefore, this new type of scalability creates bitstreams adaptable to different levels of user interaction, allowing increasing degrees of freedom in content manipulation at each higher layer. This means that, for instance, a user who wants a simple 2D visualization only needs to extract the base layer of the bitstream, thus reducing the necessary bitrate and the required computational power. On the other hand, a user who wants to creatively decide how to interact with the LF content can promptly start visualizing and flexibly manipulating the LF content, even over limited-bandwidth connections, by extracting only the adequate bitstream subsets (which fit in the available bitrate). Additionally, this coding architecture enables easy support for quality scalability and Region Of Interest (ROI) coding. Exemplar-based Inter-Layer (IL) coding tools – To improve the efficiency when coding an enhancement layer, two novel inter-layer prediction schemes are also proposed: i) a direct IL prediction, and ii) an IL compensated prediction.
In the direct IL prediction, a set of samples from a previously coded layer is used as exemplar samples for estimating a good prediction block. Therefore, no further information about the predictor block used needs to be transmitted to the decoder side. The IL compensated prediction relies on an IL reference picture, which is constructed using samples from previously coded layers and a new exemplar-based algorithm for texture synthesis.
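To illustrate the exemplar-based idea behind the direct IL prediction, the sketch below shows a generic template-matching search under simplified assumptions (a rectangular template above the block, sum-of-absolute-differences matching); it is not the actual FOVS-LFC algorithm, only the general principle by which the decoder can repeat the same search and so needs no side information about the predictor position:

```python
def template_match_predict(ref, tmpl, block_h, block_w):
    """Generic exemplar/template-based prediction sketch: slide over the
    reference-layer picture `ref`, find the position whose template region
    best matches `tmpl` (minimum sum of absolute differences), and return
    the block of size block_h x block_w located just below it as the
    predictor. Since only reconstructed samples are used, the decoder can
    repeat this search and no predictor position is transmitted."""
    t_h, t_w = len(tmpl), len(tmpl[0])
    best_sad, best_pos = None, None
    for i in range(len(ref) - t_h - block_h + 1):
        for j in range(len(ref[0]) - t_w + 1):
            sad = sum(abs(ref[i + r][j + c] - tmpl[r][c])
                      for r in range(t_h) for c in range(t_w))
            if best_sad is None or sad < best_sad:
                best_sad, best_pos = sad, (i + t_h, j)
    bi, bj = best_pos
    return [row[bj:bj + block_w] for row in ref[bi:bi + block_h]]

# Toy "reference layer": every row is the horizontal gradient 0..7.
ref = [[float(c) for c in range(8)] for _ in range(8)]
tmpl = [[2.0, 3.0, 4.0, 5.0]]  # one-row template directly above the block
pred = template_match_predict(ref, tmpl, block_h=2, block_w=4)
```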
The counterfeiting and recycling of integrated circuits (ICs) have become major issues in recent years, potentially impacting the security and reliability of electronic systems bound for military, financial, or other critical applications. With identical functionality and packaging, it would be extremely difficult to distinguish recycled ICs from unused ICs. An existing Ring Oscillator (RO) based sensor, demonstrated with a 90 nm technology test chip, shows effective detection of recycled ICs. However, the RO-based sensor has difficulty identifying recycled ICs that were used for less than one month, and it incurs higher power and area overhead. To address these limitations, a Clock Anti-Fuse (CAF) based sensor is implemented to enhance the recognition of recycled ICs even if the IC was used for only a very short period. The CAF sensor is implemented in an FPGA to verify its effectiveness.
Abstract. In a recent paper de Paor put forward a new theory of the Earth’s magnetic field that depended on the Hall effect as an energy transfer mechanism. The purpose of this paper is to demonstrate that the mechanism invoked is unimportant except in certain gaseous plasmas.
Abstract: This paper first presents some definitions and classifications regarding fluidic elements. The general current status is presented, identifying the main specific elements based on the Coanda effect developed especially in Romania. In particular, the development of an original bistable element using compressed air at industrial supply pressure is presented. The operation of this element is based on the controlled attachment of the main jet to a curved wall through the Coanda effect. The methods used for the calculations and experiments are described. The main application of these elements was the development of a specific actuating element: a fluidic step-by-step motor based on the Coanda effect.
The number of places where indoor localization can be applied is increasing. Universities and airports are examples of public locations where navigation systems and location-based services can be of great utility. The PDR methodology applied on smartphones can be a great contribution in that area, since it does not require changes to the environment or any other technology. Even though the requirements for the proposed system to work reliably are strict, currently PDR mechanization appears to be the only choice for indoor positioning when RF-based methods are unavailable.
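A minimal sketch of the PDR mechanization mentioned above is a dead-reckoning position update per detected step (function and variable names are hypothetical; in practice the step length would come from accelerometer-based step detection and the heading from the magnetometer/gyroscope):

```python
import math

def pdr_update(x, y, step_length, heading_rad):
    """One PDR mechanization step: advance the position by one detected
    step of estimated length along the current heading (heading measured
    clockwise from north, in radians)."""
    x_new = x + step_length * math.sin(heading_rad)  # east component
    y_new = y + step_length * math.cos(heading_rad)  # north component
    return x_new, y_new

# Walk two 0.7 m steps due east (heading = 90 degrees).
x, y = 0.0, 0.0
for _ in range(2):
    x, y = pdr_update(x, y, 0.7, math.radians(90.0))
```

Heading and step-length errors accumulate with every update, which is why the text stresses that the reliability requirements for PDR are strict.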
Some of the teams that participated in the DARPA Urban Challenge implemented their own simulators, while others used already existing ones. Some of these in-house simulators had very practical features that are worth mentioning, as summarized in . For instance, the simulator used by Princeton's team allowed the user to test code in the simulator and then transfer it to the vehicle without needing to recompile it. MIT's simulator could play back data recorded in real-life test runs, but the simulated obstacles reflected perfect data during those recorded runs, something that actual sensors did not provide during real-life test runs. CarOLO also used their simulator to test new software implementations before adding them to the vehicle, as well as to confirm bugs found during real-world tests, something that previous teams also did. Further development of this simulator yielded a version in which multiple instances of their autonomous vehicle could be operated; in this way, their software could learn efficient driving behavior in an environment in which multiple traffic vehicles may exist. Additionally, different versions of code could be run from the same starting point, running the same mission file, in order to compare their performances. Tartan's simulator also had the ability to add virtual obstacles to a real-world environment during testing; the vehicle was thus led to think there were obstacles within its path, causing avoidance strategies to be executed, even though none actually existed. All the aforementioned features are very time-saving when simulating and testing, and most of them were not easy to implement. They were developed for these simulators because time was the most important factor during the DARPA Urban Challenge, since the teams only had a few hours to update and validate their code between events.
The first experimental attempt to verify the existence of the Casimir effect for two parallel metallic plates was made by Sparnaay  only ten years after Casimir's theoretical prediction. However, due to the very poor accuracy achieved in this experiment, only compatibility between experimental data and theory was established. One of the great difficulties was to maintain a perfect parallelism between the plates. Approximately four decades passed until new experiments were made directly with metals. In 1997, using a torsion pendulum, Lamoreaux  inaugurated the new era of experiments concerning the Casimir effect. Avoiding the parallelism problem, he measured the Casimir force between a plate and a spherical lens within the proximity force approximation . This experiment may be considered a landmark in the history of the Casimir effect, since it provided the first reliable experimental confirmation of this effect. One year later, using an atomic force microscope , Mohideen and Roy  measured the Casimir force between a plate and a sphere with better accuracy and established an agreement between experimental data and theoretical predictions within a few percent (depending on the range of distances considered). The two precise experiments mentioned above have been followed by many others, and an incomplete list of the modern series of experiments on the Casimir effect can be found in -. For a detailed analysis comparing theory and experiments, see [27, 28].
Structural Health Monitoring (SHM) methods are used to inspect structures and detect damage. One of the methods showing promising results is impedance-based Structural Health Monitoring (ISHM). This tool stands out as a versatile technique that can be easily applied to complex structures due to the portability of the instruments and the ease and speed of data processing, generating real-time information about the structure. The method is based on the electromechanical impedance (EMI) principle, which uses the interaction between the mechanical and electrical properties of a piezoelectric transducer attached to or embedded in a structure to measure impedance signatures. With these signatures, it is possible to qualitatively assess the presence of damage. To quantify the changes in the impedance signatures, mathematical functions called damage metrics are used. Despite the advantages of the impedance method, this technique still presents some drawbacks, such as the influence of temperature variations and of static and dynamic loads on the measurements, causing false positives in damage detection. Accordingly, this article focuses on the application of the impedance-based SHM method, carrying out two experiments on structures under external vibrations, static loads and temperature variations to analyze their influence on the impedance signatures. For this purpose, an aluminum beam was instrumented with a piezoelectric transducer. Furthermore, to evaluate the damage detection capability of the technique under various boundary conditions, damage was simulated in the structure; in this case, the nut and bolt were removed from the beam.
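To illustrate the damage metrics mentioned above, a common choice in impedance-based SHM is the RMSD (root-mean-square deviation) between a baseline impedance signature and a new measurement; the signatures below are illustrative values, not data from the experiments described here:

```python
import math

def rmsd(baseline, measured):
    """RMSD damage metric: root-mean-square deviation between a baseline
    impedance signature and a new measurement, normalized by the baseline
    (a common form in impedance-based SHM)."""
    num = sum((m - b) ** 2 for b, m in zip(baseline, measured))
    den = sum(b ** 2 for b in baseline)
    return math.sqrt(num / den)

# Illustrative signatures (real part of impedance over a frequency band).
baseline = [100.0, 102.0, 98.0, 101.0]
healthy  = [100.1, 101.9, 98.2, 100.8]   # small deviations: no damage
damaged  = [104.0, 95.0, 103.0, 96.0]    # large deviations: possible damage
```

A measurement is flagged as potential damage when its metric exceeds a threshold calibrated on healthy-state repeatability, which is exactly where temperature and load variations cause the false positives discussed above.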
For discrete-variable structural problems, a variety of methods, including simulated annealing, can be used [14,23,24]. As pointed out by Correia et al. [24,25], among others, the main advantage of this method, in comparison with gradient-based methods, is its ability to avoid premature convergence towards a local optimum. On the other hand, its main disadvantage is the computational cost, because of the high number of objective function evaluations usually required to reach the optimal solution, which is especially relevant when each objective function evaluation is computationally expensive. The implemented simulated annealing procedure employs a random search that generates feasible sets of design variables, accepting not only changes in the design variables that decrease the objective function but also changes that increase it; the latter changes are accepted with a certain probability. The basic functioning of the simulated annealing algorithm can be easily described as follows [16,23]:
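The acceptance rule just described (always accept improving moves; accept worsening moves with a temperature-dependent probability) can be sketched as below. This is a generic minimal implementation on a toy continuous problem, not the procedure of [16,23]; all names are illustrative:

```python
import math
import random

def simulated_annealing(cost, neighbor, x0, t0=1.0, cooling=0.95,
                        iters=2000, seed=0):
    """Minimal simulated annealing: always accept moves that decrease the
    objective; accept worsening moves with probability exp(-delta/T),
    where T decreases geometrically (the cooling schedule)."""
    rng = random.Random(seed)
    x, fx = x0, cost(x0)
    best, fbest = x, fx
    t = t0
    for _ in range(iters):
        y = neighbor(x, rng)
        fy = cost(y)
        delta = fy - fx
        # Metropolis acceptance criterion.
        if delta <= 0 or rng.random() < math.exp(-delta / t):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x, fx
        t *= cooling
    return best, fbest

# Toy problem standing in for a discrete design-variable search:
# minimize (x - 3)^2 starting from x = 10.
cost = lambda x: (x - 3.0) ** 2
neighbor = lambda x, rng: x + rng.uniform(-1.0, 1.0)
best, fbest = simulated_annealing(cost, neighbor, 10.0)
```

Accepting occasional worsening moves at high temperature is what lets the search escape local optima, at the price of the many objective evaluations noted in the text.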
A recovery algorithm was implemented in the GSM library to prevent the firmware from getting stuck indefinitely while waiting for a GSM link. Another issue, which occurred in some iSensA real-world deployments, was caused by signal loss in the cabling between sensors and controllers. After thorough electrical analysis and several tests, it was concluded that this problem had two main causes. The first was long cables; this was solved by decreasing the pull-up resistors used in the sensor connections, thus allowing the controllers to receive the correct values from the sensors. The second resulted from cables installed in noisy environments, near pumps, electrical links, etc. In such situations, the problem was solved by using shielded cables to connect the sensors to the controllers.
We investigated the effect of adding a field-dependent recombination process, namely field-enhanced trapping, to the generation-recombination processes of charge carriers that model current oscillations in semiconductors. The main new features arising from this modification are identified in bifurcation diagrams with the electric field as the control parameter. The character of the bifurcation diagrams is a function of the impurity energy. Thus, we generated a set of bifurcation diagrams over a range of impurity energies and applied biases. The energy dependence of the bifurcation diagrams is discussed in the context of the competition between the generation-recombination mechanisms of impact ionization and field-enhanced trapping.
coding efficiency also increases. This is especially noticeable when using more than one MI for training. However, increasing the order does not always result in higher bitrate savings. This occurs because higher LSP orders require larger areas of reconstructed pixels for LSP training. The size of the available training area grows from the first frames to the last ones, affecting the quality of the training step. Thus, higher LSP orders are more efficient only at later stages of the coding process, while a lower order may be beneficial from an earlier stage of the coding process. Higher prediction orders and larger training areas also have a negative impact (i.e., an increase) on the computational complexity. In Table 6, the best three LSP-based prediction methods in terms of bitrate savings vs. computational complexity are shown in bold (LSP3, LSP5 and LSP7, using 5 MIs for training). These modes were included in HEVC-HR. LSP9 modes were excluded due to their high computational complexity.
the pineal gland are under the control of visible light. In this study, a pulsed electromagnetic field (PEMF) was used to relieve the effects caused by microwaves. Following the experiments of DiCarlo et al 26 on chick embryos exposed to radiation, the protection conferred by the magnetic fields appears to influence free radical scavenging. PEMF has a variety of biological effects, such as its effects on bone healing, 27,28 pain relief, 29 and the balance of the neuroendocrine system (including hormone production and melatonin levels). 30 PEMF frequencies lie at the lower end of the electromagnetic spectrum (from 6 Hz to 500 Hz). In this study, we demonstrate that PEMF can be used as a healing agent that activates the normal physiology of the body. PEMF activates normal metabolic processes, acts indirectly through the endocrine system, and controls the main cause of stress in the exposed system. We treated rats that had previously been exposed to 2.45 GHz radiation with PEMF. As biomarkers of this treatment, we chose the following parameters: melatonin, creatine kinase activity, caspase assay, and testosterone.
deposited on the medium surface. The plates were kept in an incubator at 25 °C, in the dark, for 8 h. Conidial germination (presence of a germ tube greater than the spore diameter) was determined visually using a microscope at 100× magnification. In each replicate, 100 spores were counted per concentration. Results were expressed as the relative germinated spores (RGS) when compared with the control. For each concentration/fungicide, the RGS of an isolate i was calculated as RGS = ((GSc - GSi) / GSc) × 100, where GSc = germinated spores for the control (no fungicide added) and GSi = germinated spores of the isolate grown on medium amended with the fungicide. For each replicate of each isolate-fungicide concentration combination, RGS values were linearly regressed on the logarithm (log10) of the fungicide concentration to estimate the dose that inhibited spore germination by 50% (EC50 value). The criteria used to establish the sensitivity level of the isolates to the fungicides [as "Sensitive" (S), "Moderately Sensitive" (MS) or "Non-Sensitive" (NS)] were based on those proposed by Leroux et al. (20) with modifications: sensitive (EC50 < 0.16 µg mL-1
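The EC50 estimation described above can be sketched as a simple least-squares fit of RGS against log10(concentration), solved for 50% inhibition; the dose-response values below are hypothetical, for illustration only:

```python
import math

def fit_line(xs, ys):
    """Ordinary least-squares fit of y = a + b*x; returns (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

def ec50(concentrations, rgs_values):
    """Regress RGS (% inhibition relative to the control) on
    log10(concentration) and solve the fitted line for 50% inhibition,
    as in the dose-response procedure described in the text."""
    a, b = fit_line([math.log10(c) for c in concentrations], rgs_values)
    return 10 ** ((50.0 - a) / b)

# Hypothetical dose-response data: concentration (µg/mL) vs. RGS (%).
conc = [0.01, 0.1, 1.0, 10.0]
rgs = [5.0, 30.0, 70.0, 95.0]
```

The log transform linearizes the sigmoidal dose-response over its central range, which is why the regression is performed on log10(concentration) rather than on the raw doses.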
environments. The Rao-Blackwellized unscented Kalman filter (RBUKF)  was implemented to fuse the data acquired from a compass, a gyroscope, and a GPS receiver. The Kalman filter was used to fuse the data acquired from the GPS receiver and the gyroscope in order to support a navigation system . The Naïve Bayes classifier was used to fuse the data acquired from acoustic, accelerometer and GPS sensors to recognize different situations during daily life . The autoregressive correlated Gaussian model was implemented in the KNOWME system . Bayesian analysis and the Kalman filter were used to fuse data acquired from the several sensors available in mobile devices for the identification of ADLs . The CHRONIOUS system implements several methods to recognize several ADLs, such as Support Vector Machines (SVM), random forests, Artificial Neural Networks (ANN), decision trees, decision tables, and the Naïve Bayes classifier, in order to fuse the data collected from the several sensors available in mobile devices . In , the authors used empirical mode decomposition (EMD) applied to the inertial sensors available in a mobile device, including the accelerometer, gyroscope, and magnetometer, for the identification of several ADLs. The authors of  implement several methods for data fusion, including SVM, random forests, hidden Markov models (HMMs), conditional random fields (CRFs), Fisher kernel learning (FKL), and ANN for several sensors, such as accelerometer, RFID, and vital monitoring sensors, for the correct identification of ADLs.