The AES specification standardizes key lengths of 128, 192, and 256 bits, but restricts the block length to 128 bits. The input and output of the encryption and decryption algorithms are therefore 128-bit blocks. In FIPS publication 197, the AES operations are defined as matrix operations, with both the key and the block written in matrix form. At the start of the cipher, the block is copied into an array called the state: the first four bytes fill the first column, the next four bytes the second column, and so on until the array is complete. At each step the algorithm modifies this array of numbers, the state, and finally outputs it.
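The column-major copy into the state array described above can be sketched in Python (an illustrative helper, not part of the FIPS 197 text itself):

```python
def bytes_to_state(block):
    """Copy a 16-byte block into the 4x4 AES state, column by column:
    bytes 0-3 fill column 0, bytes 4-7 fill column 1, and so on."""
    assert len(block) == 16
    return [[block[4 * col + row] for col in range(4)] for row in range(4)]

def state_to_bytes(state):
    """Read the state back out in the same column-major order."""
    return bytes(state[row][col] for col in range(4) for row in range(4))

block = bytes(range(16))
state = bytes_to_state(block)
# Row 0 of the state holds bytes 0, 4, 8 and 12 of the input block.
assert state[0] == [0, 4, 8, 12]
assert state_to_bytes(state) == block
```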
Daniela da Cruz received a degree in Mathematics and Computer Science from the University of Minho (UM), where she is now a Ph.D. student in Computer Science under the MAPi doctoral program. She joined the research and teaching team of gEPL, the Language Processing group, in 2005. She is a teaching assistant in several courses in the areas of Compilers and Formal Development of Language Processors, and Programming Languages and Paradigms (procedural, logic, and OO). As a researcher at gEPL, Daniela works on the development of compilers based on attribute grammars and automatic generation tools. She developed a complete compiler and a virtual machine for the LISS language (Language of Integers, Sequences and Sets, an imperative and powerful programming language conceived at UM). She was also involved in PCVIA (Program Comprehension by Visual Inspection and Animation), an FCT-funded national research project; in that context, Daniela worked on the implementation of Alma, a program visualizer and animator tool for program understanding. She is now working at the intersection of formal verification (design by contract) and code analysis techniques, mainly slicing.
In this section, we briefly discuss some important characteristics of this implementation. Programming a linear optimization algorithm differs considerably from its simple pseudo-code description. Linear problems of small size can be solved easily without much attention to the implementation, but this is not feasible for large-scale linear programming. One of the main characteristics of the benchmark problems, for example, is that most of them are degenerate. In our implementation we give priority to the variable with the minimum index in the basic list, except in phase I, where an artificial variable must leave the basis as soon as possible, so there we give priority to the maximum value.
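The selection rule described above can be sketched as follows; the function name and the representation of candidates as (basic index, value) pairs are our own illustrative choices, not taken from the implementation being described:

```python
def choose_leaving_variable(candidates, in_phase_one, is_artificial):
    """Pick the leaving variable among eligible (index, value) candidates.

    Outside phase I, ties are broken by the smallest basic index (a
    Bland-style rule, which also helps on degenerate problems). In
    phase I, artificial variables are driven out first, preferring the
    one with the maximum value."""
    if in_phase_one:
        artificial = [c for c in candidates if is_artificial(c[0])]
        if artificial:
            return max(artificial, key=lambda c: c[1])[0]
    return min(candidates, key=lambda c: c[0])[0]

candidates = [(3, 0.5), (1, 0.2), (5, 0.9)]
is_artificial = lambda idx: idx == 5  # hypothetical: variable 5 is artificial
assert choose_leaving_variable(candidates, True, is_artificial) == 5
assert choose_leaving_variable(candidates, False, is_artificial) == 1
```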
Abstract: This paper discusses the effective coding of the Rijndael algorithm, the Advanced Encryption Standard (AES), in the hardware description language Verilog. In this work we analyze the structure and design of the new AES against three criteria: a) resistance to all known attacks; b) speed and code compactness on a wide range of platforms; and c) design simplicity; as well as its similarities to and differences from other symmetric ciphers. We also investigate the principal advantages of the new AES with respect to DES, as well as its limitations. For example, the fact that the cipher and its inverse use different components, which practically eliminates the possibility of weak and semi-weak keys (as exist for DES), and the non-linearity of the key expansion, which practically eliminates the possibility of equivalent keys, are two of the principal advantages of the new cipher. Finally, the implementation aspects of the Rijndael cipher and its inverse are treated. Although Rijndael is well suited to efficient implementation on a wide range of processors and in dedicated hardware, we have concentrated our study on 8-bit processors, typical of current smart cards, and on 32-bit processors, typical of PCs.
In this paper we show that programming languages can be translated into recurrent (analog, rational-weighted) neural nets. Implementing programming languages in neural nets turns out to be not only theoretically exciting, but also to have practical implications for the recent efforts to merge symbolic and subsymbolic computation. To be of any use, it should be carried out in a context of bounded resources. Herein, we show how to use resource bounds to speed up computations over neural nets, through suitable data type coding as in the usual programming languages. We introduce data types and show how to code and keep them inside the information flow of neural nets. Data types and control structures are part of a suitable programming language called NETDEF.
Hardware-accelerated AES was more efficient than every other algorithm, achieving a very good encryption throughput of 426.964 MiB/s with a 128-bit key and a packet size of 10 MiB. Battery drain was also minimal, staying below 1 mAh for every supported key size. From , we know that AES has high memory requirements, so unless our device has very limited memory resources, AES seems to be one of the best solutions in terms of speed and energy efficiency, provided the CPU supports hardware acceleration. Otherwise, a lightweight block cipher should be used. From our tests, SPECK appears to be the overall best option for a software implementation when compared to LEA, since it was faster in most scenarios and drained less battery. SPECK also supports smaller block sizes, making it more flexible than LEA, but block sizes smaller than 128 bits should be used with care, and only if the device is very memory-constrained, to better protect against collision attacks. It is also worth noting that, for block sizes other than 128 bits, standard modes of operation such as GCM cannot be used, as they are only defined for 128-bit blocks; other ways of authenticating the encrypted data must therefore be explored.
The cosine function, one of the most relevant trigonometric functions in many areas of computer science, is generally calculated by the CORDIC algorithm, which shows the best performance in sequentially executed programs. Slower alternatives are algorithms based on Taylor, or rather Maclaurin, series. These algorithms, however, are suitable for parallelization. This paper presents an implementation of one such algorithm in OpenCL, a programming framework for parallel heterogeneous systems, and compares the time efficiency of the sequential and parallel implementations. The results showed an extraordinary decrease in execution time when the program is executed in parallel, compared to sequential execution. However, this holds only when the number of iterations within the algorithm is large enough. At smaller scales, due to the time overhead of communication between the CPU and GPU, sequential execution proved faster. This makes the parallel implementation most efficient when a large number of addends is used, that is, when a highly precise value of the cosine function is required.
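The Maclaurin series for cosine makes the parallelization opportunity concrete: each addend depends only on its own index, so the terms can be computed independently (e.g. one OpenCL work-item per addend) and then reduced by a sum. A minimal sequential Python sketch:

```python
import math

def cos_maclaurin(x, n_terms):
    """Maclaurin approximation of cosine: cos x = sum of
    (-1)^n * x^(2n) / (2n)! for n = 0, 1, 2, ...
    Each term is independent of the others, so the list below could be
    filled in parallel before the final reduction."""
    terms = [(-1) ** n * x ** (2 * n) / math.factorial(2 * n)
             for n in range(n_terms)]
    return sum(terms)

# With enough addends the approximation is highly precise.
assert abs(cos_maclaurin(1.0, 10) - math.cos(1.0)) < 1e-9
```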
ate an arbitrary number of potential solutions in each generation. This is usually known as implicit parallelism. Of course, this parallelism is, in fact, implemented sequentially in most computer languages. Additionally, this type of parallelism tends to collapse since, after a sufficient number of generations, selection pressure biases the solutions toward becoming identical. Hence, once the population converges, the algorithm is left evaluating a large number of very similar solutions. This lack of population diversity can be tackled by several strategies, for example mutation operators or fitness-sharing techniques. Another, more interesting, technique is multi-population evolution, where several populations evolve simultaneously. This paradigm can be easily implemented using programmable logic devices since, within this type of hardware, it is possible to evolve, in a truly concurrent configuration, a set of smaller populations. These populations share information among themselves through an intelligence-exchange mechanism, in order to explore the full extent of the search space while maintaining inter-population diversity. This kind of concurrent evolution of several populations is usually referred to as explicit parallelism. In the case of genetic algorithms, there have already been many attempts to implement this method on an FPGA. However, they usually involve only a single population, with the aim of taking advantage of the fast processing power provided by combinatorial processors (Scott et al., 1995; Tommiska and Vuori, 1996; Tang and Yip, 2004; Narayanan, 2005; Fernando et al., 2010; Spina, 2010). In this work an alternative evolutionary structure is embedded into an FPGA. To the best of the present authors' knowledge, there has never been any attempt to implement PBIL in hardware.
This belief is even stronger given that our aim is to put forward an architecture able to handle several populations in parallel. In this context, Figure 2 presents the overall structure of the multi-population PBIL algorithm embedded within the FPGA.
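For reference, the core PBIL loop can be sketched in software (here single-population and in Python, unlike the hardware structure under discussion; all parameter values are illustrative):

```python
import random

def pbil(fitness, n_bits, pop_size=20, generations=100, lr=0.1, seed=0):
    """Population-Based Incremental Learning sketch: sample a population
    of bit strings from a probability vector, then shift the vector
    toward the best individual of the generation."""
    rng = random.Random(seed)
    prob = [0.5] * n_bits          # start unbiased
    best = None
    for _ in range(generations):
        pop = [[1 if rng.random() < p else 0 for p in prob]
               for _ in range(pop_size)]
        leader = max(pop, key=fitness)
        if best is None or fitness(leader) > fitness(best):
            best = leader
        # Move each probability toward the leader's bit value.
        prob = [(1 - lr) * p + lr * b for p, b in zip(prob, leader)]
    return best

# OneMax toy problem: maximize the number of ones in the bit string.
result = pbil(sum, n_bits=16)
```

In the multi-population variant discussed in the text, several such loops run concurrently and periodically exchange information (e.g. their best individuals or probability vectors).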
We feel our objective of constructing a library of functions for high-level manipulation of algebraic formulas in a pure functional language has been achieved. Although the programming language scene is still dominated by imperative languages, nothing forbids the use of declarative languages for solving real-world problems. In particular, functional languages are ready for use by the software industry, and our system is just one example of how that can be done. One can benefit from the higher level of these languages for describing data and algorithms without sacrificing the efficiency of the resulting system, as good implementations of functional languages exist. For instance, the implementation of Haskell, the language we have used in this project, is very good, sometimes better than implementations of conventional languages such as C.
To replace the old Data Encryption Standard, on September 12, 1997, the National Institute of Standards and Technology (NIST) requested proposals for what was called the Advanced Encryption Standard (AES). Many algorithms were originally presented, by researchers from twelve different nations. Fifteen algorithms were selected for Round One, and five were then chosen for Round Two. The five finalists selected by NIST were MARS, RC6, RIJNDAEL, SERPENT, and TWOFISH. On October 2, 2000, NIST announced that the Rijndael algorithm was the best in security, performance, efficiency, implementability, and flexibility. The Rijndael algorithm was developed by Joan Daemen of Proton World International and Vincent Rijmen of the Katholieke Universiteit Leuven.
Nowadays, hardware cryptography plays a vital role because the reconfigurability of its architecture suits high-speed applications and low power consumption. In general, a proposed cryptographic algorithm should provide strong resistance to attacks in both hardware and software implementations. In an attack on a block cipher, the security key is obtained and the data is tampered with by a third party in the middle of transmission, so the receiver gets data the sender never sent. Authentication plays a vital role in security systems, establishing the relationship between the two ends, but it must itself be resistant to attacks so that the information shared between the two parties remains secure under either private- or public-key cryptographic methods. Therefore, the authenticated key obtained during the encryption process should be modified from time to time in order to maintain secrecy. Faults are mostly injected into the algorithm in order to retrieve the key and corrupt the system, causing losses in transmission. Therefore, apart from secure data transmission, fault-tolerant architectures and error detection schemes also have to be developed so that no data loss occurs. The encryption algorithm adds extra confidential data so that the original data remains intact, but in an encrypted format, which is then transmitted over networks. The algorithm should be strong enough that a cryptanalyst cannot find a weakness in it. Once a fault has been detected, the sender must immediately apply corrective measures so that the secrecy of the message is not lost and the strength of the algorithm is retained.
Abstract. In a recent paper [Neto et al. 97] we showed that programming languages can be translated into recurrent (analog, rational-weighted) neural nets. The goal was not efficiency but simplicity. Indeed, we used a number-theoretic approach to machine programming, where (integer) numbers were coded in a unary fashion, introducing an exponential slowdown in the computations with respect to a two-symbol-tape Turing machine. Implementing programming languages in neural nets turns out to be not only theoretically exciting, but also to have practical implications for the recent efforts to merge symbolic and subsymbolic computation. To be of any use, it should be carried out in a context of bounded resources. Herein, we show how to use resource boundedness to speed up computations over neural nets, through suitable data type coding as in the usual programming languages. We introduce data types and show how to code and keep them inside the information flow of neural nets. Data types and control structures are part of a suitable programming language called NETDEF. Each NETDEF program has a specific neural net that computes it. These nets have a strongly modular structure and a synchronisation mechanism allowing sequential or parallel execution of subnets, despite the massively parallel nature of neural nets. Each instruction denotes an independent neural net. There are constructors for assignment, conditional, and loop instructions. Besides the language core, many other features are possible using the same method. There is also a NETDEF
Systems Biology aims at a system-level investigation of biological processes, considering the complex interactions among biomolecules. In this context, mathematical models and computational methods are valuable tools that complement experimental biology, thanks to their capability to simulate the emergent behavior of biological processes and to elucidate the mechanisms governing their functioning. A precise assignment of the kinetic parameters involved in these models is mandatory for accurate simulations of the dynamics, since their values determine the rates of the reactions and ultimately drive the temporal evolution of the system. Unfortunately, these parameters are generally difficult, and often impossible, to measure in laboratory experiments; they therefore need to be estimated using, for instance, CI methods. Specifically, parameter estimation (PE) is an optimization problem that consists in identifying the (unknown) vector k of kinetic parameters that minimizes the distance between any available target time series (given by, e.g., the concentration of some biochemical species that can be measured experimentally) and the simulated dynamics obtained using the vector k.
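The PE problem can be made concrete with a deliberately toy example: a one-parameter decay model whose rate constant is recovered by minimizing the squared distance to a target time series. Here a simple grid search stands in for the CI methods mentioned in the text:

```python
import math

def simulate(k, x0, times):
    """Toy kinetic model: first-order decay with rate k, using the
    analytic solution x(t) = x0 * exp(-k * t)."""
    return [x0 * math.exp(-k * t) for t in times]

def estimate_k(target, x0, times, candidates):
    """PE as optimization: pick the candidate k whose simulated
    dynamics are closest (squared distance) to the target series."""
    def distance(k):
        sim = simulate(k, x0, times)
        return sum((s - y) ** 2 for s, y in zip(sim, target))
    return min(candidates, key=distance)

times = [0.0, 0.5, 1.0, 2.0, 4.0]
target = simulate(0.5, 10.0, times)            # stands in for measured data
candidates = [i / 100 for i in range(1, 201)]  # grid of k values in (0, 2]
assert estimate_k(target, 10.0, times, candidates) == 0.5
```

Real PE problems involve many parameters and noisy measurements, so the grid search would be replaced by a proper optimizer.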
The success of the Internet gave renewed focus to developing a machine-independent programming language. The same year the Internet was commercialized, five technologists at Sun Microsystems, Inc. set out to develop such a language. Although Java is closely associated with the Internet, it was developed as a language for programming software that could be embedded into electronic devices regardless of the type of CPU used by the device, such as programs that run consumer appliances.
The Internet is the roads and highways of the information world: the content providers are the road workers, and the visitors are the drivers. As in the real world, there can be traffic jams, wrong signs, blind alleys, and so on. The content providers, like road workers, need information about their users in order to make Web site adjustments. Web logs store every motion on the provider's Web site, so the providers only need a tool to analyze these logs. This tool is called Web Usage Mining. Web Usage Mining is a part of Web Mining and the foundation of Web site analysis; it employs various knowledge discovery methods to extract Web usage patterns. Keywords: Web mining, LCS, Sessionization, Clustering, WUM.
This article examines the financial planning of an organization's working capital. In particular, it presents a software implementation of an algorithm that analyzes the working capital budget forecast, identifies temporarily free funds, and takes advantage of them using a decision model for selecting the optimal bond portfolio, consistent with the enterprise's free liquidity flow.
Since the default weight settings do not conform to the original popularity index of the languages, a different weighting criterion is needed. However, it is very hard to come up with a generic and correct weighting criterion. Therefore, the scoring function should be customizable, and the user should be able to tune the weight of each feature based on her preferences. As an example, consider the fact that Ada holds 3rd position in the overall scoring but is not currently considered among the highly used FPLs, as shown in Table 1. The most probable reason seems to be that it fails to create any impact from the perspective of industrial demand, as shown in Table 23. Based on this observation, a user may consider “demand in industry” and “easy transition” more important than the rest of the parameters and assign them weights of 3 and 2, respectively. Then, as shown in Table 25, the ranks of C#, C++, and C are elevated, whereas Ada, Modula-2, Pascal, and Fortran are degraded under this weighting scheme, while Java and Python keep their positions on the ratings list, though their degrees of conformance are affected by the new weights. This shows the strength of the proposed framework and scoring function, as it re-ranks the languages based on the customized settings. Hence, every user can look for an appropriate language based on her personal preferences. However, based on the discussion in the previous section, it is clear that the user of this framework should have a reasonable understanding of language theory to evaluate a language from the technical perspective, Table
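A customizable scoring function of the kind described can be sketched as follows; the feature scores below are made up for illustration and do not reproduce the paper's tables:

```python
def weighted_score(feature_scores, weights=None):
    """Customizable scoring: each feature score is multiplied by a
    user-supplied weight (default 1), so emphasizing a feature such as
    'demand in industry' re-ranks the languages."""
    weights = weights or {}
    return sum(score * weights.get(name, 1)
               for name, score in feature_scores.items())

# Hypothetical per-feature scores for two languages.
ada    = {"demand in industry": 1, "easy transition": 2, "other": 15}
csharp = {"demand in industry": 8, "easy transition": 6, "other": 3}

custom = {"demand in industry": 3, "easy transition": 2}
# Default weights favor Ada; the custom weighting elevates C#.
assert weighted_score(ada) > weighted_score(csharp)
assert weighted_score(csharp, custom) > weighted_score(ada, custom)
```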
In bioinformatics, DP is fundamentally used to discover sequence alignments, considering both local and global alignments. This task basically involves a search for similarities, analysing the sequences involved and pointing out the correctly correlated segments. Since biological homologies are commonly approximate, the similarities may contain an acceptable degree of deviation, corresponding to an admissible number of mismatches; this attribute increases the problem's complexity. Similarity evaluation is based on the concept of “edit distance”: the minimum number of operations required to convert one sequence into another using the three edit operations of inserting, deleting, or substituting symbols. In order to evaluate the degree of correlation, a scoring scheme is necessary to reward similar regions and, on the other hand, penalize deviations (mismatches, substitutions, and gaps). The resulting scores are stored in a similarity or scoring matrix, providing the basis for further analysis.
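The edit distance just defined is computed by the classic dynamic-programming recurrence; a minimal sketch with unit costs for the three operations:

```python
def edit_distance(a, b):
    """DP for edit distance: minimum number of insertions, deletions,
    and substitutions needed to turn sequence a into sequence b."""
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i                     # delete all of a[:i]
    for j in range(n + 1):
        d[0][j] = j                     # insert all of b[:j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # match / substitution
    return d[m][n]

assert edit_distance("kitten", "sitting") == 3
assert edit_distance("GATTACA", "GCATGCU") == 4
```

The table `d` plays the role of the scoring matrix described above; alignment scoring schemes generalize it with match rewards and gap penalties.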
Abstract — Sequence alignment is an important problem in computational biology, and finding the longest common subsequence (LCS) of multiple biological sequences is an essential and effective technique in sequence alignment. A major computational approach to the LCS problem is dynamic programming. Several dynamic programming methods have been proposed with reduced time and space complexity. As databases of biological sequences become larger, parallel algorithms become increasingly important for tackling large problems. In the meantime, general-purpose computing on graphics processing units (GPGPU) has emerged as a promising technology for cost-effective high-performance computing. In this paper, we develop an efficient parallel algorithm on GPUs for the LCS problem. We propose a new technique that changes the data dependency in the score table used by dynamic programming algorithms to enable higher degrees of parallelism. The algorithm takes advantage of the large number of processing units and the unique memory-access properties of GPUs to achieve high performance. The algorithm was implemented on Nvidia 9800GT GPUs and tested on randomly generated sequences of different lengths. The experimental results show that the new algorithm is about 6 times faster on GPUs than on typical CPUs, and 3 times faster than an existing efficient parallel algorithm, the diagonal parallel algorithm.
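The diagonal (wavefront) ordering that the paper's algorithm is compared against can be sketched sequentially: every cell on anti-diagonal i + j = d of the LCS score table depends only on diagonals d - 1 and d - 2, so all cells of one diagonal could be filled by parallel threads. A Python sketch (a plain loop stands in for the parallel fill):

```python
def lcs_length_antidiagonal(a, b):
    """LCS length computed anti-diagonal by anti-diagonal: the cells of
    each diagonal are mutually independent, which is what makes this
    ordering amenable to GPU parallelization."""
    m, n = len(a), len(b)
    table = [[0] * (n + 1) for _ in range(m + 1)]
    for d in range(2, m + n + 1):                   # d = i + j
        for i in range(max(1, d - n), min(m, d - 1) + 1):
            j = d - i
            if a[i - 1] == b[j - 1]:
                table[i][j] = table[i - 1][j - 1] + 1
            else:
                table[i][j] = max(table[i - 1][j], table[i][j - 1])
    return table[m][n]

assert lcs_length_antidiagonal("AGGTAB", "GXTXAYB") == 4  # LCS "GTAB"
```

The paper's contribution goes further by reshaping the dependency structure itself, rather than only exploiting the diagonal independence shown here.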