Web Data Identification and Extraction

Abstract: Nowadays, with the rapid growth of the web, a large volume of data and information is published in numerous web pages. As websites become more complex, the construction of web information extraction systems becomes more difficult and time-consuming. This paper proposes a new method to perform the task automatically, which is more effective than machine-learning and semi-automated systems. The proposed method consists of two steps: (1) identifying individual data records in a page, and (2) aligning and extracting data items from the identified data records. For step 1, we propose a method based on visual information to segment data records, which is more accurate than existing methods. For step 2, we propose a novel partial alignment technique based on tree matching. Partial alignment means that we align only those data fields in a pair of data records that can be aligned (or matched) with certainty, and make no commitment on the rest of the data fields.
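
As a rough illustration of the tree-matching idea behind partial alignment, the Python sketch below implements the classic simple tree matching score between two DOM subtrees represented as (tag, children) tuples. It is a minimal sketch of the general technique, not the paper's actual algorithm; the tuple representation and the example records are assumptions made here for brevity.

def simple_tree_matching(a, b):
    # Each tree is a (tag, [child trees]) tuple; the score is the size of the
    # largest order-preserving node mapping between the two trees.
    if a[0] != b[0]:
        return 0
    m, n = len(a[1]), len(b[1])
    w = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            w[i][j] = max(w[i][j - 1], w[i - 1][j],
                          w[i - 1][j - 1] + simple_tree_matching(a[1][i - 1], b[1][j - 1]))
    return w[m][n] + 1

# Two hypothetical data-record subtrees; the second record has an extra cell.
r1 = ("tr", [("td", []), ("td", [])])
r2 = ("tr", [("td", []), ("td", []), ("td", [])])
print(simple_tree_matching(r1, r2))  # 3 matched nodes

Fields that match across record pairs would be aligned with certainty, while the extra cell is left uncommitted, which is the essence of partial alignment.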

Corpus-based extraction and identification of Portuguese Multiword Expressions

Abstract: This presentation reports the methodology followed and the results attained in an ongoing project aimed at building a large lexical database of corpus-extracted multiword (MW) expressions for the Portuguese language. MW expressions were automatically extracted from a balanced 50-million-word corpus compiled for this project, were then statistically interpreted using lexical association measures, and are undergoing a manual validation process. The lexical database covers different types of MW expressions, from named entities to lexical associations with different degrees of cohesion, ranging from totally frozen idioms to favoured co-occurring forms, such as collocations. We aim to achieve two main objectives with this resource: to build on the large set of data covering different types of MW expressions in order to revise existing typologies of collocations and integrate them into a larger theory of MW units; and to use the extensive hand-checked data as training data to evaluate existing statistical lexical association measures.
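
One of the simplest lexical association measures used for this kind of candidate ranking is pointwise mutual information (PMI). The Python sketch below is a minimal illustration over adjacent word pairs, not the project's actual extraction pipeline; the tokenised input and the min_count cut-off are assumptions.

import math
from collections import Counter

def pmi_scores(tokens, min_count=3):
    # Pointwise mutual information for adjacent word pairs, a common
    # association measure for multiword-expression candidates.
    unigrams = Counter(tokens)
    bigrams = Counter(zip(tokens, tokens[1:]))
    n = len(tokens)
    scores = {}
    for (w1, w2), c in bigrams.items():
        if c < min_count:
            continue
        p_pair = c / (n - 1)
        p1, p2 = unigrams[w1] / n, unigrams[w2] / n
        scores[(w1, w2)] = math.log2(p_pair / (p1 * p2))
    return scores

High-scoring candidates (under PMI or other measures such as log-likelihood or Dice) would then go to the manual validation stage described above.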

FAST REAL TIME ANALYSIS OF WEB SERVER MASSIVE LOG FILES USING AN IMPROVED WEB MINING ARCHITECTURE

In this work, path extraction is performed using a sequential pattern clustering algorithm. In the web usage mining area, many researchers have shown that efficient databases are essential; in "Similarity measure to identify only user's profiles in web usage mining" (Latif et al., 2010), user profile details are identified through web usage mining. Data mining is also used in e-commerce, as shown in "A Comprehensive Survey on Frequent Pattern Mining from Web Logs" (Chauhan and Jain, 2011), which treats web log files as an efficient data source. Web log data are clustered and analyzed per user in "Clustering of Web Log Data to Analyze User Navigation Patterns" (Shrivastava et al., 1976). Krishnapuram et al. (2001), in "Low-Complexity Fuzzy Relational Clustering Algorithms for Web Mining", use relational clustering methods to cluster log files and obtain accurate results. Koh and Yo (2005) describe a method to analyze and mine patterns using association rules in "An Efficient Approach for Mining Fault-Tolerant Frequent Patterns based on Data Mining with Association Rule Techniques".

Image Processing Techniques for Denoising, Object Identification and Feature Extraction

Unless the threshold level can be arranged to adapt to the change in brightness level, any thresholding technique will fail. Its attraction is simplicity: thresholding does not require much computational effort. If the illumination level changes in a linear fashion, using histogram equalization will result in an image that does not vary. Unfortunately, the result of histogram equalization is sensitive to noise, shadows and variant illumination; noise can affect the resulting image quite dramatically and this will again render a thresholding technique useless. Thresholding after intensity normalization is less sensitive to noise, since the noise is stretched with the original image and cannot affect the stretching process by much. It is, however, still sensitive to shadows and variant illumination. Again, it can only find application where the illumination can be carefully controlled. This requirement is germane to any application that uses basic thresholding. If the overall illumination level cannot be controlled, it is possible to threshold edge magnitude data since this is insensitive to overall brightness level, by virtue of the implicit differencing process. However, edge data is rarely continuous and there can be gaps in the detected perimeter of a shape. Another major difficulty, which applies to thresholding the brightness data as well, is that there are often more shapes than one. If the shapes are on top of each other, one occludes the other and the shapes need to be separated. An alternative approach is to subtract an image from a known background before thresholding. This assumes that the background is known precisely, otherwise many more details than just the target feature will appear in the resulting image. Clearly, the subtraction will be unfeasible if there is noise on either image, and especially on both. In this approach, there is no implicit shape description, but if the thresholding process is sufficient, it is simple to estimate basic shape parameters, such as position. Even though thresholding and subtraction are attractive (because of simplicity and hence their speed), the performance of both techniques is sensitive to partial shape data, noise, variation in illumination and occlusion of the target shape by other objects. Accordingly, many approaches to image interpretation use higher level information in shape extraction, namely how the pixels are connected within the shape [22].
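
To make the two baseline approaches discussed above concrete, the following minimal NumPy sketch shows thresholding after intensity normalization and thresholding of a background-subtracted image. The array names, threshold values and the use of NumPy are assumptions for illustration, not the source's implementation.

import numpy as np

def threshold_after_normalization(img, t=0.5):
    # Stretch intensities to [0, 1] before applying a fixed threshold, making
    # the threshold less dependent on overall brightness level.
    img = img.astype(float)
    lo, hi = img.min(), img.max()
    norm = (img - lo) / (hi - lo + 1e-12)
    return norm > t

def threshold_after_subtraction(img, background, t=30):
    # Subtract a known background and threshold the absolute difference;
    # assumes the background is accurate and noise on both images is modest.
    diff = np.abs(img.astype(float) - background.astype(float))
    return diff > t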

Semi-structured data extraction and modelling: the WIA Project

Over the last decades, the amount of data of all kinds available electronically has increased dramatically. Data are accessible through a range of interfaces including Web browsers, database query languages, application-specific interfaces, built on top of a number of different data exchange formats. The management and the treatment of this large amount of data is one of the biggest challenges for Information Technology these days [13, 14, 10, 15]. All these data span from unstructured (e.g. textual as well as multimedia documents, and presentations stored in our PCs) to highly structured data (e.g. data stored in relational database systems). Very often, some of them have structure even if the structure is implicit, and not as rigid or regular as that found in standard database systems (i.e., not table-oriented as in a relational model or sorted-graph as in object databases). Following [1], we refer here to this kind of data in terms of "semi-structured", which sometimes are also called "self-describing". Their main characteristics are: no fixed schema, the structure is implicit and irregular, and they are nested and heterogeneous.
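
As a small illustration of these characteristics, the hypothetical records below share no fixed schema, nest irregularly, and mix types; the field names are invented for this example and are not taken from the WIA Project.

# Two records with no fixed schema: fields differ between records, nesting is
# irregular, and the same conceptual field ("price") has different types.
records = [
    {"title": "Some book", "price": 12.5,
     "authors": ["A. Author", "B. Writer"]},
    {"title": "Some gadget",
     "price": {"amount": 99, "currency": "EUR"},
     "reviews": [{"stars": 4, "text": "works fine"}]},
]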

A Novel Semantically-Time-Referrer based Approach of Web Usage Mining for Improved Sessionization in Pre-Processing of Web Log

All the techniques have been implemented in C# using the .NET platform. Three log files of different formats have been used for testing, namely colg, guitar and NASA. Table 1 shows details of these log files before cleaning: the size, time period, number of entries, format of log file, type of log file and total data transferred in KB. Data cleaning is performed in the first phase of pre-processing; the irrelevant or noisy data removed from these three log files is shown in Table 2, which lists the number of entries for robots, failure status codes, multimedia, text files, and requests other than the GET method. The detailed number of requests for multimedia, text files and methods is shown in Table 3. Figure 8 shows the change in each log file before and after cleaning in the form of a bar chart. Table 4 shows the number of unique users, identified using the IP and agent fields, for the three different log files. The numbers of unique IPs and unique URLs of a log file show the access rate, i.e. the number of different pages accessed by the users during a particular time period. It also shows the comparison of the three different sessionization methods, namely the Time, Referrer-time and Semantically-time-referrer based heuristics, on the three log files. In time-heuristic sessionization we have taken the standard threshold values of 10 min for consecutive page access time and 30 min for maximum session time. For time-referrer, the results depend mainly upon the referrer field; in case there is no referrer, the algorithm behaves like the time heuristic. In our proposed algorithm, named time-referrer-semantical sessionization, the sessions depend upon all three factors, namely time, referrer and semantics; a new session is created only when all three conditions fail. Time-oriented heuristics estimate denser sessionization than the other two methods. The referrer-time and semantically-time-referrer sessionization methods decreased the number of sessions to 89.15% and 86.70% respectively for the colg log file, 95.54% and 90.32% respectively for the guitar log file, and zero percent and 74.41% respectively for the NASA log file. The tested log files are large and contain huge amounts of data, so it is difficult to assess the accuracy of the algorithms on the whole data. For this reason we have taken a small data set to test the performance of the proposed algorithms. The small testing data contain 20 true sessions, counted manually, and every true session contains more than two entries. Table 7 shows the performance of the existing and proposed algorithms on 22 true sessions for all three log files.
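
For the time heuristic alone, a minimal Python sketch with the thresholds quoted above (10 min between consecutive page accesses, 30 min per session) might look as follows; the request tuple format is an assumption, and the referrer and semantic checks of the proposed method are not shown.

from datetime import timedelta

PAGE_GAP = timedelta(minutes=10)     # threshold between consecutive page accesses
MAX_SESSION = timedelta(minutes=30)  # maximum total session duration

def time_sessionize(requests):
    # requests: list of (timestamp, url) tuples for one user, sorted by time.
    sessions, current = [], []
    for ts, url in requests:
        if current and (ts - current[-1][0] > PAGE_GAP
                        or ts - current[0][0] > MAX_SESSION):
            sessions.append(current)
            current = []
        current.append((ts, url))
    if current:
        sessions.append(current)
    return sessions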

Data extraction in e-commerce

Nevertheless, there is at least one solution to this challenging problem: to use a lexical database that can help to interpret the different meanings and to find the synonyms of words. WordNet [9, 11, 38, 64] (see also Sec. 3.3.2) can be seen as a "dictionary of meaning," integrating the functions of a dictionary and a thesaurus. As the data extracted by the web crawler can be in different languages (English, Portuguese or Spanish), one approach is to adopt the various adaptations of WordNet to other languages. Another is to translate all the data extracted by the web crawler, independently of the language it is in, into a single language, for instance English. By default, the web crawler searches for information in English; nevertheless, users' comments can appear in several languages. If the decision is not to translate, the Global Wordnet Association website [11] provides information about the languages, the names of the resources and the type of license of the various adaptations available worldwide. For instance, there are three adaptations for Portuguese: two for the Portuguese of Portugal (Onto.PT and WordNet.PT) and one for the Portuguese of Brazil (OpenWN-EN) [4, 68].
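
As a minimal sketch of this kind of synonym lookup (not the thesis's implementation), the snippet below uses NLTK's WordNet interface; it assumes the 'wordnet' and 'omw-1.4' corpora have been downloaded, and non-English language codes such as "por" are only available through the Open Multilingual Wordnet.

from nltk.corpus import wordnet as wn  # assumes nltk data 'wordnet' (and 'omw-1.4') is installed

def synonyms(word, lang="eng"):
    # Gather lemma names across all synsets of the word; other language codes
    # (e.g. "por") work when the Open Multilingual Wordnet is available.
    names = set()
    for synset in wn.synsets(word, lang=lang):
        names.update(l.replace("_", " ") for l in synset.lemma_names(lang))
    return sorted(names)

print(synonyms("car"))  # e.g. ['auto', 'automobile', 'cable car', ...]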

DATA, TEXT, AND WEB MINING FOR BUSINESS INTELLIGENCE: A SURVEY

The authors stated that WM has these main tasks: association, classification, and sequential analysis. The paper included a WM study on two online courses, using WM to improve the experience of the two courses based on the results from a WM tool applied to the logging files. An excellent discussion of the characteristics of WM is found in [26]. The authors relate WM to the development of soft computing, a set of methodologies for achieving flexible and natural information processing capabilities. The paper discusses how difficult it is to mine the WWW with its unstructured, time-varying, and fuzzy data. The paper also specifies four phases: information retrieval (IR), information extraction, generalization, and finally analysis of the gathered data. The authors also classified WM into three main categories: Web Content Mining (WCM), Web Structure Mining (WSM), and Web Usage Mining (WUM). WCM is about retrieving and mining content found on the WWW, such as multimedia, metadata, hyperlinks, and text. WSM is the mining of the structure of the WWW; it finds the relations in the hyperlink structure, so that we can construct a map of how certain sites are formed and the reason why some documents have more links than others. Finally, WUM is the mining of the log files of web servers, browser-generated logs, cookies, bookmarks and scrolls. WUM helps to find the surfing habits of customers and provides insights on the traffic of certain sites.

IMAGING SPECTROSCOPY AND LIGHT DETECTION AND RANGING DATA FUSION FOR URBAN FEATURES EXTRACTION

This study presents our findings on the fusion of Imaging Spectroscopy (IS) and LiDAR data for urban feature extraction. We carried out the necessary preprocessing of the hyperspectral image. The Minimum Noise Fraction (MNF) transform was used to order the hyperspectral bands according to their noise. Thereafter, we employed the Optimum Index Factor (OIF) to statistically select the most appropriate three-band combination from the MNF result. The composite image was classified using unsupervised classification (k-means algorithm) and the accuracy of the classification was assessed. A Digital Surface Model (DSM) and LiDAR intensity were generated from the LiDAR point cloud, and the LiDAR intensity was filtered to remove noise. The Hue Saturation Intensity (HSI) fusion algorithm was used to fuse the imaging spectroscopy with the DSM, as well as the imaging spectroscopy with the filtered intensity. The fusion of imaging spectroscopy and DSM was found to be quantitatively better than that of imaging spectroscopy and LiDAR intensity. The three datasets (imaging spectroscopy, DSM-fused and LiDAR-intensity-fused data) were classified into four classes (building, pavement, trees and grass) using unsupervised classification, and the accuracy of the classification was assessed. The results of the study show that the fusion of imaging spectroscopy and LiDAR data improved the visual identification of surface features. Also, the classification accuracy improved from an overall accuracy of 84.6% for the imaging spectroscopy data to 90.2% for the DSM-fused data. Similarly, the Kappa coefficient increased from 0.71 to 0.82. On the other hand, the classification of the fused LiDAR intensity and imaging spectroscopy data performed poorly quantitatively, with an overall accuracy of 27.8% and a Kappa coefficient of 0.0988.
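
The OIF-based band selection can be sketched as follows; this uses the standard OIF definition (sum of the three band standard deviations divided by the sum of the absolute pairwise correlation coefficients) and NumPy, which is an assumption about the computation rather than the authors' exact code.

import numpy as np
from itertools import combinations

def best_oif_triplet(bands):
    # bands: dict mapping a band name to a 2-D array (e.g. leading MNF components).
    names = list(bands)
    flat = {k: v.ravel().astype(float) for k, v in bands.items()}
    best, best_oif = None, -np.inf
    for a, b, c in combinations(names, 3):
        stds = flat[a].std() + flat[b].std() + flat[c].std()
        corrs = sum(abs(np.corrcoef(flat[x], flat[y])[0, 1])
                    for x, y in ((a, b), (a, c), (b, c)))
        oif = stds / (corrs + 1e-12)  # higher OIF means more information, less redundancy
        if oif > best_oif:
            best, best_oif = (a, b, c), oif
    return best, best_oif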

A COMPARATIVE ANALYSIS OF WEB INFORMATION EXTRACTION TECHNIQUES DEEP LEARNING vs. NAÏVE BAYES vs. BACK PROPAGATION NEURAL NETWORKS IN WEB DOCUMENT EXTRACTION

The algorithm provides an advantage over Bayesian networks, since a Bayesian network does not incorporate a layered learning architecture like the one proposed in this technique. The deep learning methodology serves to identify the relevant content from websites through the layer-by-layer processing of the deep learning architecture. A web document is similar in concept to a web page. Every web document has its own URI. Note that a web document is not the same as a file: a single web document can be available in various formats and languages, and a single file, for instance a PHP script, may be in charge of creating a substantial number of web documents with different URIs. A web document is characterized as something that has a URI and can return representations of the identified resource in response to HTTP requests. In the technical literature, the term data resource is used instead of web document. Document clustering (or text clustering) is the application of cluster analysis to textual documents. It has applications in automatic document organization, topic extraction and fast information retrieval or filtering. It involves the use of descriptor extraction, where descriptors are sets of words that describe the contents of a cluster. Document clustering is generally considered a centralized process; examples include web document clustering for search users. The uses of document clustering can be categorized into online and offline; compared with offline applications, online applications are typically constrained by efficiency problems. The proposed methodology provides a comparative analysis of three algorithms, namely a deep learning algorithm, Naive Bayes and a back-propagation neural network, comparing their performance.
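
As a toy illustration of such a comparison, the scikit-learn sketch below pits Naive Bayes against a back-propagation multilayer perceptron on TF-IDF features; the documents, labels and parameters are placeholders invented here, and the deep learning model from the paper is not reproduced.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Placeholder corpus: 1 = relevant page content, 0 = boilerplate (invented labels).
docs = ["main article text about the topic", "navigation menu links footer",
        "detailed body of the relevant page", "login form copyright notice"]
labels = [1, 0, 1, 0]

X = TfidfVectorizer().fit_transform(docs)
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.5, random_state=0)

for name, clf in [("Naive Bayes", MultinomialNB()),
                  ("Back-propagation MLP", MLPClassifier(max_iter=1000, random_state=0))]:
    clf.fit(X_tr, y_tr)
    print(name, "accuracy:", accuracy_score(y_te, clf.predict(X_te)))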

Declarative Approach to Data Extraction of Web pages

There are many HTML editors, some of them free, some commercial. Among several editor types, we are interested here only in the so-called WYSIWYG editors ("What You See Is What You Get"), such as Macromedia Dreamweaver, CoffeeCup HTML Editor or Microsoft FrontPage. These allow the construction of HTML pages purely through gestures and interactions on a graphical canvas, as well as inserting and deleting elements or modifying their properties. These actions are carried out after selecting an element within the page. This simple interaction consists only in clicking on the element we intend to select, so that the application displays it in a different colour to show that it has been selected. Thus a user who is not a specialist can easily build webpages without having to engage with the code needed to produce them. Even though this interaction was not created with the intent of graphically selecting a DOM webpage element (since the HTML code generated by these editors is built from the webpage's graphical design in the editor's canvas component, and any change in it causes the HTML code to be fully rebuilt), it could be used for this purpose, creating a link between the webpage DOM elements and their graphical representation.

The use of external event monitoring (web-loop) in the elucidation of symptoms associated with arrhythmias in a general population

tracings transmitted periodically and documented the electrocardiographic diagnosis. The standard duration of monitoring was 10 days, but when necessary it was stopped prematurely; in case a significant arrhythmia was recorded, or if requested by the physician, monitoring was prolonged. Specific symptoms were defined as palpitations, pre-syncope or syncope presented during monitoring. Significant arrhythmias were defined as paroxysmal supraventricular tachycardia, atrial flutter, atrial fibrillation, and ventricular tachycardia, both sustained (more than 30 seconds in length) and non-sustained, besides pauses greater than 2 seconds or second- and third-degree atrioventricular block. Symptomatic arrhythmias were defined as any arrhythmia accompanied by symptoms (significant arrhythmias, but also supraventricular and ventricular extrasystoles, isolated or paired).

LIQUID-LIQUID EXTRACTION EQUILIBRIUM FOR PYRUVIC ACID RECOVERY: EXPERIMENTAL DATA AND MODELING

The equilibrium distributions of pyruvic acid between the organic and aqueous phases were interpreted for various solvents (Fig. 1). The distribution coefficient (K_D) was found to be higher for the active solvents, in the order TBP > decanol > MIBK > toluene > n-heptane. This trend reflects the higher extraction strength of active solvents over the inactive ones. TBP possesses a phosphate group which is able to form a strong hydrogen bond with the target solute (pyruvic acid) in the aqueous phase, thus providing higher extraction efficiency (K_D,avg = 1.612). Alcohols also proved to be a good extraction medium; however, their extraction strength decreases with increasing carbon chain length (Marti et al., 2011). The K_D,avg value for octanol is 0.296, while for decanol it is slightly lower (K_D,avg = 0.280). Besides, short-chain alcohols are soluble in water; thus, longer-chain alcohols such as octanol and decanol are often preferred as solvents for the extraction. In the case of MIBK, the keto (>C=O) group probably forms a strong hydrogen bond (H—O—H) with the carboxylic group (—COOH) of pyruvic acid, and thus provides a fairly good K_D,avg = 0.245. The aliphatic and aromatic hydrocarbons without functional groups used in this study, toluene (K_D,avg = 0.055) and heptane (K_D,avg = 0.017), provided low extraction efficiency. They extract the acid through a solvation mechanism and ionic interaction involving weak van der Waals forces.
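
For reference, the distribution coefficient quoted above follows the standard liquid-liquid extraction definition, the ratio of the equilibrium solute concentrations in the two phases; in LaTeX notation:

K_D = \frac{[\text{pyruvic acid}]_{\text{org}}}{[\text{pyruvic acid}]_{\text{aq}}}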

Comparing and evaluating Data-driven Journalism: Data visualization performance from perspective of web analytics

As Klein (2016b) points out, statistical graphs were very rare in newspapers in the 1840s-1850s, in part for technical reasons. The technological advances brought by the industrialization of newspapers in the mid-to-late nineteenth century made the printing of maps, images and charts less laborious. This period also witnessed an outbreak of data provided by scientists, companies, state and federal bureaus, etc. (Reilly 2017; Örnebring 2010; Douglas 1999; Schudson 1981). Although databases have been present in journalism from the beginning, it is in this period, with the use of documents (Anderson, 2015), the invention of the interview (Schudson, 1996), and the commitment to objectivity and facts (Schudson, 1981, 2001), that databases began to gain some prominence in newsrooms. At the end of the 1870s, early box scores for sports arose, more complex tables appeared with the rise of specialized business journals, such as the Wall Street Journal in 1889, and by 1896 maps with electoral information emerged on front pages (Usher, 2016). Nevertheless, it is outside the newsrooms that databases, charts, and maps developed most during this period.

Automatic 3D Extraction of Buildings, Vegetation and Roads from LIDAR Data

To filter the results of this segmentation, we use masks, i.e. binary images of each extracted class. Thereafter, the superposition of the point cloud on the filters (the processed binary image of each class) allows the elimination of noise. We begin by processing the mask of the building class by applying mathematical morphology, which is divided into two stages: elimination of residual segments, and then filling of holes in the segment bodies. This is achieved by the succession of two operators: the opening, used to remove small segments, and the closing, used to fill holes in the ground surface segments. In the first stage, the elimination of residual segments, some small gaps can appear in building roofs, caused by a lack of survey points, and are automatically accentuated at that stage; therefore we apply the closing to fill the holes in the roof surface segments and then superpose the original point cloud on the roof surface mask to extract all roof points. That is the particularity of our algorithm (Figure 5). Thereafter, a dilation is applied to the results and multiplied by the upper and lower contour masks to obtain the points of the building contours.
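
The opening/closing/hole-filling stage described above can be sketched with SciPy's binary morphology; the structuring-element size and the exact sequence of calls below are illustrative assumptions, not the authors' parameters.

import numpy as np
from scipy import ndimage

def clean_building_mask(mask, struct_size=3):
    # Binary mask of the "building" class: an opening removes small residual
    # segments, a closing plus hole filling closes gaps in the roof surfaces.
    structure = np.ones((struct_size, struct_size), dtype=bool)
    opened = ndimage.binary_opening(mask, structure=structure)
    closed = ndimage.binary_closing(opened, structure=structure)
    return ndimage.binary_fill_holes(closed)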

Significant Interval and Frequent Pattern Discovery in Web Log Data

We apply the One Pass SI algorithm [4], One Pass AllSI [4] and One Pass FED [5] to the web log data, so we need different website names such as citeseer.com, sports.com, newsworld.com, etc. The time series can be represented with a website-timestamp model. A website w (for example, citeseer.com is accessed, sports.com is not accessed, etc.) is associated with a sequence of timestamps {T1, T2, ..., Tn} that describes its accesses over a period of time. The notion of periodicity (daily, weekly, monthly, etc.) is used to group the website accesses. For each website, the number of accesses at each time point can be obtained by grouping on the timestamp (or periodicity attribute). We term the number of accesses of each website its access count (ac). Thus the time series data can be represented as <w, {T1, a1}, {T2, a2}, {T3, a3}, ..., {Tn, an}>, where Ti represents the timestamp associated with the website w and ai represents the number of accesses; ai can be referred to as the access count of the website w at timestamp Ti. This is referred to as folding of the time-series data using periodicity and time granularity (e.g., daily on seconds, daily on minutes, weekly on minutes).
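
A minimal Python sketch of this folding step, assuming the raw accesses for one website are a list of datetime objects and daily periodicity is wanted (the function name and format string are illustrative):

from collections import Counter

def fold_access_counts(timestamps, granularity="%Y-%m-%d"):
    # timestamps: list of datetime objects for one website's accesses.
    # Grouping by the chosen periodicity yields the <(Ti, ai)> series above.
    counts = Counter(ts.strftime(granularity) for ts in timestamps)
    return sorted(counts.items())  # e.g. [("2024-01-01", 3), ("2024-01-02", 7), ...]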

Electrodialytic removal of tungsten and arsenic from secondary mine resources – Deep eutectic solvents enhancement

Tungsten is a critical raw material for the European and U.S. economies. Tungsten mine residues, usually considered an environmental burden due to e.g. their arsenic content, are also secondary tungsten resources. The electrodialytic (ED) process and deep eutectic solvents (DES) have been successfully and independently applied to the extraction of metals from different complex environmental matrices. In this study, a proof of concept demonstrates that coupling DES in a two-compartment ED set-up enhances the removal and separation of arsenic and tungsten from Panasqueira mine secondary resources. Choline chloride with malonic acid (1:2) and choline chloride with oxalic acid (1:1) were the DES that, in batch, extracted the maximum average contents of arsenic (16%) and tungsten (9%) from the residues. However, when ED was operated at a current intensity of 100 mA for 4 days, the extraction yields increased by 22% for arsenic and 11% for tungsten compared to the tests with no current. Of the total arsenic and tungsten extracted, 82% and 77% respectively were successfully removed from the matrix compartment, as they electromigrated to the anolyte compartment, from where these elements can be further separated. This achievement supports a circular economy, as the final treated residue could be incorporated into the production of construction materials, mitigating current environmental problems in both the mining and construction sectors.

Extraction of Eyes for Facial Expression Identification of Students

With the ubiquity of new information technology and media, more effective methods for human-computer interfaces are being developed which rely on higher-level image analysis techniques and have wide applications in automatic interactive tutoring, multimedia and virtual environments. For these tasks, the required information about the identity, state and intent of the user can be extracted from images, allowing computers to react accordingly, i.e. by observing a person's facial expressions. Faces are rich in information about individual identity, and also about mood and mental state, being accessible windows into the mechanisms governing our emotions. The most expressive way humans display emotions is through facial expressions. Facial expressions are the primary source of information, next to words, in determining an individual's internal feelings. In virtual environments, for the computer to interact with humans, it needs the ability to understand the emotional state of the person, which also applies to virtual classrooms.

Synergy Between LiDAR and Image Data in Context of Building Extraction

boundaries. In general, fine details, like edges, corners etc., should be extracted with the help of photogrammetric images. Finally, photogrammetric images are also necessary to attribute accurate and complete semantic meaning to a whole building or to parts of a building. A general comparison of the LiDAR and photogrammetry paradigms can be found in Baltsavias (1999). Kaartinen et al. (2005) presented an empirical evaluation that supports the theoretical analysis discussed above. The study compares accuracies obtained with aerophotogrammetry and LiDAR in building extraction. It consists of four test sites, three in Finland and one in France. The following data was used in the evaluation tests: aerial images (GSD ~ 6 cm), camera calibration and image orientation information, ground control points, LiDAR data (2-20 pts./m²), and cadastral map vectors of selected buildings. Evaluation tests were carried out by 11 participants, leading to 3D building models. These 3D models were numerically compared to ground reference data. The main conclusions drawn from the tests are:

introducing the portuguese web archive initiative

Using a blade system instead of independent servers saves electrical power consumption and physical space in the cabinets, because it uses compact components and shares power supplies. System administration costs are also reduced. Cable failures are a prime cause of downtime, and 25% of a system administrator's time is spent on cable management [23]. A blade system uses internal network connections that reduce cabling inside a cabinet. The blades are plug-and-play and can be quickly replaced in case of failure. However, there are disadvantages regarding the use of a blade system. Although it contains redundant components and sophisticated monitoring and alert mechanisms, a blade system setup is less fault-tolerant than having several independent servers, because all the blades are managed by a single device. Maintenance operations, such as firmware upgrades of shared components, may require shutting down all the blades. Another disadvantage is that blades from different vendors are usually not compatible.
