I would also like to thank my colleagues at the company I work for, for their enormous support: Nicolau Romão and Joaquim Calhau, for making my studies financially possible and for giving me the time and corporate knowledge needed to better align research with corporate needs; Rui Costa, for supporting the Case Study and for believing that doing things right means researching, studying and implementing a real solution aligned with best practices, frameworks and bodies of knowledge; Telmo Henriques and Nuno Perry, for their wisdom on research and for their enormous support when time, patience and other priorities almost made this research a second priority; Tito Torres, for his knowledge of IT-Governance best practices and for believing this is a truly important topic that deserved more research; Carlos Gouveia, for his belief in the project and for his support in making meetings happen with all parties involved in the project. I would also like to thank a great friend of mine who shares the same passion for Data-Driven Decision Support Systems, for his contribution in validating the model even when his time balance was already negative.
The main aim of this study is to identify and understand the factors of success in the implementation, use and maintenance of electronic libraries (e-libraries) in the academic context of higher education, based on the Resource-Based View (RBV) and Social Learning Theory (SLT). To achieve this goal, a qualitative approach was adopted, through a case study of the e-library of the University of Beira Interior (UBI), Portugal. The data were obtained from direct observation, interviews with various actors (staff of the Library, Computing and Administration Services of this institution) involved in the process of implementing and maintaining the e-library, and also documentary analysis. The empirical evidence obtained reveals the most relevant factors for e-library success to be: (1) minimization of costs, (2) acceptance and use of e-libraries, and (3) staff training. This study shows the great importance of training library staff in the use and exploitation of the e-library’s functions. Thorough knowledge of the e-library means improved use and search effectiveness. The minimization of costs associated with its implementation and maintenance is also a factor determining its success. Given the limited number of empirical studies exploring the topic, this study is particularly important and innovative in the context of Higher Education Institutions (HEI). Based on the empirical evidence obtained, a framework is proposed, grouping and reflecting the most important factors for the success of e-libraries in the HEI context.
Research focusing on improving targeting for telemarketing campaigns has been prolific for quite a long period. Back in the nineteen sixties, pioneers such as Cox & Good (1967) were already working on designing architectures for Marketing Information Systems that could effectively provide better decision support in several related areas. However, it was only a few decades later, with more mature business information systems contemplating structured and efficient databases, that such systems evolved into real customer-based decision support systems (Abraham & Lodish, 1993; Van Bruggen et al., 1998). One of the most widespread terms in this domain is Customer Relationship Management (CRM) systems, which benefit largely from database analysis techniques such as data mining (Berry & Linoff, 1999). In fact, the more recent work of Ngai et al. (2009) reviewed the vast literature published on data mining applications to CRM. Specifically for targeting efficiency, Young (2002) proposed a choice-based segmentation approach, while the work of Rotfeld (2004) offers interesting insights into the opportunities emerging from the apparent negative effect of the opt-out registries for telemarketing. New mobile devices introduced other complementary forms of telemarketing, such as the usage of mobile Short Message Services (Rettie et al., 2005). Another trend of research is the analysis of customers’ receptivity toward telemarketing campaigns, using such knowledge to improve future campaigns (Mehrotra & Agarwal, 2009). In a very recent work with a similar goal of improving telemarketing through feature selection, Tan et al. (2014) proposed a completely different approach, using a single-feature evaluator specifically to address the class imbalance associated with telemarketing problems. Their study tested the proposed method at a large online employment advertising company and was able to improve on a standard classification approach.
However, their approach is limited to a standard database and does not focus on improving its value in terms of business knowledge. Moreover, their single-feature evaluation does not seem to capture the relations between different features that affect the final outcome. Finally, it follows a purely automated procedure, failing to use valuable and irreplaceable expert domain knowledge.
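The limitation noted above can be made concrete with a toy example (the data and helper below are purely illustrative, not taken from the cited study): with an XOR-like target, each feature alone carries no information about the outcome, yet the two features together determine it completely, so a purely single-feature evaluator would discard both.

```python
# Toy dataset: (feature1, feature2, target), where target = f1 XOR f2.
rows = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]

def alone_predictive(feature_idx):
    """True if this feature's value alone fixes the target."""
    by_value = {}
    for row in rows:
        by_value.setdefault(row[feature_idx], set()).add(row[2])
    return all(len(targets) == 1 for targets in by_value.values())

f1_alone = alone_predictive(0)   # neither feature is informative on its own
f2_alone = alone_predictive(1)
# ...yet the pair of features determines the target exactly:
pair_determines = len({((r[0], r[1]), r[2]) for r in rows}) == len({(r[0], r[1]) for r in rows})
```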
The DSS provides the user with three lists of options: key location factors that may be selected and weighted (totaling 100), land-use areas, and facility types. At the current stage, the DSS does not offer information on facilities within the top location areas returned, as it does not yet include real estate collaborators, whose role would be to detail facility offers. The prospective real estate collaborators are expected to supply geo-tagged information on facilities by filling in their real estate offer forms. At that point, those data will be embedded into the DSS, so that location results will provide links to the real estate offers falling within the result areas.
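A minimal sketch of how user-selected key location factors, weighted to total 100, could be combined into a score per candidate area (the factor names, ratings, and scoring function are hypothetical illustrations, not the DSS's actual model):

```python
def location_score(factor_ratings, weights):
    """factor_ratings: an area's rating per factor (0-1); weights must total 100."""
    assert sum(weights.values()) == 100, "weights must total 100"
    return sum(factor_ratings[f] * w for f, w in weights.items()) / 100

# Hypothetical user input: three selected factors, weighted to 100.
weights = {"accessibility": 40, "land_cost": 35, "labour_pool": 25}
areas = {
    "area_A": {"accessibility": 0.9, "land_cost": 0.4, "labour_pool": 0.7},
    "area_B": {"accessibility": 0.6, "land_cost": 0.8, "labour_pool": 0.5},
}
# Rank candidate areas by weighted score, best first.
ranked = sorted(areas, key=lambda a: location_score(areas[a], weights), reverse=True)
```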
For a public transport company, the process of defining trip offers is a central task, because trips are the main product it has to offer its clients. As in other business areas, the offer should maximize clients’ satisfaction at a minimum cost. Traditionally, timetables were defined assuming a deterministic travel time. However, with the investments made in the last decade in Advanced Public Transportation Systems, a large amount of current data obtained from Automatic Vehicle Location systems is now available. This data can be used to enhance travel time modelling for timetable definition. This is the subject of this paper, which required us to study how to use current data to better define timetables, helping public transport companies accomplish their mission. We present a Decision Support System (DSS) for the special case of timetable adjustments, assuming therefore that the schedule under study is not new, i.e., actual trips already exist for that schedule.
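A hedged sketch of the basic idea (the travel times are illustrative, and the percentile choice is a design parameter, not the paper's method): instead of a single deterministic travel time, a scheduled time for an adjusted timetable can be derived from the distribution of observed AVL travel times.

```python
import math
import statistics

# Observed travel times (minutes) for past trips on the schedule under study.
observed_minutes = [22, 24, 23, 30, 25, 27, 23, 26, 24, 35]

mean_time = statistics.mean(observed_minutes)
# A high percentile (nearest-rank 85th here) is a common punctuality-oriented
# choice: the scheduled time then covers most observed trips.
rank = math.ceil(0.85 * len(observed_minutes))
p85 = sorted(observed_minutes)[rank - 1]
```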
This paper describes the potential of Geographic Information Systems (GIS) as a means to support decision making in solving spatial problems. Spatial problems accompany every human activity, and agriculture is no exception. Solving these problems requires the application of available knowledge in the relevant decision-making processes. GISs integrate hardware, software, and data for capturing, managing, analyzing, and displaying all forms of geographically referenced information. Coupled with GISs, geography helps us better understand and apply geographic knowledge to a host of global problems (unemployment, environmental pollution, the loss of arable land, epidemics, etc.). The result is a geographical approach that represents a new way of thinking about and solving existing spatial problems. This approach makes it possible to apply existing knowledge to model and analyze these problems and thus helps to solve them.
The results of the system evaluation showed high accuracy and specificity, while the system’s 97% sensitivity implies only a slender probability of missing patients with bacterial meningitis. In this study, the system missed one patient with bacterial meningitis and a negative CSF culture. By comparison, Francois’s system missed 2 patients with bacterial meningitis among 212 patients, and Ocampo’s systems had high accuracy that depended on users’ experience. These systems were used to diagnose and treat many diseases, although no specific results were obtained regarding bacterial meningitis [16,17].
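To make the quoted metrics concrete, the sensitivity and specificity figures derive from a confusion matrix over the evaluated cases; the counts below are hypothetical (chosen only so that sensitivity comes out near 97%, with one missed case), not the study's actual data.

```python
# Hypothetical confusion-matrix counts for a bacterial meningitis classifier.
tp, fn = 33, 1   # true cases: correctly detected vs missed
tn, fp = 160, 6  # non-bacterial cases: correctly ruled out vs false alarms

sensitivity = tp / (tp + fn)   # probability of catching a true case (~97%)
specificity = tn / (tn + fp)   # probability of correctly ruling out a non-case
```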
Decision making is one of the fundamental cognitive processes represented in the layered reference brain model (LRBM) (Wang et al.; Wang). The study of decision making has raised interest in various fields such as cognitive informatics, computer science, psychology, management science, decision science, economics, sociology, political science, and statistics (Edwards and Fasolo; Hastie,
The absence of computational means could be a main factor limiting a region’s ability to compete with others in terms of development. The institutional norms (PRTs, PROTs, ...), together with a design that enables the decision process and allows quick and efficient manipulation of the great quantity of information, should be one of the best ways to arrive at a decision for a localised problem and to find a solution that eliminates regional asymmetries.
procedure is one proposed by Ziegler and Nichols for tuning proportional-integral-derivative (PID) controllers. However, many other data-driven control algorithms have since been reported in the literature. These include virtual reference feedback tuning (VRFT), iterative feedback tuning (IFT) and data-driven model-free adaptive control (MFAC), to mention a few. VRFT is a reference-model adaptive control technique. Within the VRFT framework, the controller structure is assumed to be known a priori, and the controller parameters are determined by a system identification procedure using a virtual reference signal. It is designed for discrete-time single-input single-output (SISO) linear time-invariant (LTI) systems. The VRFT procedure may produce a controller that results in an unstable closed-loop system [2,3]. IFT, also proposed for unknown discrete-time SISO LTI systems, determines the controller parameters through an iterative gradient-based local search procedure. Computation of the gradient of the criterion function with respect to the controller parameters is based on the available input/output (I/O) measurement data of the controlled plant. The MFAC approach, in contrast to VRFT and IFT, was developed for a class of discrete-time nonlinear systems. The main characteristic of MFAC is the utilization of local dynamic linearization data models. The models are computed along the dynamic operating points of the closed-loop system. Computation is performed by dynamic linearization techniques, with a pseudo-partial derivative concept, based on the real-time I/O measurements of the controlled plant. The above-mentioned algorithms, i.e. VRFT, IFT and data-driven MFAC, are direct adaptive control algorithms, as they tune the controller without a prior plant identification procedure.
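For reference, the classic closed-loop Ziegler–Nichols rules mentioned above map two experimentally found quantities, the ultimate gain Ku and the ultimate oscillation period Tu, to PID gains via fixed ratios (the example values of Ku and Tu are illustrative):

```python
def ziegler_nichols_pid(Ku, Tu):
    """Classic Ziegler-Nichols PID rule: Kp = 0.6*Ku, Ti = Tu/2, Td = Tu/8."""
    Kp = 0.6 * Ku
    Ti = Tu / 2.0        # integral time
    Td = Tu / 8.0        # derivative time
    # Return parallel-form gains (Kp, Ki, Kd).
    return Kp, Kp / Ti, Kp * Td

# Illustrative plant: sustained oscillation at Ku = 2.0 with period Tu = 4.0 s.
Kp, Ki, Kd = ziegler_nichols_pid(Ku=2.0, Tu=4.0)
```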
Further, in applications rich in measurement data that concern highly complex processes, neural networks (NNs), with their ability to approximate functions through a training procedure [5, 1], might be a good solution. Especially in nonlinear control tasks, NNs are employed as nonlinear process models and/or nonlinear controllers [6, 7, 1]. However, the design of an appropriate NN, meaning the choice of NN structure, input and output vectors, training set and training algorithm, can be a difficult task. In signal prediction applications, we design a model that should produce the desired signal at its output. Usually, the model is parameterized, and the model parameters are tuned in real time based on the incoming process measurements, i.e.
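As a minimal stand-in for such a real-time-tuned predictor (not the NN design discussed in the text, but the simplest parameterized model tuned from incoming measurements), a least-mean-squares (LMS) linear predictor updates its weights after every new sample:

```python
def lms_predict(signal, n_taps=2, mu=0.05):
    """Predict each sample from the previous n_taps samples; tune weights online."""
    w = [0.0] * n_taps
    predictions = []
    for t in range(n_taps, len(signal)):
        x = signal[t - n_taps:t]
        y_hat = sum(wi * xi for wi, xi in zip(w, x))   # predict before seeing signal[t]
        predictions.append(y_hat)
        e = signal[t] - y_hat                           # prediction error
        w = [wi + mu * e * xi for wi, xi in zip(w, x)]  # LMS weight update
    return w, predictions
```

On a constant signal the weights converge so that predictions approach the true value, illustrating the "parameters tuned in real time" idea on the smallest possible model.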
As stated in the introduction, GSSs are a natural solution to handle this kind of work. Nevertheless, as GSSs are built upon the idea of sequential support for the decision-making stages16 (see Ref. 28 for a thorough discussion of the advantages of technology that provides clear and simple instructions on good problem-solving practices instead of a complex array of tools, at least in the case of ill-structured problems), it is not always easy to understand the earlier stages of a discussion. This is particularly evident at the end of discussions, when the classes that were created to encompass the discussion elements, and some of the details, are “flattened”. For instance, in a GSS voting environment, it is usual to expect changes in initial votes as part of the group process.29 Even if people are allowed to review their votes (for instance, after discussing the results), when the decision is made and the results are disclosed, the final report is poor when it comes to showing discussion progress, changes of opinion (and by whom, if possible), convincing arguments, etc., which were involved from the start of the discussion to its end. In this case, a new group iteration (which could be the point when a vote changed) replaces the earlier one, discarding the previous discussion scenario. However, reports usually embed only the latest result, especially when reporting is an automatic feature.
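The reporting gap described above can be sketched in a few lines (the participants, options, and data structure are hypothetical illustrations, not any particular GSS's model): keeping every voting round, instead of overwriting with the latest one, is what allows a final report to show who changed their vote and when.

```python
# Each round is preserved rather than replaced, so history survives.
rounds = [
    {"ana": "option A", "bruno": "option B", "carla": "option B"},  # initial vote
    {"ana": "option B", "bruno": "option B", "carla": "option B"},  # after discussion
]

# Report material a flattened result would lose: who changed, from what, to what.
changes = {p: (rounds[0][p], rounds[-1][p])
           for p in rounds[0] if rounds[0][p] != rounds[-1][p]}
```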
Macaé County has one of the largest economies in the state of Rio de Janeiro. With the use of information technology, it is possible to create a powerful tool to support decision making in this County, aiding the improvement of quality of life. To that end, we intend to use a Decision Support System able to provide different kinds of information about County areas, such as health and education. To unify all this information, data warehouse technology will be used. For query implementation, OLAP and GIS technologies are used together. Therefore, these technologies together form a powerful tool to aid the decision-making process of Macaé County.
Healthcare-associated infections (HAI) are one of the major worldwide causes of death and disability, with an important economic impact of several billion dollars per year in the USA. Antimicrobial resistance (AR) further increases the morbidity, mortality and costs associated with HAI. Antibiotic (AB) misuse is at the heart of AR: uncritical AB prescription leads to the development of antimicrobial-resistant bacteria. Antimicrobial-resistant HAI can be addressed by interventions that reduce unnecessary AB prescribing. However, infection control teams spend most of their time on data collection, finding suspected cases, reviewing medical records, etc., and it is hard for them to find time for proactive preventive activities. This problem can be tackled with computerized surveillance and decision support systems, which have proven
and received positive responses regarding their importance when we contacted authors. We could not assess factors such as leadership, institutional support, application deployment, extent of end user training, and system usability. It is not possible for studies to report all potential determinants of success of computerised clinical decision support systems, and a prospective database of implementation details might be better suited to studying determinants of success than our retrospective study. The best design for studying some factors would be a cluster randomised controlled trial comparing a system containing a feature directly with the same system without that feature. Conducting such studies, however, would be difficult for many of the potential determinants, such as the institution’s implementation experience or culture of quality improvement. Rigorous randomised controlled trials are the best way of testing systems’ impact on health.48-50 They can test only a few
The transformation from conventional government services to E-government services heralds a new era in public services. E-government services can replace the government’s traditional services with services of better quantity, quality and reach, and increase citizen satisfaction, using Information and Communication Technology (ICT). E-governance aims to make the interactions between government and citizens (G2C), government and business enterprises (G2B) and inter-government department dealings (G2G) friendly, convenient, transparent and less expensive. A growing amount of informative text regarding government decisions, directives, rules and regulations is now distributed on the web through a variety of portals, so that citizens can browse and peruse it. This assumes, however, that the information seekers are capable of untangling the massive volume and complexity of the legally worded documents. Government regulations are voluminous, heavily cross-referenced and often ambiguous. Government information is in unstructured/semi-structured form, the sources are multiple (government regulations come from national, state and local governments) and the formats differ, creating a serious impediment to their searching, understanding and use by ordinary citizens. In the G2G arena, the government departments are in even greater need of a system able to provide information retrieval, data exchange, metadata homogeneity, and proper information dissemination across the administrative channels of national, regional/state, and local governments. The increasing demand for and complexity of government regulations on various aspects of economic, social and political life call for an advanced knowledge-based framework for information gathering, flow and distribution.
For example, if policy makers intend to establish a new act, they need to know which acts on the same topic have been established before, and whether the content of the new act conflicts with or has already been included in existing acts. Also, regulations are frequently updated by government departments to reflect environmental changes and changes in policies. Tools that can detect ambiguity, inconsistency and contradiction are needed, because the regulations, amended provisions, legal precedents and interpretive guidelines together create a massive volume of semi-structured documents with potentially similar content but possible differences in format, terminology and context. Information infrastructures that can consolidate, compare and contrast different regulatory documents will greatly enhance and aid the understanding of existing regulations and the promulgation of new ones.
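As a hedged illustration of the simplest building block such an infrastructure could use (the provision texts are invented, and real systems would need far richer legal-language processing), a bag-of-words cosine similarity can flag provision pairs with potentially overlapping content for human review:

```python
import math
from collections import Counter

def cosine_sim(a, b):
    """Cosine similarity between two texts under a bag-of-words model."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    norm = (math.sqrt(sum(c * c for c in va.values()))
            * math.sqrt(sum(c * c for c in vb.values())))
    return dot / norm if norm else 0.0

# Invented provisions with overlapping content.
old_act = "permits for waste disposal require annual renewal"
new_act = "waste disposal permits require annual renewal and inspection"
flag_for_review = cosine_sim(old_act, new_act) > 0.5  # threshold is a design choice
```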
The microeconomic and technical data must be gathered, in accordance with the Ministry of Agriculture’s book-keeping, from an adequate sample of farms and their production enterprises representing the region’s production plan. These microeconomic and technical data concern the following: yields, product prices, the necessary seeds, fertilizers, pesticides, etc., the necessary labor force and machinery, and all the other technical and economic data needed to estimate the gross margin, the variable cost and the gross profit of each production enterprise.
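The per-enterprise accounting described above reduces to a simple relation, gross margin = gross revenue − variable cost; the figures below are illustrative, not regional data:

```python
def gross_margin(yield_per_ha, price, variable_cost_per_ha):
    """Gross margin per hectare: yield * price minus variable cost."""
    revenue = yield_per_ha * price
    return revenue - variable_cost_per_ha

# Illustrative wheat enterprise: 4.0 t/ha at 180 EUR/t, 420 EUR/ha variable cost.
wheat_margin = gross_margin(4.0, 180.0, 420.0)   # EUR/ha
```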
Perception involves sensing significant information about the system itself and the environment it is operating in. This information can be obtained with the help of data collection tools related to the technological infrastructure of an organization (hardware, services, databases). Comprehension encompasses more than simply sensing/perceiving data: it relates the meaning of the information to the system’s goal/purpose. It can be represented through an ontology for context knowledge representation. Projection consists of predicting how the system’s current state will evolve (in time) and how it will affect the future states of the operating environment. Currently, there are tools covering the different levels of situation awareness that help detect, prevent and recover from cyber incidents that could threaten the security of an organization. The present work shows a comparative analysis of the most popular and relevant tools in this area, and proposes a contribution in this domain.
To conclude this introduction, we must stress once more that the proposed tool is intended exclusively to respond to a type-standard configuration of MFS. Nevertheless, within this type-standard configuration, the user can easily evaluate different strategies under different values for the number of active machines, the number of maintenance crews and the number of spare machines. In this way, the resulting MFS model aims to fill a gap in the computer solutions currently available for this specific type of maintenance system.
The Precision Tree® system includes various tools for defining and analyzing decision trees and influence diagrams. In the software product, all decision model values, including the probabilities, are entered directly in spreadsheet cells, just like in any other Excel model. It also allows linking values in the decision model directly to locations specified in a spreadsheet model. The results of solving that model can be utilized as payoffs for each path through the decision tree. All payoff calculations happen in real time; that is, as the tree is edited, all payoffs and node values are automatically recalculated.
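The automatic recalculation of node values rests on the standard expected-value rollback for decision trees; the sketch below shows that computation in its simplest form (the tree structure and figures are illustrative, not the product's API): chance nodes average their branches' values by probability, and decision nodes take the best branch.

```python
def rollback(node):
    """Fold a decision tree back to the expected value of its best strategy."""
    kind = node["type"]
    if kind == "payoff":
        return node["value"]
    if kind == "chance":
        # Expected value: probability-weighted average of branch values.
        return sum(p * rollback(child) for p, child in node["branches"])
    # Decision node: choose the branch with the highest expected value.
    return max(rollback(child) for child in node["branches"])

# Illustrative tree: a risky venture vs a certain payoff of 20.
tree = {"type": "decision", "branches": [
    {"type": "chance", "branches": [
        (0.6, {"type": "payoff", "value": 100}),
        (0.4, {"type": "payoff", "value": -50}),
    ]},
    {"type": "payoff", "value": 20},
]}
best_ev = rollback(tree)   # risky branch: 0.6*100 + 0.4*(-50) = 40, beats 20
```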