
5. EMPIRICAL RESULTS AND DISCUSSION

5.5 Combining the empirical results

it is important to ensure technical performance and reliability in the given context. Therefore, the server has to be defined in accordance with the needs of the business; the connection has to work, wherever you are, if the business requires it.

In conclusion, the second interviewee emphasized that it is important to satisfy both the management and the users: to provide adequate training and support for the users, and to ensure that the objectives set by the project steering group are reached. Regarding practices and improvements to the framework, schedule updaters responsible for ensuring that schedules are always up to date had not been assigned. Other improvements concerned process and template alignment: the templates created should be in harmony with the processes. It is a real benefit to be able to deliver schedules, progress and perhaps key resources to the customer through a visible delivery process, even though the customer does not request it; this clearly displays a certain professionalism and quality to the customer. This was not reviewed in the implementation framework design. The interviewee recognized that their internal visibility was considered less than the external (customer) visibility, unlike the Case company, which also focused on the internal side, since its manufacturing partners were situated in a different country.

Both interviewees emphasized the reliability of the cloud. According to their experience, operations in China resulted in lightly structured project files due to low bandwidth. Similar issues should be considered, for example, in hotels abroad. These facts were not considered methodically in the framework.

User support:

o POC / training camp
o Video training

Project evaluation:

o KPI
o User and customer satisfaction
o System performance
o License usage

Testing:

o Reliability and functional testing at country level

Communication:

o Implementation schedule communication

Project resourcing methods:

o Clarifications to process descriptions

All these improvements were iterated into the framework (Figure 20). Since corrections were needed, the evaluation was beneficial and gave rigor to the research, most likely increasing the framework’s ability to achieve its goal (validity, effectiveness), its consistency with the organization, and the clarity and completeness of its structure. As a result, the implementation framework evaluation attributes were weighed through internal and external interviews. Most interviewees emphasized that the implementation needs pragmatic approaches to achieve its effectiveness and validity. As one of the project managers said,

“there is most probably a difference between the plan and the reality”. Therefore, validation at this point aimed to give directional information about the framework’s ability to be “utilizable” and to fulfil its purpose. The validation was done as follows:

Goal: validity and generality

o Validity and generality evaluated in an internal and a similar external context
o Ability to reach the objectives partially evaluated during the project

Environment: consistency with people, organization and technology

o Utility and ease of use evaluated by users and stakeholders, and feedback was considered

Structure:

o Clarity, completeness and level of detail evaluated by users and stakeholders

Evolution:

o Learning capability was not evaluated

The result of the interviews on how the framework’s evaluation attributes should be weighed in the case organization’s context is presented in Figure 23. The colors indicate whether the framework has achieved each of the dimensions and criteria: green indicates achieved, yellow not completely achieved, and red not achieved. It is relevant to note that evaluation is a constant process and criteria outputs can vary over time.

Figure 23. Evaluation attributes in hierarchy level in Case company

The hierarchy level was ordered by the Senior manager (E9) (see Appendix G) during the validation interviews with respect to the current situation and the Case company’s environment, as supported by March and Smith (1995). Therefore, the relevancy is based on his experience. The most important evaluation dimension was goal, where effectiveness and validity were given importance. This dimension is also supported by Hevner et al. (2004) and Peffers et al. (2007). Based on the experience of the senior manager, validity was considered achieved, while effectiveness was not completely achieved; however, the communication process was evidence that the direction was right. Still, the evaluation covered a small group of stakeholders, so this has to be viewed critically. The second most important evaluation dimension was environment, also supported by March and Smith (1995), where performance is related to the environment in which the framework operates. Consistency with technology was given importance, since one of the main objectives of the company was to have only one resource management tool in use, instead of many. However, this cannot be evaluated yet. The second most important criterion was consistency with people, which was considered to be achieved.

The third most important evaluation criterion was consistency with organization, as the framework’s purpose was to build alignment between the organization, IT and processes.

Structure was weighed as the fourth most important criterion. There, completeness was not completely achieved, since the experimentation of the framework was still ongoing.

Evolution, and especially learning ability, was seen as the least important criterion, together with generality, because it was not yet relevant when the evaluation took place.

As he expressed his thoughts: “I do not remember whether we have had any type of plan that would be as accurate as this plan, although its structure has been changing.”

Generality was considered the least important and the least relevant of all the criteria; however, the framework uses generally acknowledged best practices and critical factors adopted from the literature and the external interviews. Nevertheless, the framework’s core idea was not to be generally adoptable.
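As a reading aid, the traffic-light assessment described in this section can be condensed into a simple data structure. The following is an illustrative sketch only: the dimension and criterion names follow the text, but the encoding itself (and the helper function) is not part of the framework, and criteria whose status the text leaves open are marked as not evaluated.

```python
# Illustrative sketch (not part of the thesis framework): the evaluation
# dimensions, their criteria, and the traffic-light statuses reported for
# the Case company. "green" = achieved, "yellow" = not completely achieved,
# "red" = not achieved, None = not (yet) evaluated or not stated.

evaluation = {
    "goal": {
        "validity": "green",
        "effectiveness": "yellow",          # direction right, not complete
    },
    "environment": {
        "consistency with people": "green",
        "consistency with technology": None,   # cannot be evaluated yet
        "consistency with organization": None, # status not stated explicitly
    },
    "structure": {
        "completeness": "yellow",           # experimentation still ongoing
    },
    "evolution": {
        "learning capability": None,        # not evaluated
    },
    "generality": None,                     # considered least relevant
}

def summarize(ev):
    """Count criteria per traffic-light status across all dimensions."""
    counts = {"green": 0, "yellow": 0, "red": 0, None: 0}
    for criteria in ev.values():
        if isinstance(criteria, dict):
            for status in criteria.values():
                counts[status] += 1
        else:
            counts[criteria] += 1
    return counts

print(summarize(evaluation))
```

Such an encoding makes it easy to re-run the summary as the constant evaluation process updates individual criteria over time.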

Framework utilization required, and will continue to require, constant validation by experienced users and stakeholders. External validation also gave rigor and relevancy to the implementation framework. To finalize the implementation project, the outcomes need to be measured. If the objectives of the implementation are fully achieved, the artefact is fulfilling its purpose (Hevner et al. 2004). At the moment, the framework is still incomplete because a complete evaluation has not been conducted. This requires complete case study results after the Microsoft Project Server implementation is finished.

In a situation where an individual attempts to cross a river, “I would design them a bridge” (Buckminster Fuller 1992). A correctly designed bridge enables individuals and organizations to move from their current situation to the desired location. Bridge building requires knowledge and scientific theories. The greatest testament to a bridge is its sustainability, a proof of concept. Nevertheless, software development and implementation on a global scale is done in an environment where multiple users and other stakeholders set requirements from different perspectives which may be far from agreement. In addition, complexity is generated from constantly evolving demands on how the system can support different internal and external stakeholders, such as a customer. On the other hand, technology sets specific requirements, especially when cross-technology integrations exist. Under such complexity, the methods should be empirical and iterative so that the artefact built fulfils its purpose. Notably, the scientific literature argues that the plan should have the ability to evolve and adapt, especially when complexity exists; therefore, development is a dynamic process rather than a stable condition (Guckenheimer et al. 2012, pp. 3-5; Nunamaker et al. 1990; March and Smith 1995; Becker et al. 2003, p. 133; PMI 2013, pp. 55-56).