The findings of the present study demonstrated that subjective noise was significantly lower in children's MBIR images than in FBP or ASIR images. Similarly, images in series A appeared finer, with less granular noise and more homogeneous image density than those in series B and C, which concurs with previous studies [8,10–13]. The MBIR images also displayed tiny structures, including small bronchial walls, the pleura, and small lymph nodes, more clearly than the other two techniques (Figures 1 and 2), substantially facilitating diagnosis. In a study performed by Singh et al., blurred edges were found in images reconstructed with ASIR. Interestingly, blurred edges were also found with MBIR, and they were more pronounced in images reconstructed by MBIR than by ASIR. In the present study, blurred edges were found for almost all tissues with large density differences, but because the density inside these tissues was relatively homogeneous and the image noise was very low, these blurred edges did not significantly influence diagnosis. Studies have revealed that most radiologists are reluctant to make diagnoses using MBIR images. However, both radiologists participating in the present study had reviewed IR images for more than one year and could readily use them for diagnosis; both gave a diagnostic confidence score of 5. A major limitation of MBIR, however, is the long reconstruction time required to obtain the images, which prevented us from using the high-resolution mode. Therefore, the use of MBIR could be limited for some diagnoses, such as interstitial lung diseases.
Radiologists seek to reduce the radiation dose in CT screening, either directly, by changing the tube current, tube voltage, section thickness, scan length, NI, etc., or indirectly, by using reconstruction algorithms. Unfortunately, reducing the radiation dose inevitably increases image noise and affects image quality. The conventional CT image reconstruction algorithm, FBP, reflects a trade-off between sharpness and image noise that limits how far the radiation dose can be reduced while maintaining diagnostic image quality. ASIR is a newer image reconstruction algorithm that reduces image noise by iterating between the raw data and image space, generating images of higher quality and greater structural detail at lower radiation doses than FBP [17,22,23,29–32]. Iterative reconstruction algorithms can effectively reduce radiation doses for chest CT: one study showed a 27% radiation dose reduction using 30% ASIR, and another reported that higher ASIR levels (100%) could reduce radiation doses even further (by 76%). ATCM is another method that we used to minimize the radiation dose. Several studies indicated that CT dose indices could be reduced by 40–60% using ATCM without compromising image quality [27,34], and we could further reduce CT doses to sub-millisievert levels by combining ATCM with ASIR.
The small patient collective is a relevant limitation of our study. The comparison of contour sharpness showed no statistically significant difference in our analysis. Because of the small patient collective, we decided to perform an intraindividual comparison rather than a randomised trial, which may reveal differences in one patient at the same position in the coronary segment, and not only between groups. Thus, our patients underwent CT using the same scanning conditions for both AIDR 3D and FBP. Performing AIDR 3D with a lower radiation dose might have influenced the contour sharpness. In our analysis, FBP reconstruction was combined with QDS for additional noise reduction. Because of this difference in reference standard, comparison with recent studies is only partially possible. If ROIs are placed manually into distal coronary segments with a small diameter, the vessel wall could be included in the measurement; as a result, the statistical noise is influenced by the CT number profile and noise increases. In addition, we grouped the LM with the proximal LAD, LCX, and RCA as proximal measurement points even though, in general, the LM vessel size is larger than that of the LAD, LCX, and RCA. While our analysis showed no difference in SNR and CNR between proximal and distal segments, image quality might be influenced by small changes of the vessel size. Although coronary CTA may be used to characterise different plaque entities, we did not include a plaque analysis in our manuscript. Finally, our standard scanning conditions were above 100 mAs. Even for standard clinical acquisition protocols of coronary CTA that acquire with mAs values above 100 mAs, AIDR 3D remains effective compared with the original FBP in reducing SD values and improving image quality; however, the effect is less pronounced when compared with Toshiba's former QDS+ technique. As the noise reduction effects of these iterative reconstruction techniques are expected to be more pronounced at lower mAs values, it will be important to explore coronary CTA protocols at lower mAs in future studies.
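For reference, SNR and CNR in such ROI-based analyses are commonly computed as the mean attenuation divided by the noise (SD) and as the attenuation difference between vessel and background divided by the background SD. The snippet below is a minimal sketch of this calculation; the ROI values, the function name roi_snr_cnr, and the exact CNR denominator (definitions vary between studies) are illustrative assumptions, not the authors' actual measurement protocol.

```python
import numpy as np

def roi_snr_cnr(vessel_roi, background_roi):
    """Illustrative SNR/CNR computation from two ROIs of CT numbers (HU).

    vessel_roi and background_roi are arrays of pixel values drawn from a
    contrast-enhanced coronary lumen and adjacent tissue, respectively.
    SNR = mean(vessel) / SD(vessel);
    CNR = (mean(vessel) - mean(background)) / SD(background).
    """
    snr = vessel_roi.mean() / vessel_roi.std(ddof=1)
    cnr = (vessel_roi.mean() - background_roi.mean()) / background_roi.std(ddof=1)
    return snr, cnr

# Hypothetical ROIs (HU values) for demonstration only
rng = np.random.default_rng(0)
vessel = rng.normal(400, 25, 200)       # enhanced lumen
background = rng.normal(60, 20, 200)    # perivascular soft tissue
print(roi_snr_cnr(vessel, background))
```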
This study has several limitations. First, we could not make a comparative analysis between spectral CT and conventional CT at different ASIR percentages. Although previous studies have verified that the image quality of gemstone spectral imaging (GSI) was superior to that of conventional CT, they were performed without ASIR reconstruction, and our study focused on the addition of ASIR in monochromatic spectral CT scans. Second, we did not take into account the potential effect of patient BMI, even though this effect should theoretically be minor. A previous study on abdominal image quality using low-dose ASIR reconstruction and normal-dose FBP reconstruction showed that the sharpness of structure boundaries with low-dose ASIR reconstruction was decreased in patients with low BMI, although diagnostic acceptability was nearly identical to that of routine-dose CT with FBP. Third, we did not use the noise power spectrum in the objective evaluation of iteratively reconstructed CT images. The noise power spectrum is a very important parameter for an iterative reconstruction algorithm and should be investigated when the algorithm is developed. However, it is less important in our study, since our aim was more focused on resolving clinical problems, and the relative performance between the different strengths of the iterative reconstruction was investigated.
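For context, the noise power spectrum is typically estimated from uniform (noise-only) ROIs, for example from the difference of two repeated phantom scans, by averaging the squared magnitude of their 2D Fourier transforms. The sketch below illustrates this standard estimator; the ROI size, pixel spacing, and variable names are assumptions for illustration and do not reproduce the evaluation omitted from this study.

```python
import numpy as np

def nps_2d(noise_rois, pixel_spacing_mm):
    """Ensemble-averaged 2D noise power spectrum from mean-subtracted noise ROIs.

    noise_rois: list of square 2D arrays containing noise only
    (e.g., ROIs from the difference of two repeated uniform scans).
    """
    n = noise_rois[0].shape[0]
    acc = np.zeros((n, n))
    for roi in noise_rois:
        roi = roi - roi.mean()                  # remove the mean (DC) offset
        acc += np.abs(np.fft.fft2(roi)) ** 2    # periodogram of this ROI
    nps = acc / len(noise_rois)
    nps *= pixel_spacing_mm ** 2 / (n * n)      # normalise to physical units
    return np.fft.fftshift(nps)

# Hypothetical example: 64x64 ROIs of white noise, 0.5 mm pixels
rng = np.random.default_rng(1)
rois = [rng.normal(0, 10, (64, 64)) for _ in range(16)]
print(nps_2d(rois, 0.5).shape)
```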
8 Arbitrarily sampled k-space autocalibrating pMRI
Another experiment is a simulation on a software phantom of a human head. This phantom was made by taking the central slice from a 256×256×176 voxel reconstruction from an MPRAGE sequence and simulating non-Cartesian acquisition on it using NUFT software. It was acquired on a simulated Archimedean spiral, with the distance between two loops of the spiral being 1.32 times the Nyquist limit, sampled at even angular spacing such that the total number of samples is 25% of the Nyquist rate. The data were simulated as an acquisition with 8 coils, with coil profiles taken from a different SENSE experiment. Since in this setup there is no calibration-capable area, i.e. no region around the origin of k-space where the data are sufficiently densely sampled, the proposed algorithm automatically switches to the l1-regularized calibration algorithm (9). The only reference algorithm we can compare to is a sum of squares of the ML-reconstructed images, because no GRAPPA or SPIRIT kernels can be trained: a k-space point pattern is never repeated and, strictly speaking, no autocalibration region exists. The result shown in Figure 17 demonstrates that even this regularized calibration approach succeeds.
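As a point of reference, the sum-of-squares combination used for the baseline is simply the root of the summed squared magnitudes of the per-coil images. The following sketch shows that combination on hypothetical data; the array shapes and names are illustrative and not taken from the experiment described above.

```python
import numpy as np

def sum_of_squares(coil_images):
    """Root-sum-of-squares combination of per-coil images.

    coil_images: complex array of shape (n_coils, ny, nx), e.g. one
    image per receive coil after individual reconstruction.
    """
    return np.sqrt(np.sum(np.abs(coil_images) ** 2, axis=0))

# Hypothetical 8-coil example on a 256x256 slice
rng = np.random.default_rng(2)
coils = rng.normal(size=(8, 256, 256)) + 1j * rng.normal(size=(8, 256, 256))
combined = sum_of_squares(coils)
print(combined.shape)  # (256, 256)
```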
The concept is quite simple: we begin with an arbitrary phase-only filter in the object domain multiplying the input object (the original image); after a Fourier transform we obtain a Fourier-domain image, on which we impose the required Fourier intensity (actually the magnitude), leaving the phase untouched. An inverse Fourier transform brings us back to the object domain. Since we demand a phase-only filter, we impose the intensity of the input object in this plane. Next we calculate the Fourier transform and return to the Fourier domain, and so on. This procedure is required because using only the phase of the complex filter that converts the input image exactly to the Fourier image gives poor results. As can be seen, if we impose half of the information (intensity or phase) in both the input and the output domains, the procedure converges monotonically. In later work, Gerchberg and Papoulis suggested the use of this method for super-resolution. However, both presented relatively simple test cases and assumed the properties of all iterations to be identical (except when noise reduction was addressed).
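The alternating-projection loop described above can be summarised in a few lines. The sketch below is a generic Gerchberg–Saxton-style iteration, not the authors' exact implementation: it repeatedly enforces the known Fourier magnitude and the known object-domain magnitude while keeping the phases; the variable names and iteration count are assumptions.

```python
import numpy as np

def gerchberg_saxton(object_magnitude, fourier_magnitude, n_iter=200):
    """Alternate magnitude constraints in the object and Fourier domains."""
    # Start from the object magnitude with a random phase-only filter applied
    rng = np.random.default_rng(0)
    field = object_magnitude * np.exp(
        1j * rng.uniform(0, 2 * np.pi, object_magnitude.shape))
    for _ in range(n_iter):
        F = np.fft.fft2(field)
        # Impose the required Fourier magnitude, keep the phase
        F = fourier_magnitude * np.exp(1j * np.angle(F))
        field = np.fft.ifft2(F)
        # Back in the object domain, impose the input intensity, keep the phase
        field = object_magnitude * np.exp(1j * np.angle(field))
    return field  # object-domain field whose phase is the retrieved filter

# Hypothetical usage with a toy 64x64 image
img = np.random.default_rng(1).random((64, 64))
retrieved = gerchberg_saxton(img, np.abs(np.fft.fft2(img)))
```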
ABC methods are starting to be widely used in population genetics and other areas (Ratmann et al. 2009). We focused on models with one or two admixture events and with up to four different populations. Our results suggest that it is possible to separate the effect of admixture from that of shared polymorphism. This is particularly important, as admixture events are likely to have occurred in many species after the last glaciations, during the colonization of new regions from several refugia or when populations encountered habitats that were already occupied (e.g. Chikhi et al. 2002; Alvarado Bremer et al. 2005; Gum et al. 2005; Fraser and Bernatchez 2005). Admixture is also likely to have happened during the domestication of plants and animals and is still an ongoing process between breeds (e.g. Bray et al. 2009b). Identifying admixture events is important, as admixture has been invoked in a number of genetic studies based on clustering methods. These methods (Pritchard et al. 2000; Falush et al. 2003; Corander et al. 2004) are very useful and have been very popular in the last decade for grouping individuals according to their genotypes under relatively simple population genetic models. However, the admixture parameter provided by these methods is difficult to interpret biologically and cannot separate shared polymorphism from proper admixture, as we saw for the I. lusitanicum data. The main reason is that the demographic and evolutionary history of the populations is not explicitly modeled; for instance, the fact that the populations may have different effective sizes is not taken into account. More work is required to find the situations where clustering and ABC methods are best applied. The former appear to be more suited to cases of ongoing gene flow, and the latter when ancient admixture and population split events have been important. Regarding ABC methods for admixture models, some improvements are likely to come from information about linkage disequilibrium (LD), as admixture is known to generate LD (Nordborg and Tavaré 2002; Chikhi and Bruford 2005). The use of summary statistics based on the statistical association of alleles at different loci may thus prove very useful for separating scenarios with different numbers of admixture events, and perhaps for separating admixture from gene flow models. We look forward to seeing these improvements in the next few years.
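As background, the core of a rejection-ABC scheme is to simulate data under candidate parameter values and keep those draws whose summary statistics fall close to the observed ones. The toy sketch below illustrates that idea on a made-up one-parameter model; the prior, simulator, summary statistic, and tolerance are purely hypothetical and do not correspond to the admixture models analysed here.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(theta, n=100):
    """Toy simulator: data drawn around an unknown parameter theta."""
    return rng.normal(theta, 0.1, n)

def summary(data):
    return data.mean()  # a single summary statistic

observed = simulate(0.3)
obs_stat = summary(observed)

# Rejection ABC: sample from the prior, keep draws whose simulated
# summary statistic lies within a tolerance of the observed one.
accepted = []
for _ in range(20000):
    theta = rng.uniform(0, 1)            # uniform prior on [0, 1]
    if abs(summary(simulate(theta)) - obs_stat) < 0.02:
        accepted.append(theta)

print(np.mean(accepted), len(accepted))  # approximate posterior mean, sample size
```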
Progressive image transmission is a method that allows a high-quality version of the original image to be obtained from a minimal amount of data. In the "progressive" mode of transmission, as more bits are transmitted, reconstructed images of increasing quality can be produced at the receiver. The receiver need not wait for all of the bits to arrive before decoding the image; in fact, the decoder can use each additional received bit to improve somewhat upon the previously reconstructed image. In Progressive Image Transmission (PIT), an approximate image is built up quickly in a first stage and refined progressively in later stages. Among the advantages of progressive image transmission is that the transmission can be interrupted when the quality of the received image has reached a desired accuracy, when the receiver recognizes that the image is not of interest, or when only a specific portion of the complete image is needed.
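One simple way to picture this refinement is bit-plane transmission: the sender transmits the most significant bit plane of every pixel first, then successively lower planes, and the receiver reconstructs a better approximation after each plane. The sketch below illustrates this generic scheme; it is not a specific PIT codec, and the image, bit depth, and function names are illustrative assumptions.

```python
import numpy as np

def bitplane_stages(image, bits=8):
    """Yield progressively refined reconstructions, one per transmitted bit plane.

    Bit planes are sent from most to least significant; after receiving
    plane b, the decoder knows the top (bits - b) bits of each pixel.
    """
    received = np.zeros_like(image)
    for b in range(bits - 1, -1, -1):
        plane = (image >> b) & 1          # bit plane transmitted at this stage
        received |= plane << b            # decoder accumulates the new plane
        yield received.copy()             # current reconstruction at the receiver

# Hypothetical usage on a toy 8-bit image
img = np.random.default_rng(0).integers(0, 256, (4, 4), dtype=np.uint8)
for stage, approx in enumerate(bitplane_stages(img), start=1):
    err = np.abs(img.astype(int) - approx.astype(int)).mean()
    print(f"after plane {stage}: mean abs error = {err:.1f}")
```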
Many numerical methods have been proposed to solve (3). When A is an identity matrix, the ROF model (3) reduces to a TV denoising problem, for which methods such as Chambolle's projection method, semi-smooth Newton methods, a multilevel optimization method and the split Bregman method have been developed. When A is a blur kernel matrix, (3) becomes a TV deblurring problem, for which we have primal-dual optimization algorithms for TV regularization [6–9], the forward-backward operator splitting method, the interior point method, the majorization-minimization approach for image deblurring, Bayesian frameworks for TV regularization and parameter estimation [13–15], methods using local information and Uzawa's algorithm [16,17], regularized locally adaptive kernel regression, augmented Lagrangian methods [19,20], and so on. However, the problem is far from perfectly solved; issues such as edge and detail preservation [21,22], ringing-effect reduction [23–25] and image restoration with varied blur kernels and noise types [26,27] still need better solutions.
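To make the TV denoising case concrete, the sketch below implements a Chambolle-style dual projection iteration for the denoising form of (3) (A equal to the identity). It is a minimal illustration under assumed notation: the regularisation weight lam, the step size tau, and the discrete gradient/divergence definitions are standard choices, not parameters taken from the cited works.

```python
import numpy as np

def grad(u):
    """Forward-difference gradient with Neumann boundary, shape (2, H, W)."""
    gx = np.zeros_like(u); gy = np.zeros_like(u)
    gx[:-1, :] = u[1:, :] - u[:-1, :]
    gy[:, :-1] = u[:, 1:] - u[:, :-1]
    return np.stack([gx, gy])

def div(p):
    """Discrete divergence, the negative adjoint of grad."""
    px, py = p
    dx = np.zeros_like(px); dy = np.zeros_like(py)
    dx[0, :] = px[0, :]; dx[1:-1, :] = px[1:-1, :] - px[:-2, :]; dx[-1, :] = -px[-2, :]
    dy[:, 0] = py[:, 0]; dy[:, 1:-1] = py[:, 1:-1] - py[:, :-2]; dy[:, -1] = -py[:, -2]
    return dx + dy

def chambolle_tv_denoise(f, lam=20.0, tau=0.125, n_iter=100):
    """Chambolle-style dual iteration for min_u ||u - f||^2 / 2 + lam * TV(u)."""
    p = np.zeros((2,) + f.shape)
    for _ in range(n_iter):
        g = grad(div(p) - f / lam)
        norm = np.sqrt(g[0] ** 2 + g[1] ** 2)
        p = (p + tau * g) / (1.0 + tau * norm)   # projected fixed-point step
    return f - lam * div(p)

# Hypothetical usage: denoise a noisy synthetic step image
rng = np.random.default_rng(0)
clean = np.zeros((64, 64)); clean[:, 32:] = 100.0
noisy = clean + rng.normal(0, 10, clean.shape)
denoised = chambolle_tv_denoise(noisy)
```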
The patient underwent fiberoptic bronchoscopy, which revealed stenosis of 70-80% of the tracheal lumen. It was not possible to advance the device through the stenosis. There was no evidence of an inflammatory process, nor were there any alterations of the trachea or larynx proximal to the stenosis. Dilatation using a rigid bronchoscope was performed, albeit with no satisfactory results. Tracheoplasty, using a median sternotomy approach, was then performed.
A CT scan of the airway was performed on postoperative day 3 (with the OTT being temporarily removed under endoscopic guidance), and the dimensions of the tracheal stenosis were obtained. The three-dimensional reconstruction of the trachea and the tracheal measurements were sent to Hood Laboratories (Pembroke, MA, USA). Based on those dimensions, a silicone (T-Y) tracheobronchial stent compatible with the patient's airway was purchased (Figure 2).
After estimating the pose parameters, the range values are calculated from the image information of the corresponding pixels in all of the images. An effective way to find matched points in different views is to employ epipolar rectification (Zhang et al., 1994). In epipolar rectification, all of the images are projected into a 1D space and resampled in order to remove the misalignment in the X direction. The difference between the Y coordinates of corresponding pixels is then called parallax in image space, which is proportional to the height values in object space. Based on the epipolar matching approach, for a given point in the first image, its corresponding point must lie on the epipolar line in the second view. Therefore, the search proceeds along a line rather than over the entire image, which makes the matching process simpler and faster. Here, dense stereo matching based on the Semi-Global Matching algorithm (Hirschmueller, 2005) is employed to estimate the disparity between corresponding pixels by minimizing a global cost function. Owing to the high overlap between images, the redundancy of disparity information is used for blunder detection and for improving the accuracy of the final depth maps. As the final step, the depth maps from different stereo models are merged to generate an integrated point cloud.
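For illustration, a semi-global matching disparity map can be computed with OpenCV's StereoSGBM implementation, as sketched below for a rectified grayscale stereo pair. This is a generic usage example, not the processing chain of the paper: the file names and the parameter values (number of disparities, block size, smoothness penalties P1/P2) are assumptions to be tuned per dataset.

```python
import cv2
import numpy as np

# Hypothetical rectified stereo pair (file names are placeholders)
left = cv2.imread("left_rectified.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right_rectified.png", cv2.IMREAD_GRAYSCALE)

block_size = 5
stereo = cv2.StereoSGBM_create(
    minDisparity=0,
    numDisparities=128,            # must be divisible by 16
    blockSize=block_size,
    P1=8 * block_size ** 2,        # penalty for small disparity changes
    P2=32 * block_size ** 2,       # penalty for large disparity changes
    uniquenessRatio=10,
    speckleWindowSize=100,
    speckleRange=2,
)

# compute() returns fixed-point disparities scaled by 16
disparity = stereo.compute(left, right).astype(np.float32) / 16.0
print(disparity.shape, disparity.min(), disparity.max())
```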
department with a 3-week history of dyspnea, hypoxemia, pleuritic chest pain, and lower limb edema. She had no history of comorbidities and had had two normal pregnancies. There was no family history of thrombosis. An electrocardiogram showed right axis deviation, and blood tests revealed elevated D-dimer levels. A routine chest X-ray showed oligemia in the right hemithorax and engorgement of the left pulmonary artery (Figure 1). Chest CT angiography confirmed the presence of a thrombus in the pulmonary artery trunk and full occlusion of the right segment (Figure 2). The coronal reconstruction shown in Figure 3 elegantly demonstrates the complete
Several studies have already demonstrated the efficacy of GPU implementations of reconstruction algorithms. Iterative algorithms are computationally challenging, making GPU acceleration a necessity. Analytical algorithms are fully parallel, whereas the iterative ones are essentially sequential. Thus, methods that perform little or no computation within each iteration should not be implemented on GPUs, as they offer insufficient parallel workload. This is the reason why, for instance, ART is not suitable for the GPU (each iteration only processes a single projection line) but SART is. Besides SART, ML-EM and OS-EM, several other iterative algorithms have also been adapted to the GPU, including the ordered-subsets convex algorithm and Total Variation (TV) reconstruction. The authors of one study show that their GPU implementation of both analytical and iterative reconstructions for CT allows speed-ups of more than one order of magnitude with comparable image quality. Next, the state of the art of the algorithms relevant to this work is reviewed:
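To illustrate why SART maps well to parallel hardware while ART does not, the sketch below shows a basic SART iteration in which every ray of the chosen subset contributes to one dense, vectorised update (here one full update over all rays). It is a generic NumPy illustration of the algorithm, not a GPU implementation, and the toy system matrix and relaxation factor are assumptions.

```python
import numpy as np

def sart(A, b, n_iter=50, relax=1.0):
    """Basic SART iteration for A x ≈ b with a dense system matrix.

    All rays (rows of A) are processed together, which makes the update a
    large, parallel-friendly matrix operation; classic ART, by contrast,
    updates the image one ray at a time.
    """
    m, n = A.shape
    row_sums = A.sum(axis=1)            # sum over pixels for each ray
    col_sums = A.sum(axis=0)            # sum over rays for each pixel
    row_sums[row_sums == 0] = 1.0
    col_sums[col_sums == 0] = 1.0
    x = np.zeros(n)
    for _ in range(n_iter):
        residual = (b - A @ x) / row_sums        # per-ray normalised residual
        x = x + relax * (A.T @ residual) / col_sums
    return x

# Hypothetical toy problem: random projection matrix and phantom
rng = np.random.default_rng(0)
A = rng.random((200, 100))
x_true = rng.random(100)
x_rec = sart(A, A @ x_true)
print(np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true))
```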
in 1906 by Tanzini for coverage of the mastectomy area in a patient with breast cancer. Later, in the 1990s, this use was popularized both after quadrantectomies and in broad resections [6]. Poland's syndrome has a spectrum of clinical presentations that range from hypomastia to amastia, with absence of the sternal and clavicular portion of the latissimus dorsi in almost all cases [7]. In minor cases, breast reconstruction may simply require implants; however, in serious cases, it is necessary to combine the prosthesis with a latissimus dorsi flap for suitable coverage of the implant.
radiograph, which revealed opacification of the right lung base and soft tissue swelling in the right inferolateral chest wall, interspersed with gas images. The patient was tachypneic (RR, 26 breaths/min) and had decreased breath sounds at the right base. In addition, an area of weakness of the ipsilateral chest wall was noted on palpation. A CT scan of the chest demonstrated herniation of abdominal adipose tissue into the right lung base and chest wall, through discontinuities in the diaphragmatic and intercostal muscles, associated with fractures of the last ribs on the right. Magnetic resonance imaging confirmed the extensive herniation of the abdominal content through the muscle discontinuities. The patient subsequently underwent surgical correction of the abnormalities, with the presence of intestinal loops in the subcutaneous and intrathoracic regions being confirmed after a thoracotomy. On examination of the cavity, it was
Figure 2 - In A, photograph showing visible bulging of the region of the right eighth intercostal space, on examination of the chest wall. In B, surface ultrasound examination of the region demonstrating a formation with echotexture similar to that of the liver, extending beyond the musculoskeletal boundaries of the chest wall and occupying the subcutaneous region (arrows). In C and D, axial CT slices of the upper abdomen demonstrating that part of hepatic segment V protruded through the area of muscle discontinuity, headed toward the adjacent subcutaneous region.
The overall DRL of DLP in Taiwan was 999 mGy-cm, lower than those found in Japan, Korea, Kenya, Syria and the U.K. (Table 4). However, the DRLs in Group B and Group C were both > 1000 mGy-cm, very close to the values in Korea and Japan. Our study confirms that with the introduction of MDCT, the associated CTDI increased. However, MDCT with more detectors (Group C) did not necessarily lead to significantly higher CTDI than MDCT with fewer detectors (Group B). Given the very similar kV, we hypothesize that the higher tube current combined with the higher pitch in Group C resulted in similar CTDI for the two groups. However, Group C showed effective doses 14% higher than Group B, probably due to the wider scan ranges used in this group.
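For orientation, effective dose is commonly estimated from DLP via a region-specific conversion coefficient, E = k × DLP. The snippet below shows this arithmetic; the coefficient 0.014 mSv/(mGy·cm) is a commonly quoted adult chest value used here purely as an assumed example, not a value stated in this study.

```python
def effective_dose_mSv(dlp_mGycm, k_mSv_per_mGycm=0.014):
    """Effective dose estimate E = k * DLP.

    k is the region-specific conversion coefficient; 0.014 mSv/(mGy*cm)
    is a commonly quoted adult chest value, used here only as an example.
    """
    return dlp_mGycm * k_mSv_per_mGycm

# Example with the Taiwan DRL quoted above (999 mGy-cm)
print(f"{effective_dose_mSv(999):.1f} mSv")  # ~14 mSv under this assumed k
```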
Wanting to further separate themselves from a relativist position, they agree with Lukes that "[t]he fact that there is no one worldview and set of values that everyone adheres to 'does not render us unable to make universally applicable judgements'" (ibid). A reasonable disagreement for them is "generated by conflicting but reasonable values and goals or by different rankings of the same values and goals" (p. 60). Addressing what unreasonable values are, Fairclough and Fairclough describe the normative foundations of Critical Discourse Analysis, which is founded upon the notion of human rights, or duties to fellow humans that contribute to their flourishing. Accordingly, "[n]ot any difference should be given recognition: in particular those that infringe human rights, hinder human capabilities, or violate fundamental duties we have toward each other should not be among those that can ground good practical arguments" (ibid). Thus, it seems that their aim is to allow any individual to hold any value in any rank and reasonably never change that view, unless that value is internally contradictory or goes against what have been called universal human rights or duties to protect and encourage human capability. After noting the existence of alternative moral frameworks, they then admit that "fundamental moral considerations can conflict with each other and […] deciding what to do in such cases will involve deciding which one should be given priority, which should override others" (p. 61). However, a characterization of how these decision procedures should work is left unarticulated.
The incomplete geometrical coverage of the Global Navigation Satellite System (GNSS) makes ionospheric tomography an ill-conditioned problem for ionospheric imaging. In order to identify the principal limitations of the ill-conditioned tomographic solutions, numerical simulations of the ionosphere are under constant investigation. In this paper, we investigate the accuracy of the Algebraic Reconstruction Technique (ART) and Multiplicative ART (MART) for the tomographic reconstruction of Chapman profiles, using a simulated optimum scenario of GNSS signals tracked by ground-based receivers. Chapman functions were used to represent the ionospheric morphology, and a set of analyses was conducted to assess ART and MART performance in estimating the Total Electron Content (TEC) and the parameters that describe the Chapman function. The results showed that MART performed better in reconstructing the electron density peak, whereas ART gave a better representation when estimating TEC and the shape of the ionosphere. Since we used an optimum scenario of GNSS signals, the analyses indicate the intrinsic problems that may occur with ART and MART in recovering valuable information for many applications in telecommunications, space geodesy and space weather.
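For readers unfamiliar with the two algorithms, the sketch below shows the basic additive (ART) and multiplicative (MART) row-action updates on a toy ray-geometry matrix. The relaxation factors, the normalisation of the MART exponent, and the simulated geometry are assumptions chosen for illustration and do not reproduce the simulation setup of this study.

```python
import numpy as np

def art(A, y, n_sweeps=20, relax=0.2):
    """Kaczmarz-type ART: additive row-by-row updates of the density vector x."""
    x = np.zeros(A.shape[1])
    for _ in range(n_sweeps):
        for i in range(A.shape[0]):
            a = A[i]
            x += relax * (y[i] - a @ x) / (a @ a) * a
    return x

def mart(A, y, n_sweeps=20, relax=0.2):
    """Multiplicative ART: x stays positive; exponents weighted by ray geometry."""
    x = np.ones(A.shape[1])                       # positive starting value
    a_max = A.max() if A.max() > 0 else 1.0
    for _ in range(n_sweeps):
        for i in range(A.shape[0]):
            a = A[i]
            pred = a @ x
            if pred > 0:
                x *= (y[i] / pred) ** (relax * a / a_max)
    return x

# Hypothetical toy geometry: random non-negative ray-length matrix and densities
rng = np.random.default_rng(0)
A = rng.random((120, 60))
x_true = rng.random(60) + 0.5
y = A @ x_true                                    # simulated slant observations
print(np.linalg.norm(art(A, y) - x_true), np.linalg.norm(mart(A, y) - x_true))
```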
As stated in the introduction, GSSs are a natural solution for handling this kind of work. Nevertheless, as GSSs are built upon the idea of sequential support for the decision-making stages (Ref. 16) (see Ref. 28 for a thorough discussion of the advantages of technology that provides clear and simple guidance on good problem-solving practices instead of a complex array of tools, at least in the case of ill-structured problems), it is not always easy to understand the earlier stages of a discussion. This is particularly evident at the end of discussions, when the classes that were created to encompass the discussion elements and some of the details are "flattened". For instance, in a GSS voting environment, it is usual to expect changes in initial votes as part of the group process (Ref. 29). Even if people are allowed to review their votes (for instance, after discussing the results), when the decision is made and results are disclosed, the final report is poor when it comes to showing the discussion progress, the changes of opinion (and by whom, if possible), the convincing arguments, and so on, which were involved from the start of the discussion to its end. In this case, a new group iteration (which could be the point at which a vote changed) replaces the earlier one, discarding the previous discussion scenario. However, reports usually embed only the latest result, especially when reporting is an automatic feature.