Software Reliability and Testing Time Allocation: An Architecture-Based Approach (2010)

The model aims to quantitatively identify the most critical components of a software architecture in order to best assign testing resources to them. A tool for solving the model is also developed.

Abstract: Software reliability allocation plays an important role during the software product design phase, and it is closely related to software modeling and cost evaluation.

We formulated an architecture-based approach for modeling the software reliability optimization problem; on this basis, a dynamic programming algorithm is illustrated in this paper that can be used to allocate a reliability target to each component so as to minimize the cost of designing the software while meeting the desired reliability goal. The results of our experiments show that an optimal or near-optimal solution to the problem of selecting the components comprising the software can be obtained at lower cost.
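This kind of allocation lends itself to a dynamic program once the per-component design options are discrete. The sketch below is a minimal illustration of that idea, not the paper's actual algorithm: it assumes a series system (system reliability is the product of component reliabilities), a small hypothetical option table, and a discretized log-reliability budget.

```python
import math

# Hypothetical inputs: for each component, discrete (reliability, cost) options.
OPTIONS = [
    [(0.90, 1.0), (0.95, 2.5), (0.99, 6.0)],   # component 1
    [(0.92, 1.2), (0.97, 3.0), (0.995, 8.0)],  # component 2
    [(0.90, 0.8), (0.96, 2.0), (0.99, 5.5)],   # component 3
]
R_GOAL = 0.85   # desired system reliability (series system assumed)
STEPS = 1000    # discretization of the log-reliability budget

# In -log space the product constraint becomes an additive budget.
budget = -math.log(R_GOAL)
unit = budget / STEPS

INF = float("inf")
# dp[b] = minimum cost of the components chosen so far, using at most
# b budget units of "log-unreliability".
dp = [0.0] * (STEPS + 1)
for options in OPTIONS:
    new = [INF] * (STEPS + 1)
    for b in range(STEPS + 1):
        for r, c in options:
            need = math.ceil(-math.log(r) / unit)  # budget units consumed
            if need <= b:
                new[b] = min(new[b], dp[b - need] + c)
    dp = new

print("minimum design cost meeting the goal:", dp[STEPS])
```

Rounding the consumption up with `ceil` keeps every solution the program reports feasible with respect to the original product constraint.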

Guenab, D. Theilliol, P. Weber, J.

Abstract: The aim of Fault Tolerant Control (FTC) is to preserve the ability of the system to reach performance as close as possible to that which was initially assigned to it.

The main goal of this paper consists in the development of an FTC strategy based on both the reliability and the life cost of components. The proposed approach is illustrated through simulations of a heating system benchmark used in the Intelligent Fault Tolerant Control in Integrated Systems project.

Reliability Allocation, by Yashwant K.

Abstract: A system is generally designed as an assembly of subsystems, each with its own reliability attributes. The overall system reliability is a function of the subsystem reliability metrics, and the cost of the system is the sum of the costs of all the subsystems.

This article examines possible approaches to allocating the reliability values such that the total cost is minimized.

A basic block, or simply a block, is a sequence of instructions that, except for the last instruction, is free of branches and function calls. The instructions in any basic block are either executed all together or not at all.

Software architecture. The application under study is a program that provides a user interface for the configuration of an array of antennas.

The program consists of about 10,000 lines of code. Its purpose is to prepare a data file from a user, given the array antenna configuration described using an appropriate Array Definition Language. The OS is included among the components. The current version is considered failure-free, since it has been extensively used without having failures for a long time. This case study has been used in other studies about reliability analysis and evaluation. Testing is executed for the faulty version for a certain amount of total testing time, and failure data and execution times are collected per functionality to obtain the SRGMs. Applying the optimization model, the testing time for each component is predicted for the current version, using an algorithm to solve the nonlinear constrained optimization problem [40].

The choice of component granularity also depends on how much the components are decoupled. The granularity of component, in this case, is chosen to be a subsystem. No fault tolerance mechanism, such as those described above, is present in this system; in the experiments conducted, we computed a single-application complete solution with the performance factor and without fault tolerance means. The current version is then tested according to the computed test time allocation, and the prediction error is then analyzed against the prediction errors that could occur, for example, in the transition probabilities.

To estimate the parameters, we used a hybrid approach. According to the described steps, we injected 70 faults in the software (31, 28, and 11, respectively, in components 1, 2, and 3), based on the fault categories of Table 1. In particular, the experimental procedure is: creation of a faulty version of the program, by reinserting faults belonging to the real fault set discovered during integration testing and operational usage (Table 1); this faulty version emulates the previous version of the application. Test execution for this faulty version was carried out by randomly generating test cases based on the operational profile; the detection time and test case number for each fault are reported in Table 2. After the testing phase, 46 faults were removed (respectively, 20, 19, and 7), leaving 24 faults in the software.
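For illustration, random test generation from an operational profile can be sketched as follows; the input classes and probabilities below are purely hypothetical and are not taken from the study.

```python
import random

# Hypothetical operational profile for the antenna-configuration program:
# the probability of each input class as observed in field usage.
PROFILE = {
    "single_antenna": 0.50,
    "linear_array":   0.30,
    "planar_array":   0.15,
    "malformed_adl":  0.05,   # invalid Array Definition Language input
}

def generate_test_cases(n: int, seed: int = 42) -> list[str]:
    """Draw n test-case classes according to the operational profile."""
    rng = random.Random(seed)
    return rng.choices(list(PROFILE), weights=list(PROFILE.values()), k=n)

print(generate_test_cases(10))
```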

Granularity of visit. A visit to a user function is counted both at the call and at the return of the control flow to it; thus, a user function F calling another function will have two visits: one from the caller function and one from the return. We assumed a reliability goal for the next release. The transition probabilities were, in turn, derived from execution counts with the procedure described in Section 3. Results are summarized in Table 3.

As for the OS, the visit granularity is slightly different. Correspondingly, the time per visit will also have a rougher granularity: it is the average time spent in an entire system call execution, computed as the average OS time obtained by timeit divided by the average number of system calls per execution. The denominator is the average number of visits per execution to all the application components: calls is the number of user functions called per execution (doubled to consider the return), and systemCalls is the number of system calls. Execution counts give the visit counts for the other components.
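As a worked illustration of this visit accounting, with entirely hypothetical counts and times:

```python
# Hypothetical per-execution measurements, following the accounting above:
# each user-function call contributes two visits (call and return), and each
# system call contributes one visit to the OS component.
calls = 1200         # user functions called per execution (hypothetical)
system_calls = 450   # system calls per execution (hypothetical)
os_time = 0.80       # total OS time per execution from timeit, in seconds

visits_per_execution = 2 * calls + system_calls   # denominator in the text
os_time_per_visit = os_time / system_calls        # rough OS time per visit

print("visits per execution:", visits_per_execution)    # 2850
print(f"OS time per visit: {os_time_per_visit:.4f} s")  # 0.0018 s
```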

The difference is that only the calls from the application components are considered. Finally, the performance factor is set to 0. Results of this computation are reported below.

timeit is a utility, from the Resource Kit Tools, that records the time a specified command takes to run.

Fig.: The initial architecture configuration. Note that, since the visit granularity is so fine, the expected visit counts during an execution are very high, and the transition probabilities toward the end state are very low, because only a minimal part of the code leads to the end.

Moreover, also note that the Parser subsystem (component 1) and the Formatting subsystem (component 3) make great use of the OS, and the corresponding transition probabilities are significantly higher than those of the Computational subsystem.

The high values for the visit counts could also be due to the first-order DTMC: a first-order DTMC does not allow one to consider the dependence of the transition probability from a component i to a component j on the previous components from which the control arrived at component i, in addition to the current one.
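To make the visit-count point concrete, expected visit counts in a first-order absorbing DTMC come from the fundamental matrix. The sketch below mirrors a three-component structure plus an end state, but every transition probability in it is hypothetical.

```python
import numpy as np

# States 0-2: components (e.g., Parser, Computational, Formatting);
# state 3: the absorbing end state. Probabilities are hypothetical.
P = np.array([
    [0.00, 0.70, 0.29, 0.01],
    [0.10, 0.00, 0.88, 0.02],
    [0.05, 0.05, 0.00, 0.90],
    [0.00, 0.00, 0.00, 1.00],
])
Q = P[:3, :3]                      # transitions among transient states
start = np.array([1.0, 0.0, 0.0])  # executions start in component 0

# Fundamental matrix N = (I - Q)^(-1): N[i, j] is the expected number of
# visits to j before absorption, starting from i. Small probabilities
# toward the end state inflate these counts, as noted above.
N = np.linalg.inv(np.eye(3) - Q)
print("expected visit counts:", start @ N)
```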

Fig.: The configuration of the parameters of one of the components. Thus, to keep the treatment simple, the first-order results are considered in the following.

Given the reliability goal, the model was built and solved by our tool, giving as output the optimal testing times for each component. This value is clearly an overestimation, due to the low number of test cases used to estimate it.

After executing the tests according to the optimal allocation (always by generating test cases from the operational profile), an SRGM for each component was built, using SREPT to fit the best model to the data; for all of the components, the same kind of SRGM was used. The fault content parameters for the current version were derived from the estimated remaining fault contents, while the rate parameters were set at the same values as for the previous version. Table 4 shows the testing times devoted to each component, the corresponding number of executed test cases, and the detection time and detecting test case number for each fault. Clearly, the actual devoted time will slightly exceed the allotted time, because the latter is not a perfect multiple of the execution time of a test case.
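The SRGM-fitting step can be sketched as follows. The study fits models with SREPT; the snippet below instead fits a Goel-Okumoto model with SciPy as a stand-in, on hypothetical cumulative failure data.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical data: testing time (hours) and cumulative detected faults.
t = np.array([2.0, 5.0, 9.0, 14.0, 20.0, 27.0, 35.0])
m = np.array([3.0, 7.0, 11.0, 14.0, 16.0, 18.0, 19.0])

def goel_okumoto(t, a, b):
    """Mean value function m(t) = a(1 - e^(-bt))."""
    return a * (1.0 - np.exp(-b * t))

(a, b), _ = curve_fit(goel_okumoto, t, m, p0=(25.0, 0.05))
print(f"expected total fault content a = {a:.1f}, rate b = {b:.3f}")
print(f"estimated remaining faults: {a - m[-1]:.1f}")
```

The fitted fault content `a` minus the faults already detected gives an estimated remaining fault content, which is the role the estimated remaining fault contents play in parameterizing the next version above.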

The actual reliability was then measured using the same testing processes followed for the three components. Since no OS failures have been observed, we estimated the OS reliability to be equal to 1. The main sources of error in the prediction are 1) the transition probabilities, 2) the visit counts and failure intensities, and 3) the OS reliability. In our case, the architecture of the application and the components themselves have not significantly changed; such a good prediction is also due to this. However, we can see how, in the presence of significant changes in such values, the prediction is still good.

Fig.: Sensitivity to visit counts and failure intensities variation. First, for a maximum of 20 percent in the visit count estimation errors (underestimation), we have a reliability prediction error of about 0. Second, in the same figure, we evaluated the effect of varying the failure intensities: the failure intensity plot shows almost the same behavior as the visit count plot because, in the reliability computation, their product appears in the same term.

Fig.: Sensitivity to OS reliability variation. In this case, the overall reliability prediction is more sensitive to OS reliability prediction errors than to the previous parameters.
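The near-identical behavior of the two plots can be explained with a short derivation, assuming the exponential reliability form commonly used with SRGM-based models (an assumption here, not a formula quoted from this text):

```latex
% Assuming component reliability of the exponential form
\[
  R_i = \exp\!\bigl(-\lambda_i \, v_i \, \tau_i\bigr),
\]
% where $\lambda_i$ is the failure intensity, $v_i$ the expected visit
% count, and $\tau_i$ the time per visit of component $i$: since
% $\lambda_i$ and $v_i$ enter only through their product, a relative
% error $\epsilon$ in either one perturbs the exponent identically,
\[
  R_i\bigl(\lambda_i(1+\epsilon),\, v_i\bigr)
  \;=\;
  R_i\bigl(\lambda_i,\, v_i(1+\epsilon)\bigr),
\]
% so the two sensitivity curves nearly coincide.
```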

In this case, for a maximum error of 0.… in the OS reliability, the prediction error would correspond to about … failures; it is a huge prediction mistake. Moreover, the OS usually has much more historical failure data than the other application components; this makes OS reliability estimation easier and more accurate.

Moreover, we plan to include other fault tolerance mechanisms in order to describe more systems. A better investigation of the performance/testing time relations and of the OS influence is also desirable. Finally, considering the influence of the testing strategies on the testing process of a component, and hence on its reliability growth, would provide indications that are useful to successive designs for future versions.

Trivedi was funded in part by US … The aim of the model, through the tool implementing it, is, therefore, to drive the allocation of the testing resources to the different system components.

Later, the maximal entropy minimum variance ordered weighted averaging (MEMV-OWA) method has been used to determine the module weights for resource allocation and for ranking the various modules, based on the conflicting nature of their characteristics, in order to allocate the resources in a competent way.

The detailed methodology has been illustrated through a numerical example.
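One ingredient of that method, maximum-entropy OWA weighting, can be sketched numerically. The snippet below computes O'Hagan-style maximum-entropy OWA weights for a chosen orness level; the minimum-variance part and any real module data are omitted, and the values of `n` and `orness` are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

n, orness = 5, 0.7  # number of modules and target orness (hypothetical)

def neg_entropy(w):
    """Negative Shannon entropy of the weight vector (to be minimized)."""
    w = np.clip(w, 1e-12, 1.0)
    return float(np.sum(w * np.log(w)))

constraints = (
    {"type": "eq", "fun": lambda w: np.sum(w) - 1.0},   # weights sum to 1
    # orness(W) = (1/(n-1)) * sum over i of (n-i) * w_i, with i = 1..n
    {"type": "eq",
     "fun": lambda w: np.dot(np.arange(n - 1, -1, -1), w) / (n - 1) - orness},
)
res = minimize(neg_entropy, np.full(n, 1.0 / n),
               bounds=[(0.0, 1.0)] * n, constraints=constraints)
print("maximum-entropy OWA weights:", np.round(res.x, 4))
```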


