

Inputs to a technical review include the issues list associated with the product and a documented technical review procedure. The team follows the documented review procedure. The technical review is completed once all the activities listed in the examination have been completed.

Technical reviews of source code may include a wide variety of concerns, such as analysis of algorithms, utilization of critical computer resources, adherence to coding standards, structure and organization of code for testability, and safety-critical considerations. Note that technical reviews of source code or design models such as UML are also termed static analysis (see topic 3, Practical Considerations).

Some important differentiators of inspections as compared to other types of technical reviews are these:

1. Inspections are based upon examining a work product with respect to a defined set of criteria specified by the organization. Sets of rules can be defined for different types of work products.

2. Rather than attempting to examine every word and figure in a document, the inspection process allows checkers to evaluate defined subsets (samples) of the documents under review.

3. Individuals holding management positions over members of the inspection team do not participate in the inspection. This is a key distinction between peer review and management review.

4. An impartial moderator who is trained in inspection techniques leads inspection meetings.

5. The inspection process includes meetings (face to face or electronic) conducted by the moderator according to a formal procedure, in which inspection team members report the anomalies they have found and other issues.

6. Software inspections always involve the author of an intermediate or final product; other reviews might not.

Inspections also include an inspection leader, a recorder, a reader, and a few (two to five) checkers (inspectors). The members of an inspection team may possess different areas of expertise, such as domain expertise, software design method expertise, or programming language expertise. Inspections are usually conducted on one relatively small section of the product at a time (samples).

Each team member examines the software product and other review inputs prior to the review meeting, perhaps by applying an analytical technique (see section 3). During the inspection, the moderator conducts the session and verifies that everyone has prepared. The inspection recorder documents anomalies found.
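The recorder's log can be sketched as a simple data structure. This is only an illustration; the field names, rule identifiers, and severity labels below are assumptions, not prescribed by any standard.

```python
from dataclasses import dataclass, field

@dataclass
class Anomaly:
    location: str        # e.g. a document section or file/line
    description: str
    rule_violated: str   # which inspection rule or criterion was not met
    severity: str        # e.g. "major" or "minor"

@dataclass
class InspectionLog:
    work_product: str
    anomalies: list = field(default_factory=list)

    def record(self, anomaly: Anomaly) -> None:
        self.anomalies.append(anomaly)

    def count_by_severity(self) -> dict:
        # Summary the moderator can use at the exit decision.
        counts = {}
        for a in self.anomalies:
            counts[a.severity] = counts.get(a.severity, 0) + 1
        return counts

log = InspectionLog("requirements-spec-v2")
log.record(Anomaly("section 3.1", "ambiguous shall-statement", "R-07", "major"))
log.record(Anomaly("section 5.4", "missing units on timeout", "R-02", "minor"))
print(log.count_by_severity())  # {'major': 1, 'minor': 1}
```

A log structured this way also supports the anomaly classification discussed later, since each record carries the rule and severity fields needed for grouping.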

A set of rules, with criteria and questions germane to the issues of interest, is a common tool used in inspections. The resulting list often classifies the anomalies (see section 3). The inspection exit decision corresponds to one of the following options:

1. Accept with no or, at most, minor rework
2. Accept with rework verification
3. Reinspect

A walkthrough may be conducted for the purpose of educating an audience regarding a software product. Walkthroughs are distinguished from inspections: the main difference is that the author presents the work product to the other participants in a meeting (face to face or electronic).

Unlike an inspection, the meeting participants may not necessarily have seen the material prior to the meeting, and the meeting may be conducted less formally. The author takes the role of explaining and showing the material to participants and solicits feedback.

Like inspections, walkthroughs may be conducted on any type of work product, including project plans, requirements, designs, source code, and test reports. Process assurance audits determine the adequacy of plans, schedules, and requirements to achieve project objectives [5]. An audit is a formally organized activity with participants having specific roles—such as lead auditor, another auditor, a recorder, or an initiator—and includes a representative of the audited organization.

Audits identify instances of nonconformance and produce a report requiring the team to take corrective action. Information on these factors influences how the SQM processes are organized and documented, how specific SQM activities are selected, what resources are needed, and which of those resources impose bounds on the efforts. This is the case for the following reasons: system failures affect a large number of people; users often reject systems that are unreliable, unsafe, or insecure; system failure costs may be enormous; and undependable systems may cause information loss.

System and software dependability include such characteristics as availability, reliability, safety, and security. When developing dependable software, tools and techniques can be applied to reduce the risk of injecting faults into the intermediate deliverables or the final software product. Verification, validation, and testing processes, techniques, methods, and tools identify faults that impact dependability as early as possible in the life cycle.

Additionally, mechanisms may need to be in place in the software to guard against external attacks and to tolerate faults. Software integrity levels are a range of values that represent software complexity, criticality, risk, safety level, security level, desired performance, reliability, or other project-unique characteristics that define the importance of the software to the user and acquirer.

The characteristics used to determine software integrity level vary depending on the intended application and use of the system. The software is a part of the system, and its integrity level is to be determined as a part of that system. The assigned software integrity levels may change as the software evolves.

Design, coding, procedural, and technology features implemented in the system or software can raise or lower the assigned software integrity levels.
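One way to make an integrity level scheme concrete is a lookup table from failure-consequence severity and likelihood to a level. The categories and table below are a hypothetical sketch invented for illustration; real schemes are defined per project or standard.

```python
# Hypothetical software integrity level scheme: map the severity of a
# failure's consequence and its likelihood to an integrity level 1-4.
# All category names and level assignments here are illustrative.
SCHEME = {
    # (consequence, likelihood): integrity level
    ("catastrophic", "probable"): 4,
    ("catastrophic", "occasional"): 4,
    ("critical", "probable"): 3,
    ("critical", "occasional"): 3,
    ("marginal", "probable"): 2,
    ("marginal", "occasional"): 1,
    ("negligible", "probable"): 1,
    ("negligible", "occasional"): 1,
}

def integrity_level(consequence: str, likelihood: str) -> int:
    """Look up the assigned integrity level for a (consequence, likelihood) pair."""
    return SCHEME[(consequence, likelihood)]

print(integrity_level("critical", "probable"))  # 3
```

Encoding the scheme as data rather than prose makes the acquirer/supplier agreement auditable: any change to an assigned level is a visible change to the table.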

The software integrity levels established for a project result from agreements among the acquirer, supplier, developer, and independent assurance authorities. A software integrity level scheme is a tool used in determining software integrity levels. Characterizing defects leads to an understanding of the product, facilitates corrections to the process or the product, and informs management and other stakeholders of the status of the process or product.

Many taxonomies exist and, while attempts have been made to gain consensus, the literature indicates that there are quite a few in use. Defect characterization is also used in audits and reviews, with the review leader often presenting a list of issues provided by team members for consideration at a review meeting. As new design methods and languages evolve, along with advances in overall software technologies, new classes of defects appear, and a great deal of effort is required to interpret previously defined classes.

When tracking defects, the software engineer is interested not only in the number of defects but also in their types. Information alone, without some classification, may not be sufficient to identify the underlying causes of the defects.

Specific types of problems need to be grouped to identify trends over time. The point is to establish a defect taxonomy that is meaningful to the organization and to software engineers. Software quality control activities discover information at all stages of software development and maintenance.
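The grouping-by-type idea above can be sketched with simple counting. The defect records and the taxonomy names (interface, logic, documentation) below are made up for illustration; an organization would substitute its own taxonomy.

```python
from collections import Counter

# Hypothetical defect records, each tagged with a week found and a
# type from the organization's taxonomy.
defects = [
    {"week": 1, "type": "interface"},
    {"week": 1, "type": "logic"},
    {"week": 2, "type": "interface"},
    {"week": 2, "type": "interface"},
    {"week": 3, "type": "documentation"},
]

# Which defect types dominate overall?
by_type = Counter(d["type"] for d in defects)
print(by_type.most_common(1))  # [('interface', 3)]

# Counts per (week, type) expose trends over time.
per_week = Counter((d["week"], d["type"]) for d in defects)
print(per_week[(2, "interface")])  # 2
```

Even this much structure answers questions raw totals cannot, such as whether one class of defect is growing week over week.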

In some cases, the word defect is overloaded to refer to different types of anomalies. However, different engineering cultures and standards may use somewhat different meanings for these terms.

Error: also called human error. Fault: a defect in source code; fault is the formal name of a bug. Using these definitions, three widely used software quality measurements are defect density (number of defects per unit size of a document), fault density (number of faults per 1K lines of code), and failure intensity (failures per use-hour or per test-hour).
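These three measures are straightforward ratios. A minimal sketch follows; the function names and sample figures are illustrative assumptions, not standardized definitions.

```python
def defect_density(defects_found: int, document_pages: int) -> float:
    """Defects per unit size of a document (here: per page)."""
    return defects_found / document_pages

def fault_density(faults_found: int, lines_of_code: int) -> float:
    """Faults per 1K lines of code (KLOC)."""
    return faults_found / (lines_of_code / 1000)

def failure_intensity(failures: int, exposure_hours: float) -> float:
    """Failures per use-hour (or per test-hour)."""
    return failures / exposure_hours

print(defect_density(12, 48))       # 0.25 defects per page
print(fault_density(30, 15000))     # 2.0 faults per KLOC
print(failure_intensity(6, 120.0))  # 0.05 failures per hour
```

Note the different denominators: the first two normalize by product size, while failure intensity normalizes by exposure time, which is why it feeds reliability models rather than size-based comparisons.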

Reliability models are built from failure data collected during software testing or from software in service; they can thus be used to estimate the probability of future failures and to assist in decisions on when to stop testing. One probable action resulting from SQM findings is to remove the defects from the product under examination. Other activities attempt to eliminate the causes of the defects—for example, root cause analysis (RCA). RCA activities include analyzing and summarizing the findings to find root causes and using measurement techniques to improve the product and the process, as well as to track the defects and their removal.
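A minimal sketch of using failure data to support a stop-testing decision: estimate the current failure intensity from the most recent test window and compare it to a release target. The window size, target, and failure data below are assumptions for illustration, and this windowed estimate is a deliberate simplification of a full reliability growth model.

```python
# Cumulative test hours at which each failure was observed (sample data).
failure_times_h = [2.0, 5.5, 9.0, 20.0, 41.0, 90.0]
total_test_hours = 120.0
target_intensity = 0.02  # assumed acceptable failures per test-hour at release

# Estimate current intensity from the last 50 hours of testing only,
# since early failures say little about the product as it stands now.
window = 50.0
recent = [t for t in failure_times_h if t > total_test_hours - window]
current_intensity = len(recent) / window

print(round(current_intensity, 3))            # 0.02
print(current_intensity <= target_intensity)  # True -> target met
```

In practice a fitted reliability model gives a sounder extrapolation than a single window, but the decision structure is the same: compare an estimated intensity against an agreed target.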

Data on inadequacies and defects found by software quality control techniques may be lost unless they are recorded. For some techniques (e.g., technical reviews and audits), a recorder documents this information. When automated tools are used (see topic 4, Software Quality Tools), the tool output may provide the defect information.

Reports about defects are provided to the management of the organization. Dynamic techniques involve executing the software; static techniques involve analyzing documents.

There are many tools and techniques for statically examining software work-products see section 2. In addition, tools that analyze source code control flow and search for dead code are considered to be static analysis tools because they do not involve executing the software code.
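In the same spirit, a toy static analysis can find dead code without executing anything. This sketch uses Python's `ast` module to flag statements that follow a `return` in the same block; the sample source and the check itself are deliberately simplified for illustration.

```python
import ast

SOURCE = """
def f(x):
    return x * 2
    print("never runs")   # dead code
"""

def find_dead_code(source: str):
    """Return line numbers of statements that directly follow a return."""
    findings = []
    tree = ast.parse(source)
    for node in ast.walk(tree):
        body = getattr(node, "body", None)
        if not isinstance(body, list):
            continue
        for i, stmt in enumerate(body[:-1]):
            if isinstance(stmt, ast.Return):
                # Anything after a return in the same block is unreachable.
                findings.append(body[i + 1].lineno)
    return findings

print(find_dead_code(SOURCE))  # [4]
```

Production tools handle far more cases (raises, loops, branches whose every path exits), but the principle is the same: reason over the code's structure, never run it.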

Other, more formal, types of analytical techniques are known as formal methods. They are notably used to verify software requirements and designs. They have mostly been used in the verification of crucial parts of critical systems, such as specific security and safety requirements. Different kinds of dynamic techniques are performed throughout the development and maintenance of software.

Generally, these are testing techniques, but techniques such as simulation and model analysis may be considered dynamic (see the Software Engineering Models and Methods KA). Code reading is considered a static technique, but experienced software engineers may execute the code as they read through it; in this sense, code reading may also utilize dynamic techniques.

This discrepancy in categorizing indicates that people with different roles and experience in the organization may consider and apply these techniques differently. Different groups may perform testing during software development, including groups independent of the development team. The Software Testing KA is devoted entirely to this subject.

The third party is not the developer, nor is it associated with the development of the product. Instead, the third party is an independent facility, usually accredited by some body of authority. Its purpose is to test a product for conformance to a specific set of requirements (see the Software Testing KA). With the increasing sophistication of software, questions of quality go beyond whether or not the software works to how well it achieves measurable quality goals.

Decisions supported by software quality measurement include determining levels of software quality (notably because models of software product quality include measures to determine the degree to which the software product achieves quality goals); answering managerial questions about effort, cost, and schedule; and determining when to stop testing and release a product (see Termination under section 5).

The cost of SQM processes is an issue frequently raised in deciding how a project or a software development and maintenance group should be organized.

Often, generic cost models are used, based on when a defect is found and how much effort it takes to fix the defect relative to finding it earlier in the development process. Software quality measurement data collected internally may give a better picture of cost within the project or organization. While software quality measurement data may be useful in itself, additional analytical techniques can be applied, including techniques based on descriptive statistics and statistical tests.

Descriptive statistics-based techniques and tests often provide a snapshot of the more troublesome areas of the software product under examination. The resulting charts and graphs are visualization aids, which the decision makers can use to focus resources and conduct process improvements where they appear to be most needed. Results from trend analysis may indicate that a schedule is being met, such as in testing, or that certain classes of faults may become more likely to occur unless some corrective action is taken in development.
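A minimal trend analysis along these lines fits a least-squares line to weekly defect counts and reads off the sign of the slope. The data below are illustrative; a real analysis would also check the fit quality before acting on the trend.

```python
# Weekly defect-discovery counts (sample data).
weeks = [1, 2, 3, 4, 5]
defects = [14, 11, 9, 6, 4]

# Ordinary least-squares slope of defects vs. week.
n = len(weeks)
mean_x = sum(weeks) / n
mean_y = sum(defects) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(weeks, defects)) \
        / sum((x - mean_x) ** 2 for x in weeks)

print(round(slope, 2))                          # -2.5 defects/week
print("falling" if slope < 0 else "rising or flat")
```

A falling slope during test is consistent with a schedule being met; a rising slope in one defect class is the kind of signal that prompts corrective action in development.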

The predictive techniques assist in estimating testing effort and schedule and in predicting failures. More specific information on testing measurement is presented in the Software Testing KA.


Chapter 10: Software Quality



Guide to the Software Engineering Body of Knowledge (SWEBOK)




