Thinking About the Process

Have a clear vision for the project -- if you don't know exactly what you're trying to build, you're going to build the wrong thing. The old adage is "You built exactly what we asked for, but not what we need."

Have a rigorous process -- software engineering is a creative design activity, but it must be practiced systematically. You don't need to get bogged down in process, but you can't just rush into a solution with guns blazing. We're solving some pretty complex problems, so you need to take a logical and thoughtful approach to solving them and a rigorous approach to managing your projects, or they'll quickly get away from you.
Develop applications rapidly -- this is one that we'll cover a bit more in the Agile section, but it's certainly worth noting here. Software project requirements change constantly. The faster your process works, the better you're able to respond to those changes. More specifically, it means that you should first build something that works, even if it's held together with string and duct tape, and then see whether it's worth investing the time to make it work right.
Not until the final phase of the process should you actually make your solution "look" good. As for speculative features you think you might want someday: don't do it! You're wasting effort because you really aren't going to need all those extra features or options or flexibilities. Just build what you need. Trust us and every other engineer who's looked over old code and facepalmed at all the wasted effort.
Don't repeat yourself -- if you write a cool bit of code that solves a useful problem in one place, refer back to it when the problem comes up in other places as well. Put another way, any time you find yourself manually typing the same thing in multiple places, there's a way to combine it all into a single task that gets run multiple times.

Embrace abstraction -- software engineering is all about abstraction, or ignoring the details and solving higher-order problems.
You don't have to write machine code or assembly code for a reason -- today's programming languages allow you to basically just tell the computer what you want and it will deliver. This also applies to how you approach complex systems -- focus on making sure the system functions properly without needing to know the implementation details of every component part.
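As a small illustration of the DRY idea (a sketch only; the report lines and the money_line helper are invented for this example, not taken from any particular project):

```python
# Repetition: the same formatting logic typed out three times.
print("Revenue:  " + format(1250.50, ".2f") + " EUR")
print("Costs:    " + format(980.25, ".2f") + " EUR")
print("Profit:   " + format(270.25, ".2f") + " EUR")

# DRY: capture the repeated work once, behind a small abstraction,
# and call it wherever the problem shows up again.
def money_line(label, amount, currency="EUR"):
    return f"{label:<10}{amount:.2f} {currency}"

for label, amount in [("Revenue:", 1250.50), ("Costs:", 980.25), ("Profit:", 270.25)]:
    print(money_line(label, amount))
```

The abstraction also pays off later: changing the currency or the formatting means touching one function instead of every call site.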
Any time you're building code to do something general that's not directly related to the fundamentals of your application, someone else has probably already written that code, and written it better.
It's either posted on a blog somewhere, on Stack Overflow, or open-sourced as a gem. Learn from and use their code instead of wasting your time reinventing the wheel.

Write code that does one thing well -- a single piece of code should do only a single thing, and do it well. If you try to make a do-everything miracle solution and jam it all into one piece of code, you've got a maintainability nightmare and it probably violates multiple best practices.
Debugging is harder than writing code -- readable code is better than compact code. So stop trying to combine those 10 lines into one run-on chain of methods that use obscure input patterns! Someone else will have to read and debug it later and you're not doing them any favors.
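A hedged sketch of the same point in Python (the order-summary task and the names are invented for illustration): both versions compute the same thing, but only one of them is pleasant to read and debug.

```python
rows = [{"name": "a", "qty": "3"}, {"name": "b", "qty": "x"}, {"name": "a", "qty": "2"}]

# Compact but hostile: one run-on expression that is hard to step through.
totals_compact = {n: sum(int(r["qty"]) for r in rows if r["name"] == n and r["qty"].isdigit())
                  for n in {r["name"] for r in rows}}

# Readable: each function does one thing, and each step can be inspected on its own.
def parse_quantity(row):
    """Return the quantity as an int, or 0 if the field is not numeric."""
    return int(row["qty"]) if row["qty"].isdigit() else 0

def totals_by_name(rows):
    totals = {}
    for row in rows:
        totals[row["name"]] = totals.get(row["name"], 0) + parse_quantity(row)
    return totals

print(totals_by_name(rows) == totals_compact)  # True -- same result, different maintainability
```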
Kaizen -- leave it better than when you found it. Fix not just the bug you're trying to solve but the code around it; a band-aid bug fix doesn't help if the real problem is a design flaw, which it usually is.

Rules of Thumb

Finding and fixing a software problem in production is many times more expensive than finding and fixing it during the requirements and design phase. Catch those bugs and design flaws early! By the same token, more than half of all errors are committed during the design phase.
Just about all the time spent fixing issues can be traced back to just a small handful of trouble modules.
The client is entitled to receive a certain result.
The contractor must not be liable for failing to carry out tasks outside the specified scope. For a routine, the precondition expresses requirements that should be satisfied upon calling. The postcondition expresses properties that will be true upon return.
Together, they constitute a contract for the implementor of the routine. When a routine is redefined in a subclass, following the contract metaphor, the effect of such a redefinition must stay within the realm of the original definition. This means that the precondition may be replaced only by a weaker one, and the postcondition only by a stronger one.
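A minimal sketch of this contract idea in Python, with assertions standing in for a real contract mechanism such as Eiffel's (the square-root routine is just an illustrative example, not from the text):

```python
import math

def sqrt_checked(x: float) -> float:
    # Precondition: what the client must guarantee before calling.
    assert x >= 0.0, "precondition violated: x must be non-negative"
    result = math.sqrt(x)
    # Postcondition: what the routine promises in return.
    assert abs(result * result - x) <= 1e-9 * max(1.0, x), "postcondition violated"
    return result
```

A redefining routine in a subclass would then be allowed to accept more inputs (weaker precondition) and promise a tighter result (stronger postcondition), but never the other way around.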
If P redefines certain features of Q, we request that the subcontractor is able to do the job better or cheaper, i.e., the redefined routine may weaken the precondition. And the subcontractor should at least do the job requested, i.e., it may strengthen, but not weaken, the postcondition. Note that most object-oriented languages do not enforce these restrictions in their inheritance mechanism.

The architecture has a wider scope and purpose:
— it supports the communication with all stakeholders, not only the developers;
— it captures early design decisions, for example on behalf of early evaluation;
— it is a transferable abstraction, and as such its role surpasses the present project or system.
First, for this communication to be effective, its language must have a clear semantics. Second, design elements correspond to concepts in the application or solution domain.
If these concepts are known to the parties involved by simple labels, these labels will in due time serve as representations of these knowledge chunks, and thus improve effective communication about the design. Just as sine and cosine are well-known concepts in the language of mathematics, so quicksort and B-tree are in the language of computer science.
At the design level, the factory pattern, MVC, and the implicit-invocation architectural style represent such knowledge chunks. From an application domain, say finance, concepts like ledger emerge.

The original World Wide Web was built by a single developer, Tim Berners-Lee, a researcher with a background in internet technology and hypertext. His aim was a system to support the informal communication between physics researchers.
He anticipated a weak notion of central control, as in the then-existing internet, and not uncommon in a research environment. The main requirements were: remote access (the researchers should be able to communicate from their own research labs), interoperability (they used a variety of hardware and software), extensibility, and scalability. These requirements were met through libWWW, a library that masks hardware, operating system, and protocol dependencies.
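The general idea of such a masking library can be sketched as follows (a rough illustration in Python; the class and method names are invented and are not libWWW's actual interface):

```python
from abc import ABC, abstractmethod

class Transport(ABC):
    """One interface; protocol- and platform-specific details live behind it."""
    @abstractmethod
    def fetch(self, url: str) -> bytes: ...

class HttpTransport(Transport):
    def fetch(self, url: str) -> bytes:
        # real protocol handling would go here
        return b"<html>...</html>"

class FileTransport(Transport):
    def fetch(self, url: str) -> bytes:
        with open(url.removeprefix("file://"), "rb") as f:
            return f.read()

def browse(transport: Transport, url: str) -> None:
    # Client code depends only on the abstract interface, not on any
    # particular protocol, operating system, or hardware.
    print(transport.fetch(url).decode(errors="replace"))
```

New protocols or platforms can then be added by writing another Transport, which is what makes the design extensible.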
For further details, see Bass et al. If the reader knows that the proxy pattern is used, this knowledge guides him in building a model of what the software does. Usually, such information will not be obvious from the code, but will be indicated in the documentation. See also the answer to exercise 13.

These best practices have stood the test of time; they constitute proven solutions to recurring problems.
Second, many design patterns explicitly address specific quality issues. For example, separation of concerns (flexibility) is addressed in the proxy pattern, while expandability is addressed in the factory pattern.

The difference with the answer to the previous exercise is caused by the fact that the compound boolean expression is treated as one decision in exercise 7, and as two decisions in this exercise.

Of course, this does not obey the representation condition. For one thing, it concentrates on measuring intra-modular complexity, by counting the number of decisions made.
For a system as a whole, the information flow between modules (the amount and kind of data passed to and from modules) is a major determinant of system complexity as well.

With information hiding, each module hides some design decision -- its "secret" -- from the rest of the system. This secret could be the representation of some abstract data type, but it could be something else as well.
So, the result of information hiding need not be the encapsulation of an object and its operations. Inheritance does not result from the application of the information hiding principle.

The flow graph of the same program, with procedure P drawn inline, is given in figure 4. We would expect the outcome for the two versions of this program to be the same and, since the program contains two decisions, the answer should be 3. Most textbook discussions of cyclomatic complexity give the wrong formula, and use examples consisting of one component only.
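The two formulas at issue can be stated explicitly (a sketch; e, n and p denote the numbers of edges, nodes and connected components of the control-flow graph, and the example values below are illustrative, not those of the exercise's actual graph):

```python
def cyclomatic_complexity(e: int, n: int, p: int = 1) -> int:
    """General form of McCabe's metric: CV(G) = e - n + 2p.

    The common textbook shortcut e - n + 2 is the special case p = 1;
    it gives the wrong answer as soon as the flow graph has several
    components, e.g. a main program plus a separately drawn procedure P.
    """
    return e - n + 2 * p

print(cyclomatic_complexity(e=7, n=6))         # one component:  7 - 6 + 2  = 3
print(cyclomatic_complexity(e=9, n=10, p=2))   # two components: 9 - 10 + 4 = 3
```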
In that case the outcome of both formulas is the same.

A tree impurity measure m(G) should express how far a graph G deviates from being a tree. It is then only natural to expect that m(G) equals 0 if and only if G is a tree (property 1).
If the number of edges that has to be removed in order to get a proper tree structure is the same for two graphs G1 and G2, but the number of nodes in G1 is larger than that of G2, then, relatively speaking, G1 is better than G2 (property 2).
The penalty of having a few extra edges should be relative to the number of nodes in the graph (property 3). Finally, the worst possible situation occurs if each pair of nodes in the graph is connected through an edge. In that case, the graph is called complete (property 4). The upper bound of 1 is somewhat arbitrary; it could have been any constant.

The objects client and library are the same as the corresponding objects in the figure. The identification card is assumed not to play an active role either; it is simply used to identify the client.
We assume there is only one library. BooksOnLoan is a simple count of the number of books this client has loaned. Fine is an account of the outstanding fines for this client. If a book is returned whose due-back date has passed, AddToFine updates the account of the client's fine. If part of the fine is settled, SettleFine takes care of that. An employee has an EmployeeName and Password.
A book copy is identified by its Number. It has an attribute RefBook which refers to the book this one is a copy of.
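A rough Python sketch of these classes (attribute and operation names follow the text where it gives them; the due-date handling and the fine rate are assumptions added only to make the example concrete):

```python
from datetime import date, timedelta

class Client:
    def __init__(self, name):
        self.Name = name
        self.BooksOnLoan = 0   # simple count of books this client has loaned
        self.Fine = 0.0        # account of the outstanding fines

    def AddToFine(self, amount):
        self.Fine += amount

    def SettleFine(self, amount):
        self.Fine -= amount

class BookCopy:
    FINE_PER_DAY = 0.50        # assumed rate, for illustration only

    def __init__(self, number, ref_book):
        self.Number = number   # identifies this copy
        self.RefBook = ref_book
        self.LoanedTo = None
        self.DueBack = None    # assumed attribute, set when the copy is loaned

    def loan(self, client, days=21):
        self.LoanedTo = client
        self.DueBack = date.today() + timedelta(days=days)
        client.BooksOnLoan += 1

    def return_copy(self, returned_on=None):
        returned_on = returned_on or date.today()
        if returned_on > self.DueBack:
            overdue_days = (returned_on - self.DueBack).days
            self.LoanedTo.AddToFine(overdue_days * self.FINE_PER_DAY)
        self.LoanedTo.BooksOnLoan -= 1
        self.LoanedTo = None
        self.DueBack = None
```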
These attributes are updated when the copy is loaned and returned; they are also used to update the fine administration.

Object-oriented design results in a model of the problem, rather than any particular solution to that problem. Conversely, data-flow design is much more concerned with modeling a solution, in terms of functions to be performed in some given order.
These functions need not map well onto problem domain concepts, which may result in a less natural design -- a design which poses comprehension problems when studying the rationale for certain design decisions.

Note that we have not labeled the lines containing exit statements.
We could have done so, but it does not really make a difference as far as the control flow graph is concerned. Execution of such a statement incurs execution of the statement at line 8 as well. The numbers inside the bubbles refer to the line numbers given in the routine text. For the benefit of exercise 10, the edges have been labelled with capital letters.
When the routine is executed with the given input, all lines labeled with a number will be executed. The branches labelled E, G, and I, however, will not be executed by this test.

The program has four variables: parent, child, Ak, and insert. This leads to the following paths:
Note that the reverse need not be true. If, on the other hand, the actual input during the operational use of the program is much more skewed, i.e. it differs considerably from the input distribution used during testing, the reliability experienced in operation may differ considerably from the estimate obtained.

In particular:
— if there is little manpower available to identify and correct faults, failures observed will not be corrected until manpower does become available. The increase in reliability would be higher if more manpower were available, while the actual reliability is the same in both cases: the same number of failures is observed in the same number of test cases.
Only the array with random elements will probably give the wrong result, since the first element will not move. Again, the array with random elements will be the only one to give the wrong result, since no sorting will take place.
The array with random elements will probably give the wrong result. Now the array will be sorted in reverse order, so the sorted array and the array with random numbers will give the wrong results.
No swapping will take place, so the random array again gives the wrong result. All tests will yield the right answer. Not only will the array with random elements probably give the wrong sorting order; its elements will also change.
The array with random elements will probably give the wrong answer. So this test set leaves us with 1 live mutant. This means that the quality of the test set is quite high.
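The kind of check being described can be sketched in Python (the sorting routine and the mutant below are made up for illustration; they are not the routine from the book, but they show why the array with random elements is the test that does the killing):

```python
def sort_ok(a):
    """A correct insertion sort (returns a new, sorted list)."""
    a = list(a)
    for i in range(1, len(a)):
        j = i
        while j > 0 and a[j - 1] > a[j]:
            a[j - 1], a[j] = a[j], a[j - 1]
            j -= 1
    return a

def sort_mutant(a):
    """Mutant: the loop starts at 2, so the second element is never inserted."""
    a = list(a)
    for i in range(2, len(a)):
        j = i
        while j > 0 and a[j - 1] > a[j]:
            a[j - 1], a[j] = a[j], a[j - 1]
            j -= 1
    return a

tests = {
    "empty": [],
    "single element": [7],
    "already sorted": [1, 2, 3, 4],
    "all equal": [5, 5, 5, 5],
    "random elements": [3, 1, 4, 1, 5, 9, 2, 6],
}

for name, t in tests.items():
    verdict = "killed" if sort_mutant(t) != sort_ok(t) else "survives"
    print(f"{name}: mutant {verdict}")   # only the random array kills this mutant
```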
Note that the situation becomes much worse if we drop the array with random numbers. The quality of this test set is really determined by the latter test. Most of the other tests only exercise some boundary value.

Statement or branch coverage for the sorting part is then adequate too, while it may well contain errors that are revealed if the algorithm is included in some other environment.
The anticomposition axiom reflects the opposite: if components have been tested adequately, their composition need not be tested adequately.

Experiments suggest that these methods tend to find different types of faults. Both methods are less suited for confidence building, for two reasons. First, for both methods, the testing quality hinges on the quality of the equivalence partitioning, which is often not perfect.
Secondly, they treat every fault as equally hazardous, which need not be the case during operation of the system.

A correctness proof verifies the program against some other, formal description of what it should do. Under the assumption that this other description is correct, correctness proofs are both good at finding faults and at increasing our confidence. Additional testing is needed to cater for the possible imperfection of the formal description against which the proof is given, and for possible errors made in the proof itself.
Experiments suggest that it is quite good at building confidence. Again, experiments suggest that inspections reveal different types of errors. Discussion of possible usage scenarios also has a stronger validation character than other types of review.

Faults that never show up, for instance because they are located in a piece of code that never gets executed, are not important. A fault in a piece of code that gets executed many times a day is important, though.
On the other hand, both testing and reliability assessment are needed. Testing, if started early on in the project, can help to prevent errors, and provides systematic means to assess software quality. At a later stage, reliability assessment helps to assess the operational quality of the system.

The key idea is that design recovery requires outside information -- information that has been lost during the transition from requirements to design, or from design to code.
This results in different versions of those documents. Software configuration management helps keep track of revision histories and versions. Older versions remain available, so that changes can be undone, and the revision history itself can be of help during maintenance. Additional support for building executables both optimizes this process (unchanged components need not be compiled anew) and helps to get the right executables (those that contain the most recent versions of the components). The data stored in the configuration management system can also be used for mining the project data.
For instance, trends in the number of change requests in certain parts of the software archive can be studied.

Such a test will then pay particular attention to aspects that are relevant during maintenance: the quality of the technical documentation, the structure and complexity of individual components as well as of the system as a whole, and the reliability of the system.
The structure of such an organization could be similar to that of other test groups. In particular, the future maintainers should be represented.

In this view, only building a system from scratch counts as development; everything else is maintenance. Since development from scratch is the exception rather than the rule, the distinction between development and perfective maintenance easily gets blurred.
The classification of development and maintenance activities as given in the exercise does make this careful distinction between adding functionality and everything else.

Explicit codification of this knowledge, and subsequent help in browsing through the resulting network of knowledge chunks, offers additional support over other means to acquire that knowledge: documentation, design information, and the like are essentially linear organizations of this knowledge, from which subtle interactions and mutual dependencies are hard to distill.
Reused components have simply stood the test of time. This in itself should have a positive impact on corrective maintenance effort. Secondly, reused components are likely to be more stable; they reflect the growing understanding of domain concepts.
This in turn should positively impact perfective maintenance effort.

The structure of the resulting system should then better reflect the structure of the problem domain. Stable entities are the focus of attention, and volatile functionality is implemented through operations on objects. This should help to reduce maintenance effort. At the code level, systems written in OO languages tend to be shorter because of the code sharing that results from inheritance. Smaller programs require less maintenance.
Finally, changes to programs can be realized through subclasses, rather than through tinkering with existing code. On the negative side: OO programs may be more difficult to comprehend, because the thread of control is more difficult to discern.
Subtasks 1 and 3 require an effort proportional to the length of the program; this effort is hardly, if at all, affected by the size of the change. Subtask 2 may be expected to incur a cost proportional to the size of the change.

Automatic support for configuration control of these artifacts thus offers similar help in controlling these types of information.

As a consequence, support for a number of activities is often given, but not for all. It is not clear to what extent systems developed using such an environment can also be easily maintained.
A lot of research and development in this area is still going on, and future environments will certainly differ from the present ones. Many development activities, however, are hard to formalize, and the environment may then be a hindrance rather than a help. Also, tuning a support environment to a specific situation often is not easy. For instance, if requirements are kept in one tool, and code modules in another, it becomes difficult to follow traceability links from requirements to code.
The latter is especially important if the command mnemonics do not really fit their semantics, and in cases where command sequences have to be issued to realize a compound task.
A dynamic help system takes into account the current state and the history of the current job when offering help. Present-day desktop applications often offer both varieties.

Requirements engineering is likely to start with a feasibility study. Part of this feasibility study is to decide on the system boundaries: what is done by the computer, and what is left to the user.
A global task analysis, possibly with a few user representatives only, may be done to clarify these issues. Once this feasibility study is done, and a decision on a full requirements engineering phase is taken, a more elaborate task analysis step is conducted.
Interviews, observations, and other techniques can be used to get at a full task catalog and task hierarchy. At this stage also, certain aspects of the interface functionality (dialog style, type of error messaging and help functions, dialog sequencing, and default behavior) are determined. This can be user-tested using rapid prototyping and screen walkthroughs. During the design stage, several alignment issues deserve our attention, such as those between detailed data modeling and the objects that appear on the screen, between task structure and system structure, and the physical layout of screens.
Finally, testing should also include usability testing. See Summersgill and Browne for a more detailed description of how to integrate user-interface issues with a classical waterfall-type development method.

Advantages include: user involvement from the start, real-life examples that users feel comfortable with, expressed in the language of the user, no big investments needed, and quick results.
Possible disadvantages include: the extent to which the scenarios cover everything needed, how to document the results, that the process need not converge, that it is difficult to include dynamics, and that the scenarios tend to be simple ones. See Rettig for a more elaborate discussion of a similar approach.

The advantages and disadvantages hereof are discussed in Section 3.
From a managerial point of view, this approach has definite advantages when it comes to controlling progress. Functionality is decided upon first, and the user interface is seen as a layer on top of it; thus, while working on the user interface, no rework is needed because of wrong functionality. It also allows for a clear separation of concerns in the architecture.
A major advantage of formal descriptions is that they allow for formal evaluation. However, it remains to be seen whether the user interface requirements can be sufficiently captured formally. Also, discussing formal specifications with users is not easy, and most developers are not familiar with formal techniques.
Domain-independent data abstractions are limited in number: lists, queues, trees of various kinds, and the like.
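A queue, for instance, is such a domain-independent abstraction; a minimal sketch of it as an abstract data type (illustrative only):

```python
class Queue:
    """FIFO queue: a domain-independent data abstraction."""
    def __init__(self):
        self._items = []

    def enqueue(self, item):
        self._items.append(item)

    def dequeue(self):
        if not self._items:
            raise IndexError("dequeue from an empty queue")
        return self._items.pop(0)

    def is_empty(self):
        return not self._items
```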