Friday, August 7, 2009

Prototyping

What is Prototyping?


Prototyping is the process of building a model of a system. In terms of an information system, prototypes are employed to help system designers build an information system that is intuitive and easy for end users to manipulate. Prototyping is an iterative process that is part of the analysis phase of the systems development life cycle.


During the requirements determination portion of the systems analysis phase, system analysts gather information about the organization's current procedures and business processes related to the proposed information system. In addition, they study the current information system, if there is one, and conduct user interviews and collect documentation. This helps the analysts develop an initial set of system requirements.


Prototyping can augment this process because it converts these basic, yet sometimes intangible, specifications into a tangible but limited working model of the desired information system. The feedback gained from giving users a physical system they can touch and see provides an evaluative response that the analyst can use to modify existing requirements as well as develop new ones.


Prototyping comes in many forms - from low-tech sketches or paper screens (PICTIVE), onto which users and developers can paste controls and objects, to high-tech operational systems built with CASE (computer-aided software engineering) tools or fourth-generation languages, and everything in between. Many organizations use multiple prototyping tools. For example, some will use paper in the initial analysis to facilitate concrete user feedback and then later develop an operational prototype using a fourth-generation language, such as Visual Basic, during the design stage.


Some Advantages of Prototyping:


Reduces development time.

Reduces development costs.

Requires user involvement.

Developers receive quantifiable user feedback.

Facilitates system implementation since users know what to expect.

Results in higher user satisfaction.

Exposes developers to potential future system enhancements.


Some Disadvantages of Prototyping

Can lead to insufficient analysis.

Users expect the performance of the ultimate system to be the same as the prototype.

Developers can become too attached to their prototypes.

Can cause systems to be left unfinished and/or implemented before they are ready.

Sometimes leads to incomplete documentation.

If sophisticated software prototypes (4th GL or CASE tools) are employed, the time-saving benefit of prototyping can be lost.


Because prototypes inherently increase the quality and amount of communication between the developer/analyst and the end user, their use has become widespread. In the early 1980s, organizations used prototyping in approximately thirty percent (30%) of development projects. By the early 1990s, its use had doubled to sixty percent (60%). Although there are guidelines on when to use software prototyping, two experts believed some of the rules developed were nothing more than conjecture.


In the article "An Investigation of Guidelines for Selecting a Prototyping Strategy", Bill C. Hardgrave and Rick L. Wilson compare prototyping guidelines that appear in information systems literature with their actual use by organizations that have developed prototypes. Hardgrave and Wilson sent 500 prototyping surveys to information systems managers throughout the United States. The represented organizations spanned a variety of industries - educational, health service, financial, transportation, retail, insurance, government, manufacturing and service. A copy of the survey was also given to a primary user and a key developer of two systems that each company had implemented within the two years preceding the survey.


Usable survey results were received from 88 organizations representing 118 different projects. Hardgrave and Wilson wanted to find out how many of the popular prototyping guidelines outlined in the literature were actually used by organizations and whether compliance affected system success (measured by the user's stated level of satisfaction). It should be noted that, although not specifically stated, the study was based on the use of "high tech" software models, not "low tech" paper or sketch prototypes.


Based on the results of their research, Hardgrave and Wilson found that industry followed only six of the seventeen guidelines recommended in the information systems literature. The six guidelines practiced by industry, whose adherence was found to have a statistically significant effect on system success, were:

Prototyping should be employed only when users are able to actively participate in the project.

Developers should either have prototyping experience or be given training.

Users involved in the project should also have prototyping experience or be educated on the use and purpose of prototyping.

Prototypes should become part of the final system only if the developers are given access to prototyping support tools.

If experimentation and learning are needed before there can be full commitment to a project, prototyping can be successfully used.

Prototyping is not necessary if the developer is already familiar with the language ultimately used for system design.


Instead of software prototyping, several information systems consultants and researchers recommend using "low tech" prototyping tools (also known as paper prototypes or PICTIVE), especially for initial systems analysis and design. The paper approach allows both designers and users to literally cut and paste the system interface. Object commands and controls can be easily and quickly moved to suit user needs.


Among its many benefits, this approach lowers the cost and time involved in prototyping, allows for more iterations, and gives developers the chance to get immediate user feedback on refinements to the design. It effectively eliminates many of the disadvantages of prototyping: paper prototypes are inexpensive to create, developers are less likely to become attached to their work, users do not develop performance expectations, and, best of all, paper prototypes are usually "bug-free" (unlike most software prototypes)!

Spiral model

Spiral model (Boehm, 1988).

The spiral model is a software development process combining elements of both design and prototyping-in-stages, in an effort to combine advantages of top-down and bottom-up concepts. Also known as the spiral lifecycle model, it is a systems development method (SDM) used in information technology (IT). This model of development combines the features of the prototyping model and the waterfall model. The spiral model is intended for large, expensive and complicated projects.


History

The spiral model was defined by Barry Boehm in his 1988 article "A Spiral Model of Software Development and Enhancement"[1]. This model was not the first to discuss iterative development, but it was the first to explain why the iteration matters.

As originally envisioned, the iterations were typically 6 months to 2 years long. Each phase starts with a design goal and ends with the client (who may be internal) reviewing the progress thus far. Analysis and engineering efforts are applied at each phase of the project, with an eye toward the end goal of the project.

The Spiral Model

The steps in the spiral model can be generalized as follows:

  1. The new system requirements are defined in as much detail as possible. This usually involves interviewing a number of users representing all the external and internal users and studying other aspects of the existing system.
  2. A preliminary design is created for the new system. This phase is the most important part of the spiral model. Here, all possible (and available) alternatives that can help develop a cost-effective project are analyzed, and strategies for using them are decided. This phase was added specifically to identify and resolve all the possible risks in the project development. If the risks indicate any uncertainty in the requirements, prototyping may be used to proceed with the available data and find a possible solution for dealing with potential changes in the requirements.
  3. A first prototype of the new system is constructed from the preliminary design. This is usually a scaled-down system, and represents an approximation of the characteristics of the final product.
  4. A second prototype is evolved by a fourfold procedure:
    1. evaluating the first prototype in terms of its strengths, weaknesses, and risks;
    2. defining the requirements of the second prototype;
    3. planning and designing the second prototype;
    4. constructing and testing the second prototype.
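The iterative loop described above can be sketched in Python. Everything here is an illustrative stub - the function names, the risk formula and the stopping threshold are made up for the sketch and are not part of Boehm's model itself:

```python
# Minimal sketch of the spiral model's iteration loop.
# All helper functions and numeric values are illustrative stubs.

def assess_risk(requirements):
    # Stub: residual risk shrinks as requirements become more detailed.
    return 1.0 / (1 + len(requirements))

def refine_requirements(requirements):
    # Stub: each client review clarifies one more requirement.
    return requirements + ["clarified-%d" % len(requirements)]

def build_and_evaluate_prototype(requirements):
    # Stub: the "prototype" just records which requirements it covers.
    return {"covers": list(requirements)}

def spiral(initial_requirements, risk_threshold=0.2, max_cycles=10):
    requirements = list(initial_requirements)
    prototype = None
    cycles = 0
    while cycles < max_cycles:
        cycles += 1
        risk = assess_risk(requirements)                        # identify risks
        prototype = build_and_evaluate_prototype(requirements)  # construct & test
        if risk < risk_threshold:                               # client review: stop?
            break
        requirements = refine_requirements(requirements)        # plan next cycle
    return prototype, cycles

prototype, cycles = spiral(["login", "checkout"])
```

With these stubs the loop runs four cycles before the residual risk drops below the threshold, which mirrors the model's point: each pass around the spiral refines the requirements and the prototype together.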

Applications

The spiral model is used most often in large projects. For smaller projects, the concept of agile software development is becoming a viable alternative. The US military has adopted the spiral model for its Future Combat Systems program.

Advantages

The spiral model promotes quality assurance through prototyping at each stage in systems development.

Waterfall model

The waterfall model is a sequential software development process, in which progress is seen as flowing steadily downwards (like a waterfall) through the phases of Conception, Initiation, Analysis, Design (validation), Construction, Testing and Maintenance.

The unmodified "waterfall model". Progress flows from the top to the bottom, like a waterfall.


The waterfall development model has its origins in the manufacturing and construction industries: highly structured physical environments in which after-the-fact changes are prohibitively costly, if not impossible. Since no formal software development methodologies existed at the time, this hardware-oriented model was simply adapted for software development.

The first formal description of the waterfall model is often cited to be an article published in 1970 by Winston W. Royce (1929–1995),[1] although Royce did not use the term "waterfall" in this article. Royce was presenting this model as an example of a flawed, non-working model (Royce 1970). This is in fact the way the term has generally been used in writing about software development—as a way to criticize a commonly used software practice.[2]


Model

In Royce's original Waterfall model, the following phases are followed in order:

  1. Requirements specification
  2. Design
  3. Construction (AKA implementation or coding)
  4. Integration
  5. Testing and debugging (AKA Validation)
  6. Installation
  7. Maintenance

To follow the waterfall model, one proceeds from one phase to the next in a purely sequential manner. For example, one first completes the requirements specification, which is then set in stone. When the requirements are fully complete, one proceeds to design. The software in question is designed and a blueprint is drawn up for implementers (coders) to follow - this design should be a plan for implementing the requirements given. When the design is fully complete, an implementation of that design is made by coders. Towards the later stages of this implementation phase, the separate software components produced are combined to introduce new functionality and reduce risk through the removal of errors.

Thus the waterfall model maintains that one should move to a phase only when its preceding phase is completed and perfected. However, there are various modified waterfall models (including Royce's final model) that may include slight or major variations upon this process.

Supporting arguments

Time spent early on in software production can lead to greater economy later on in the software lifecycle; that is, it has been shown many times that a bug found in the early stages of the production lifecycle (such as requirements specification or design) is cheaper, in terms of money, effort and time, to fix than the same bug found later on in the process. ([McConnell 1996], p. 72, estimates that "a requirements defect that is left undetected until construction or maintenance will cost 50 to 200 times as much to fix as it would have cost to fix at requirements time.") To take an extreme example, if a program design turns out to be impossible to implement, it is easier to fix the design at the design stage than to realize months later, when program components are being integrated, that all the work done so far has to be scrapped because of a broken design.

This is the central idea behind Big Design Up Front (BDUF) and the waterfall model - time spent early on making sure that requirements and design are absolutely correct will save you much time and effort later. Thus, the thinking of those who follow the waterfall process goes, one should make sure that each phase is 100% complete and absolutely correct before proceeding to the next phase of program creation. Program requirements should be set in stone before design is started (otherwise work put into a design based on incorrect requirements is wasted); the program's design should be perfect before people begin work on implementing the design (otherwise they are implementing the wrong design and their work is wasted), etc.

A further argument for the waterfall model is that it places emphasis on documentation (such as requirements documents and design documents) as well as source code. In less designed and documented methodologies, should team members leave, much knowledge is lost and may be difficult for a project to recover from. Should a fully working design document be present (as is the intent of Big Design Up Front and the waterfall model) new team members or even entirely new teams should be able to familiarize themselves by reading the documents.

As well as the above, some prefer the waterfall model for its simple approach and argue that it is more disciplined. Rather than what the waterfall adherent sees as chaos, the waterfall model provides a structured approach; the model itself progresses linearly through discrete, easily understandable and explainable phases and thus is easy to understand; it also provides easily markable milestones in the development process. It is perhaps for this reason that the waterfall model is used as a beginning example of a development model in many software engineering texts and courses.

It is argued that the waterfall model and Big Design up Front in general can be suited to software projects which are stable (especially those projects with unchanging requirements, such as with shrink wrap software) and where it is possible and likely that designers will be able to fully predict problem areas of the system and produce a correct design before implementation is started. The waterfall model also requires that implementers follow the well made, complete design accurately, ensuring that the integration of the system proceeds smoothly.

Criticism

The waterfall model is argued by many to be a bad idea in practice, mainly because of their belief that it is impossible, for any non-trivial project, to get one phase of a software product's lifecycle perfected before moving on to the next phases and learning from them. For example, clients may not be aware of exactly what requirements they want before they see a working prototype and can comment upon it; they may change their requirements constantly, and program designers and implementers may have little control over this. If clients change their requirements after a design is finished, that design must be modified to accommodate the new requirements, invalidating quite a good deal of effort if overly large amounts of time have been invested in Big Design Up Front. Designers may not be aware of future implementation difficulties when writing a design for an unimplemented software product. That is, it may become clear in the implementation phase that a particular area of program functionality is extraordinarily difficult to implement. If this is the case, it is better to revise the design than to persist in using a design that was made based on faulty predictions and that does not account for the newly discovered problem areas.

Even without such changes to the specification during implementation, a project may either start from scratch ("on a green field") or continue an existing system ("a brown field", to borrow again from construction). The waterfall methodology can be used for continuous enhancement, even of existing software originally built by another team. And even when the systems analyst fails to capture the customer requirements correctly, the resulting impact on the following phases (mainly coding) can still, in practice, be tamed by this methodology - a challenging job for a QA team.

Dr. Winston W. Royce, in "Managing the Development of Large Software Systems"[3], the first paper that describes the waterfall model, also describes the simplest form as "risky and invites failure".

Steve McConnell in Code Complete (a book which criticizes the widespread use of the waterfall model) refers to design as a "wicked problem" - a problem whose requirements and limitations cannot be entirely known before completion. The implication is that it is impossible to perfect one phase of software development, and thus, under the waterfall model, impossible to move on to the next phase.

David Parnas, in "A Rational Design Process: How and Why to Fake It", writes:[4]

“Many of the [system's] details only become known to us as we progress in the [system's] implementation. Some of the things that we learn invalidate our design and we must backtrack.”

The idea behind the waterfall model may be "measure twice; cut once", and those opposed to it argue that the idea falls apart when the problem being measured is constantly changing due to requirement modifications and new realizations about the problem itself. One remedy is an investment of developer time: having an experienced developer spend time refactoring to consolidate the software back together. Another approach is to prevent the need for such rework through a predictive design that targets modularity with well-defined interfaces.

Modified models

In response to the perceived problems with the pure waterfall model, many modified waterfall models have been introduced. These models may address some or all of the criticisms of the pure waterfall model. Many different models are covered by Steve McConnell in the "lifecycle planning" chapter of his book Rapid Development: Taming Wild Software Schedules.

Since all software development models incorporate at least some phases similar to those of the waterfall model, they all bear some similarity to it; this section deals with those closest to the waterfall model. For models that differ from it further, or for radically different models, see the general literature on the software development process.

Sashimi model

The Sashimi model (so called because it features overlapping phases, like the overlapping fish of Japanese sashimi) was originated by Peter DeGrace. It is sometimes referred to as the "waterfall model with overlapping phases" or "the waterfall model with feedback". Since phases in the sashimi model overlap, information about problem spots can be acted upon during phases that, in the pure waterfall model, would already have finished. For example, since the design and implementation phases overlap in the sashimi model, implementation problems may be discovered while design is still under way. This helps alleviate many of the problems associated with the Big Design Up Front philosophy of the waterfall model.


Component-based software engineering

A simple example of two components expressed in UML 2.0. The checkout component, responsible for facilitating the customer's order, requires the card processing component to charge the customer's credit/debit card (functionality which the latter provides).

Component-based software engineering (CBSE), also known as component-based development (CBD), is a branch of software engineering that emphasizes the separation of concerns with respect to the wide-ranging functionality available throughout a given software system. This practice brings a correspondingly wide range of benefits, in both the short term and the long term, for the software itself and for the organisation that sponsors it.

Components are considered part of the starting platform for service orientation in software engineering - for example Web Services and, more recently, Service-Oriented Architecture (SOA) - whereby a component is converted into a service and subsequently inherits further characteristics beyond those of an ordinary component.


Component Definition

An individual component is a software package, or a module, that encapsulates a set of related functions (or data).

All system processes are placed into separate components so that all of the data and functions inside each component are semantically related (just as with the contents of classes). Because of this principle, it is often said that components are modular and cohesive.

With regard to system-wide co-ordination, components communicate with each other via interfaces. When a component offers services to the rest of the system, it adopts a provided interface which specifies the services that other components can utilise, and how. This interface can be seen as a signature of the component - the client does not need to know about the inner workings of the component (its implementation) in order to make use of it. This principle results in components being referred to as encapsulated. In the UML illustrations in this article, provided interfaces are represented by a lollipop symbol attached to the outer edge of the component.

However, when a component needs to use another component in order to function, it adopts a required interface which specifies the services that it needs. In the UML illustrations in this article, required interfaces are represented by an open socket symbol attached to the outer edge of the component.

A simple example of several software components - pictured within a hypothetical holiday reservation system. Represented in UML 2.0.

Another important attribute of components is that they are substitutable, so that a component could be replaced by another (at design time or run-time), if the requirements of the initial component (expressed via the interfaces) are met by the successor component. Consequently, components can be replaced with either an updated version or an alternative for example, without breaking the system in which the component operates.

As a general rule of thumb for engineers substituting components, component B can immediately replace component A, if component B provides at least what component A provided, and requires no more than what component A required.
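The provided/required interface pairing and the substitution rule can be sketched in Python using the standard abc module. The class names follow the checkout/card-processing example from the UML figure; the implementations themselves are hypothetical stubs, not real payment code:

```python
# Sketch of provided and required interfaces, and of substitutability.
# All classes here are illustrative stubs.

from abc import ABC, abstractmethod

class CardProcessing(ABC):
    """The provided interface of any card-processing component."""
    @abstractmethod
    def charge(self, card_number: str, amount: float) -> bool: ...

class Checkout:
    """Declares a required interface: it needs some CardProcessing component."""
    def __init__(self, processor: CardProcessing):
        self._processor = processor

    def place_order(self, card_number: str, total: float) -> str:
        ok = self._processor.charge(card_number, total)
        return "order confirmed" if ok else "payment declined"

class AcmeProcessor(CardProcessing):
    def charge(self, card_number, amount):
        return amount > 0  # stub: accept any positive charge

class MockProcessor(CardProcessing):
    """A substitute: provides at least what AcmeProcessor provides."""
    def charge(self, card_number, amount):
        return True

# Either processor can be plugged in without changing Checkout,
# which is exactly the substitution rule stated above:
assert Checkout(AcmeProcessor()).place_order("4111000011110000", 42.0) == "order confirmed"
assert Checkout(MockProcessor()).place_order("4111000011110000", 42.0) == "order confirmed"
```

Checkout never inspects the concrete class, only the interface, so MockProcessor can replace AcmeProcessor at design time or run time.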


Software components often take the form of objects or collections of objects (from object-oriented programming), in some binary or textual form, adhering to some interface description language (IDL) so that the component may exist autonomously from other components in a computer.

When a component is to be accessed or shared across execution contexts or network links, techniques such as serialization or marshalling are often employed to deliver the component to its destination.
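A minimal illustration of marshalling, using the standard json module (the component state shown is made up for the example):

```python
# Marshal a component's state into a wire format for transfer across
# a process or network boundary, then restore it at the destination.

import json

order_component_state = {"order_id": 1001, "items": ["book", "pen"], "total": 19.5}

wire_format = json.dumps(order_component_state)  # serialize ("marshal")
restored = json.loads(wire_format)               # deserialize at the destination

assert restored == order_component_state
```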

Reusability is an important characteristic of a high quality software component. A software component should be designed and implemented so that it can be reused in many different programs.

It takes significant effort and awareness to write a software component that is effectively reusable. The component needs to be:

  • fully documented
  • more thoroughly tested:
    • robust - with comprehensive input validity checking
    • able to pass back appropriate error messages or return codes
  • designed with an awareness that it will be put to unforeseen uses
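A small Python sketch of what the checklist means in practice - documented, input-validated, and reporting precise errors rather than failing silently. The function itself is a hypothetical example, not from the text:

```python
# A tiny reusable component following the checklist above.

def mean(values):
    """Return the arithmetic mean of a non-empty sequence of numbers.

    Raises ValueError for an empty sequence and TypeError for
    non-numeric input, so callers get an appropriate error message
    instead of an obscure crash.
    """
    values = list(values)  # accept any iterable: tolerate unforeseen uses
    if not values:
        raise ValueError("mean() requires at least one value")
    if not all(isinstance(v, (int, float)) for v in values):
        raise TypeError("mean() requires numeric values")
    return sum(values) / len(values)

assert mean([1, 2, 3]) == 2
```

The validity checks roughly double the length of the function - a concrete reminder that effective reusability takes deliberate extra effort.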

In the 1960s, scientific subroutine libraries were built that were reusable in a broad array of engineering and scientific applications. Though these subroutine libraries reused well-defined algorithms in an effective manner, they had a limited domain of application. Commercial sites routinely created application programs from reusable modules written in Assembler, COBOL, PL/1 and other second- and third-generation languages, using both system and user application libraries.

Today, modern reusable components encapsulate both data structures and the algorithms that are applied to them. Component-based software engineering builds on prior theories of software objects, software architectures, software frameworks and software design patterns, and on the extensive theory of object-oriented programming and design. It claims that software components, like hardware components (used, for example, in telecommunications), can ultimately be made interchangeable and reliable.

History

The idea that software should be componentized, built from prefabricated components, was first published in Douglas McIlroy's address at the NATO conference on software engineering in Garmisch, Germany, 1968 titled Mass Produced Software Components. This conference set out to counter the so-called software crisis. His subsequent inclusion of pipes and filters into the Unix operating system was the first implementation of an infrastructure for this idea.

The modern concept of a software component was largely defined by Brad Cox of Stepstone, who called them Software ICs and set out to create an infrastructure and market for these components by inventing the Objective-C programming language. (He summarizes this view in his book Object-Oriented Programming - An Evolutionary Approach 1986.)

IBM led the path with their System Object Model (SOM) in the early 1990s. Some claim that Microsoft paved the way for actual deployment of component software with OLE and COM. Today, many successful software component models exist.

Differences from object-oriented programming

The idea in object-oriented programming (OOP) is that software should be written according to a mental model of the actual or imagined objects it represents. OOP and the related disciplines of object-oriented design and object-oriented analysis focus on modeling real-world interactions and attempting to create 'verbs' and 'nouns' which can be used in intuitive ways, ideally by end users as well as by programmers coding for those end users.

Component-based software engineering, by contrast, makes no such assumptions, and instead states that software should be developed by gluing prefabricated components together, much as in the field of electronics or mechanics. Some practitioners will even talk of modularizing systems as software components as a new programming paradigm.

Some argue that this distinction was made by earlier computer scientists, with Donald Knuth's theory of "literate programming" optimistically assuming there was convergence between intuitive and formal models, and Edsger Dijkstra's theory in the article The Cruelty of Really Teaching Computer Science, which stated that programming was simply, and only, a branch of mathematics.

In both forms, this notion has led to many academic debates about the pros and cons of the two approaches and possible strategies for uniting the two. Some consider them not really competitors, but only descriptions of the same problem from two different points of view.

Architecture

A computer running several software components is often called an application server. Using this combination of application servers and software components is usually called distributed computing. The usual real-world application of this is in financial applications or business software.

Software Characteristics

In this post, I would like to summarize the typical software characteristics that a Quality Control Leader needs to understand in order to recognize typical risks, develop appropriate testing strategies and specify effective test cases. In general, I consider this the backbone of Quality Control Engineering because, at the end of the day, no matter what test design techniques testers use at whatever test levels, and no matter what tools they use for whatever test types, the ultimate purpose is to estimate, control and monitor the following software quality characteristics.

Please note that when talking about software characteristics, we divide them into two categories: Functional Attributes and Technical Attributes.

Functional Attributes

Functional Accuracy:

  • Objective: test against specified or implied functional requirements to evaluate whether the system gives the right answers and produces the right effects. Accuracy also refers to the right degree of precision in the results (e.g., computational accuracy).
  • Techniques: All black-box test design techniques can be used
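Two of those black-box techniques - equivalence partitioning and boundary value analysis - can be shown against a hypothetical discount function (both the function and its business rule are invented for the sketch):

```python
# Black-box accuracy checks: one representative value per equivalence
# partition, plus the boundary values around the threshold.

def discount(order_total):
    """Hypothetical rule: 10% off orders of 100 or more."""
    if order_total < 0:
        raise ValueError("order total cannot be negative")
    return order_total * 0.9 if order_total >= 100 else order_total

assert discount(50) == 50       # partition: below the threshold
assert discount(99) == 99       # boundary: just below
assert discount(100) == 90.0    # boundary: exactly at the threshold
assert discount(150) == 135.0   # partition: above the threshold
```

The boundary cases (99 and 100) are where off-by-one accuracy defects typically hide, which is why boundary value analysis supplements plain partitioning.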

Functional Suitability:

  • Objective: to evaluate whether the system solves the given problem and is appropriate for the intended tasks.
  • Techniques: Use case, exploratory testing.

Functional Interoperability:

  • Objective: to evaluate whether the system functions correctly in all intended environments. The environment includes not only elements that the system must interoperate with, but also those it interoperates with indirectly, or even simply cohabits with. Cohabitation implies sharing computer resources (CPU, memory, and so on) without working together.
  • Techniques:
    - Equivalence Partitioning: to determine the environment set when you know the possible interactions between one or more environments and one or more functions.
    - Pairwise and classification-tree techniques: to determine the environment set when you are not sure about the interactions and want to generate more arbitrary configurations.
    - Use-case testing in each configuration.
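Once equivalence partitioning has chosen representative values for each environment dimension, the full configuration set is just their cross product. The environment values below are made up for the example:

```python
# Build the interoperability test matrix from representative
# environment values (one per equivalence partition).

from itertools import product

browsers = ["Firefox", "IE"]
databases = ["Oracle", "MySQL"]
operating_systems = ["Windows", "Linux"]

configurations = list(product(browsers, databases, operating_systems))

# 2 * 2 * 2 = 8 configurations in which to run the use-case tests.
assert len(configurations) == 8
```

When the full product is too large to run, this is the point where a pairwise technique would prune it to a smaller set that still covers every pair of values.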

Functional Security:

  • Objective: to evaluate the ability of the software to prevent unauthorized access
  • Techniques: Attack and Defect Taxonomies

Accessibility:

  • Objective: to evaluate how usable the system is under particular requirements, restrictions or disabilities. These often arise from national standards, or from industry compliance imposed by law or by contract.
  • Techniques: specification- and requirement-based testing used within a risk-based testing approach. Since accessibility is strictly mandated by law, it is usually not sufficient to test just a few representative fields or functions; every field and function might be required.

Usability:

  • Objective: to evaluate whether the users are effective, efficient and satisfied with the software
  • Techniques:
    - Inspection, evaluation and review
    - Use-case testing along with syntax and semantic tests
    - Survey or questionnaire

Technical Attributes

Technical Security:

  • Objective:
    - Technical Security differs from Functional Security in that it leverages technical knowledge and experience to take advantage of unintended side effects and bad assumptions in order to subvert or attack the software.
    - Here, we try to evaluate software security vulnerabilities related to data access, functional privileges, the ability to insert malicious programs into the system, the ability to sniff or capture secret information, the ability to break encrypted traffic, and the ability to deliver viruses or worms.
  • Techniques and tools:
    - Information retrieval
    - Vulnerability Scanning tools
    - Security attacks techniques (Dependency attacks, user interface attacks)

Reliability:

  • Objective: monitor software maturity and compare it to desired, statistically valid goals. Reliability is important for high-usage, safety-critical systems. Special types of Reliability tests are Robustness and Recoverability.
  • Techniques:
    - Select an appropriate mathematical model from among the Reliability Growth Models or Software Reliability Growth Models to monitor increases or decreases in software reliability.
    - TAAF (Test, Analyze and Fix). Because of the "around-the-clock" nature of the testing process, reliability testing is almost always automated. It uses empirical test data gathered in a simulated, real-life operational environment.
    - Recoverability testing (Ex: failover test, disaster recovery, backup/restore): to evaluate system’s ability to recover from some hardware or software failure in its environment.
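
To illustrate the model-selection step, here is a minimal Python sketch that fits the Goel-Okumoto reliability growth model, mu(t) = a * (1 - e^(-b*t)), to hypothetical weekly failure counts. The data and the simple grid-search fitting are illustrative assumptions, not part of the text; real reliability programs would use maximum-likelihood estimation:

```python
import math

# Hypothetical cumulative failure counts at the end of each test week
# (illustrative data, not from the text).
weeks    = [1, 2, 3, 4, 5, 6, 7, 8]
failures = [12, 21, 28, 33, 37, 39, 41, 42]

def goel_okumoto(t, a, b):
    # Expected cumulative failures by time t: mu(t) = a * (1 - e^(-b t))
    return a * (1 - math.exp(-b * t))

# Coarse grid search for (a, b) minimizing squared error -- a stand-in
# for proper maximum-likelihood estimation.
a, b = min(
    ((a, b) for a in range(40, 61) for b in (i / 100 for i in range(5, 100))),
    key=lambda p: sum((goel_okumoto(t, *p) - y) ** 2
                      for t, y in zip(weeks, failures)),
)
residual_faults = a - failures[-1]
print(f"Estimated total faults a = {a}, detection rate b = {b:.2f}")
print(f"Estimated faults remaining: {residual_faults}")
```

The flattening failure curve is what "software maturity" means in practice: the estimated residual fault count can be compared against the statistically valid release goal mentioned above.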

Efficiency:

  • Objective: to evaluate whether the system responds within acceptable time and uses resources efficiently.
  • Techniques:
    - Review, static analysis before and during design, implementation phase
    - Performance testing: to evaluate response times within a specified period of time and under various valid conditions.
    - Load testing: to see how the system behaves under different levels of load, usually focused on realistic or anticipated loads.
    - Stress testing: push the load to the extreme and beyond to determine the system's limits and observe its degradation behavior at or above maximum load.
    - Scalability testing: take stress testing further by finding the bottlenecks and then estimating the system's ability to be enhanced to resolve them.
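
The load-to-stress progression above can be sketched as follows. This is a toy harness: `handle_request` is a made-up stand-in for the real system under test (in practice it would issue an HTTP request or similar), and the user counts are illustrative:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def handle_request():
    """Stand-in for the system under test (replace with a real call,
    e.g. an HTTP request to the service being tested)."""
    start = time.perf_counter()
    time.sleep(0.01)          # simulated service time
    return time.perf_counter() - start

def run_load(concurrent_users, requests_per_user):
    """Fire requests from `concurrent_users` simulated users and
    report the 95th-percentile response time."""
    with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        futures = [pool.submit(handle_request)
                   for _ in range(concurrent_users * requests_per_user)]
        times = sorted(f.result() for f in futures)
    return times[int(0.95 * (len(times) - 1))]

# Step the load up (load testing); keep pushing past the anticipated
# maximum to find the knee where response times degrade (stress testing).
for users in (1, 10, 50):
    p95 = run_load(users, 20)
    print(f"{users:3d} users -> p95 response time {p95 * 1000:.1f} ms")
```

Watching how the p95 figure grows with the user count is exactly the degradation behavior the stress and scalability bullets describe.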

Maintainability:

  • Objective: to evaluate the ability to update, modify, reuse and test the system.
  • Techniques:
    - Static analysis and reviews (maintainability defects are usually found with code analysis tools, design and code walk-through)
    - Test updates, patches, upgrades and migration
    - Collect project and production metrics (e.g., number of regression test failures, long bug-closure periods, duration of test cycles) to determine the analyzability, stability and testability of the system.
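
The metrics-collection idea can be sketched as below, using made-up defect records; a real project would pull these figures from its bug tracker and test-management tool:

```python
from datetime import date
from statistics import mean

# Hypothetical defect records (illustrative, not from the text): each
# bug has an open and a close date.
bugs = [
    {"opened": date(2009, 6, 1),  "closed": date(2009, 6, 4)},
    {"opened": date(2009, 6, 3),  "closed": date(2009, 6, 20)},
    {"opened": date(2009, 6, 10), "closed": date(2009, 6, 12)},
]
# Regression test failures recorded per test cycle (illustrative).
regression_failures_per_cycle = [14, 9, 11, 6]

# Long closure periods or a flat regression-failure trend suggest the
# code base is hard to analyze and change safely.
closure_days = [(b["closed"] - b["opened"]).days for b in bugs]
print(f"Mean bug closure time: {mean(closure_days):.1f} days")
print(f"Regression failures per cycle: {regression_failures_per_cycle}")
```

Tracked over releases, these two indicators approximate the analyzability and stability attributes named above.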

Portability:

  • Objective: to evaluate the ability of the system to be installed in, used in and moved to various environments.
  • Techniques:
    - Equivalence Partitioning, Pairwise, Classification Tree, Decisions table, State Transition
    - Installability testing: install the software using its standard installation, update and patch facilities in its target environments. The purpose is to check the installation instructions and user's manual, and to observe any failures during installation and uninstallation.
    - Coexistence testing: to check whether one or more systems that work in the same environment do so without conflict.
    - Replaceability testing: to check whether we can exchange our software components for third-party ones.
    - Adaptability testing: execute test cases to evaluate functional interoperability in each target environment. The techniques are the same as those in the Functional Interoperability section above.

Software Engineering According To Raj. Uni. BCA 3rd Year

Software Characteristics, Components, Applications, Software Process Models: Waterfall, Spiral, Prototyping, Fourth Generation Techniques, Concepts of Project Management, Role of Metrics & Measurements.
S/W Project Planning: Objectives, Decomposition Techniques: S/W Sizing, Problem-based Estimation, Process-based Estimation, Cost Estimation Models: COCOMO Model.
S/W Design : Objectives, Principles, Concepts, Design methodologies Data design, Architectural design, procedural design, Object oriented concepts
Testing Fundamentals: Objectives, Principles, Testability, Test Cases: White-box & Black-box Testing, Strategies: Verification & Validation, Unit Testing, Integration Testing, Validation Testing, System Testing

Software Engineering

The new Airbus A380 uses a substantial amount of software to create a "paperless" cockpit. Software engineering successfully maps and plans the millions of lines of code comprising the plane's software.

Software engineering is the application of a systematic, disciplined, quantifiable approach to the development, operation, and maintenance of software, and the study of these approaches; that is, the application of engineering to software.[1]

The term software engineering first appeared in the 1968 NATO Software Engineering Conference and was meant to provoke thought regarding the "software crisis" of the time.[2][3] Since then, it has continued as a profession and field of study dedicated to creating software that is of higher quality, more affordable, maintainable, and quicker to build. Since the field is still relatively young compared to its sister fields of engineering, there is still much debate about what software engineering actually is, and whether it conforms to the classical definition of engineering. It has grown organically out of the limitations of viewing software as just programming. "Software development" is a much-used term in industry that is more generic and does not necessarily subsume the engineering paradigm. Although it is questionable what impact the field has had on actual software development over the last 40-plus years,[4][5] its future looks bright according to Money Magazine and Salary.com, which rated "software engineering" as the best job in America in 2006.[6]

Computer engineering as an academic discipline

The first accredited computer engineering degree program in the United States was established at Case Western Reserve University in 1971; as of October 2004 there were 170 ABET-accredited computer engineering programs in the US.[3]

Due to increasing job requirements for engineers, who can design and manage all forms of computer systems used in industry, some tertiary institutions around the world offer a bachelor's degree generally called computer engineering. Both computer engineering and electronic engineering programs include analog and digital circuit design in their curricula. As with most engineering disciplines, having a sound knowledge of mathematics and sciences is necessary for computer engineers.

In many institutions, computer engineering students are allowed to choose areas of in-depth study in their junior and senior year, as the full breadth of knowledge used in the design and application of computers is well beyond the scope of an undergraduate degree. The joint IEEE/ACM Curriculum Guidelines for Undergraduate Degree Programs in Computer Engineering defines the core knowledge areas of computer engineering as[4]

The breadth of disciplines studied in computer engineering is not limited to the above subjects but can include any subject found in engineering.

Computer Engineering

Computer Engineering (also called Electronic and Computer Engineering, or Computer Systems Engineering) is a discipline that combines both Electrical Engineering and Computer Science.[1] Computer engineers usually have training in electrical engineering, software design and hardware-software integration, rather than only software engineering or electrical engineering. Computer engineers are involved in many aspects of computing, from the design of individual microprocessors, personal computers, and supercomputers, to circuit design. This field of engineering not only focuses on how computer systems themselves work, but also how they integrate into the larger picture.[2]

Usual tasks involving computer engineers include writing software and firmware for embedded microcontrollers, designing VLSI chips, designing analog sensors, designing mixed signal circuit boards, and designing operating systems.[citation needed] Computer engineers are also suited for robotics research,[citation needed] which relies heavily on using digital systems to control and monitor electrical systems like motors, communications, and sensors.