Monday, April 17, 2017

Software development models

Today we were asked to give a real-world example of the use of one of the software development models and, of course, to break it down in detail. Most of us have never taken part in anything of the kind, myself included (building an online store does not count; the level is, to put it mildly, not the same!). So I concluded that I am entitled to take someone else's work as my example and dissect it.

My search turned up the article "Scientists from Tomsk State University have created a microtomograph. It can reveal, with micron accuracy, the internal structure of various materials. For example, diamonds."

In the end I decided this was exactly what I needed: everyone else is sure to write about the most advanced models, while the device I mentioned above was built on the simplest, very first model. Rather amusing.

But for understanding, we still have to go through all the models. This will be the first part of the article.

Software development has produced many worthy methodologies, in other words, established best practices. The choice depends on the specifics of the project, the budgeting system, subjective preferences and even the temperament of the team lead.

1. "Waterfall Model" (cascade model or "waterfall")

One of the oldest models, it involves passing through the stages sequentially: each stage must be fully completed before the next begins. A Waterfall project is easy to manage. Thanks to its rigidity, development proceeds quickly, and the cost and schedule are fixed in advance. But this is a double-edged sword. The cascade model gives an excellent result only in projects with clearly defined requirements and clearly understood ways of implementing them. There is no way to step back, and testing begins only after development is complete or almost complete. Products developed on this model without a valid reason for choosing it may have shortcomings (the list of requirements cannot be adjusted at any point), which become known only at the end because of the strict sequence of stages. The cost of making a change is high, because initiating it means waiting for the entire project to finish. Nevertheless, the fixed cost often outweighs the drawbacks of the approach. Fixing deficiencies discovered during development is possible and, in our experience, requires one to three supplementary agreements to the contract with a small terms-of-reference (TOR) document.

When to use the cascading methodology?


  • Only when the requirements are known, understood and fixed, with no contradictions among them.
  • When there is no problem finding programmers with the required skills.
  • In relatively small projects.

2. "V-Model"

It inherited its step-by-step structure from the cascade model. The V-shaped model is applicable to systems where trouble-free operation is especially important: for example, patient-monitoring applications in clinics, or embedded software controlling emergency airbag mechanisms in vehicles. The model's distinguishing feature is its focus on thorough verification and testing of the product starting from the earliest design stages. Each testing phase is planned in parallel with the corresponding development stage; for example, unit tests are written during coding.

When to use the V-model?


  • If thorough testing of the product is required: the V-model is built around its core idea, validation and verification.
  • For small and medium-sized projects, where requirements are clearly defined and fixed.
  • In conditions of availability of engineers of necessary qualification, especially testers.

3. "Incremental Model"

In the incremental model, the overall system requirements are divided into separate assemblies; the term is often used to describe the step-by-step assembly of software. Several development cycles take place, and together they form a "multi-waterfall" life cycle. The work is divided into smaller, easily produced modules, each of which passes through the phases of requirements definition, design, coding, implementation and testing. The procedure assumes that the first major release delivers the product with its basic functionality, after which new functions, the so-called "increments", are added one after another. The process continues until the complete system is built.

Incremental models are used where individual requests for change are clear, can be easily formalized and implemented.

An example of a single increment, for easier understanding: the Vivaldi network of electronic libraries replaced DefView. Where DefView connected to a single document server, Vivaldi can connect to many. An institution that wants to broadcast its content to a particular audience installs a storage server on its site, which accesses the documents directly and converts them into the required format. A new root element appeared in the architecture: the central Vivaldi server, acting as a single search engine over all the storage servers installed in the various institutions.

When to use the incremental model?


  • When the basic requirements for the system are clearly defined and understandable. At the same time, some details can be modified over time.
  • When an early release of the product to the market is required.
  • There are several risky features or goals.

4. "RAD Model" (rapid application development model)

The RAD model is a variation of the incremental model. In it, components or functions are developed in parallel by several highly qualified teams, like several mini-projects. The time frame of each cycle is strictly limited. The modules produced are then integrated into one working prototype. This synergy makes it possible to give the client something to respond to very quickly.

The rapid application development model includes the following phases:

  1. Business modeling: defining the list of information flows between the different units.
  2. Data modeling: the information collected in the previous step is used to define the objects and other entities needed for the information to circulate.
  3. Process modeling: information flows connect the objects to achieve the development goals.
  4. Application assembly: automated assembly tools convert the design-system models into code.
  5. Testing: the new components and interfaces are tested.

When is the RAD model used?

  • Only when highly qualified and narrowly specialized architects are available.
  • When the project budget is large enough to pay for these specialists along with the cost of ready-made automated assembly tools.
  • When the target business is well understood and the system must be delivered urgently, within two to three months.
5. "Agile Model" (flexible development methodology)

In the "flexible" development methodology, after each iteration the customer can observe the result and judge whether it satisfies them. This is one of the advantages of the flexible model. Among its disadvantages: because the results are not precisely formulated up front, it is difficult to estimate the effort and cost that development will require. Extreme programming (XP) is one of the best-known applications of the flexible model in practice.

At the heart of this approach are short daily meetings ("scrums") and regularly repeated iterations, a week, two weeks or a month long, called "sprints". At the daily meetings, team members discuss:

  • A report on the work done since the last Scrum;
  • A list of tasks that an employee must perform before the next meeting;
  • Difficulties encountered in the course of work.

The methodology is suitable for large or long-lived projects that must constantly adapt to market conditions, which means the requirements change during implementation. It is also worth remembering the class of creative people who tend to generate, try out and discard new ideas weekly or even daily; flexible development suits this type of manager best.

When to use Agile?

  • When user needs change constantly in a dynamic business.
  • When changes must be cheap to make: Agile keeps their cost down through frequent increments.
  • Unlike the waterfall model, the flexible model needs only minimal planning to start a project.

6. "Iterative Model" (iterative model)

An iterative life-cycle model does not require a complete specification of requirements at the start. Instead, creation begins by implementing part of the functionality, which then becomes the basis for defining further requirements. The process repeats. Each version may be imperfect; the main thing is that it works. Understanding the end goal, we move toward it so that every step is productive and every version is workable.

An example of iterative development is speech recognition. Research and the scientific groundwork began long ago: first in people's heads, then on paper. Each new iteration has improved recognition quality. Yet perfect recognition has still not been achieved, so the problem is not yet fully solved.

When is it optimal to use an iterative model?

  • The requirements for the final system are clearly defined and understood in advance.
  • The project is big or very big.
  • The main task should be defined, but implementation details can evolve over time.

7. "Spiral Model" (spiral model)

The spiral model resembles the incremental model, but places extra emphasis on risk analysis. It works well for mission-critical business tasks, when failure is incompatible with the company's operations, for launching new product lines, and when scientific research or practical trials are required.

The spiral model assumes four stages on each loop:

  • Planning;
  • Risk analysis;
  • Design;
  • Evaluation of the result and, if the quality is satisfactory, transition to the next loop of the spiral.

This model is not suitable for small projects; it makes sense for complex and expensive ones, such as the development of a document workflow system for a bank, where each next step requires more analysis to assess the consequences than actual programming.

Now we can look at the concrete solution and the concrete task I spoke about at the very beginning!

How do you create software for a microtomograph using the cascade model?

I want to tell you more about this interesting Edison project. The developers were given the task of writing software for a microtomograph; they handled it superbly, and then fed seeds, bolts, capacitors and a moth into the device. A serious machine: the tomograph is meant for inspecting diamonds, so that you do not buy one with hidden flaws. It can see through material with a resolution down to one micron, which is 100 times thinner than a human hair. After scanning, the program builds a 3D model in which you can examine not only the exterior of a part but also what is inside it.

Mathematical algorithms used:

  • Inverse Radon transform.
  • Marching cubes.
  • Gaussian filter.
  • Filtering / convolution and normalization of the projections.

Implementation and technology:

  • C++ / Qt.
  • Ubuntu, Windows.
  • CUDA.
  • Volumetric voxel rendering of the model.
  • A custom format for storing images with a 12-bit grayscale depth.
  • Splitting the reconstruction into a client-server architecture.
  • Storing volumetric data as octrees.
  • Labor costs: 5,233 person-hours.

Debugging was performed on raw data (projection images) obtained from a tomograph.

Algorithms

First of all, a mathematical apparatus for solving the problem had to be selected. The main algorithm, the inverse Radon transform, was fixed in the problem statement, but it had to be adapted to the particulars of the work, and several auxiliary algorithms were needed as well. For example, because the object is illuminated by a single "bulb", the inverse Radon transform formulas had to be adapted to cone-beam rather than parallel-beam projections. The standard algorithm assumes the object is illuminated by a beam of parallel X-rays coming from an infinitely distant source. In reality the source is a point, so the beam of rays has the shape of a cone. For this reason, the inverse Radon transform required a coordinate transformation from the conical system to a rectangular one.
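The parallel-beam case that the standard algorithm assumes can be sketched very compactly. Below is a toy, unfiltered backprojection in C++; the project's real pipeline adds ramp filtering and the cone-to-parallel coordinate change described above, and all names and sizes here are illustrative, not taken from the project's code.

```cpp
#include <cmath>
#include <vector>

const int N = 64;        // image side, pixels
const int ANGLES = 180;  // one projection per degree over half a turn
const double PI = std::acos(-1.0);

using Grid = std::vector<std::vector<double>>;

// Forward-project a single bright point at (px, py): at angle th it hits
// detector bin s = x*cos(th) + y*sin(th), measured from the image centre.
Grid project_point(int px, int py) {
    Grid sino(ANGLES, std::vector<double>(N, 0.0));
    for (int a = 0; a < ANGLES; ++a) {
        double th = a * PI / ANGLES;
        double s = (px - N / 2) * std::cos(th) + (py - N / 2) * std::sin(th);
        int bin = (int)std::lround(s) + N / 2;
        if (bin >= 0 && bin < N) sino[a][bin] += 1.0;
    }
    return sino;
}

// Backprojection: smear every projection back across the image along its
// angle; values accumulate most strongly where the original density was.
Grid backproject(const Grid& sino) {
    Grid img(N, std::vector<double>(N, 0.0));
    for (int a = 0; a < ANGLES; ++a) {
        double th = a * PI / ANGLES;
        for (int y = 0; y < N; ++y)
            for (int x = 0; x < N; ++x) {
                double s = (x - N / 2) * std::cos(th) + (y - N / 2) * std::sin(th);
                int bin = (int)std::lround(s) + N / 2;
                if (bin >= 0 && bin < N) img[y][x] += sino[a][bin];
            }
    }
    return img;
}
```

Reconstructing the projections of a single point puts the brightest output pixel back at the point's position; real filtered backprojection additionally convolves each projection with a ramp filter to remove the characteristic 1/r blur.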

At the first stage of the calculation, preliminary filtering / convolution and normalization of the projections are performed. This is needed to suppress noise on the projections and to separate the densities more clearly.

To build 3D surface models of the objects in a standard format, viewable in 3D editors such as KOMPAS, SolidWorks and 3ds Max, the Marching Cubes algorithm was used. The essence of the algorithm is that it runs through a scalar field and, at each iteration, examines 8 neighboring samples (the vertices of a cube aligned with the coordinate axes) and determines the polygons needed to represent the part of the isosurface passing through that cube. The polygons forming the isosurface are then rendered.
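The classification step just described can be sketched in a few lines. This is only the first half of Marching Cubes, computing each cube's 8-bit configuration index; the 256-entry table that turns the index into triangles is omitted. The names are hypothetical, not the project's code.

```cpp
#include <cstdint>
#include <functional>

// Corner offsets (x, y, z) of a unit cube: 8 vertices, one bit each.
const int CORNER[8][3] = {
    {0,0,0},{1,0,0},{1,1,0},{0,1,0},
    {0,0,1},{1,0,1},{1,1,1},{0,1,1}};

// Classify one cube of the scalar field: bit c is set when corner c of the
// cube at (x, y, z) lies above the iso level.
uint8_t cube_index(const std::function<double(int,int,int)>& field,
                   int x, int y, int z, double iso) {
    uint8_t idx = 0;
    for (int c = 0; c < 8; ++c)
        if (field(x + CORNER[c][0], y + CORNER[c][1], z + CORNER[c][2]) > iso)
            idx |= (uint8_t)(1u << c);
    return idx;
}

// Count cells crossed by the isosurface: index 0 (all corners below) and
// 255 (all corners above) produce no polygons; any other index does.
int count_surface_cells(const std::function<double(int,int,int)>& field,
                        int n, double iso) {
    int count = 0;
    for (int z = 0; z + 1 < n; ++z)
        for (int y = 0; y + 1 < n; ++y)
            for (int x = 0; x + 1 < n; ++x) {
                uint8_t idx = cube_index(field, x, y, z, iso);
                if (idx != 0 && idx != 255) ++count;
            }
    return count;
}
```

For a sphere-shaped field only a thin shell of cells gets a mixed index, which is exactly why the algorithm scales: polygon work is proportional to the surface, not the volume.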

By "Gaussian filter" the project means matrix image filtering using a convolution matrix: a matrix of coefficients that is "multiplied" by the pixel values of the image to obtain the desired result. The filter is used to smooth the voxel data and the slice projections, which in turn improves the quality of the generated 3D models.
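As an illustration of the convolution-matrix idea, here is a minimal 3x3 Gaussian smoothing pass over a 2D image. The weights 1-2-1 / 2-4-2 / 1-2-1 (sum 16) are a standard discrete approximation of a Gaussian; edges are handled by clamping coordinates. The project's actual kernel and parameters are not public, so everything here is illustrative.

```cpp
#include <algorithm>
#include <vector>

using Image = std::vector<std::vector<double>>;

// Convolve the image with a 3x3 Gaussian kernel; each output pixel is a
// weighted average of its 3x3 neighbourhood, with out-of-bounds neighbours
// replaced by the nearest edge pixel (coordinate clamping).
Image gaussian3x3(const Image& in) {
    const double k[3][3] = {{1, 2, 1}, {2, 4, 2}, {1, 2, 1}};
    int h = (int)in.size(), w = (int)in[0].size();
    Image out(h, std::vector<double>(w, 0.0));
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x) {
            double acc = 0.0;
            for (int dy = -1; dy <= 1; ++dy)
                for (int dx = -1; dx <= 1; ++dx) {
                    int yy = std::clamp(y + dy, 0, h - 1);
                    int xx = std::clamp(x + dx, 0, w - 1);
                    acc += k[dy + 1][dx + 1] * in[yy][xx];
                }
            out[y][x] = acc / 16.0;  // normalize by the kernel sum
        }
    return out;
}
```

A single bright spike spreads into a small bell-shaped bump, and a constant image passes through unchanged, which is the behaviour that makes the filter safe to apply before density thresholding.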

Implementation and technology

In the course of the work a number of specialized technical solutions were also created: a library for volumetric voxel rendering of the model; video recording while operating on the model; a custom image storage format with a 12-bit grayscale depth; storage of volumetric data as octrees; and algorithms for polygonizing 3D models.

Volumetric voxel rendering was used in the project to view the model with real-time rotation and scaling. A voxel is a three-dimensional pixel. Voxel rendering also gives the operator convenient tools for selecting a viewing region with an automatic increase in the level of detail, and for placing a cutting plane at any angle in two clicks. From the cutting plane one can later obtain a slice image at maximum resolution.

An octree (octant tree) is a tree data structure in which every internal node has exactly 8 "descendants". Octrees are most often used to partition three-dimensional space by recursively subdividing it into eight cells. In the project, the octree makes it possible to display data in preview mode, skipping data that is not visible to the user. For example, volumetric rendering requests its display data, at a detail level depending on the selected region, through the octree; this keeps the FPS high when the whole model is visible, yet raises the detail when a smaller part of the model is selected.
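The structure just described can be sketched as follows: a node with exactly 8 children, recursive subdivision, and a level-of-detail query where each deeper level multiplies the cell count by 8. This is a toy model of the idea, not the project's data structure.

```cpp
#include <array>
#include <memory>

// One octree node: depth in the tree plus exactly 8 "descendants"
// (all null at the leaves).
struct OctreeNode {
    int depth = 0;
    std::array<std::unique_ptr<OctreeNode>, 8> child;
};

// Build a complete octree down to max_depth. A real octree subdivides only
// where the data demands it; a complete tree keeps the sketch short.
std::unique_ptr<OctreeNode> build(int depth, int max_depth) {
    auto node = std::make_unique<OctreeNode>();
    node->depth = depth;
    if (depth < max_depth)
        for (auto& c : node->child) c = build(depth + 1, max_depth);
    return node;
}

// Number of cells delivered at a given level of detail: descend until the
// requested level (or a leaf) is reached, counting one cell per stop.
long long cells_at_lod(const OctreeNode& n, int lod) {
    if (n.depth == lod || !n.child[0]) return 1;
    long long sum = 0;
    for (const auto& c : n.child) sum += cells_at_lod(*c, lod);
    return sum;
}
```

A coarse query (low level of detail) stops high in the tree and returns few cells, which is what keeps the frame rate up when the whole model is on screen; zooming in simply descends further along the selected branch.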

Debugging

Then came the optimization step. For a single object the scanner produces 360 images, each with a resolution of 8000x8000. Since the volume of data being processed is large, a brute-force solution would have been completely unsatisfactory. This was taken into account at the design stage; even so, after the first version was obtained, the algorithms had to be optimized and adapted several times. The task required that building a three-dimensional model of a microstructure take no more than 2 hours, so an optimization phase was planned from the start. While testing the first version, we found that the standard format for storing projection images did not suit the project. The input data are TIFF images with 16-bit grayscale encoding. That color depth is excessive for the calculations, yet such images demand a great deal of disk space, network bandwidth, RAM and processor time. On the other hand, the standard 8-bit depth was not enough to preserve the accuracy of the reconstruction. Therefore a format for storing images with a 12-bit color depth was developed.

Horizontal scaling of the calculations was built into the technical design. The reconstruction of the 3D model, the main computational task, was divided into small task packages, which a central software module distributed across a network of servers in a cluster. The servers used CUDA technology, which harnesses the computing power of graphics processors. The time needed to compute one model decreases in proportion to the number of servers in the cluster, since the computational tasks parallelize ideally and all servers are loaded at 100%.
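The package-splitting idea can be sketched with threads standing in for cluster servers: the slice range is cut into near-equal packages, each worker processes its package independently, and the results are combined. In the real system the packages travel over the network to CUDA nodes; here the per-slice work is a stub counter, and all names are illustrative.

```cpp
#include <algorithm>
#include <atomic>
#include <functional>
#include <thread>
#include <vector>

// Process slices [begin, end). The counter stands in for the real per-slice
// reconstruction work.
void process_package(int begin, int end, std::atomic<long>& done) {
    for (int s = begin; s < end; ++s) done.fetch_add(1);
}

// Split total_slices into near-equal packages, one per worker, run them in
// parallel, and wait for all packages to finish.
long reconstruct(int total_slices, int workers) {
    std::atomic<long> done{0};
    std::vector<std::thread> pool;
    int chunk = (total_slices + workers - 1) / workers;  // ceiling division
    for (int w = 0; w < workers; ++w) {
        int begin = w * chunk;
        int end = std::min(total_slices, begin + chunk);
        if (begin < end)
            pool.emplace_back(process_package, begin, end, std::ref(done));
    }
    for (auto& t : pool) t.join();
    return done.load();
}
```

Because the packages share nothing, no work is lost to coordination, which is why the article can claim near-proportional speedup as servers are added.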

The CUDA architecture is applicable not only to high-performance graphics computing but also to a variety of scientific calculations on nVidia graphics cards. Scientists and researchers use CUDA widely in fields including astrophysics, computational biology and chemistry, fluid dynamics simulation, electromagnetic interactions, computed tomography and seismic analysis, among others. CUDA can also plug into applications that use OpenGL and Direct3D. CUDA software runs on operating systems such as Linux, Mac OS X and Windows.

In this project, CUDA is used for the main process: reconstructing volumetric data from the projections. Because graphics processors have a specialized instruction set, the reconstruction calculations map well onto graphics cards through CUDA. On a CPU, the same task would take longer both to design and to execute.

Graphics cards supported by the created software:

  • Nvidia Tesla K80 24GB (scientific);
  • EVGA GeForce GTX TITAN X 12GB (gaming).


The task specified that the software must run under Microsoft Windows XP / Vista / 7 and Linux, so a cross-platform solution was planned from the start. C++ / Qt was chosen as the development stack, which made it possible to keep a single code base and build the software for different operating systems.

The Radon transform formula, in case someone does not know it :)
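For reference, the classical (parallel-beam) Radon transform of a density f(x, y) collects its line integrals: for projection angle θ and signed detector offset s,

```latex
\mathcal{R}f(\theta, s) = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty}
    f(x, y)\, \delta(x\cos\theta + y\sin\theta - s)\, dx\, dy
```

Tomographic reconstruction solves the inverse problem, recovering f from the measured projections, which is what filtered backprojection algorithms do; the cone-beam geometry described earlier changes the integration lines but not the basic idea.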

