research article

Incremental Learning of the Different Dynamic Signatures of Mitochondrial Movement in Drug Discovery and System Biology

Petra Perner* 

Institute of Computer Vision and Applied Computer Sciences, IBaI, Leipzig, Germany

*Corresponding author: Petra Perner, Institute of Computer Vision and Applied Computer Sciences, IBaI, Leipzig, Germany. Tel: +493418612273; Fax: +493418612275; Email: pperner@ibai-institut.de

Received Date: 16 October, 2018; Accepted Date: 22 November, 2018; Published Date: 29 November, 2018

Citation: Perner P (2018) Incremental Learning of the Different Dynamic Signatures of Mitochondrial Movement in Drug Discovery and System Biology. Adv Proteomics Bioinform: APBI-101. DOI: 10.29011/APBI-101.100001

 

Abstract

In this paper, we describe how prototype-based classification can be used for knowledge acquisition and automatic image classification. We developed the prototypical methods and techniques of the system to serve the special development issues an expert faces when starting a new image-based application. Often an expert can present a catalogue of prototypical images instead of a sufficiently large image database for setting up the system. Starting from the set of prototypical images, we can learn the important image features and the conceptual description of an image class. In this paper, we describe the necessary functions a prototype-based classifier should have. Besides the similarity calculated from the numerical image features, we introduce the expert's estimated similarity as a new knowledge piece, together with a new function that optimizes between this similarity and the similarity automatically calculated by the system in order to improve the system accuracy. This function reduces the influence of the uncertainty in the calculated features and the similarity measure and brings the similarity value closer to the true similarity value. The system is tested in a study of the internal mitochondrial movement of cells. The basis for the development is fluorescent cell images. The aim was to discover the different dynamic signatures of mitochondrial movement. For this application, the expert knows from the literature how the different signatures should look and, based on this knowledge, he picks prototypical images from his experiment. We present our results and give an outlook on future work.

Keywords: Adjustment theory; Discover dynamic signatures of mitochondrial movement; Feature selection; Knowledge acquisition; Prototype-based classification; Prototype selection

1. Introduction

In this paper, the behavior of mitochondria in living cells is studied in a drug-discovery experiment. Mitochondria are semi-autonomous organelles with a large variety of functions in cellular metabolism. Metabolic control mechanisms, as well as the balance of the replication cycles of the nucleus and mitochondria, require a subtle interplay between these organelles and the other parts of a cell. The discovery of microcompartmentation attributes a new physiological role to cellular structures: they are no longer only morphological entities but rather provide a basis for substance gradients and for the organization of multienzyme complexes. The cytoskeleton is the most prominent and ubiquitous system bringing organelles into the right position and providing a large and heterogeneous surface for associations with other structures and molecules. Thus, neither the distribution nor the appearance of mitochondria (and of all other organelles) is likely to be random and without control.

Mitochondria are semi-autonomous organelles which are endowed with the ability to change their shape (e.g., by elongation, shortening, branching, buckling, swelling) and their location inside a living cell. In addition, they may fuse or divide. Dislocation of mitochondria may result from their interaction with elements of the cytoskeleton, with microtubules in particular, and from processes intrinsic to the mitochondria themselves [1].

In systems biology, the emphasis is on methods for visualizing mitochondria in cells and following their behavior. The most powerful tools to detect and identify mitochondria in situ are advanced fluorescence techniques. Fluorochroming endows the organelles with the capability of luminescence. Thus, very fine extensions, below the resolution power of a light microscope, can be detected because of their fluorescence. In addition, spatial or temporal variations of the fluorescence emission along a single mitochondrion report changes of the inner compartment. Fluorescence methods provide unique possibilities because of their high resolving power and because some of the mitochondria-specific fluorochromes can be used to reveal the membrane potential. Fusion and fission often occur in short time intervals within the same group of mitochondria.

The main disadvantage of fluorescence techniques is that the dyes are susceptible to photobleaching, which leads to the formation of cytotoxic free radicals and singlet oxygen, and even to dissipation of the electrochemical gradient [2]. Photobleaching and its deleterious effects can be largely avoided by using low excitation intensities and sensitive image acquisition systems (e.g., intensified SIT cameras, or cooled CCD cameras which allow photon integration on the chip).

Despite the general uniformity of mitochondria (an outer membrane enclosing an inner membrane to which the tubular, vesicular, crest-like, or prismatic membrane invaginations are connected), it is still uncertain whether a single population or several populations inhabit a cell. The answer to this question depends on the definition of "population", which could be characterized either by morphological criteria, by different fate, or by genetic differences. In spermatocytes, for example, the association of mitochondria with the spindle assures their equal distribution during the meiotic divisions. Proteins which bind to the outer mitochondrial membrane and to microtubules have been identified.

The following work is restricted to those aspects related to mitochondrial motion and the physiological significance of the interactions. While imaging with fluorescence techniques allows the visualization of the mitochondria, the automatic image analysis and the detection of the different stages of mitochondrial appearance are still missing.

In this paper, we describe how prototype-based classification can be used for knowledge acquisition and automatic image interpretation of the appearances of the mitochondria. We explain why prototype-based classification is a novel method for this kind of application compared to standard clustering, and we describe the necessary functions a prototype-based classifier should have. We introduce the expert's estimated pairwise similarity between the images as a new knowledge piece and a new function that optimizes between this similarity and the similarity automatically calculated by the system in order to improve the system accuracy. This function reduces the influence of the uncertainty in the calculated features and the similarity measure and brings the similarity value closer to the true similarity value. The system is tested in a study of the internal mitochondrial movement of cells. The basis for the development is fluorescent cell images. The aim was to discover the different dynamic signatures of mitochondrial movement. In Section 2, we present related work. The material used for this study is described in Section 3. Section 4 explains the methodology for knowledge acquisition and for the development of the automatic image classifier based on prototypical images. The image analysis procedure is described in Section 5. Our novel texture descriptor is presented in Section 6. The methods and techniques of the prototypical classifier, implemented in our software tool ProtoClass, are given in Section 7. Results are given in Section 8. We draw conclusions and give an outlook on future work in Section 9.

2. Related Work

Prototypical classifiers have been successfully studied for medical applications by Schmidt and Gierl [3] and Perner [4] for image interpretation, and by Nilsson and Funk [5] on time-series data. The simple nearest-neighbor approach [6] as well as hierarchical indexing and retrieval methods have been applied to the problem. It has been shown that an initial reasoning system can be built up based on prototypical cases. Such systems are useful in practice and can acquire new cases for further reasoning during the utilization of the system.

Prototypical images are a good starting point for the development of an automated image classifier [7]. This knowledge is often collected by human experts in the form of an image catalogue. It is often easier for an expert to show prototypical images than to describe the appearance of an object under consideration and name the important image features. In the experiment described in this paper, the biologist knows what he wants to trigger in a cell by applying a chemical to it and how a prototypical image should look. This knowledge can be used as the starting point for the development of an automatic image classification system. Based on the study of the internal mitochondrial movement of cells [8], we therefore describe how such a classifier, in combination with image analysis and feature extraction, can be used for incremental knowledge acquisition and automatic classification. We not only use the numerically calculated similarity value as input; we also use the expert's estimated pairwise similarity between the images as a new knowledge piece, together with a new function that adjusts between this expert similarity value and the similarity value automatically calculated by the system in order to improve the system accuracy. The system is tested in the study of the internal mitochondrial movement of cells.

The classifier is set up based on prototypical cell appearances in the image such as "healthy cell", "dead cell", and "cell in transition stage". For these prototypes, image features are calculated based on random set theory, describing the texture of the cells [9]. A prototype is then represented by its attribute-value pairs, the expert's pairwise similarity values, and the class label. These settings are taken as the initial classifier settings in order to acquire the concept description of the dynamic signatures.

The importance of the features and the feature weights are learned by the protoclass-based classifier [4]. After the classifier is set up, each new cell is compared with the prototypes and the similarity to the prototypes is calculated. If the similarity is high, the new cell gets the label of the prototype. If the similarity to the prototypes is too low, then there is evidence that the cell is in a transition stage and a new prototype has been found. With this procedure, we can learn the dynamic signature of the mitochondrial movement.

3. The Application

After the assay has been set up and the interaction of the cells with the drug and proteins has started, it is not quite clear what the concepts of the different phases of a cell are. This has to be learnt during the usage of the system.

Based on their knowledge, the biologists set up several descriptions for the classification of the mitochondria. They grouped them into the following classes: tubular cells, round cells, and dead cells. For the appearance of these classes, the expert could show different prototypical images (see Figure 1). It should be emphasized that the expert did not pick only one unique prototypical image; instead, he picked several prototypical images to show the variance of the objects within the respective class. This information can be taken as the starting point for the development of an automatic image classification system. We start with a set of images for each class that is limited to a small number of cases.

The aim is to learn, from this limited set of prototypical images, the important features for the object description and the conceptual description of the different classes.

The prototypical cells were selected, and the features were calculated [10]. We chose to describe the texture of the cells.

The expert rated the similarity between these prototypical images. Our dataset consisted of 223 instances with the following class partition: 36 instances of class Death, 120 instances of class Round, 47 instances of class Tubular, and 114 features for each instance.

The expert chose one prototype for each class, shown in Figure 2. The test dataset for classification then comprised 220 instances. For our experiments, we also selected 5 prototypes per class and 20 prototypes per class, respectively. The associated test datasets do not contain the prototypes.

4. Methodology

Figure 3 summarizes the knowledge acquisition process based on protoclass-based classification.

We started with one prototype for each class. This prototype is chosen by the biologist based on the appearance of the cells. It requires that the biologist has enough knowledge about the processes going on in cell-based assays and can decide what kind of reaction the cell is showing.

The discrimination power of the prototypes is checked first, based on the attribute values measured from the cells with our random set texture descriptor and on the chosen similarity measure. Note that we calculate a large number of attributes for each cell. However, many attributes do not guarantee good discrimination power between the classes. For small sample sizes it is better to come up with one or two attributes in order to ensure good classifier performance. The expert manually estimates the similarity between the prototypes and inputs these values into the system. The result of this process is the selection of the right similarity measure and the right number of attributes. With this information, a first classifier is set up and applied to real data.

Each new data item is associated with the label assigned by the classification. We evaluate the performance of the classifier manually. The biologist gives the true or gold label for the samples seen so far. This is kept in a database and serves as a gold standard for further evaluation. During this process, the expert will sort out wrongly classified data. This might happen because of too few prototypes for one class or because the samples should be divided into more classes. The decision on which technique should be applied is made based on the visual appearance of the cells. Therefore, it is necessary to display the prototypes of each class and the new samples. The biologist sorts these samples based on their visual appearance. This is not easy for a human to do and requires some experience in describing image information [6]. However, it is a standard technique in psychology, in particular gestalt psychology, and is known as categorizing or card sorting. As a result of this process, we come up with more prototypes for one class or with new classes and at least one prototype for each new class.


The discrimination power needs to be checked again based on this new dataset. New attributes, a new number of prototypes or a new similarity measure might be the output. The process is repeated until the expert is satisfied with the result. As a result of the whole process, we get a dataset of samples with true class labels, the settings for the protoclass-based classifier, the important attributes and the real prototypes. The class labels represent the categories of the cellular processes going on in the experiment. The result can now be taken as the output of the knowledge acquisition, i.e. the discovered categories, or the classifier can be used in routine work on the cell line.

5. Image Analysis

The color image is transformed into a gray-level image (see Figure 4). The image is normalized to the mean and standard deviation of the gray levels calculated from all images in order to compensate for inter-slice staining variations. Automatic thresholding is performed with the algorithm of Otsu [11]. The algorithm localizes the cells with their cytoplasmatic structure very well. We then apply morphological filters such as dilation and erosion to the image in order to get a binary mask for cutting the cells out of the image.
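A minimal sketch of this preprocessing step is given below, assuming NumPy and scikit-image are available; the function name and the exact normalization details are illustrative and not the original implementation.

```python
import numpy as np
from skimage.color import rgb2gray
from skimage.filters import threshold_otsu
from skimage.morphology import binary_closing, disk

def cell_mask(rgb_image, global_mean, global_std):
    """Gray-level conversion, normalization against the statistics of all
    images, Otsu thresholding and morphological clean-up of the cell mask."""
    gray = rgb2gray(rgb_image)                         # gray-level image in [0, 1]
    gray = (gray - gray.mean()) / (gray.std() + 1e-8)  # remove per-slice offset and scale
    gray = gray * global_std + global_mean             # re-scale to the global statistics
    mask = gray > threshold_otsu(gray)                 # automatic threshold (Otsu [11])
    return binary_closing(mask, disk(3))               # dilation/erosion smooths the binary mask
```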

The gray levels ranging from 0 to 255 are quantized into 12 intervals t. Each subimage f(x,y) containing only one cell is classified according to its gray levels into the classes t, with t = 1, 2, ..., 12. For each class, a binary image is calculated containing the value "1" for pixels whose gray-level value falls into the gray-level interval of class t and the value "0" for all other pixels (see Figure 4). In the following, we call the image f(x,y,t) a class image. Object labeling is done in the class images with the contour-following method [12]. Then the texture features of these objects are calculated for classification.
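The quantization into class images can be sketched as follows; connected-component labeling stands in here for the contour-following method of [12], and all names are illustrative.

```python
import numpy as np
from skimage.measure import label

def class_images(gray_u8, mask, s=12):
    """Quantize the 0..255 gray levels of a masked cell image into s equally
    spaced intervals and return one binary class image f(x, y, t) per interval."""
    edges = np.linspace(0, 256, s + 1)                 # equally spaced interval borders
    planes = []
    for t in range(s):
        plane = (gray_u8 >= edges[t]) & (gray_u8 < edges[t + 1]) & mask
        planes.append(plane.astype(np.uint8))
    return planes

def labeled_objects(plane):
    """Label the connected objects in one class image (used in place of the
    contour-following method of Zamperoni [12])."""
    return label(plane, connectivity=2)
```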

6. Texture Feature Description based on Random Sets

Boolean sets were invented by Matheron [13]. An in-depth description of the theory can be found in Stoyan et al. [14]. The Boolean model allows one to model and simulate a huge variety of textures, e.g. crystals, leaves, etc. The texture model X is obtained by taking various realizations of compact random sets, implanting them at Poisson points in R^n, and taking the supremum. The functional moment Q(K) of X, after Booleanization, is calculated as

Q(K) = P(K ⊂ X^c) = exp(−θ ν̄(X′ ⊕ Ǩ)),   (1)

where X′ is the compact primary random set in R^n, θ is the density of the Poisson process, Ǩ is the set K reflected at the origin, and ν̄(X′ ⊕ Ǩ) is an average measure that characterizes the geometric properties of the set of objects after dilation with the test set K. Relation (1) is the fundamental formula of the model. It completely characterizes the texture model. Q(K) does not depend on the location of K, i.e., it is stationary. One can also show that it is ergodic, so that we can take the measure from a specific portion of the space without referring to that particular portion of the space.

Formula (1) shows us that the texture model depends on two parameters:

  • the density θ of the process, and

  • an average measure that characterizes the objects. In the one-dimensional space it is the average length of the line segments, and in the two-dimensional space it is the average area and perimeter of the objects under the assumption of convex shapes.

We considered the two-dimensional case and developed a proper texture descriptor. Suppose now that we have a texture image with 8-bit gray levels. Then we can consider the texture image as the superposition of various Boolean models, each of them having a different gray level value on the scale from 0 to 255 for the objects within the bit plane.

To reduce the dimensionality of the resulting feature vector, the gray levels ranging from 0 to 255 are quantized into S intervals t (S = 12). Each image f(x,y) is classified according to its gray levels into the classes t, with t = 1, 2, ..., S. For each class, a binary image is calculated containing the value "1" for pixels whose gray-level value falls into the gray-level interval of class t and the value "0" for all other pixels. The resulting bit plane f(x,y,t) can be considered as a realization of the Boolean model. The quantization of the gray levels into S intervals was done at equal distances. In the following, we call the image f(x,y,t) a class image. In the class image, we can see many different objects. These objects are labeled with the contour-following method [12]. Afterwards, features of the bit plane and of these objects are calculated. Since it does not make sense to consider the features of every single object, due to the curse of dimensionality, we calculate the mean and standard deviation of each feature that characterizes the objects, such as the area and the contour. In addition, we calculate the number of objects and the areal density in the class image.

The list of features and their calculation are shown in Table 1. The first feature is the areal density of the class image t, which is the number of pixels labeled "1" in the class image divided by the area of the image. If all pixels of an image are labeled "1", the density is one. If no pixel in the image is labeled, the density is zero.

From the objects in the class image t, the area, a simple shape factor, and the length of the contour are calculated. As described above, not the individual features of each object are taken for classification, due to the curse of dimensionality, but the mean and the standard deviation of each feature calculated over all objects in the class image t. We also calculate the frequency of the object sizes in each class image t.
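A sketch of the per-class-image feature computation, assuming scikit-image regionprops; the circularity formula used below as the "simple shape factor" is an assumption, since the paper does not define it, and the feature names are illustrative.

```python
import numpy as np
from skimage.measure import label, regionprops

def random_set_features(plane):
    """Texture features of one class image t: areal density, object count,
    and mean/standard deviation of object area, contour length and shape factor."""
    feats = {"density": float(plane.sum()) / plane.size}   # pixels labeled '1' / image area
    regions = regionprops(label(plane, connectivity=2))
    feats["object_count"] = len(regions)
    if regions:
        areas = np.array([r.area for r in regions], dtype=float)
        contours = np.array([r.perimeter for r in regions], dtype=float)
        shapes = 4.0 * np.pi * areas / np.maximum(contours, 1.0) ** 2  # assumed circularity measure
        for name, values in (("area", areas), ("contour", contours), ("shape", shapes)):
            feats[name + "_mean"] = values.mean()
            feats[name + "_std"] = values.std()
    return feats
```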

7. Protoclass Classifiers

7.1 The Overall Method

A prototype-based classifier classifies a new sample according to the prototypes in the database and selects the most similar prototype as the output of the classifier. A proper similarity measure is necessary to perform this task, but in most applications there is no a-priori knowledge available that suggests the right similarity measure. The method of choice for selecting the proper similarity measure is therefore to apply a subset of the numerous similarity measures known from statistics to the problem and to select the one that performs best according to a quality measure such as, for example, the classification accuracy. The other choice is to build the similarity metric automatically by learning the right attributes and attribute weights. We chose the latter as one option to improve the performance of our classifier.

When people collect prototypes to construct a dataset for a prototype-based classifier, it is useful to check whether these prototypes are good prototypes. Therefore, a function is needed that performs prototype selection and reduces the number of prototypes used for classification. This results in better generalization and a more noise-tolerant classifier. If an expert selects the prototypes, this can result in bias and possible duplicates of prototypes, causing inefficiencies. Therefore, a function to assess a collection of prototypes and identify redundancy is useful. Finally, an important variable in a prototype-based classifier is the value 'k' that determines the number of closest cases considered for the final class label. Consequently, the design options the classifier has to improve its performance are prototype selection, feature-subset selection, feature-weight learning and the value 'k' of the closest cases (see Figure 1).

We assume that the classifier can start, in the worst case, with only one prototype per class. By applying the classifier to new samples, the system collects new prototypes. During the lifetime of the system, its performance will change from that of an oracle-based classifier, which classifies the samples only roughly into the expected classes, to a system with high accuracy.

In order to achieve this goal, we need methods that can work on a small number of prototypes as well as on a large number of prototypes. As long as we have only a few prototypes, feature-subset selection and learning the similarity might be the important functions the system needs. If we have more prototypes, we also need prototype selection.

For the case with a small number of prototypes, we chose methods for feature-subset selection based on the discrimination power of the attributes. We used the feature-based calculated similarity and the pairwise similarity rating of the expert and applied the adjustment theory [15] to fit the similarity value closer to the true value.

For a large number of prototypes, we chose the decremental redundancy-reduction algorithm proposed by Chang [16], which deletes prototypes as long as the classification accuracy does not decrease. The feature-subset selection is based on the wrapper approach [17], and an empirical feature-weight learning method [18] is used. Cross-validation is used to estimate the classification accuracy. A detailed description of our prototype-based classifier ProtoClass is given in [4]. The prototype selection, feature selection, and feature weighting steps are performed independently or in combination with each other in order to assess the influence these functions have on the performance of the classifier. The steps are performed during each run of the cross-validation process. The classifier scheme shown in Figure 5 is divided into the design phase (Learning Unit) and the normal classification phase (Classification Unit). The classification phase starts after we have evaluated the classifier and determined the right features, feature weights, the value of 'k' and the cases.
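Chang's algorithm [16] is used here only at the level described above: prototypes are dropped as long as the estimated accuracy does not decrease. The sketch below reflects this reading; `evaluate` is a hypothetical callback returning, for example, the cross-validated accuracy of a classifier built from the given prototypes.

```python
from typing import Callable, List, Tuple

Prototype = Tuple[List[float], str]          # (feature vector, class label)

def reduce_prototypes(prototypes: List[Prototype],
                      evaluate: Callable[[List[Prototype]], float]) -> List[Prototype]:
    """Decremental redundancy reduction: repeatedly remove a prototype whose
    removal does not decrease the accuracy estimated by `evaluate`."""
    current = list(prototypes)
    baseline = evaluate(current)
    changed = True
    while changed:
        changed = False
        for i in range(len(current)):
            candidate = current[:i] + current[i + 1:]
            accuracy = evaluate(candidate)
            if accuracy >= baseline:         # prototype i is redundant
                current, baseline = candidate, accuracy
                changed = True
                break
    return current
```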

Our classifier has a flat database instead of a hierarchy, which makes it easier to conduct the evaluations.

7.2 Classification Rule

Assume we have n prototypes that represent the m classes of the application. Then each new sample x is classified based on its closeness to the n prototypes. The new sample is associated with the class label of the prototype that is closest to it.

More precisely, we call x′ ∈ {x1, x2, …, xi, …, xn} a closest case to x if d(x, x′) = min_i d(x, xi), where i = 1, 2, …, n.

The rule classifies x into category Cj, where x′ is the closest case to x and x′ belongs to class Cj, with j ∈ {1, 2, …, m}.

In the case of the k-closest cases, we required k samples of the same class to fulfill the decision rule. As a distance measure, we can use any distance metric. In this work, we used the city-block metric.
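A minimal sketch of this decision rule with the (optionally weighted) city-block metric; rejecting a sample when the k closest cases do not all agree on one class is our reading of the rule above, and the names are illustrative.

```python
import numpy as np

def classify(x, prototypes, labels, weights=None, k=1):
    """Return the class label of the closest prototype(s) under the city-block
    distance; with k > 1, all k closest cases must belong to the same class."""
    X = np.asarray(prototypes, dtype=float)
    w = np.ones(X.shape[1]) if weights is None else np.asarray(weights, dtype=float)
    d = np.abs(X - np.asarray(x, dtype=float)).dot(w)   # weighted city-block distance
    nearest = [labels[i] for i in np.argsort(d)[:k]]
    return nearest[0] if nearest.count(nearest[0]) == k else None
```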

The pairwise similarity measure Sim_ij among our prototypes shows us the discrimination power of the chosen prototypes based on the features.


The calculated feature set need not be the optimal feature subset. The discriminatory power of the features must be checked later. For a small number of prototypes, we can let the expert judge the similarity SimE_ij between the prototypes. This gives us further information about the problem, which can be used to tune the designed classifier.
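The pairwise check can be sketched as follows, assuming the features are first scaled to [0,1] from the prototype values as described in the next subsection; prototypes of different classes with a similarity close to 0 (identity) indicate weak discrimination power of the current feature set.

```python
import numpy as np

def pairwise_similarity(prototypes):
    """Pairwise similarity Sim_ij (0 = identical, larger = more dissimilar)
    between prototypes under the normalized city-block measure."""
    X = np.asarray(prototypes, dtype=float)
    span = X.max(axis=0) - X.min(axis=0)
    span[span == 0] = 1.0                              # guard against constant features
    Xn = (X - X.min(axis=0)) / span                    # scale every feature to [0, 1]
    diff = np.abs(Xn[:, None, :] - Xn[None, :, :])     # |x_il - x_jl| for all pairs (i, j)
    return diff.mean(axis=2)                           # average over the N attributes
```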

7.3 Using the Expert's Judgment on Similarity and the Calculated Similarity to Adjust the System

Humans can judge the similarity SimE_ij among objects on a scale between 0 (identity) and 1 (dissimilar). We can use this information to adjust the system to the true system parameters [15].

Using the city-block distance as the distance measure, we get the following linear system of equations:

SimE_ij = Σ_{l=1..N} a_l · |x_il − x_jl|,

with x_il the feature l of the i-th prototype and N the number of attributes.

The factor a_l is the normalization of feature l to the range [0,1], with a_l = 1 / (x_l,max − x_l,min) calculated from the prototypes. That this is not the true range of the feature values is clear, since we have only few samples. The factor a_l is adjusted closer to the true value by the least-squares method using the expert's SimE_ij:

min over a_l of  Σ_{i<j} ( SimE_ij − Σ_{l=1..N} a_l · |x_il − x_jl| )²,

with the restriction a_l > 0.
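A sketch of this weight adjustment, using non-negative least squares as one way to realize the restriction on the a_l; SciPy's nnls solver is an assumption of this sketch, not necessarily the solver used in the paper.

```python
import numpy as np
from scipy.optimize import nnls

def adjust_weights(prototypes, expert_sim):
    """Fit the weights a_l so that sum_l a_l*|x_il - x_jl| matches the expert's
    pairwise similarity ratings SimE_ij in the least-squares sense, with a_l >= 0."""
    X = np.asarray(prototypes, dtype=float)
    rows, targets = [], []
    n = len(X)
    for i in range(n):
        for j in range(i + 1, n):
            rows.append(np.abs(X[i] - X[j]))          # |x_il - x_jl| for every attribute l
            targets.append(expert_sim[i][j])          # expert rating, 0 = identity
    a, _residual = nnls(np.asarray(rows), np.asarray(targets))
    return a
```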

8. Results

Figure 6a shows the classification accuracy for different numbers of prototypes using all attributes, and Figure 6b shows the accuracy of a test set based on only the three most discriminating attributes. The test shows that the classification accuracy is already reasonable for only three prototypes and increases with the number of prototypes. The selection of the right subset of features can also improve the accuracy and can be done, for a low number of samples, based on the method presented in Section 7. A properly chosen number of closest cases k can also help to improve the accuracy, but cannot be applied if we have only three or fewer prototypes in the database.

Figure 7 shows the classification results for the 220 instances, first without adjustment, meaning the weights a_l are all equal to one (1; 1; 1), and then with adjustment based on the expert's rating, where the weights are (0.00546448; 0.00502579; 0.00202621) as the outcome of the minimization problem.

Table 2 shows the pairwise difference values of the three prototypes. The result shows that the accuracy can be improved by applying the adjustment theory, and especially that the class-specific quality is improved.

The application of the methods to larger sample sets did not bring any significant reduction in the number of prototypes (see Figure 9) or in the feature subset (see Figure 10). The prototype selection method reduced the number of prototypes by only three. We take this as an indication that we do not yet have enough prototypes and that the accuracy of the classifier can be improved by collecting more prototypes. How these functions worked on another dataset can be found in [18].


In summary, we have shown that the chosen methods are valuable methods for a prototype-based classifier and can improve the classifier performance. For future work, we will further investigate the adjustment theory as a method to learn the importance of features and to perform feature-subset selection when only a small number of samples is available.

9. Conclusions

We have presented our results on prototype-based classification. Such a method can be used for incremental knowledge acquisition and automatic image classification. Therefore, the classifier needs methods that can work with small numbers of prototypes as well as with large numbers of prototypes. Our results show that feature-subset selection based on the discrimination power of a feature is a good method for small numbers of prototypes. The adjustment theory in combination with an expert's similarity judgment can be used to learn the true concept description of a class in the case of few prototypes. If we have a large number of prototypes, an option for prototype selection that can check for redundant prototypes is necessary.

The system can start to work with a low number of prototypes and can constantly collect samples during its usage. These samples get the label of the closest case. The system performance improves the more prototypes the system has in its database. That means an iterative process of labeled sample collection based on prototype-based classification is necessary, followed by a revision of these samples after some time in order to sort out wrongly classified samples, until the system performance has stabilized.

The system is tested in the study of the internal mitochondrial movement of cells. The biologist knows from the literature how the different signatures of mitochondrial movement of cells should look. Based on this knowledge he can pick prototypical images that are the starting point for our system development. If we give him an introduction to the concept of similarity [17], he is also able to give a value for the pairwise similarity between the different prototypical images. These values and the calculated similarity values can be used by our adjustment function to come close to the true similarity value. It reduces the influence of the uncertainty in the features.

10. Acknowledgement

This project has been sponsored by the Ministry of Science and Technology within the project "Quantitative Measurement of Dynamic Time-Dependent Cellular Events", QuantPro (BMBF 0313831B).


Figure 1: Sample Images for the three Classes.



Figure 2: The Prototypes for the classes Death, Round and Tubular.



Figure 3: Methodology for Prototype-based Classification.

 



Figure 4: Examples of Cell Images for 10 different Classes.




Figure 5: Prototype-based Classifier.






Figure 6a: Accuracy for different numbers of prototypes using all attributes. Figure 6b: Accuracy for different numbers of prototypes using 3 attributes (Area5, ObjCtn0, ConSk3).



Figure 7: Accuracy depending on the choice of attributes (k = 1).



Figure 9: Number of removed Prototypes.




Table 1: Texture Features based on Random Sets.

 

 

            B6_23               B03_22              F10_2
B6_23       0                   0.669503257 (0.8)   0.989071038 (0.6)
B03_22      0.669503257 (0.8)   0                   0.341425705 (0.9)
F10_2       0.989071038 (0.6)   0.341425705 (0.9)   0

 

Table 2: Difference between the 3 Prototypes using the 3 attributes (ObjCnt0, ArSig0, ObjCnt1).

 

References

1. Bereiter-Hahn J, Voth M (1994) Dynamics of Mitochondria in Living Cells: Shape Changes, Dislocations, Fusion, and Fission of Mitochondria. Microscopy Research and Technique 27: 198-219.

2. Johnson LV, Walsh ML, Chen LB (1980) Localization of mitochondria in living cells with rhodamine 123. Proc Natl Acad Sci USA 77: 990-994.

3. Schmidt R, Gierl L (2001) Temporal Abstractions and Case-Based Reasoning for Medical Course Data: Two Prognostic Applications. In: Perner P (ed) Machine Learning and Data Mining in Pattern Recognition, MLDM 2001, LNAI 2123, Springer-Verlag, Berlin Heidelberg: 23-34.

4. Perner P (2008) Prototype-Based Classification. Applied Intelligence 28: 238-246.

5. Nilsson M, Funk P (2004) A Case-Based Classification of Respiratory Sinus Arrhythmia. In: Funk P, Gonzalez Calero PA (eds) Advances in Case-Based Reasoning, ECCBR 2004, LNAI 3155, Springer-Verlag, Berlin Heidelberg: 673-685.

6. Aha DW, Kibler D, Albert MK (1991) Instance-Based Learning Algorithms. Machine Learning 6: 37-66.

7. Sachs-Hombach K (2002) Bildbegriff und Bildwissenschaft. In: Gerhardus D, Rompza S (eds) kunst - gestaltung - design, Heft 8: 1-38. Verlag St. Johann, Saarbrücken.

8. Krausz E, Prechtl S, Stelzer EHK, Bork P, Perner P (2006) Quantitative Measurement of Dynamic Time-Dependent Cellular Events. Project Description.

9. Perner P, Perner H, Müller B (2002) Mining Knowledge for HEp-2 Cell Image Classification. Journal Artificial Intelligence in Medicine 26: 161-173.

10. Perner P (2008) Novel Computerized Methods in System Biology - Flexible High-Content Image Analysis and Interpretation System for Cell Images. In: Perner P, Salvetti O (eds) Advances in Mass Data Analysis of Images and Signals in Medicine, Biotechnology, Chemistry and Food Industry, MDA 2008, LNAI 5108: 139-157. Springer-Verlag.

11. Otsu N (1979) A threshold selection method from gray-level histograms. IEEE Trans SMC-9: 38-52.

12. Zamperoni P (1996) Feature Extraction. In: Maitre H, Zinn-Justin J (eds) Progress in Picture Processing, Elsevier Science: 123-184.

13. Matheron G (1975) Random Sets and Integral Geometry. J. Wiley & Sons, New York, London.

14. Stoyan D, Kendall WS, Mecke J (1997) Stochastic Geometry and Its Applications. Akademie Verlag.

15. Niemeier W (2008) Ausgleichsrechnung. de Gruyter, Berlin, New York.

16. Chang CL (1974) Finding Prototypes for Nearest Neighbor Classifiers. IEEE Trans. on Computers C-23(11).

17. Perner P (2002) Data Mining on Multimedia Data. LNCS 2558, Springer-Verlag: 1-131.

18. Little S, Colantonio S, Salvetti O, Perner P (2010) Evaluation of Feature Subset Selection, Feature Weighting, and Prototype Selection for Biomedical Applications. Journal of Software Engineering & Applications 3: 39-49.
