Advances in Biochemistry and Biotechnology

Volume 2017; Issue 06
7 Nov 2017

On the Big Data Application to the Practice and Theory of Biomedicine

Review Article

Simon Berkovich*

Department of Computer Science, The George Washington University, Washington, USA

*Corresponding author: Simon Berkovich, Department of Computer Science, The George Washington University, Washington, DC 20052, USA. Email:

Received Date: 11 October, 2017; Accepted Date: 26 October, 2017; Published Date: 2 November, 2017







The Shock of the Big Data


“You always have to go for bigger and better things” Beth Movins


The traditional way of developing science, as well as of making decisions in complicated business situations, is to collect as much information as possible in order to generate rational, tested suggestions. Such an approach takes for granted that the more information can be obtained, the more successful this tactic should be. So, it comes as a surprise that, notwithstanding the remarkable technological advances in handling greatly increased amounts of information, this obvious “Big Data” approach does not bring the expected results [1].


The reason for this troublesome situation is twofold. To begin with, according to a well-recognized philosophical dictum, a simple accumulation of quantitative differences beyond a certain point turns into a sudden qualitative jump. Thus, firstly, a mere increase in the applied computational power does not bring the expected outcomes from processing large amounts of information. Apparently, the brain works as a “Big Data” machine, and the corresponding algorithms, if discovered, would be beneficial to imitate [2]. Information processing is basically determined by the realization of a computational model: an abstract construction for symbol manipulations. The primitive computational model is presented by the Turing Machine, which works on individual symbols using a sequential-access memory. The Turing Machine is mostly of theoretical interest, serving as a formal definition of an algorithm. Practical information processing started with von Neumann’s model, which operates on joined words using a random-access memory. Trivial efforts to extend von Neumann’s model by means of parallelization stumble gravely upon software complexities and the limitations of Amdahl’s law. So, to reach Big Data performance, the brain should rely on a completely different computational model [2]. This computational model has to operate effectively with composite items using a content-addressable memory, restricting prompt, versatile access only to relevant information, analogously to Google’s PageRank. The relevant information is chosen explicitly, while the rest of the information affects these choices only implicitly. As a result, the surmised Big Data computational model for the brain exhibits the main Freudian characteristic: a considerable role for the subconscious.
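
The contrast between random-access and content-addressable retrieval can be illustrated with a toy sketch (the function, data, and mismatch threshold here are purely illustrative assumptions, not part of the cited work): items are selected by how closely their binary content matches a probe pattern, so only near matches become explicit while the rest of the memory stays implicit.

```python
# Hypothetical illustration of content-addressable retrieval:
# items are fetched by content similarity to a probe, not by address.
def content_addressable_lookup(memory, probe, max_mismatches=1):
    """Return values whose binary keys differ from the probe in at
    most `max_mismatches` bit positions (a crude relevance filter)."""
    hits = []
    for key, value in memory.items():
        mismatches = bin(key ^ probe).count("1")  # Hamming distance
        if mismatches <= max_mismatches:
            hits.append(value)
    return hits

memory = {0b1010: "item A", 0b1011: "item B", 0b0101: "item C"}
# probing with 0b1010 retrieves A exactly and B as a near match;
# item C remains implicit (too far from the probe)
print(content_addressable_lookup(memory, 0b1010))
```

In a random-access memory, by contrast, one would have to know the item’s address in advance; here the probe pattern itself selects the relevant items.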


Big Data Processing Through Cluster Access


“Where does a thought go when it’s forgotten?” Sigmund Freud


Thus, the surmised computational model entails a qualitatively different organization of Big Data processing. For the brain, this computational model can be implemented through cloud computing within the framework of the holographic resources of the Physical Universe [2-4].


The distinctive feature of the given Big Data computational model is the separation of the operational roles of explicit and implicit information. With conventional computer equipment, this feature can be implemented through a special clustering technique applied to data items having binary-encoded attributes [5,6]. A rough outline of the involved technicalities is given in [7].
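
A minimal sketch of what clustering items by binary-encoded attributes might look like (the mask, the data, and the grouping rule are illustrative assumptions; the actual fault-tolerant indexing technique of [5,6] is more involved): items are grouped by a signature formed from selected attribute bits.

```python
# Illustrative only: group binary-attribute items into clusters by a
# signature of selected attribute bits (chosen via a bit mask).
from collections import defaultdict

def cluster_by_signature(items, mask):
    """Group binary-attribute items by the bits selected by `mask`."""
    clusters = defaultdict(list)
    for item in items:
        clusters[item & mask].append(item)  # signature = masked bits
    return dict(clusters)

items = [0b110010, 0b110001, 0b001110, 0b001101]
# cluster on the two high-order attribute bits (mask is an assumption)
clusters = cluster_by_signature(items, mask=0b110000)
print(clusters)
```

Items sharing the masked attribute bits land in the same cluster, so later access can address a whole cluster of related items at once rather than each item individually.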


The presented approach provides a novel type of access to information objects: through the ensemble of clusters they belong to, rather than through their own individual information content. A brief review of such possibilities is presented in [8].


Accordingly, access to a Big Data system is to be accomplished by a novel query type, “exemplar” searching [9], which treats a user query merely as an example of the data in which the user is actually interested. Subsequently, after the appropriate clusters are selected, the user has to further analyze their characteristics as a requisite part of the algorithm for system exploration.
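
The idea can be sketched in a toy form (cluster contents, representatives, and the mismatch threshold are hypothetical, not the algorithm of [9]): the query is treated as an example, and the system returns the other members of whichever clusters the example falls into.

```python
# Hypothetical sketch of "exemplar" searching: the query is only an
# example; results come from the clusters the example belongs to.
def exemplar_search(clusters, query, max_mismatches=1):
    """Return members of every cluster whose representative is within
    `max_mismatches` bits of the example query (excluding the query)."""
    results = []
    for representative, members in clusters.items():
        if bin(representative ^ query).count("1") <= max_mismatches:
            results.extend(m for m in members if m != query)
    return results

clusters = {0b1100: [0b1100, 0b1101], 0b0011: [0b0011, 0b0111]}
# the query 0b1101 is only an example; its cluster-mates are returned
print(exemplar_search(clusters, 0b1101))
```

The returned cluster members are not answers in themselves; as the text notes, the user still has to analyze the selected clusters’ characteristics as part of exploring the system.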


The Inductive Comprehension of Complex Phenomena


“Everyone complains of his memory, and no one complains of his judgment.” – François de La Rochefoucauld


Thus, Big Data possibilities allow the process of knowledge creation to develop contrary to the ubiquitous way of comprehending complex phenomena, which proceeds from understanding simple facts first. Normally, this ubiquitous way of understanding implies the existence of a paradigm that has to be basically confirmed, perhaps with some ad hoc modifications. On the other hand, any “Big Data” system, natural or artificial, is definitely subject to various kinds of interpretations, so these systems cannot be appropriately understood solely from data depictions without employing a suitable paradigm. As a result, the evidence-based rough complexity of “Big Data” is more amenable to the discovery of adequate paradigms than the elegant simplicity of well-designed developments that may end up wandering in blind alleys. Thus, the Big Data approach to biomedicine may lead to a long-awaited elucidation of fundamental science, whereas the continuing expansion of a predictable paradigm cannot get rid of the accrued inconsistencies.


Thus, revealing the underlying paradigm is a precondition for the beneficial application of Big Data, particularly for precision medicine, which essentially involves the main diversities of the universal Weltanschauung paradigm: “the whole general culture and social structure”. For example, consider a group of people walking through a forest who see and respond to their environment in different ways: “The lumberjack sees the forest as a source of wood, the artist as something to paint, the hunter as various forms of cover for game, and the hiker as a natural setting to explore” [10].


Perception in modern science takes place essentially through the mind rather than through computer-processed data, so inward intention and general disposition most strongly affect what is “seen”. For example, building and operating elementary particle accelerators almost subliminally predisposes scientists to develop theories in terms of particles; the whole social structure of physics has the effect of confirming the particle hypothesis of matter [10]. A similar situation occurs in biomedicine, particularly in the massive attempts at studying the brain to corroborate neural net expectations [1]. The relation between mind and body is considered the most intractable problem of science: “So, philosophical wisdom would consist in giving up the attempt to understand the relation in terms of other, more familiar ones and accepting it as the anomaly it is” [11]. Thus, it becomes extremely urgent to determine an adequate paradigm for the operational organization of living matter (see our short preliminary note [12]; work on an extension of this article is in progress).


Towards the Ultimate Understanding of Nature


“Empty is the argument of the philosopher which does not relieve any human suffering.” – Epicurus


As long as the human mind is a part of the physical world, revealing the right paradigm for Nature could be more effectual through the complex versatility of the mind than through the deceptive simplicity of inanimate matter.


Nowadays, humankind confronts two fundamental and seemingly almost unrelated problems: an overabundance of information and a dearth of energy. Yet the Big Data circumstances are closely connected to the inherent workings of Nature [13]. So, an appropriate contemplation of Big Data in connection with biological energy is necessary for revealing the general Weltanschauung paradigm, which is vital for the overall well-being of modern society.



  1. Scudellari M (2014) Scientists Question the Big Price Tags of Big Data.
  2. Berkovich S (2014) Formation of Artificial and Natural Intelligence in Big Data Environment. In: Network Science and Cybersecurity, Advances in Information Security 55. Springer: 189-203.
  3. Berkovich S (1993) On the information processing capabilities of the brain: shifting the paradigm. Nanobiology 2: 99-107.
  4. Berkovich S (2014) Organization of the brain in light of the Big Data philosophy. Fifth International Conference on Computing for Geospatial Research and Application: 91-92.
  5. Berkovich E (2007) Method of and system for searching a data dictionary with fault tolerant indexing. US Patent 7,168,025.
  6. Berkovich S, Liao D (2012) On clusterization of “big data” streams. COM.Geo ’12: Proc. of the 3rd International Conference for Geospatial Research and Applications. ACM, New York.
  7. Liao D, Yammahi M, Alhudhaif A, Alsaby F, AlGemili U, et al. (2016) A Qualitatively Different Principle for the Organization of Big Data Processing. In: Hu F (ed) Big Data: Storage, Sharing, and Security. CRC Press, Chapter 7: 171-198.
  8. Berkovich S (2017) Organization of Intelligent Memory Introducing Cluster Access 3.
  9. Mottin D, Lissandrini M, Velegrakis Y, Palpanas T (2015) Exemplar queries: a new way of searching. The VLDB Journal 25: 741-765.
  10. Bohm D, Peat FD (1987) Science, Order, and Creativity. Bantam Books, New York.
  11. Shaffer J (1972) Mind-Body Problem. In: The Encyclopedia of Philosophy 5: 345.
  12. Berkovich S (2016) Connotation of life beyond molecular biology. Advanced Techniques in Biology & Medicine.
  13. Berkovich S. Physical world as an Internet of Things.
Suggested Citation


Citation: Berkovich S (2017) On the Big Data Application to the Practice and Theory of Biomedicine. Adv Biochem Biotechnol: ABIO-145. DOI: 10.29011/2574-7258.000045
