PRELIMINARY CONSIDERATIONS ABOUT DEEP LEARNING
The theory of neural networks and their algorithms was first proposed in the 1940s. Since then, at the engineering level, much of the applied work has centered on data mining.
At the time, however, computational capacity was not sufficient to handle these calculations effectively. The emergence of Big Data, with large volumes of structured and unstructured data, together with the improvement of HPC systems, has made effective machine learning and deep learning systems achievable. The key point is that in recent years data has grown exponentially, especially in sectors such as medical imaging and bioinformatics.
In these environments, deep learning addresses the need to search for patterns among hundreds or thousands of unstructured data items, whether images, behavioral records, or other sources.
Companies like NVIDIA or Intel (with its acquisition of Altera) are interested in these technologies because, although it is possible to train models on general-purpose processors such as Broadwell, in terms of performance it is more efficient to offload this task from the main processor and perform it on a coprocessor. Since single (or even half) precision is sufficient for this work, GPU cards such as NVIDIA's, with more than 3,000 cores, are an ideal element for tackling this problem. In addition, the Pascal architecture (compared with the older Kepler or even Maxwell) in some cases doubles TFLOPS performance and also doubles memory bandwidth, which has greatly improved efficiency for this kind of calculation.
These systems, formerly referred to simply as neural networks, behave better with a large number of less powerful cores (such as those offered by GPUs and coprocessors, whether FPGA or Xeon Phi) than on, for example, Xeon Broadwell processors, which excel at general-purpose processing and at running operating systems.
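As a rough illustration of why offloading work to a GPU and dropping to reduced precision pays off, the sketch below times a large matrix multiplication on the CPU in single precision and, if a CUDA device is present, again on the GPU in half precision. It is a minimal sketch assuming the PyTorch library; the matrix size and the simple wall-clock timing are illustrative only, not a rigorous benchmark.

    import time
    import torch

    N = 4096  # illustrative matrix size

    # Single-precision multiply on the host CPU
    a_cpu = torch.randn(N, N, dtype=torch.float32)
    b_cpu = torch.randn(N, N, dtype=torch.float32)
    t0 = time.time()
    c_cpu = a_cpu @ b_cpu
    print(f"CPU (float32): {time.time() - t0:.3f} s")

    # Half-precision multiply offloaded to the GPU, if one is available
    if torch.cuda.is_available():
        a_gpu = a_cpu.half().cuda()
        b_gpu = b_cpu.half().cuda()
        torch.cuda.synchronize()      # make sure the transfers have finished
        t0 = time.time()
        c_gpu = a_gpu @ b_gpu
        torch.cuda.synchronize()      # wait for the kernel to complete
        print(f"GPU (float16): {time.time() - t0:.3f} s")

On typical hardware the GPU version finishes far faster, which is the practical reason training is offloaded to coprocessors rather than kept on the general-purpose CPU.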
A software ecosystem of great interest is also beginning to take shape, with proprietary tools such as MATLAB and open-source tools such as R and Python. Google and other companies have also developed their own algorithms.
The theoretical part is perhaps what interests researchers the most. However, this basic research also has great practical application in fields such as the following:
Computer-aided detection
This is likely to be the most common application in the near future due to its relative ease of implementation. These systems can alert radiologists to suspicious lesions that require further review, which makes analysis faster and reduces the risk of human error.
Quantitative imaging
Quantitative imaging tools help specialists segment, visualize and quantify what they are seeing. This can streamline workflows and simplify decision making.
Decision support tools
These combine the two areas above to create automated workflow tools for professionals. After an initial analysis carried out automatically by the system, they can provide guidelines and specific targets for detailed review.
Computer-aided diagnosis
Computer-aided diagnostic tools combine all of the aforementioned tools to offer diagnoses supported by probabilities. The systems will not make the final diagnosis, but if a radiologist is informed by the output of a deep learning algorithm that a certain diagnosis has about a ninety percent chance of being correct, the radiologist can focus on confirming that diagnosis instead of spending time reviewing other cases in which the probability of a diagnosis is very low.
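As a hedged illustration of how such probability-supported output might be used to prioritise a radiologist's worklist, the sketch below ranks hypothetical model outputs by predicted probability and flags high-confidence cases first. The case identifiers, probabilities and the 0.9 threshold are assumptions made purely for the example, not values from any specific product.

    # Hypothetical per-case probabilities produced by a trained deep learning model
    predictions = {
        "case_001": 0.92,   # very likely positive: review first
        "case_002": 0.11,
        "case_003": 0.87,
        "case_004": 0.03,
    }

    REVIEW_THRESHOLD = 0.90  # illustrative cut-off, not a clinical recommendation

    # Sort cases so the radiologist sees the most probable findings first
    for case_id, prob in sorted(predictions.items(), key=lambda kv: kv[1], reverse=True):
        flag = "REVIEW FIRST" if prob >= REVIEW_THRESHOLD else "lower priority"
        print(f"{case_id}: p={prob:.2f} -> {flag}")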
At the base of all this is a self-trained artificial neural network that has learned from thousands of cases so far and will continue to improve with each new case it analyzes.
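One way to read "improves with each new case" is incremental (online) learning, where the model's parameters are updated as new labelled cases arrive rather than retraining from scratch. The sketch below uses scikit-learn's SGDClassifier with partial_fit as a stand-in for that idea; the synthetic data and feature dimensions are assumptions made only for illustration.

    import numpy as np
    from sklearn.linear_model import SGDClassifier

    rng = np.random.default_rng(0)
    n_features = 20                      # illustrative feature vector size

    # Model trained incrementally: logistic regression fitted with SGD
    model = SGDClassifier(loss="log_loss")

    # Initial batch of "historical" cases (synthetic stand-ins for real data)
    X_init = rng.normal(size=(1000, n_features))
    y_init = (X_init[:, 0] > 0).astype(int)
    model.partial_fit(X_init, y_init, classes=[0, 1])

    # Each newly analysed case can then update the same model in place
    x_new = rng.normal(size=(1, n_features))
    y_new = np.array([1])
    model.partial_fit(x_new, y_new)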
Returning to the MATLAB tool, recognition of patterns in images or even video is beginning to reach success rates on the order of 80-90% once the system has been well trained on previous patterns.
Within MATLAB, machine learning for computer vision works very well, producing learned models that are quite accurate when similar images must be identified.
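To make the idea of a "well-trained system" evaluated by its success rate concrete, here is a minimal image-classification experiment, written in Python rather than MATLAB only to stay consistent with the other sketches in this section. It trains a small neural network on scikit-learn's bundled 8x8 handwritten-digit images and reports test accuracy; the classifier, its size and the train/test split are illustrative assumptions.

    from sklearn.datasets import load_digits
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier
    from sklearn.metrics import accuracy_score

    # Small 8x8 digit images bundled with scikit-learn, flattened to feature vectors
    X, y = load_digits(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.25, random_state=0
    )

    # A small multilayer perceptron as a stand-in for a "well-trained" model
    clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
    clf.fit(X_train, y_train)

    print(f"Test accuracy: {accuracy_score(y_test, clf.predict(X_test)):.2%}")

The same workflow applies to larger images or video frames: extract or learn features, train on labelled examples, and measure accuracy on held-out data before trusting the system in production.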
SIE, as an Intel Platinum Partner, HPC Specialist and NVIDIA Partner, has had access to the latest technologies in this field, both NVIDIA Pascal technology with CUDA 8 and the new Intel Xeon Phi Knights Landing implementations, with which it has extensive experience in the HPC field.