In my previous blog, I explained the differences and similarities between terms such as AI, analytics, KDD, and other data-related technologies such as machine learning and deep learning. I also observed that some terms are used interchangeably, which illustrates just how popular everything related to AI and analytics has become.
There are, however, some acronyms that are not open to interpretation and that tell us a lot about the evolution of AI in recent years (and decades). I am referring to the processor-related acronyms CPU, GPU and TPU.
Central Processing Unit
CPU is probably the best-known acronym. It stands for Central Processing Unit and has been in use for decades, ever since the 1960s. Throughout those decades, chip vendors and other technology organizations have been making CPUs faster and more powerful while continuously shrinking them in size. Moore's Law, which - simply put - observes that the number of transistors on a chip (and with it, roughly, its processing power) doubles every 18 to 24 months, has been convincing proof of their success.
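To get a feel for what that doubling cadence implies, here is a minimal sketch of Moore's Law as simple exponential growth. The function name and the 24-month doubling period are illustrative assumptions, not anything from the original law's formal statement:

```python
def moores_law_factor(years: float, doubling_period_years: float = 2.0) -> float:
    """Return the multiplicative growth after `years`, assuming one
    doubling every `doubling_period_years` (24 months by default)."""
    return 2 ** (years / doubling_period_years)

# Doubling every two years compounds quickly: a 32x increase per decade.
print(moores_law_factor(10))  # → 32.0
```

Even at the slower end of the 18-to-24-month range, the compounding effect is what let vendors ride CPU improvements for so long.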
For years, AI and analytics vendors followed this Moore's Law cycle to improve the performance of their own technology, and everybody seemed quite content with that. But as the use of analytics in online and real-time processes increased, the need for even more powerful processors grew steadily. That is why those vendors turned to GPUs.
Graphics Processing Unit
GPU stands for Graphics Processing Unit. The best way to describe it is as a processor designed specifically for graphical applications and environments. As workloads shifted toward more complex data such as images and video, the graphics world found that the traditional CPU was too slow for its specific tasks. That is why the GPU came into existence. GPUs are built for parallel processing: instead of a few powerful cores, they contain many simple ones that compute thousands of independent values (such as pixels in a frame buffer) at the same time. And, as explained, this technology is increasingly being embraced by data-intensive organizations and applications as well.
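The key property that makes a workload GPU-friendly is that each output element depends only on its own inputs, so all of them can in principle be computed simultaneously. A toy illustration (a hypothetical pixel-brightening function, not an actual GPU kernel):

```python
def brighten(pixels: list[int], amount: int) -> list[int]:
    """Raise each pixel value by `amount`, clamped to the 0-255 range.
    Every pixel is adjusted independently of every other pixel --
    exactly the kind of work a GPU spreads across thousands of cores."""
    return [min(255, p + amount) for p in pixels]

print(brighten([10, 120, 250], 20))  # → [30, 140, 255]
```

The same "embarrassingly parallel" shape shows up in analytics and machine learning, which is why data-intensive workloads map so naturally onto GPUs.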
Tensor Processing Unit
Now, as living proof that data analysis and machine learning have reached an even higher level, more and more organizations are looking at TPUs as the next processing platform. TPU stands for Tensor Processing Unit and refers to a family of custom chips developed by Google. They are generally considered the first type of processor designed with AI and analytics in mind. Google developed the TPU for use within its own data centers to accelerate neural network machine learning. And it is being eagerly adopted by SAS and other leading analytics companies as a means to lift AI and machine learning to the next level.
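Why "Tensor"? TPUs are built around fast matrix (tensor) multiplication, the operation at the heart of neural networks. A minimal pure-Python sketch of that core workload, with illustrative example values:

```python
def matmul(a: list[list[int]], b: list[list[int]]) -> list[list[int]]:
    """Multiply matrix `a` (m x n) by matrix `b` (n x p).
    This is the operation a TPU's hardware is specialized to perform
    at massive scale."""
    rows, inner, cols = len(a), len(b), len(b[0])
    return [[sum(a[i][k] * b[k][j] for k in range(inner))
             for j in range(cols)] for i in range(rows)]

# A neural-network layer is essentially inputs times weights:
inputs = [[1, 2]]           # one sample with two features
weights = [[3, 4], [5, 6]]  # mapping two inputs to two outputs
print(matmul(inputs, weights))  # → [[13, 16]]
```

A deep network repeats this multiply step millions of times during training, which is why a chip dedicated to it makes such a difference.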
CPU, GPU, TPU … one letter of difference that perfectly illustrates how the world of AI and analytics has never stopped growing, meeting ever-higher requirements and turning new dreams into reality. We can't wait to find out which letter is next!