Like other purpose-built accelerators, such as graphics processing units (GPUs), accelerated processing units (APUs) and physics processing units (PPUs), AI accelerators are designed to perform their particular tasks far more efficiently than traditional CPUs, such as the common x86 derivatives in most desktops and notebooks, can. A purpose-made accelerator delivers greater performance, more features and better power efficiency for its given task.
Some computing tasks can be massively parallel, including many in AI. A GPU accelerates such tasks well using the many simple cores that are normally used to render pixels to a screen. Used as a general-purpose GPU (GPGPU), a graphics card can be applied to great effect in massively parallel computing workloads such as AI, where it can deliver up to 10 times the performance of a CPU.
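A minimal sketch of the data-parallel pattern described above, using NumPy vectorization as a stand-in for GPU execution (the function names are illustrative, not from any particular framework): the same elementwise arithmetic can be written as a scalar loop, the way a single CPU core works through it, or as one whole-array operation, the way GPGPU frameworks such as CUDA and OpenCL map each element to its own thread.

```python
import numpy as np

def scalar_multiply_add(a, b, c):
    # CPU-style: one element at a time on a single core.
    out = [0.0] * len(a)
    for i in range(len(a)):
        out[i] = a[i] * b[i] + c[i]
    return out

def parallel_multiply_add(a, b, c):
    # Data-parallel style: one operation over the whole array.
    # On a GPU, each element would be computed by a separate thread.
    return a * b + c

a = np.arange(4, dtype=float)   # [0, 1, 2, 3]
b = np.full(4, 2.0)
c = np.ones(4)

print(parallel_multiply_add(a, b, c))  # [1. 3. 5. 7.]
```

Both forms compute the same result; the vectorized form simply exposes the independence of each element, which is what lets thousands of simple GPU cores work on it at once.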
AI accelerator designs are also generally focused on multicore implementations. These cores are built for the simpler arithmetic functions common to AI, where the sheer number of such operations required for a task can make it intractable for traditional computing approaches. Such was the case with the game of Go tackled by Google DeepMind's AlphaGo project: the number of possible board positions made a brute-force approach infeasible, and many clever algorithmic adjustments had to be made despite massive hardware power. With purpose-designed application-specific integrated circuits (ASICs), it is believed that efficiency can be even greater than that achieved with GPGPU, which can benefit edge AI tasks such as autonomous driving.
Current hardware for AI acceleration includes Google Tensor, Adapteva Epiphany, Intel