Building a Neural Network

A neural network needs thousands of perceptrons, each acting as a simple processor that executes the same function over and over on changing data. What is required, then, is thousands of simple processors with a very small instruction set running in parallel. This is the domain of High-Performance Computing (HPC), and it is very different from the processor in your personal computer or smartphone, which executes long sequences of varied instructions and therefore needs a complex architecture. Interestingly, the same parallel architectures are used for graphics processing in video games. As a result, Graphics Processing Unit (GPU) components (e.g. from NVidia), typically found in advanced video game consoles and PCs, provided an excellent starting point for building fast, compact and low-cost neural computing hardware. Dedicated hardware for deep learning neural networks has since become the biggest rage in computer hardware, with AI-focused companies like Google, Facebook, IBM and Apple building their own components for AI, alongside traditional semiconductor companies like Intel, Qualcomm and NVidia.
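To make the "same function, different data" point concrete, here is a minimal sketch (not from any particular framework) of one layer of perceptrons written with NumPy. Every perceptron computes the same weighted sum followed by an activation; only the weights and inputs differ, which is exactly the kind of uniform, data-parallel workload a GPU can run thousands of times simultaneously. The function and variable names are illustrative choices, not a standard API.

```python
# Illustrative sketch: one layer of perceptrons as a single data-parallel operation.
import numpy as np

def layer_forward(inputs, weights, biases):
    """Run a whole layer of perceptrons at once.

    inputs  : (batch_size, n_inputs)     -- the changing data
    weights : (n_inputs, n_perceptrons)  -- one weight column per perceptron
    biases  : (n_perceptrons,)
    """
    # Every perceptron performs the same multiply-accumulate step;
    # parallel hardware can execute these identical operations side by side.
    z = inputs @ weights + biases
    return 1.0 / (1.0 + np.exp(-z))  # sigmoid activation

# Example: a batch of 64 samples through a layer of 4,096 perceptrons.
rng = np.random.default_rng(0)
x = rng.standard_normal((64, 784))
W = rng.standard_normal((784, 4096)) * 0.01
b = np.zeros(4096)
print(layer_forward(x, W, b).shape)  # (64, 4096)
```

On a CPU this loop over thousands of identical multiply-accumulates runs largely in sequence; on a GPU or dedicated deep-learning chip it is spread across thousands of simple cores, which is why that hardware is such a natural fit.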
